Jan 26 12:35:26 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 26 12:35:26 crc restorecon[4696]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 12:35:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 12:35:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 26 12:35:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 26 12:35:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to
system_u:object_r:container_file_t:s0:c336,c787 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 12:35:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:27 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:27 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:27 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:27 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:27 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:27 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:27 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:27 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:27 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:35:27 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 
12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc 
restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 12:35:27 crc restorecon[4696]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 26 12:35:27 crc restorecon[4696]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 26 12:35:27 crc restorecon[4696]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Jan 26 12:35:27 crc kubenswrapper[4881]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 26 12:35:27 crc kubenswrapper[4881]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Jan 26 12:35:27 crc kubenswrapper[4881]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 26 12:35:27 crc kubenswrapper[4881]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 26 12:35:27 crc kubenswrapper[4881]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 26 12:35:27 crc kubenswrapper[4881]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.882582 4881 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.889605 4881 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.889656 4881 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.889665 4881 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.889672 4881 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.889682 4881 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.889690 4881 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.889696 4881 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.889701 4881 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.889708 4881 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.889714 4881 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.889721 4881 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.889727 4881 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.889733 4881 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.889739 4881 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.889746 4881 feature_gate.go:330] unrecognized feature gate: Example
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.889753 4881 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.889758 4881 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.889765 4881 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.889771 4881 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.889776 4881 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.889782 4881 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.889790 4881 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.889799 4881 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.889805 4881 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.889811 4881 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.889816 4881 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.889823 4881 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.889828 4881 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.889833 4881 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.889840 4881 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.889845 4881 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.889851 4881 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.889856 4881 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.889873 4881 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.889879 4881 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.889885 4881 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.889891 4881 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.889898 4881 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.889905 4881 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.889910 4881 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.889915 4881 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.889929 4881 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.889934 4881 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.889940 4881 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.889955 4881 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.889961 4881 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.889973 4881 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.889978 4881 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.889990 4881 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.889996 4881 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.890006 4881 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.890012 4881 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.890017 4881 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.890022 4881 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.890027 4881 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.890032 4881 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.890039 4881 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.890044 4881 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.890050 4881 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.890055 4881 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.890062 4881 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.890067 4881 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.890072 4881 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.890080 4881 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.890086 4881 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.890096 4881 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.890104 4881 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.890110 4881 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.890116 4881 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.890122 4881 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.890127 4881 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.890275 4881 flags.go:64] FLAG: --address="0.0.0.0"
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.890293 4881 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.890308 4881 flags.go:64] FLAG: --anonymous-auth="true"
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.890317 4881 flags.go:64] FLAG: --application-metrics-count-limit="100"
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.890326 4881 flags.go:64] FLAG: --authentication-token-webhook="false"
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.890334 4881 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.890344 4881 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.890353 4881 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.890359 4881 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.890365 4881 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.890372 4881 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.890381 4881 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.890387 4881 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.890394 4881 flags.go:64] FLAG: --cgroup-root=""
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.890400 4881 flags.go:64] FLAG: --cgroups-per-qos="true"
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.890406 4881 flags.go:64] FLAG: --client-ca-file=""
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.890412 4881 flags.go:64] FLAG: --cloud-config=""
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.890418 4881 flags.go:64] FLAG: --cloud-provider=""
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.890424 4881 flags.go:64] FLAG: --cluster-dns="[]"
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.890434 4881 flags.go:64] FLAG: --cluster-domain=""
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.890441 4881 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.890447 4881 flags.go:64] FLAG: --config-dir=""
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.890454 4881 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.890461 4881 flags.go:64] FLAG: --container-log-max-files="5"
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.890470 4881 flags.go:64] FLAG: --container-log-max-size="10Mi"
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.890476 4881 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.890482 4881 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.890488 4881 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.890495 4881 flags.go:64] FLAG: --contention-profiling="false"
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.890500 4881 flags.go:64] FLAG: --cpu-cfs-quota="true"
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.890506 4881 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.890513 4881 flags.go:64] FLAG: --cpu-manager-policy="none"
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.890537 4881 flags.go:64] FLAG: --cpu-manager-policy-options=""
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.890552 4881 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.890558 4881 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.890564 4881 flags.go:64] FLAG: --enable-debugging-handlers="true"
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.890570 4881 flags.go:64] FLAG: --enable-load-reader="false"
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.890576 4881 flags.go:64] FLAG: --enable-server="true"
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.890582 4881 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.890593 4881 flags.go:64] FLAG: --event-burst="100"
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.890599 4881 flags.go:64] FLAG: --event-qps="50"
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.890605 4881 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.890611 4881 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.890619 4881 flags.go:64] FLAG: --eviction-hard=""
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.890628 4881 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.890634 4881 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.890639 4881 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.890648 4881 flags.go:64] FLAG: --eviction-soft=""
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.890654 4881 flags.go:64] FLAG: --eviction-soft-grace-period=""
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.890660 4881 flags.go:64] FLAG: --exit-on-lock-contention="false"
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.890666 4881 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.890671 4881 flags.go:64] FLAG: --experimental-mounter-path=""
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.890676 4881 flags.go:64] FLAG: --fail-cgroupv1="false"
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.890682 4881 flags.go:64] FLAG: --fail-swap-on="true"
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.890688 4881 flags.go:64] FLAG: --feature-gates=""
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.890696 4881 flags.go:64] FLAG: --file-check-frequency="20s"
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.890702 4881 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.890709 4881 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.890715 4881 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.890721 4881 flags.go:64] FLAG: --healthz-port="10248"
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.890727 4881 flags.go:64] FLAG: --help="false"
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.890733 4881 flags.go:64] FLAG: --hostname-override=""
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.891009 4881 flags.go:64] FLAG: --housekeeping-interval="10s"
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.891021 4881 flags.go:64] FLAG: --http-check-frequency="20s"
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.891028 4881 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.891034 4881 flags.go:64] FLAG: --image-credential-provider-config=""
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.891040 4881 flags.go:64] FLAG: --image-gc-high-threshold="85"
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.891047 4881 flags.go:64] FLAG: --image-gc-low-threshold="80"
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.891052 4881 flags.go:64] FLAG: --image-service-endpoint=""
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.891060 4881 flags.go:64] FLAG: --kernel-memcg-notification="false"
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.891066 4881 flags.go:64] FLAG: --kube-api-burst="100"
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.891072 4881 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.891079 4881 flags.go:64] FLAG: --kube-api-qps="50"
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.891086 4881 flags.go:64] FLAG: --kube-reserved=""
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.891092 4881 flags.go:64] FLAG: --kube-reserved-cgroup=""
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.891099 4881 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.891106 4881 flags.go:64] FLAG: --kubelet-cgroups=""
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.891112 4881 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.891119 4881 flags.go:64] FLAG: --lock-file=""
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.891124 4881 flags.go:64] FLAG: --log-cadvisor-usage="false"
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.891131 4881 flags.go:64] FLAG: --log-flush-frequency="5s"
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.891138 4881 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.891157 4881 flags.go:64] FLAG: --log-json-split-stream="false"
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.891165 4881 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.891171 4881 flags.go:64] FLAG: --log-text-split-stream="false"
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.891177 4881 flags.go:64] FLAG: --logging-format="text"
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.891183 4881 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.891190 4881 flags.go:64] FLAG: --make-iptables-util-chains="true"
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.891195 4881 flags.go:64] FLAG: --manifest-url=""
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.891201 4881 flags.go:64] FLAG: --manifest-url-header=""
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.891211 4881 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.891218 4881 flags.go:64] FLAG: --max-open-files="1000000"
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.891226 4881 flags.go:64] FLAG: --max-pods="110"
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.891232 4881 flags.go:64] FLAG: --maximum-dead-containers="-1"
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.891238 4881 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.891244 4881 flags.go:64] FLAG: --memory-manager-policy="None"
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.891249 4881 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.891255 4881 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.891260 4881 flags.go:64] FLAG: --node-ip="192.168.126.11"
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.891266 4881 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.891287 4881 flags.go:64] FLAG: --node-status-max-images="50"
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.891294 4881 flags.go:64] FLAG: --node-status-update-frequency="10s"
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.891300 4881 flags.go:64] FLAG: --oom-score-adj="-999"
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.891306 4881 flags.go:64] FLAG: --pod-cidr=""
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.891312 4881 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.891323 4881 flags.go:64] FLAG: --pod-manifest-path=""
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.891329 4881 flags.go:64] FLAG: --pod-max-pids="-1"
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.891335 4881 flags.go:64] FLAG: --pods-per-core="0"
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.891340 4881 flags.go:64] FLAG: --port="10250"
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.891347 4881 flags.go:64] FLAG: --protect-kernel-defaults="false"
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.891353 4881 flags.go:64] FLAG: --provider-id=""
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.891359 4881 flags.go:64] FLAG: --qos-reserved=""
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.891364 4881 flags.go:64] FLAG: --read-only-port="10255"
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.891370 4881 flags.go:64] FLAG: --register-node="true"
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.891377 4881 flags.go:64] FLAG: --register-schedulable="true"
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.891383 4881 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.891396 4881 flags.go:64] FLAG: --registry-burst="10"
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.891402 4881 flags.go:64] FLAG: --registry-qps="5"
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.891407 4881 flags.go:64] FLAG: --reserved-cpus=""
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.891415 4881 flags.go:64] FLAG: --reserved-memory=""
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.891424 4881 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.891430 4881 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.891437 4881 flags.go:64] FLAG: --rotate-certificates="false"
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.891442 4881 flags.go:64] FLAG: --rotate-server-certificates="false"
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.891448 4881 flags.go:64] FLAG: --runonce="false"
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.891454 4881 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.891460 4881 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.891466 4881 flags.go:64] FLAG: --seccomp-default="false"
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.891472 4881 flags.go:64] FLAG: --serialize-image-pulls="true"
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.891478 4881 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.891484 4881 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.891495 4881 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.891501 4881 flags.go:64] FLAG: --storage-driver-password="root"
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.891508 4881 flags.go:64] FLAG: --storage-driver-secure="false"
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.891535 4881 flags.go:64] FLAG: --storage-driver-table="stats"
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.891543 4881 flags.go:64] FLAG: --storage-driver-user="root"
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.891548 4881 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.891555 4881 flags.go:64] FLAG: --sync-frequency="1m0s"
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.891561 4881 flags.go:64] FLAG: --system-cgroups=""
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.891568 4881 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.891579 4881 flags.go:64] FLAG: --system-reserved-cgroup=""
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.891585 4881 flags.go:64] FLAG: --tls-cert-file=""
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.891592 4881 flags.go:64] FLAG: --tls-cipher-suites="[]"
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.891599 4881 flags.go:64] FLAG: --tls-min-version=""
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.891605 4881 flags.go:64] FLAG: --tls-private-key-file=""
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.891611 4881 flags.go:64] FLAG: --topology-manager-policy="none"
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.891617 4881 flags.go:64] FLAG: --topology-manager-policy-options=""
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.891623 4881 flags.go:64] FLAG: --topology-manager-scope="container"
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.891630 4881 flags.go:64] FLAG: --v="2"
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.891639 4881 flags.go:64] FLAG: --version="false"
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.891649 4881 flags.go:64] FLAG: --vmodule=""
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.891657 4881 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.891664 4881 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.891853 4881 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.891862 4881 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.891870 4881 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.891875 4881 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.891880 4881 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.891885 4881 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.891890 4881 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.891895 4881 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.891899 4881 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.891904 4881 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.891912 4881 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.891916 4881 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.891921 4881 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.891926 4881 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.891931 4881 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.891938 4881 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.891945 4881 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.891951 4881 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.891960 4881 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.891966 4881 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.891972 4881 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.891977 4881 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.891982 4881 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.891987 4881 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.891993 4881 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.891997 4881 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.892004 4881 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.892011 4881 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.892017 4881 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.892022 4881 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.892028 4881 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.892033 4881 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.892039 4881 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.892044 4881 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.892050 4881 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.892056 4881 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.892061 4881 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.892066 4881 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.892072 4881 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.892077 4881 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.892082 4881 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.892087 4881 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.892098 4881 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.892103 4881 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.892108 4881 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.892114 4881 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.892119 4881 feature_gate.go:330] unrecognized feature gate: Example
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.892124 4881 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.892129 4881 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.892134 4881 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.892142 4881 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.892147 4881 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.892153 4881 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.892159 4881 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.892164 4881 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.892169 4881 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.892175 4881 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.892180 4881 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.892185 4881 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.892190 4881 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.892195 4881 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.892200 4881 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.892206 4881 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.892212 4881 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.892219 4881 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.892224 4881 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.892230 4881 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.892236 4881 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.892241 4881 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.892246 4881 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.892251 4881 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.892261 4881 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.903485 4881 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.903544 4881 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.903685 4881 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.903700 4881 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.903713 4881 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.903724 4881 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.903733 4881 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.903741 4881 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.903750 4881 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.903758 4881 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.903769 4881 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.903779 4881 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.903788 4881 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.903797 4881 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.903806 4881 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.903816 4881 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.903826 4881 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.903834 4881 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.903843 4881 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.903850 4881 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.903859 4881 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.903867 4881 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.903874 4881 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.903882 4881 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.903889 4881 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.903897 4881 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.903905 4881 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.903912 4881 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.903920 4881 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.903928 4881 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.903936 4881 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.903948 4881 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.903958 4881 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.903967 4881 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.903977 4881 feature_gate.go:330] unrecognized feature gate: Example
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.903986 4881 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.903995 4881 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.904003 4881 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.904012 4881 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.904020 4881 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.904029 4881 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.904037 4881 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.904045 4881 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.904053 4881 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.904061 4881 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.904069 4881 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.904079 4881 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.904087 4881 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.904095 4881 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.904102 4881 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.904110 4881 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.904117 4881 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.904125 4881 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.904132 4881 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.904140 4881 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.904148 4881 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.904155 4881 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.904163 4881 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.904170 4881 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.904178 4881 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.904186 4881 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.904193 4881 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.904201 4881 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.904209 4881 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.904218 4881 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.904225 4881 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.904233 4881 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.904241 4881 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.904249 4881 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.904256 4881 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.904265 4881 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.904273 4881 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.904281 4881 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.904294 4881 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.904505 4881 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.904539 4881 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.904548 4881 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.904557 4881 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.904565 4881 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.904572 4881 feature_gate.go:330] unrecognized feature gate: Example
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.904580 4881 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.904587 4881 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.904596 4881 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.904603 4881 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.904611 4881 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.904618 4881 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.904626 4881 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.904634 4881 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.904641 4881 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.904649 4881 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.904657 4881 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.904665 4881 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.904673 4881 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.904681 4881 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.904688 4881 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.904696 4881 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.904704 4881 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.904712 4881 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.904720 4881 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.904728 4881 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.904736 4881 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.904743 4881 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.904751 4881 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.904760 4881 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.904767 4881 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.904776 4881 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.904783 4881 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.904794 4881 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.904804 4881 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.904813 4881 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.904822 4881 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.904830 4881 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.904838 4881 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.904846 4881 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.904857 4881 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.904867 4881 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.904875 4881 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.904883 4881 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.904892 4881 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.904903 4881 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.904912 4881 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.904921 4881 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.904929 4881 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.904936 4881 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.904944 4881 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.905197 4881 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.905215 4881 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.905225 4881 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.905234 4881 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.905246 4881 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.905256 4881 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.905266 4881 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.905275 4881 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.905312 4881 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.905322 4881 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.905332 4881 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.905342 4881 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.905352 4881 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.905363 4881 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.905376 4881 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.905387 4881 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.905401 4881 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.905414 4881 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.905424 4881 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 26 12:35:27 crc kubenswrapper[4881]: W0126 12:35:27.905434 4881 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.905449 4881 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.905955 4881 server.go:940] "Client rotation is on, will bootstrap in background"
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.910102 4881 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.910228 4881 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.911049 4881 server.go:997] "Starting client certificate rotation"
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.911084 4881 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.911275 4881 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-08 18:14:17.590003141 +0000 UTC
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.911400 4881 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.918681 4881 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 26 12:35:27 crc kubenswrapper[4881]: E0126 12:35:27.920239 4881 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.69:6443: connect: connection refused" logger="UnhandledError"
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.921333 4881 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.932151 4881 log.go:25] "Validated CRI v1 runtime API"
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.963586 4881 log.go:25] "Validated CRI v1 image API"
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.965986 4881 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.969821 4881 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-26-12-29-38-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.969876 4881 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}]
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.997755 4881 manager.go:217] Machine: {Timestamp:2026-01-26 12:35:27.995099258 +0000 UTC m=+0.474409364 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:d2cf9e5e-9cbb-41d3-9ac1-608b9dce0d9f BootID:69f11506-d189-4146-9efa-f9280470e789 Filesystems:[{Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:b4:35:40 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:b4:35:40 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:43:0d:d8 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:1a:83:6f Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:a7:57:4a Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:c4:f2:9e Speed:-1 Mtu:1496} {Name:eth10 MacAddress:0e:46:09:3e:af:d9 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:de:2d:9b:70:37:93 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.998131 4881 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
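
Worth pulling out of the certificate rotation lines above: the client certificate is valid until 2026-02-24, the manager wants to rotate early (deadline 2025-12-08) and tries immediately, and the CSR POST fails only because api-int.crc.testing:6443 is not listening yet. The attempt is retried with backoff, so a one-off "connection refused" here is harmless. To cross-check the expiry that certificate_manager reports, a standard-library-only sketch (the path comes from the certificate_store.go line above):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "log"
        "os"
    )

    func main() {
        // Path from the "Loading cert/key pair" line in this log.
        data, err := os.ReadFile("/var/lib/kubelet/pki/kubelet-client-current.pem")
        if err != nil {
            log.Fatal(err)
        }
        for {
            var block *pem.Block
            block, data = pem.Decode(data)
            if block == nil {
                break
            }
            if block.Type != "CERTIFICATE" {
                continue // the same file also holds the private key
            }
            cert, err := x509.ParseCertificate(block.Bytes)
            if err != nil {
                log.Fatal(err)
            }
            fmt.Printf("subject=%s notBefore=%s notAfter=%s\n", cert.Subject, cert.NotBefore, cert.NotAfter)
        }
    }
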
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.998319 4881 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.999301 4881 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jan 26 12:35:27 crc kubenswrapper[4881]: I0126 12:35:27.999717 4881 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:27.999772 4881 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.000176 4881 topology_manager.go:138] "Creating topology manager with none policy"
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.000197 4881 container_manager_linux.go:303] "Creating device plugin manager"
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.000503 4881 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.000574 4881 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.001052 4881 state_mem.go:36] "Initialized new in-memory state store"
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.001196 4881 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.002127 4881 kubelet.go:418] "Attempting to sync node with API server"
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.002203 4881 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
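
The nodeConfig blob above is the one line worth bookmarking for capacity questions: SystemReserved holds back 200m CPU and 350Mi of memory and ephemeral storage for the host, PodPidsLimit caps each pod at 4096 PIDs, and the HardEvictionThresholds mean eviction starts when memory.available drops below 100Mi or nodefs.available below 10%. A short sketch that pulls those fields out of a trimmed copy of that JSON (the struct here is illustrative, not the kubelet's internal type):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // Field names mirror the nodeConfig line above.
    type eviction struct {
        Signal   string `json:"Signal"`
        Operator string `json:"Operator"`
        Value    struct {
            Quantity   string  `json:"Quantity"` // JSON null decodes to ""
            Percentage float64 `json:"Percentage"`
        } `json:"Value"`
    }

    func main() {
        raw := `{"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},
                 "HardEvictionThresholds":[
                   {"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0}},
                   {"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1}}]}`
        var cfg struct {
            SystemReserved         map[string]string `json:"SystemReserved"`
            HardEvictionThresholds []eviction        `json:"HardEvictionThresholds"`
        }
        if err := json.Unmarshal([]byte(raw), &cfg); err != nil {
            panic(err)
        }
        fmt.Println("reserved for system:", cfg.SystemReserved)
        for _, e := range cfg.HardEvictionThresholds {
            fmt.Printf("evict when %s %s quantity=%q percentage=%v\n",
                e.Signal, e.Operator, e.Value.Quantity, e.Value.Percentage)
        }
    }
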
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.002229 4881 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.002251 4881 kubelet.go:324] "Adding apiserver pod source"
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.002270 4881 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.004890 4881 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.005467 4881 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.006310 4881 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 26 12:35:28 crc kubenswrapper[4881]: W0126 12:35:28.006905 4881 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.69:6443: connect: connection refused
Jan 26 12:35:28 crc kubenswrapper[4881]: E0126 12:35:28.007012 4881 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.69:6443: connect: connection refused" logger="UnhandledError"
Jan 26 12:35:28 crc kubenswrapper[4881]: W0126 12:35:28.006911 4881 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.69:6443: connect: connection refused
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.007073 4881 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Jan 26 12:35:28 crc kubenswrapper[4881]: E0126 12:35:28.007077 4881 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.69:6443: connect: connection refused" logger="UnhandledError"
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.007104 4881 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.007120 4881 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.007134 4881 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.007158 4881 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.007172 4881 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.007188 4881 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.007210 4881 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.007227 4881 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.007241 4881 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.007261 4881 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.007277 4881 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.007552 4881 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.008195 4881 server.go:1280] "Started kubelet"
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.009208 4881 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.009184 4881 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.010187 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.010228 4881 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.010341 4881 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.69:6443: connect: connection refused
Jan 26 12:35:28 crc systemd[1]: Started Kubernetes Kubelet.
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.010431 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 22:56:38.759337519 +0000 UTC
Jan 26 12:35:28 crc kubenswrapper[4881]: E0126 12:35:28.010587 4881 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.010765 4881 volume_manager.go:287] "The desired_state_of_world populator starts"
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.010791 4881 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.010934 4881 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Jan 26 12:35:28 crc kubenswrapper[4881]: E0126 12:35:28.011172 4881 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.69:6443: connect: connection refused" interval="200ms"
Jan 26 12:35:28 crc kubenswrapper[4881]: W0126 12:35:28.011510 4881 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.69:6443: connect: connection refused
Jan 26 12:35:28 crc kubenswrapper[4881]: E0126 12:35:28.011585 4881 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.69:6443: connect: connection refused" logger="UnhandledError"
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.011799 4881 factory.go:55] Registering systemd factory
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.011822 4881 factory.go:221] Registration of the systemd container factory successfully
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.012631 4881 server.go:460] "Adding debug handlers to kubelet server"
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.011225 4881 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.016561 4881 factory.go:153] Registering CRI-O factory
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.016594 4881 factory.go:221] Registration of the crio container factory successfully
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.016688 4881 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.016719 4881 factory.go:103] Registering Raw factory
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.016745 4881 manager.go:1196] Started watching for new ooms in manager
Jan 26 12:35:28 crc kubenswrapper[4881]: E0126 12:35:28.016247 4881 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.69:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188e480dde840d7d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 12:35:28.008154493 +0000 UTC m=+0.487464559,LastTimestamp:2026-01-26 12:35:28.008154493 +0000 UTC m=+0.487464559,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.018737 4881 manager.go:319] Starting recovery of all containers
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.026299 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext=""
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.026409 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext=""
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.026425 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext=""
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.026442 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext=""
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.026453 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext=""
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.026464 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext=""
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.026475 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext=""
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.026485 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext=""
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.026498 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext=""
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.026533 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext=""
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.026546 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext=""
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.026557 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext=""
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.026569 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext=""
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.026584 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext=""
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.026596 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext=""
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.026609 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext=""
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.026622 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext=""
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.026635 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext=""
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.026649 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext=""
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.026661 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext=""
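
The long run of reconstruct.go:130 lines that begins here is the volume manager rebuilding its actual state of world from what it finds under /var/lib/kubelet/pods after the restart; every mount is re-added as "uncertain" until it can be reconciled against the API, which is still unreachable at this point. To summarize what got reconstructed, a small offline tally; it assumes you feed it kubelet journal text on stdin (for example journalctl -u kubelet | go run tally.go):

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
    )

    func main() {
        // Matches the plugin segment of volumeName="kubernetes.io/<plugin>/...".
        re := regexp.MustCompile(`volumeName="kubernetes\.io/([a-z-]+)/`)
        counts := map[string]int{}
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 0, 64*1024), 1024*1024) // journal lines can be long
        for sc.Scan() {
            if m := re.FindStringSubmatch(sc.Text()); m != nil {
                counts[m[1]]++
            }
        }
        for plugin, n := range counts {
            fmt.Printf("%-12s %d\n", plugin, n)
        }
    }
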
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.026672 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext=""
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.026715 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext=""
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.026752 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext=""
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.026789 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext=""
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.026802 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext=""
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.026814 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext=""
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.026828 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext=""
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.026841 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext=""
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.026853 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext=""
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.026865 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext=""
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.026876 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext=""
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.026916 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext=""
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.026928 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext=""
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.026940 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext=""
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.026950 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext=""
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.026961 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext=""
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.026973 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext=""
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.026987 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext=""
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.027000 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext=""
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.027012 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext=""
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.027067 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext=""
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.027079 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext=""
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.027092 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext=""
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.027105 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext=""
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.027118 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext=""
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.027129 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext=""
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.027141 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext=""
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.027155 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext=""
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.027167 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext=""
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.027179 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext=""
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.027190 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext=""
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.027203 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext=""
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.027219 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext=""
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.027231 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext=""
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.027247 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext=""
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.027260 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext=""
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.027273 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext=""
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.027285 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext=""
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.027296 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext=""
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.027308 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext=""
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.027320 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext=""
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.027332 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext=""
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.027344 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext=""
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.027355 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext=""
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.027368 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext=""
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.027380 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext=""
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.027391 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext=""
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.027403 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext=""
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.027414 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext=""
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.027427 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext=""
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.027439 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext=""
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.027450 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext=""
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.027462 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext=""
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.027472 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext=""
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.027485 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext=""
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.027496 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext=""
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.027508 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext=""
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.027537 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext=""
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.027549 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext=""
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.027561 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext=""
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.027575 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext=""
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.027586 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext=""
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.027598 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext=""
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.027610 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext=""
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.027622 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext=""
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.027634 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext=""
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.027649 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext=""
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.027662 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext=""
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.027674 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext=""
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.027686 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext=""
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.027697 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext=""
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.027709 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext=""
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.027721 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext=""
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.027734 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext=""
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.027747 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext=""
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.027760 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext=""
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.027776 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext=""
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.027786 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext=""
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.027797 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext=""
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.027809 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext=""
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.027819 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext=""
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.027830 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext=""
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.027842 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext=""
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.027852 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext=""
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.027867 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext=""
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.027877 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext=""
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.027888 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext=""
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.027898 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext=""
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.027909 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext=""
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.027919 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext=""
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.027932 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext=""
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.027944 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext=""
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.027956 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext=""
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.027968 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext=""
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.027979 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext=""
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.027990 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext=""
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.028002 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext=""
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.028014 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext=""
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.028025 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext=""
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.028035 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext=""
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.028046 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext=""
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.028056 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext=""
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.028066 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext=""
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.028078 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext=""
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.028089 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext=""
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.028099 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext=""
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.028109 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext=""
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.028120 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext=""
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.028138 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext=""
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.028150 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext=""
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.028164 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext=""
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.028176 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext=""
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.028187 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext=""
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.028200 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext=""
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.028213 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext=""
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.028228 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext=""
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.028240 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext=""
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.028252 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext=""
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.028266 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext=""
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.028278 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext=""
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.028289 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext=""
Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.028301 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783"
volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.028313 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.028325 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.028346 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.028359 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.028371 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.028385 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.028940 4881 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.028964 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.028980 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.028997 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.029009 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.029023 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.029035 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.029049 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.029062 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.029075 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.029097 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.029111 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.029128 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.029143 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.029156 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.029170 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.029183 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.029196 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.029208 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.029222 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.029235 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.029247 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.029262 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.029275 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.029289 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.029302 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.029314 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.029326 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.029340 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.029352 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.029365 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.029379 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.029392 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.029405 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.029437 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.029450 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.029462 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.029477 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.029489 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.029502 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.029529 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.029542 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.029554 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.029568 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.029582 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.029594 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.029608 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.029618 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.029633 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" 
volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.029645 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.029657 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.029668 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.029679 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.029689 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.029701 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.029711 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.029723 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.029736 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.029798 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.029809 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" 
volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.029821 4881 reconstruct.go:97] "Volume reconstruction finished" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.029828 4881 reconciler.go:26] "Reconciler: start to sync state" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.047367 4881 manager.go:324] Recovery completed Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.062904 4881 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.067781 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.068136 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.068153 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.071754 4881 cpu_manager.go:225] "Starting CPU manager" policy="none" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.071778 4881 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.071913 4881 state_mem.go:36] "Initialized new in-memory state store" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.076026 4881 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.081117 4881 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.081195 4881 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.081234 4881 kubelet.go:2335] "Starting kubelet main sync loop" Jan 26 12:35:28 crc kubenswrapper[4881]: E0126 12:35:28.081572 4881 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 26 12:35:28 crc kubenswrapper[4881]: W0126 12:35:28.090865 4881 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.69:6443: connect: connection refused Jan 26 12:35:28 crc kubenswrapper[4881]: E0126 12:35:28.090961 4881 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.69:6443: connect: connection refused" logger="UnhandledError" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.103650 4881 policy_none.go:49] "None policy: Start" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.105177 4881 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.105216 4881 state_mem.go:35] "Initializing new in-memory state store" Jan 26 12:35:28 crc kubenswrapper[4881]: E0126 12:35:28.111689 4881 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.178270 4881 manager.go:334] "Starting Device Plugin manager" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.178378 4881 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.178401 4881 server.go:79] "Starting device plugin registration server" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.179128 4881 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.179202 4881 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.179387 4881 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.179642 4881 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.179668 4881 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.182831 4881 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc"] Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.182987 4881 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.185510 4881 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.185626 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.185646 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.185855 4881 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.186139 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.186204 4881 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.187045 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.187109 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.187127 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.187294 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.187334 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.187298 4881 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.187427 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.187466 4881 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.187353 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.188410 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.188450 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.188467 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.188643 4881 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.188684 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.188714 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.188727 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.188921 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.188975 4881 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.189652 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.189689 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.189704 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:28 crc kubenswrapper[4881]: E0126 12:35:28.189711 4881 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.189867 4881 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.190026 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.190082 4881 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.190269 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.190315 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.190335 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.190718 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.190776 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.190796 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.190968 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.190990 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.191003 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.191084 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.191130 4881 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.193399 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.193450 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.193467 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:28 crc kubenswrapper[4881]: E0126 12:35:28.212234 4881 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.69:6443: connect: connection refused" interval="400ms" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.232165 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.232235 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.232272 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.232304 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.232334 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.232389 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.232511 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" 
(UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.232630 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.232674 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.232718 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.232748 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.232831 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.232976 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.233036 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.233084 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.280362 4881 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.281372 4881 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.281419 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.281438 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.281472 4881 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 26 12:35:28 crc kubenswrapper[4881]: E0126 12:35:28.282107 4881 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.69:6443: connect: connection refused" node="crc" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.334761 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.334823 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.334858 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.334894 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.334925 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.334953 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.334983 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.335005 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.335066 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.335013 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.335127 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.335136 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.335158 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.335170 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.335187 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.335220 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.335218 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.335255 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.335260 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.334953 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.335297 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.335316 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.335326 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.334985 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.335379 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.335426 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.335498 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 12:35:28 crc kubenswrapper[4881]: 
I0126 12:35:28.335591 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.335642 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.335685 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.482731 4881 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.484269 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.484320 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.484343 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.484400 4881 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 26 12:35:28 crc kubenswrapper[4881]: E0126 12:35:28.484986 4881 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.69:6443: connect: connection refused" node="crc" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.528029 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.552592 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.565846 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 26 12:35:28 crc kubenswrapper[4881]: W0126 12:35:28.571799 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-4a152f5376da27a90cc44660e8407cb1a16c3182fc0963c145783150f1a9540e WatchSource:0}: Error finding container 4a152f5376da27a90cc44660e8407cb1a16c3182fc0963c145783150f1a9540e: Status 404 returned error can't find the container with id 4a152f5376da27a90cc44660e8407cb1a16c3182fc0963c145783150f1a9540e Jan 26 12:35:28 crc kubenswrapper[4881]: W0126 12:35:28.582252 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-7c4ada265b73e76d76d8e9b8ce83b787ae4ebee126e1fb1878dc8b3a12547cc0 WatchSource:0}: Error finding container 7c4ada265b73e76d76d8e9b8ce83b787ae4ebee126e1fb1878dc8b3a12547cc0: Status 404 returned error can't find the container with id 7c4ada265b73e76d76d8e9b8ce83b787ae4ebee126e1fb1878dc8b3a12547cc0 Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.590087 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 26 12:35:28 crc kubenswrapper[4881]: W0126 12:35:28.594205 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-e4ac85a6e712ad7029bdfb45624093063f2726bdaf0e11a5f6b4003dda971c40 WatchSource:0}: Error finding container e4ac85a6e712ad7029bdfb45624093063f2726bdaf0e11a5f6b4003dda971c40: Status 404 returned error can't find the container with id e4ac85a6e712ad7029bdfb45624093063f2726bdaf0e11a5f6b4003dda971c40 Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.601493 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 12:35:28 crc kubenswrapper[4881]: E0126 12:35:28.613447 4881 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.69:6443: connect: connection refused" interval="800ms" Jan 26 12:35:28 crc kubenswrapper[4881]: W0126 12:35:28.619011 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-0c2e80ad0a1bfe2adcaf48742fe1db6da282d3f3b1d04e318f43825538f69a3a WatchSource:0}: Error finding container 0c2e80ad0a1bfe2adcaf48742fe1db6da282d3f3b1d04e318f43825538f69a3a: Status 404 returned error can't find the container with id 0c2e80ad0a1bfe2adcaf48742fe1db6da282d3f3b1d04e318f43825538f69a3a Jan 26 12:35:28 crc kubenswrapper[4881]: W0126 12:35:28.636647 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-a59b27e1d24849bb4fd1d5253389bc238b677c882816f9c07d204d8f8dc40b31 WatchSource:0}: Error finding container a59b27e1d24849bb4fd1d5253389bc238b677c882816f9c07d204d8f8dc40b31: Status 404 returned error can't find the container with id a59b27e1d24849bb4fd1d5253389bc238b677c882816f9c07d204d8f8dc40b31 Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.885430 4881 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.887009 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.887056 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.887069 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:28 crc kubenswrapper[4881]: I0126 12:35:28.887099 4881 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 26 12:35:28 crc kubenswrapper[4881]: E0126 12:35:28.887603 4881 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.69:6443: connect: connection refused" node="crc" Jan 26 12:35:29 crc kubenswrapper[4881]: I0126 12:35:29.011416 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 11:10:35.64964614 +0000 UTC Jan 26 12:35:29 crc kubenswrapper[4881]: I0126 12:35:29.011625 4881 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.69:6443: connect: connection refused Jan 26 12:35:29 crc kubenswrapper[4881]: I0126 12:35:29.086800 4881 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="f09815b7bc1e9f3afdee67bb736d35ea1c23567f631bc5fd4fcab013ab191647" exitCode=0 Jan 26 12:35:29 crc kubenswrapper[4881]: I0126 12:35:29.086882 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"f09815b7bc1e9f3afdee67bb736d35ea1c23567f631bc5fd4fcab013ab191647"} Jan 26 12:35:29 crc kubenswrapper[4881]: I0126 12:35:29.086989 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"a59b27e1d24849bb4fd1d5253389bc238b677c882816f9c07d204d8f8dc40b31"} Jan 26 12:35:29 crc kubenswrapper[4881]: I0126 12:35:29.087111 4881 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 12:35:29 crc kubenswrapper[4881]: I0126 12:35:29.088271 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:29 crc kubenswrapper[4881]: I0126 12:35:29.088329 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:29 crc kubenswrapper[4881]: I0126 12:35:29.088346 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:29 crc kubenswrapper[4881]: I0126 12:35:29.089271 4881 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="667695541b7d707a83aa0080723518aeae689b140791f0121a4ce6d485f39138" exitCode=0 Jan 26 12:35:29 crc kubenswrapper[4881]: I0126 12:35:29.089338 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"667695541b7d707a83aa0080723518aeae689b140791f0121a4ce6d485f39138"} Jan 26 12:35:29 crc kubenswrapper[4881]: I0126 12:35:29.089359 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"0c2e80ad0a1bfe2adcaf48742fe1db6da282d3f3b1d04e318f43825538f69a3a"} Jan 26 12:35:29 crc kubenswrapper[4881]: I0126 12:35:29.089457 4881 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 12:35:29 crc kubenswrapper[4881]: I0126 12:35:29.090233 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:29 crc kubenswrapper[4881]: I0126 12:35:29.090266 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:29 crc kubenswrapper[4881]: I0126 12:35:29.090283 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:29 crc kubenswrapper[4881]: I0126 12:35:29.090487 4881 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 12:35:29 crc kubenswrapper[4881]: I0126 12:35:29.091604 4881 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="24e39efbdedbf09b1bc98060e900ec5613a13e0f3407f3419ba084cad4cd4a6b" exitCode=0 Jan 26 12:35:29 crc kubenswrapper[4881]: I0126 12:35:29.091655 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"24e39efbdedbf09b1bc98060e900ec5613a13e0f3407f3419ba084cad4cd4a6b"} Jan 26 12:35:29 crc kubenswrapper[4881]: I0126 12:35:29.091706 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"e4ac85a6e712ad7029bdfb45624093063f2726bdaf0e11a5f6b4003dda971c40"} Jan 26 12:35:29 crc kubenswrapper[4881]: I0126 12:35:29.091790 4881 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 12:35:29 crc kubenswrapper[4881]: I0126 12:35:29.091959 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:29 crc kubenswrapper[4881]: I0126 12:35:29.091993 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:29 crc kubenswrapper[4881]: I0126 12:35:29.092007 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:29 crc kubenswrapper[4881]: I0126 12:35:29.092658 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:29 crc kubenswrapper[4881]: I0126 12:35:29.092709 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:29 crc kubenswrapper[4881]: I0126 12:35:29.092721 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:29 crc kubenswrapper[4881]: I0126 12:35:29.093392 4881 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="7b154f896826abcb669a2628d916a874072a707d60fde45aad3ae0ff16bb4e44" exitCode=0 Jan 26 12:35:29 crc kubenswrapper[4881]: I0126 12:35:29.093448 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"7b154f896826abcb669a2628d916a874072a707d60fde45aad3ae0ff16bb4e44"} Jan 26 12:35:29 crc kubenswrapper[4881]: I0126 12:35:29.093475 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"7c4ada265b73e76d76d8e9b8ce83b787ae4ebee126e1fb1878dc8b3a12547cc0"} Jan 26 12:35:29 crc kubenswrapper[4881]: I0126 12:35:29.093608 4881 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 12:35:29 crc kubenswrapper[4881]: I0126 12:35:29.094822 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:29 crc kubenswrapper[4881]: I0126 12:35:29.094851 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:29 crc kubenswrapper[4881]: I0126 12:35:29.094867 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:29 crc kubenswrapper[4881]: I0126 12:35:29.095117 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"86d50d955d4fa8b487f6900f78738321e21226e13b7b12c100f0dd029de2fde1"} Jan 26 12:35:29 crc kubenswrapper[4881]: I0126 12:35:29.095147 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"4a152f5376da27a90cc44660e8407cb1a16c3182fc0963c145783150f1a9540e"} Jan 26 12:35:29 crc kubenswrapper[4881]: W0126 12:35:29.174366 4881 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.69:6443: connect: connection refused Jan 26 12:35:29 crc kubenswrapper[4881]: E0126 12:35:29.174459 4881 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.69:6443: connect: connection refused" logger="UnhandledError" Jan 26 12:35:29 crc kubenswrapper[4881]: W0126 12:35:29.270053 4881 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.69:6443: connect: connection refused Jan 26 12:35:29 crc kubenswrapper[4881]: E0126 12:35:29.270210 4881 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.69:6443: connect: connection refused" logger="UnhandledError" Jan 26 12:35:29 crc kubenswrapper[4881]: W0126 12:35:29.340061 4881 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.69:6443: connect: connection refused Jan 26 12:35:29 crc kubenswrapper[4881]: E0126 12:35:29.340144 4881 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.69:6443: connect: connection refused" logger="UnhandledError" Jan 26 12:35:29 crc kubenswrapper[4881]: E0126 12:35:29.414273 4881 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.69:6443: connect: connection refused" interval="1.6s" Jan 26 12:35:29 crc kubenswrapper[4881]: W0126 12:35:29.428258 4881 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.69:6443: connect: connection refused Jan 26 12:35:29 crc kubenswrapper[4881]: E0126 12:35:29.428398 4881 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.69:6443: connect: connection refused" logger="UnhandledError" Jan 26 12:35:29 crc kubenswrapper[4881]: I0126 12:35:29.688950 4881 kubelet_node_status.go:401] "Setting node annotation to enable volume controller 
attach/detach" Jan 26 12:35:29 crc kubenswrapper[4881]: I0126 12:35:29.690489 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:29 crc kubenswrapper[4881]: I0126 12:35:29.690613 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:29 crc kubenswrapper[4881]: I0126 12:35:29.690632 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:29 crc kubenswrapper[4881]: I0126 12:35:29.690662 4881 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 26 12:35:29 crc kubenswrapper[4881]: E0126 12:35:29.692724 4881 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.69:6443: connect: connection refused" node="crc" Jan 26 12:35:30 crc kubenswrapper[4881]: I0126 12:35:30.012012 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 10:58:51.66232328 +0000 UTC Jan 26 12:35:30 crc kubenswrapper[4881]: I0126 12:35:30.092638 4881 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 26 12:35:30 crc kubenswrapper[4881]: I0126 12:35:30.100905 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"94d3f2c93382cde3d932117276753f004c58e6e52bce611119b79d9e76eb7654"} Jan 26 12:35:30 crc kubenswrapper[4881]: I0126 12:35:30.100957 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"a8138683a9408b1a7542f9f29a3fd0b83a06f10b000c685d9d83b72b23ba4c55"} Jan 26 12:35:30 crc kubenswrapper[4881]: I0126 12:35:30.100975 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"ce7427a811572a8f3810d59e072888d97a8f15e2c493297a25418b85e1f9f16d"} Jan 26 12:35:30 crc kubenswrapper[4881]: I0126 12:35:30.100986 4881 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 12:35:30 crc kubenswrapper[4881]: I0126 12:35:30.102228 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:30 crc kubenswrapper[4881]: I0126 12:35:30.102306 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:30 crc kubenswrapper[4881]: I0126 12:35:30.102322 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:30 crc kubenswrapper[4881]: I0126 12:35:30.106389 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"3de2d8eff73741321cce91c7bfdcab04498cfdb4694e713750466bb2d8b95e5d"} Jan 26 12:35:30 crc kubenswrapper[4881]: I0126 12:35:30.106441 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"7cbb88ac26b9c279bef436acf1a1d1e431e752f2a1ee765a7541082fa8f6c99e"} Jan 26 12:35:30 crc kubenswrapper[4881]: I0126 12:35:30.106456 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"ff94d70ea9ad07ebc50c17d2a14b318e9b553bf1ded1e15429e9b318903d4d66"} Jan 26 12:35:30 crc kubenswrapper[4881]: I0126 12:35:30.106468 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"e461100b4aeab2a6e88c5c3aeb0802664d0d647247bc73b107be7c6bc4cfc9cf"} Jan 26 12:35:30 crc kubenswrapper[4881]: I0126 12:35:30.108114 4881 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="ab83101bb56dc3eaf3a0514f60da45b0c5304a491184fe71584d7ed72bc6e59c" exitCode=0 Jan 26 12:35:30 crc kubenswrapper[4881]: I0126 12:35:30.108166 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"ab83101bb56dc3eaf3a0514f60da45b0c5304a491184fe71584d7ed72bc6e59c"} Jan 26 12:35:30 crc kubenswrapper[4881]: I0126 12:35:30.108252 4881 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 12:35:30 crc kubenswrapper[4881]: I0126 12:35:30.108927 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:30 crc kubenswrapper[4881]: I0126 12:35:30.108950 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:30 crc kubenswrapper[4881]: I0126 12:35:30.108958 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:30 crc kubenswrapper[4881]: I0126 12:35:30.111147 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"4ad306327e955bc3989c95cfa3a4eeb82bacf4c96ea091a6c1308e175493ff29"} Jan 26 12:35:30 crc kubenswrapper[4881]: I0126 12:35:30.111345 4881 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 12:35:30 crc kubenswrapper[4881]: I0126 12:35:30.112372 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:30 crc kubenswrapper[4881]: I0126 12:35:30.112397 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:30 crc kubenswrapper[4881]: I0126 12:35:30.112405 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:30 crc kubenswrapper[4881]: I0126 12:35:30.115452 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"4fa21f92dbc9916c1d8eeb219803f791ad323f1cf82e7cb6fa64fe467b585c5f"} Jan 26 12:35:30 crc kubenswrapper[4881]: I0126 12:35:30.115481 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"46a599fa1a831e0d46ab1af51dc83b3c5ef566caa4d05629e344102fc4d852af"} Jan 26 12:35:30 crc kubenswrapper[4881]: I0126 12:35:30.115491 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"9d3b1592342c8b506c26109958ea6b4770bc41e6dde0be57603730427216aec7"} Jan 26 12:35:30 crc kubenswrapper[4881]: I0126 12:35:30.115571 4881 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 12:35:30 crc kubenswrapper[4881]: I0126 12:35:30.116216 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:30 crc kubenswrapper[4881]: I0126 12:35:30.116248 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:30 crc kubenswrapper[4881]: I0126 12:35:30.116258 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:31 crc kubenswrapper[4881]: I0126 12:35:31.012768 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 01:29:41.131340229 +0000 UTC Jan 26 12:35:31 crc kubenswrapper[4881]: I0126 12:35:31.122868 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"a1728a0431ccafdf8aa83b6956348a8a6a41dd94e80f65cb469ef09a86078154"} Jan 26 12:35:31 crc kubenswrapper[4881]: I0126 12:35:31.123049 4881 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 12:35:31 crc kubenswrapper[4881]: I0126 12:35:31.124308 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:31 crc kubenswrapper[4881]: I0126 12:35:31.124358 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:31 crc kubenswrapper[4881]: I0126 12:35:31.124375 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:31 crc kubenswrapper[4881]: I0126 12:35:31.127255 4881 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="e52793db25051b2c8eec3dbeaf0547ec04ec438ce426f3dfcd2383792f846644" exitCode=0 Jan 26 12:35:31 crc kubenswrapper[4881]: I0126 12:35:31.127385 4881 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 12:35:31 crc kubenswrapper[4881]: I0126 12:35:31.127385 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"e52793db25051b2c8eec3dbeaf0547ec04ec438ce426f3dfcd2383792f846644"} Jan 26 12:35:31 crc kubenswrapper[4881]: I0126 12:35:31.127609 4881 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 12:35:31 crc kubenswrapper[4881]: I0126 12:35:31.128874 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:31 crc kubenswrapper[4881]: I0126 12:35:31.128922 4881 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:31 crc kubenswrapper[4881]: I0126 12:35:31.128939 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:31 crc kubenswrapper[4881]: I0126 12:35:31.129887 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:31 crc kubenswrapper[4881]: I0126 12:35:31.129958 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:31 crc kubenswrapper[4881]: I0126 12:35:31.129977 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:31 crc kubenswrapper[4881]: I0126 12:35:31.293440 4881 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 12:35:31 crc kubenswrapper[4881]: I0126 12:35:31.295044 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:31 crc kubenswrapper[4881]: I0126 12:35:31.295101 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:31 crc kubenswrapper[4881]: I0126 12:35:31.295123 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:31 crc kubenswrapper[4881]: I0126 12:35:31.295160 4881 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 26 12:35:31 crc kubenswrapper[4881]: I0126 12:35:31.412058 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 12:35:31 crc kubenswrapper[4881]: I0126 12:35:31.412254 4881 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 12:35:31 crc kubenswrapper[4881]: I0126 12:35:31.413335 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:31 crc kubenswrapper[4881]: I0126 12:35:31.413409 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:31 crc kubenswrapper[4881]: I0126 12:35:31.413428 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:32 crc kubenswrapper[4881]: I0126 12:35:32.013391 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 22:50:34.639717194 +0000 UTC Jan 26 12:35:32 crc kubenswrapper[4881]: I0126 12:35:32.082986 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 12:35:32 crc kubenswrapper[4881]: I0126 12:35:32.135214 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"79ddc5e5c3c4b0c18219fe6a8297ff2b1646b9621823f529972781f915d81c5a"} Jan 26 12:35:32 crc kubenswrapper[4881]: I0126 12:35:32.135277 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"a4f54285b7646c2fa5da34439d2812a5669f168356446852b9bff249107a7c4e"} Jan 26 12:35:32 crc kubenswrapper[4881]: I0126 12:35:32.135301 4881 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"c525c27678ba77145459086b6d3d2fee5d471c7393f34215e9ae46d9d72cf4e8"} Jan 26 12:35:32 crc kubenswrapper[4881]: I0126 12:35:32.135282 4881 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 26 12:35:32 crc kubenswrapper[4881]: I0126 12:35:32.135386 4881 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 12:35:32 crc kubenswrapper[4881]: I0126 12:35:32.135399 4881 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 12:35:32 crc kubenswrapper[4881]: I0126 12:35:32.137020 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:32 crc kubenswrapper[4881]: I0126 12:35:32.137052 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:32 crc kubenswrapper[4881]: I0126 12:35:32.137076 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:32 crc kubenswrapper[4881]: I0126 12:35:32.137088 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:32 crc kubenswrapper[4881]: I0126 12:35:32.137109 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:32 crc kubenswrapper[4881]: I0126 12:35:32.137094 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:33 crc kubenswrapper[4881]: I0126 12:35:33.014230 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 16:37:26.560092493 +0000 UTC Jan 26 12:35:33 crc kubenswrapper[4881]: I0126 12:35:33.146131 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"1f7c8799e63e6072f9dc0580f3efbfe330a9f705a689b2785c60f64391881b6f"} Jan 26 12:35:33 crc kubenswrapper[4881]: I0126 12:35:33.146195 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"533f45172a004ad2bd598849b711b38e823485188ff4163dd0dbc5ae5a28cb24"} Jan 26 12:35:33 crc kubenswrapper[4881]: I0126 12:35:33.146357 4881 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 12:35:33 crc kubenswrapper[4881]: I0126 12:35:33.147587 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:33 crc kubenswrapper[4881]: I0126 12:35:33.147635 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:33 crc kubenswrapper[4881]: I0126 12:35:33.147652 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:33 crc kubenswrapper[4881]: I0126 12:35:33.405204 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 12:35:33 crc kubenswrapper[4881]: I0126 12:35:33.405401 4881 prober_manager.go:312] "Failed to trigger a 
manual run" probe="Readiness" Jan 26 12:35:33 crc kubenswrapper[4881]: I0126 12:35:33.405456 4881 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 12:35:33 crc kubenswrapper[4881]: I0126 12:35:33.407206 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:33 crc kubenswrapper[4881]: I0126 12:35:33.407266 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:33 crc kubenswrapper[4881]: I0126 12:35:33.407290 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:33 crc kubenswrapper[4881]: I0126 12:35:33.604757 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 12:35:33 crc kubenswrapper[4881]: I0126 12:35:33.604944 4881 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 12:35:33 crc kubenswrapper[4881]: I0126 12:35:33.606716 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:33 crc kubenswrapper[4881]: I0126 12:35:33.606748 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:33 crc kubenswrapper[4881]: I0126 12:35:33.606758 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:33 crc kubenswrapper[4881]: I0126 12:35:33.610674 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 12:35:34 crc kubenswrapper[4881]: I0126 12:35:34.015461 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 18:33:11.192515617 +0000 UTC Jan 26 12:35:34 crc kubenswrapper[4881]: I0126 12:35:34.148337 4881 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 12:35:34 crc kubenswrapper[4881]: I0126 12:35:34.148485 4881 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 12:35:34 crc kubenswrapper[4881]: I0126 12:35:34.149887 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:34 crc kubenswrapper[4881]: I0126 12:35:34.149932 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:34 crc kubenswrapper[4881]: I0126 12:35:34.149949 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:34 crc kubenswrapper[4881]: I0126 12:35:34.150196 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:34 crc kubenswrapper[4881]: I0126 12:35:34.150244 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:34 crc kubenswrapper[4881]: I0126 12:35:34.150261 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:34 crc kubenswrapper[4881]: I0126 12:35:34.216071 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 12:35:34 crc kubenswrapper[4881]: I0126 12:35:34.559339 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 12:35:34 crc kubenswrapper[4881]: I0126 12:35:34.559646 4881 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 12:35:34 crc kubenswrapper[4881]: I0126 12:35:34.561000 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:34 crc kubenswrapper[4881]: I0126 12:35:34.561047 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:34 crc kubenswrapper[4881]: I0126 12:35:34.561063 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:35 crc kubenswrapper[4881]: I0126 12:35:35.016276 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 10:58:48.091022036 +0000 UTC Jan 26 12:35:35 crc kubenswrapper[4881]: I0126 12:35:35.151126 4881 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 12:35:35 crc kubenswrapper[4881]: I0126 12:35:35.151997 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:35 crc kubenswrapper[4881]: I0126 12:35:35.152045 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:35 crc kubenswrapper[4881]: I0126 12:35:35.152059 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:35 crc kubenswrapper[4881]: I0126 12:35:35.545962 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 12:35:35 crc kubenswrapper[4881]: I0126 12:35:35.546227 4881 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 12:35:35 crc kubenswrapper[4881]: I0126 12:35:35.547758 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:35 crc kubenswrapper[4881]: I0126 12:35:35.547811 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:35 crc kubenswrapper[4881]: I0126 12:35:35.547829 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:35 crc kubenswrapper[4881]: I0126 12:35:35.979051 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 12:35:36 crc kubenswrapper[4881]: I0126 12:35:36.016365 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 15:27:25.779285442 +0000 UTC Jan 26 12:35:36 crc kubenswrapper[4881]: I0126 12:35:36.153795 4881 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 12:35:36 crc kubenswrapper[4881]: I0126 12:35:36.155204 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:36 crc kubenswrapper[4881]: 
I0126 12:35:36.155262 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:36 crc kubenswrapper[4881]: I0126 12:35:36.155283 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:37 crc kubenswrapper[4881]: I0126 12:35:37.017156 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 18:41:16.947776024 +0000 UTC Jan 26 12:35:37 crc kubenswrapper[4881]: I0126 12:35:37.354735 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Jan 26 12:35:37 crc kubenswrapper[4881]: I0126 12:35:37.355012 4881 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 12:35:37 crc kubenswrapper[4881]: I0126 12:35:37.356216 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:37 crc kubenswrapper[4881]: I0126 12:35:37.356282 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:37 crc kubenswrapper[4881]: I0126 12:35:37.356308 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:38 crc kubenswrapper[4881]: I0126 12:35:38.018057 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 21:05:39.164914531 +0000 UTC Jan 26 12:35:38 crc kubenswrapper[4881]: E0126 12:35:38.189851 4881 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 26 12:35:38 crc kubenswrapper[4881]: I0126 12:35:38.979762 4881 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 26 12:35:38 crc kubenswrapper[4881]: I0126 12:35:38.979864 4881 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 12:35:39 crc kubenswrapper[4881]: I0126 12:35:39.018565 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 18:34:12.803627353 +0000 UTC Jan 26 12:35:39 crc kubenswrapper[4881]: I0126 12:35:39.186379 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Jan 26 12:35:39 crc kubenswrapper[4881]: I0126 12:35:39.186649 4881 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 12:35:39 crc kubenswrapper[4881]: I0126 12:35:39.187910 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:39 crc kubenswrapper[4881]: I0126 12:35:39.187960 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 12:35:39 crc kubenswrapper[4881]: I0126 12:35:39.187974 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:40 crc kubenswrapper[4881]: I0126 12:35:40.011955 4881 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Jan 26 12:35:40 crc kubenswrapper[4881]: I0126 12:35:40.019026 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 00:09:36.316106754 +0000 UTC Jan 26 12:35:40 crc kubenswrapper[4881]: E0126 12:35:40.094127 4881 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 26 12:35:40 crc kubenswrapper[4881]: I0126 12:35:40.711064 4881 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 26 12:35:40 crc kubenswrapper[4881]: I0126 12:35:40.711142 4881 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 26 12:35:40 crc kubenswrapper[4881]: I0126 12:35:40.715801 4881 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 26 12:35:40 crc kubenswrapper[4881]: I0126 12:35:40.715878 4881 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 26 12:35:41 crc kubenswrapper[4881]: I0126 12:35:41.019435 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 02:29:12.658979734 +0000 UTC Jan 26 12:35:42 crc kubenswrapper[4881]: I0126 12:35:42.019980 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 06:31:53.069214615 +0000 UTC Jan 26 12:35:42 crc kubenswrapper[4881]: I0126 12:35:42.088731 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 12:35:42 crc kubenswrapper[4881]: I0126 12:35:42.088935 4881 kubelet_node_status.go:401] "Setting node annotation to enable volume controller 
attach/detach" Jan 26 12:35:42 crc kubenswrapper[4881]: I0126 12:35:42.090411 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:42 crc kubenswrapper[4881]: I0126 12:35:42.090459 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:42 crc kubenswrapper[4881]: I0126 12:35:42.090479 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:43 crc kubenswrapper[4881]: I0126 12:35:43.020318 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 06:58:14.146016711 +0000 UTC Jan 26 12:35:43 crc kubenswrapper[4881]: I0126 12:35:43.414556 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 12:35:43 crc kubenswrapper[4881]: I0126 12:35:43.414731 4881 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 12:35:43 crc kubenswrapper[4881]: I0126 12:35:43.416154 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:43 crc kubenswrapper[4881]: I0126 12:35:43.416212 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:43 crc kubenswrapper[4881]: I0126 12:35:43.416235 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:43 crc kubenswrapper[4881]: I0126 12:35:43.422355 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 12:35:44 crc kubenswrapper[4881]: I0126 12:35:44.020602 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 07:35:24.875741573 +0000 UTC Jan 26 12:35:44 crc kubenswrapper[4881]: I0126 12:35:44.164771 4881 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 26 12:35:44 crc kubenswrapper[4881]: I0126 12:35:44.177795 4881 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 12:35:44 crc kubenswrapper[4881]: I0126 12:35:44.178513 4881 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 26 12:35:44 crc kubenswrapper[4881]: I0126 12:35:44.178956 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:44 crc kubenswrapper[4881]: I0126 12:35:44.179011 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:44 crc kubenswrapper[4881]: I0126 12:35:44.179036 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:45 crc kubenswrapper[4881]: I0126 12:35:45.020799 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 17:13:13.480083183 +0000 UTC Jan 26 12:35:45 crc kubenswrapper[4881]: E0126 12:35:45.698205 4881 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="3.2s" Jan 26 12:35:45 crc kubenswrapper[4881]: I0126 12:35:45.701784 4881 trace.go:236] Trace[751058540]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (26-Jan-2026 12:35:32.485) (total time: 13216ms): Jan 26 12:35:45 crc kubenswrapper[4881]: Trace[751058540]: ---"Objects listed" error: 13216ms (12:35:45.701) Jan 26 12:35:45 crc kubenswrapper[4881]: Trace[751058540]: [13.216255268s] [13.216255268s] END Jan 26 12:35:45 crc kubenswrapper[4881]: I0126 12:35:45.701835 4881 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 26 12:35:45 crc kubenswrapper[4881]: I0126 12:35:45.702234 4881 trace.go:236] Trace[253634846]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (26-Jan-2026 12:35:31.125) (total time: 14576ms): Jan 26 12:35:45 crc kubenswrapper[4881]: Trace[253634846]: ---"Objects listed" error: 14576ms (12:35:45.702) Jan 26 12:35:45 crc kubenswrapper[4881]: Trace[253634846]: [14.576530864s] [14.576530864s] END Jan 26 12:35:45 crc kubenswrapper[4881]: I0126 12:35:45.702271 4881 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 26 12:35:45 crc kubenswrapper[4881]: I0126 12:35:45.702318 4881 trace.go:236] Trace[572420294]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (26-Jan-2026 12:35:32.044) (total time: 13657ms): Jan 26 12:35:45 crc kubenswrapper[4881]: Trace[572420294]: ---"Objects listed" error: 13657ms (12:35:45.702) Jan 26 12:35:45 crc kubenswrapper[4881]: Trace[572420294]: [13.657588367s] [13.657588367s] END Jan 26 12:35:45 crc kubenswrapper[4881]: I0126 12:35:45.702348 4881 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 26 12:35:45 crc kubenswrapper[4881]: I0126 12:35:45.703202 4881 trace.go:236] Trace[1517705822]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (26-Jan-2026 12:35:31.273) (total time: 14429ms): Jan 26 12:35:45 crc kubenswrapper[4881]: Trace[1517705822]: ---"Objects listed" error: 14429ms (12:35:45.702) Jan 26 12:35:45 crc kubenswrapper[4881]: Trace[1517705822]: [14.429353962s] [14.429353962s] END Jan 26 12:35:45 crc kubenswrapper[4881]: I0126 12:35:45.703245 4881 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 26 12:35:45 crc kubenswrapper[4881]: I0126 12:35:45.703383 4881 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Jan 26 12:35:45 crc kubenswrapper[4881]: E0126 12:35:45.705790 4881 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Jan 26 12:35:45 crc kubenswrapper[4881]: I0126 12:35:45.986409 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 12:35:45 crc kubenswrapper[4881]: I0126 12:35:45.996226 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.013963 4881 apiserver.go:52] "Watching apiserver" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.016400 4881 reflector.go:368] Caches populated for *v1.Pod 
from pkg/kubelet/config/apiserver.go:66 Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.016900 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h"] Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.017983 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.018323 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:35:46 crc kubenswrapper[4881]: E0126 12:35:46.018423 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.018547 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 12:35:46 crc kubenswrapper[4881]: E0126 12:35:46.018590 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.018639 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 12:35:46 crc kubenswrapper[4881]: E0126 12:35:46.018669 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.018724 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.019252 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.021106 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 05:50:28.748279877 +0000 UTC Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.023722 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.023912 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.024121 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.024124 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.024270 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.024457 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.024788 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.024802 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.024892 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.049763 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.062989 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.075649 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee20042b-95d1-49f2-abea-e339efdb096f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce7427a811572a8f3810d59e072888d97a8f15e2c493297a25418b85e1f9f16d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86d50d955d4fa8b487f6900f78738321e21226e13b7b12c100f0dd029de2fde1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-po
d-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8138683a9408b1a7542f9f29a3fd0b83a06f10b000c685d9d83b72b23ba4c55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94d3f2c93382cde3d932117276753f004c58e6e52bce611119b79d9e76eb7654\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.090965 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.103809 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.111610 4881 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.118695 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee20042b-95d1-49f2-abea-e339efdb096f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce7427a811572a8f3810d59e072888d97a8f15e2c493297a25418b85e1f9f16d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86d50d955d4fa8b487f6900f78738321e21226e13b7b12c100f0dd029de2fde1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8138683a9408b1a7542f9f29a3fd0b83a06f10b000c685d9d83b72b23ba4c55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94d3f2c93382cde3d932117276753f004c58e6e52bce611119b79d9e76eb7654\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.130702 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.142219 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.153904 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.189608 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 12:35:46 crc kubenswrapper[4881]: E0126 12:35:46.201496 4881 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.205788 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.205843 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.205873 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.205899 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.205921 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.205944 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.205966 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.205987 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.206011 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.206036 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.206058 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.206082 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.206103 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.206129 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.206151 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.206173 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.206412 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.206503 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.206546 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.206687 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.206770 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.206818 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.206873 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.206927 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.206960 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.206991 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.207021 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.207053 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.207086 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.207131 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.207168 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.207199 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: 
\"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.207233 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.207269 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.207304 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.207340 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.207375 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.207408 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.207442 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.207474 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.207546 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.207585 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.207626 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.207657 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.207731 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.207775 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.207808 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.207844 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.207879 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.207911 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.206885 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). 
InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.207946 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.206997 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.207039 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.207205 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.207226 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.207981 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.207967 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.208052 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.208069 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.208066 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.208093 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.207499 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.207682 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.208126 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.208139 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.208268 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.208312 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.208344 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.208369 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.208393 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.208418 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.208460 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.208485 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.208506 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: 
\"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.208544 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.208569 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.208592 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.208616 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.208637 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.208656 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.208679 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.208702 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.208727 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.208748 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.208770 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.208795 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.208818 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.207675 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.209039 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.207711 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.207855 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.207930 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.207911 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.207380 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.208225 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.208329 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.208363 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.208379 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.208394 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.208533 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). 
InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.208591 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.208648 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.208760 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.208816 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: E0126 12:35:46.208843 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:35:46.7088212 +0000 UTC m=+19.188131226 (durationBeforeRetry 500ms). 
Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.209371 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.209419 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.209454 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") "
Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.209492 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.209552 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.209580 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.209587 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") "
Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.209622 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.209661 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.209698 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.209730 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.209764 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.209799 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.209833 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.209868 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.209902 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.209935 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.210004 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.210038 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.210071 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.210106 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.210139 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.210176 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.210208 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.210236 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.210264 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.210294 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.210326 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.210359 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") "
Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.210395 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.210429 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.210460 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.210492 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.210547 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.210567 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.210584 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.210622 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.210654 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.210686 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.210715 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.210749 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.210793 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.210828 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.210861 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.210904 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
\"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.210935 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.210967 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.210998 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.211028 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.211061 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.211092 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.211121 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.211154 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.211188 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.211218 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.211251 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.211287 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.211321 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.211355 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.211388 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.211422 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.211457 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.211492 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.211557 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.211591 4881 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.211623 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.211657 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.211680 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.211704 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.211728 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.211751 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.211775 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.211800 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.211824 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.211849 4881 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.211873 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.211898 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.211922 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.211947 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.211972 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.211997 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.212054 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.212078 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.212104 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: 
\"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.212128 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.212154 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.212178 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.212233 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.212260 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.212285 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.212310 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.212336 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.212361 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.212388 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod 
\"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.212419 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.212446 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.212471 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.212495 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.210579 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.225186 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.225284 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.210935 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.209022 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.209039 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.209124 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.225444 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.225485 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.209276 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.209280 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.211539 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.225697 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.213144 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.208836 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.213560 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.213747 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.213845 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.214117 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.214134 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.214169 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.226885 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.227498 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.227544 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.228682 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.228758 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.214442 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.214603 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.214759 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.214790 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.214972 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.215212 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.215233 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.215246 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.215429 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.215459 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.215894 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.215979 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.216231 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.216264 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.216577 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.216820 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.216826 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.217482 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.217856 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.218035 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.218147 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.218260 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.218463 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.218643 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.218778 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.219036 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.219453 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.219795 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.220119 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.220196 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.220338 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.220389 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.220662 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.220758 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.221388 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.229799 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.232457 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.224112 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.232494 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.232587 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.232633 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.232891 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.232662 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.233068 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.233093 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.233127 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.233154 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.224431 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.224441 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.224470 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.224536 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.224756 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.224713 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.224842 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.225785 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.226057 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.226335 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.226551 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.226630 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.226708 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.228864 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.229272 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.229406 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.214302 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.229820 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.230601 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.231085 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.231531 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.232173 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.232240 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.232338 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.232415 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.233181 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.233617 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.233659 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.233693 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.233726 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.233764 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.235687 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.237163 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.237414 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.238909 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.238973 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.239011 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.239048 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.239087 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.239170 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.239207 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.239242 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.239274 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.239307 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.239342 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.239379 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.239410 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.239443 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.239547 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.239599 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.239637 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.239637 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.239669 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.239722 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.239764 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.239799 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.239859 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.239893 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.239925 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.239962 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.240022 4881 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.240056 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.240090 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.240250 4881 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.240271 4881 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.240292 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.240303 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.240310 4881 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.240360 4881 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.240378 4881 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.240388 4881 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: E0126 12:35:46.240394 4881 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.240401 4881 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.240422 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.240436 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.240434 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.240448 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: E0126 12:35:46.240475 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 12:35:46.740452471 +0000 UTC m=+19.219762497 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.240666 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.240719 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.240897 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.241106 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.241483 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.241703 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.242097 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.242210 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.242001 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: E0126 12:35:46.242700 4881 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.243032 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.243493 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.243358 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.243568 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.243597 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.243645 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.244113 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.244120 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.244675 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.244688 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.244976 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.245040 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.245351 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.245446 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.244137 4881 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.246210 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.246590 4881 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.246704 4881 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.246796 4881 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.246879 4881 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.246964 4881 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.247057 4881 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.247154 4881 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.247242 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.247329 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.247415 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.247581 4881 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.247677 4881 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.247771 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node 
\"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: E0126 12:35:46.247866 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 12:35:46.747838139 +0000 UTC m=+19.227148165 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.247960 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.248064 4881 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.248130 4881 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.248195 4881 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.248260 4881 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.248322 4881 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.248386 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.248447 4881 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.248530 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.248605 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 
12:35:46.248665 4881 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.248730 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.248795 4881 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.248853 4881 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.248915 4881 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.248974 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.249032 4881 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.249084 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.249142 4881 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.249228 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.249286 4881 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.249337 4881 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.249401 4881 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.249457 4881 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.249534 4881 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.249598 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.249658 4881 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.249713 4881 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.249767 4881 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.249819 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.249870 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.249928 4881 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.249981 4881 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.250036 4881 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.250091 4881 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.250147 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.250205 4881 
reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.250260 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.250312 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.250367 4881 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.250438 4881 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.250507 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.250637 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.250707 4881 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.250763 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.250820 4881 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.250872 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.250923 4881 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.250981 4881 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.251037 4881 reconciler_common.go:293] 
"Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.251092 4881 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.251143 4881 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.251197 4881 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.251251 4881 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.251308 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.251364 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.251421 4881 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.251472 4881 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.251548 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.251604 4881 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.251667 4881 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.251726 4881 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.251783 4881 reconciler_common.go:293] "Volume detached for 
volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.251839 4881 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.251895 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.251971 4881 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.252071 4881 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.252157 4881 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.252234 4881 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.252314 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.252396 4881 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.252476 4881 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.252574 4881 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.252650 4881 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.252725 4881 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.252801 4881 
reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.252875 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.252947 4881 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.253016 4881 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.259098 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.262390 4881 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.262432 4881 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.262446 4881 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.262457 4881 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.262473 4881 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.262485 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.262498 4881 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.262510 4881 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.262567 4881 
reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.262577 4881 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.262587 4881 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.262598 4881 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.262608 4881 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.262618 4881 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.262628 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.262649 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.262664 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.262677 4881 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.262690 4881 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.262702 4881 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.262715 4881 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc 
kubenswrapper[4881]: I0126 12:35:46.262727 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.262743 4881 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.262756 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.262768 4881 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.262781 4881 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.261970 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.258655 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.260225 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.251758 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: E0126 12:35:46.257105 4881 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 12:35:46 crc kubenswrapper[4881]: E0126 12:35:46.263044 4881 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 12:35:46 crc kubenswrapper[4881]: E0126 12:35:46.263062 4881 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 12:35:46 crc kubenswrapper[4881]: E0126 12:35:46.263141 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 12:35:46.763115577 +0000 UTC m=+19.242425603 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.261339 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 12:35:46 crc kubenswrapper[4881]: E0126 12:35:46.260912 4881 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 12:35:46 crc kubenswrapper[4881]: E0126 12:35:46.263487 4881 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 12:35:46 crc kubenswrapper[4881]: E0126 12:35:46.263497 4881 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 12:35:46 crc kubenswrapper[4881]: E0126 12:35:46.263579 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 12:35:46.763570578 +0000 UTC m=+19.242880604 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.264234 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.266038 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.266896 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.267360 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.267759 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.268113 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.271832 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.272430 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.272804 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.275940 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.276104 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.276315 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.281343 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.281461 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.281575 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.281647 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.282304 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.282349 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.282881 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.282897 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.282968 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.283059 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.283294 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.283318 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.283614 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.283783 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.283958 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.283978 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.284115 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). 
InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.285691 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.285776 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.285795 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.287043 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.287346 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.287367 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.287623 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.287867 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.288040 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.295640 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.298670 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.308747 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.344823 4881 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:39748->192.168.126.11:17697: read: connection reset by peer" start-of-body= Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.344889 4881 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:39764->192.168.126.11:17697: read: connection reset by peer" start-of-body= Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.345165 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:39764->192.168.126.11:17697: read: connection reset by peer" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.345113 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:39748->192.168.126.11:17697: read: connection reset by peer" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.345656 4881 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.345706 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.346008 4881 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.346035 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.363717 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.363768 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.363825 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.363836 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.363846 4881 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.363855 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.363864 4881 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.363873 4881 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.363881 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.363889 4881 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.363898 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.363907 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.363915 4881 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.363924 4881 reconciler_common.go:293] "Volume detached for volume 
\"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.363932 4881 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.363941 4881 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.363949 4881 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.363985 4881 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.363994 4881 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.364003 4881 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.363997 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.364050 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.364013 4881 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.364158 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.364170 4881 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.364181 4881 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.364192 4881 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.364202 4881 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.364213 4881 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.364223 4881 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.364242 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.364253 4881 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.364263 4881 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.364273 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.364285 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.364295 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.364307 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.364316 4881 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.364329 4881 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.364339 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.364348 4881 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.364356 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.364366 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.364376 4881 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.364386 4881 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.364396 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.364407 4881 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.364416 4881 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.364426 4881 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.364434 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.364443 4881 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.364452 4881 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: 
\"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.364470 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.364479 4881 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.364489 4881 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.364499 4881 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.364509 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.364533 4881 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.364543 4881 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.364553 4881 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.364562 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.364573 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.364582 4881 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.364591 4881 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.364608 4881 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.364618 4881 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.364627 4881 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.364637 4881 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.371677 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 12:35:46 crc kubenswrapper[4881]: W0126 12:35:46.389740 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-4492f3f3f421c421b9891771ec0729c306cd97d7fdcef1b431931078075468fe WatchSource:0}: Error finding container 4492f3f3f421c421b9891771ec0729c306cd97d7fdcef1b431931078075468fe: Status 404 returned error can't find the container with id 4492f3f3f421c421b9891771ec0729c306cd97d7fdcef1b431931078075468fe Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.648510 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 12:35:46 crc kubenswrapper[4881]: W0126 12:35:46.662412 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-89e4fb93d0318ba484af6a6e11c48f776d2d12520bde70dbe448eb399f8861ab WatchSource:0}: Error finding container 89e4fb93d0318ba484af6a6e11c48f776d2d12520bde70dbe448eb399f8861ab: Status 404 returned error can't find the container with id 89e4fb93d0318ba484af6a6e11c48f776d2d12520bde70dbe448eb399f8861ab Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.663434 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 12:35:46 crc kubenswrapper[4881]: W0126 12:35:46.682223 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-1109ec527a1bf15b6fa48ea7564d523f60c55d6c0396fec4dbd26009f6fd5e75 WatchSource:0}: Error finding container 1109ec527a1bf15b6fa48ea7564d523f60c55d6c0396fec4dbd26009f6fd5e75: Status 404 returned error can't find the container with id 1109ec527a1bf15b6fa48ea7564d523f60c55d6c0396fec4dbd26009f6fd5e75 Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.767249 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.767326 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.767350 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.767367 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 12:35:46 crc kubenswrapper[4881]: I0126 12:35:46.767388 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:35:46 crc kubenswrapper[4881]: E0126 12:35:46.767461 4881 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 12:35:46 crc kubenswrapper[4881]: E0126 12:35:46.767509 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 12:35:47.767494736 +0000 UTC m=+20.246804762 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 12:35:46 crc kubenswrapper[4881]: E0126 12:35:46.767620 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:35:47.767581938 +0000 UTC m=+20.246891984 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:35:46 crc kubenswrapper[4881]: E0126 12:35:46.767678 4881 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 12:35:46 crc kubenswrapper[4881]: E0126 12:35:46.767711 4881 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 12:35:46 crc kubenswrapper[4881]: E0126 12:35:46.767878 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 12:35:47.767820963 +0000 UTC m=+20.247131029 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 12:35:46 crc kubenswrapper[4881]: E0126 12:35:46.767726 4881 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 12:35:46 crc kubenswrapper[4881]: E0126 12:35:46.767931 4881 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 12:35:46 crc kubenswrapper[4881]: E0126 12:35:46.768050 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 12:35:47.768026268 +0000 UTC m=+20.247336334 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 12:35:46 crc kubenswrapper[4881]: E0126 12:35:46.767730 4881 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 12:35:46 crc kubenswrapper[4881]: E0126 12:35:46.768105 4881 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 12:35:46 crc kubenswrapper[4881]: E0126 12:35:46.768138 4881 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 12:35:46 crc kubenswrapper[4881]: E0126 12:35:46.768244 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 12:35:47.768216043 +0000 UTC m=+20.247526149 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 12:35:47 crc kubenswrapper[4881]: I0126 12:35:47.021871 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 11:05:17.849280406 +0000 UTC Jan 26 12:35:47 crc kubenswrapper[4881]: I0126 12:35:47.186416 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"1109ec527a1bf15b6fa48ea7564d523f60c55d6c0396fec4dbd26009f6fd5e75"} Jan 26 12:35:47 crc kubenswrapper[4881]: I0126 12:35:47.187497 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"89e4fb93d0318ba484af6a6e11c48f776d2d12520bde70dbe448eb399f8861ab"} Jan 26 12:35:47 crc kubenswrapper[4881]: I0126 12:35:47.189747 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"79282300691fcea19d9fe7ad9c661474c26ba5e445c1a4f009961641bf59fb7e"} Jan 26 12:35:47 crc kubenswrapper[4881]: I0126 12:35:47.189799 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"1b2921f0cb7ad2096499d1c6c0e5f8f37da3620c8468c9fab08b57b4545e5dc1"} Jan 26 12:35:47 crc kubenswrapper[4881]: I0126 12:35:47.189812 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"4492f3f3f421c421b9891771ec0729c306cd97d7fdcef1b431931078075468fe"} Jan 26 12:35:47 crc kubenswrapper[4881]: I0126 12:35:47.191913 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 26 12:35:47 crc kubenswrapper[4881]: I0126 12:35:47.194086 4881 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="a1728a0431ccafdf8aa83b6956348a8a6a41dd94e80f65cb469ef09a86078154" exitCode=255 Jan 26 12:35:47 crc kubenswrapper[4881]: I0126 12:35:47.194133 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"a1728a0431ccafdf8aa83b6956348a8a6a41dd94e80f65cb469ef09a86078154"} Jan 26 12:35:47 crc kubenswrapper[4881]: I0126 12:35:47.202707 4881 scope.go:117] "RemoveContainer" containerID="a1728a0431ccafdf8aa83b6956348a8a6a41dd94e80f65cb469ef09a86078154" Jan 26 12:35:47 crc kubenswrapper[4881]: I0126 12:35:47.203073 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 26 12:35:47 crc kubenswrapper[4881]: I0126 12:35:47.205889 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 12:35:47 crc kubenswrapper[4881]: I0126 12:35:47.217186 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 12:35:47 crc kubenswrapper[4881]: I0126 12:35:47.228426 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 12:35:47 crc kubenswrapper[4881]: I0126 12:35:47.240221 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79282300691fcea19d9fe7ad9c661474c26ba5e445c1a4f009961641bf59fb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b2921f0cb7ad2096499d1c6c0e5f8f37da3620c8468c9fab08b57b4545e5dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{
\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 12:35:47 crc kubenswrapper[4881]: I0126 12:35:47.251286 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee20042b-95d1-49f2-abea-e339efdb096f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce7427a811572a8f3810d59e072888d97a8f15e2c493297a25418b85e1f9f16d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86d50d955d4fa8b487f6900f78738321e21226e13b7b12c100f0dd029de2fde1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8138683a9408b1a7542f9f29a3fd0b83a06f10b000c685d9d83b72b23ba4c55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-man
ager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94d3f2c93382cde3d932117276753f004c58e6e52bce611119b79d9e76eb7654\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 12:35:47 crc kubenswrapper[4881]: I0126 12:35:47.262448 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 12:35:47 crc kubenswrapper[4881]: I0126 12:35:47.279337 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 12:35:47 crc kubenswrapper[4881]: I0126 12:35:47.293411 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 12:35:47 crc kubenswrapper[4881]: I0126 12:35:47.303574 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79282300691fcea19d9fe7ad9c661474c26ba5e445c1a4f009961641bf59fb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b2921f0cb7ad2096499d1c6c0e5f8f37da3620c8468c9fab08b57b4545e5dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"moun
tPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 12:35:47 crc kubenswrapper[4881]: I0126 12:35:47.316209 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee20042b-95d1-49f2-abea-e339efdb096f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce7427a811572a8f3810d59e072888d97a8f15e2c493297a25418b85e1f9f16d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86d50d955d4fa8b487f6900f78738321e21226e13b7b12c100f0dd029de2fde1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8138683a9408b1a7542f9f29a3fd0b83a06f10b000c685d9d83b72b23ba4c55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager
-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94d3f2c93382cde3d932117276753f004c58e6e52bce611119b79d9e76eb7654\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 12:35:47 crc kubenswrapper[4881]: I0126 12:35:47.327220 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9aa1877-c239-4157-938d-e5c85ff3e76a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e461100b4aeab2a6e88c5c3aeb0802664d0d647247bc73b107be7c6bc4cfc9cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cbb88ac26b9c279bef436acf1a1d1e431e752f2a1ee765a7541082fa8f6c99e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff94d70ea9ad07ebc50c17d2a14b318e9b553bf1ded1e15429e9b318903d4d66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1728a0431ccafdf8aa83b6956348a8a6a41dd94e80f65cb469ef09a86078154\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1728a0431ccafdf8aa83b6956348a8a6a41dd94e80f65cb469ef09a86078154\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26
T12:35:46Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 12:35:40.710806 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 12:35:40.716794 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1303051152/tls.crt::/tmp/serving-cert-1303051152/tls.key\\\\\\\"\\\\nI0126 12:35:46.328409 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 12:35:46.330270 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 12:35:46.330287 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 12:35:46.330308 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 12:35:46.330314 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 12:35:46.335670 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 12:35:46.335689 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 12:35:46.335693 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 12:35:46.335697 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 12:35:46.335700 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 12:35:46.335702 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 12:35:46.335705 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 12:35:46.335779 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 12:35:46.339578 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3de2d8eff73741321cce91c7bfdcab04498cfdb4694e713750466bb2d8b95e5d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f09815b7bc1e9f3afdee67bb736d35ea1c23567f631bc5fd4fcab013ab191647\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f09815b7bc1e9f3afdee67bb736d35ea1c23567f631bc5fd4fcab013ab191647\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 12:35:47 crc kubenswrapper[4881]: I0126 12:35:47.337826 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 12:35:47 crc kubenswrapper[4881]: I0126 12:35:47.349139 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 12:35:47 crc kubenswrapper[4881]: I0126 12:35:47.359068 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 12:35:47 crc kubenswrapper[4881]: I0126 12:35:47.371108 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 12:35:47 crc kubenswrapper[4881]: I0126 12:35:47.715289 4881 csr.go:261] certificate signing request csr-4zk98 is approved, waiting to be issued Jan 26 12:35:47 crc kubenswrapper[4881]: I0126 12:35:47.731044 4881 csr.go:257] certificate signing request csr-4zk98 is issued Jan 26 12:35:47 crc kubenswrapper[4881]: I0126 12:35:47.762577 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-f4b5v"] Jan 26 12:35:47 crc kubenswrapper[4881]: I0126 12:35:47.762867 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-f4b5v" Jan 26 12:35:47 crc kubenswrapper[4881]: I0126 12:35:47.765323 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 26 12:35:47 crc kubenswrapper[4881]: I0126 12:35:47.765433 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 26 12:35:47 crc kubenswrapper[4881]: I0126 12:35:47.769009 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 26 12:35:47 crc kubenswrapper[4881]: I0126 12:35:47.775247 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:35:47 crc kubenswrapper[4881]: I0126 12:35:47.775329 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 12:35:47 crc kubenswrapper[4881]: I0126 12:35:47.775366 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:35:47 crc kubenswrapper[4881]: I0126 12:35:47.775395 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod 
\"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 12:35:47 crc kubenswrapper[4881]: I0126 12:35:47.775425 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:35:47 crc kubenswrapper[4881]: E0126 12:35:47.775531 4881 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 12:35:47 crc kubenswrapper[4881]: E0126 12:35:47.775575 4881 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 12:35:47 crc kubenswrapper[4881]: E0126 12:35:47.775534 4881 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 12:35:47 crc kubenswrapper[4881]: E0126 12:35:47.775630 4881 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 12:35:47 crc kubenswrapper[4881]: E0126 12:35:47.775595 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 12:35:49.775575567 +0000 UTC m=+22.254885593 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 12:35:47 crc kubenswrapper[4881]: E0126 12:35:47.775644 4881 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 12:35:47 crc kubenswrapper[4881]: E0126 12:35:47.775653 4881 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 12:35:47 crc kubenswrapper[4881]: E0126 12:35:47.775672 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 12:35:49.775648178 +0000 UTC m=+22.254958244 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 12:35:47 crc kubenswrapper[4881]: E0126 12:35:47.775683 4881 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 12:35:47 crc kubenswrapper[4881]: E0126 12:35:47.775697 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 12:35:49.775685209 +0000 UTC m=+22.254995345 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 12:35:47 crc kubenswrapper[4881]: E0126 12:35:47.775697 4881 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 12:35:47 crc kubenswrapper[4881]: E0126 12:35:47.775778 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:35:49.77573364 +0000 UTC m=+22.255043746 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:35:47 crc kubenswrapper[4881]: E0126 12:35:47.775817 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 12:35:49.775807112 +0000 UTC m=+22.255117218 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 12:35:47 crc kubenswrapper[4881]: I0126 12:35:47.782332 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9aa1877-c239-4157-938d-e5c85ff3e76a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e461100b4aeab2a6e88c5c3aeb0802664d0d647247bc73b107be7c6bc4cfc9cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cbb88ac26b9c279bef436acf1a1d1e431e752f2a1ee765a7541082fa8f6c99e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff94d70ea9ad07ebc50c17d2a14b318e9b553bf1ded1e15429e9b318903d4d66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-cr
c-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1728a0431ccafdf8aa83b6956348a8a6a41dd94e80f65cb469ef09a86078154\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1728a0431ccafdf8aa83b6956348a8a6a41dd94e80f65cb469ef09a86078154\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 12:35:40.710806 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 12:35:40.716794 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1303051152/tls.crt::/tmp/serving-cert-1303051152/tls.key\\\\\\\"\\\\nI0126 12:35:46.328409 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 12:35:46.330270 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 12:35:46.330287 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 12:35:46.330308 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 12:35:46.330314 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 12:35:46.335670 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 12:35:46.335689 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 12:35:46.335693 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 12:35:46.335697 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 12:35:46.335700 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 12:35:46.335702 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 12:35:46.335705 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 12:35:46.335779 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 12:35:46.339578 1 cmd.go:182] pods 
\\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3de2d8eff73741321cce91c7bfdcab04498cfdb4694e713750466bb2d8b95e5d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f09815b7bc1e9f3afdee67bb736d35ea1c23567f631bc5fd4fcab013ab191647\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f09815b7bc1e9f3afdee67bb736d35ea1c23567f631bc5fd4fcab013ab191647\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:47Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:47 crc kubenswrapper[4881]: I0126 12:35:47.795274 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:47Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:47 crc kubenswrapper[4881]: I0126 12:35:47.816056 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:47Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:47 crc kubenswrapper[4881]: I0126 12:35:47.827876 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:47Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:47 crc kubenswrapper[4881]: I0126 12:35:47.843388 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:47Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:47 crc kubenswrapper[4881]: I0126 12:35:47.856603 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79282300691fcea19d9fe7ad9c661474c26ba5e445c1a4f009961641bf59fb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b2921f0cb7ad2096499d1c6c0e5f8f37da3620c8468c9fab08b57b4545e5dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:47Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:47 crc kubenswrapper[4881]: I0126 12:35:47.869400 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee20042b-95d1-49f2-abea-e339efdb096f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce7427a811572a8f3810d59e072888d97a8f15e2c493297a25418b85e1f9f16d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86d50d955d4fa8b487f6900f78738321e21226e13b7b12c100f0dd029de2fde1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8138683a9408b1a7542f9f29a3fd0b83a06f10b000c685d9d83b72b23ba4c55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc
-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94d3f2c93382cde3d932117276753f004c58e6e52bce611119b79d9e76eb7654\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:47Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:47 crc kubenswrapper[4881]: I0126 12:35:47.875943 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pc482\" (UniqueName: \"kubernetes.io/projected/017c2a17-2267-4284-a0ca-d3c513aa9ff9-kube-api-access-pc482\") pod \"node-resolver-f4b5v\" (UID: \"017c2a17-2267-4284-a0ca-d3c513aa9ff9\") " pod="openshift-dns/node-resolver-f4b5v" Jan 26 12:35:47 crc kubenswrapper[4881]: I0126 12:35:47.875995 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/017c2a17-2267-4284-a0ca-d3c513aa9ff9-hosts-file\") pod \"node-resolver-f4b5v\" (UID: \"017c2a17-2267-4284-a0ca-d3c513aa9ff9\") " pod="openshift-dns/node-resolver-f4b5v" Jan 26 12:35:47 crc kubenswrapper[4881]: I0126 12:35:47.883980 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f4b5v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"017c2a17-2267-4284-a0ca-d3c513aa9ff9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc482\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f4b5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:47Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:47 crc kubenswrapper[4881]: I0126 12:35:47.908212 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:47Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:47 crc kubenswrapper[4881]: I0126 12:35:47.912185 4881 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 26 12:35:47 crc kubenswrapper[4881]: W0126 12:35:47.912684 4881 reflector.go:484] object-"openshift-dns"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-dns"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 26 12:35:47 crc kubenswrapper[4881]: W0126 12:35:47.912748 4881 reflector.go:484] object-"openshift-dns"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-dns"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 26 12:35:47 crc kubenswrapper[4881]: W0126 12:35:47.912788 4881 reflector.go:484] object-"openshift-dns"/"node-resolver-dockercfg-kz9s7": watch of *v1.Secret ended with: very short watch: object-"openshift-dns"/"node-resolver-dockercfg-kz9s7": Unexpected watch close - watch lasted less than a second and no items received Jan 26 12:35:47 crc kubenswrapper[4881]: I0126 12:35:47.976376 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pc482\" (UniqueName: \"kubernetes.io/projected/017c2a17-2267-4284-a0ca-d3c513aa9ff9-kube-api-access-pc482\") pod \"node-resolver-f4b5v\" (UID: \"017c2a17-2267-4284-a0ca-d3c513aa9ff9\") " pod="openshift-dns/node-resolver-f4b5v" Jan 26 12:35:47 crc kubenswrapper[4881]: I0126 12:35:47.976409 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/017c2a17-2267-4284-a0ca-d3c513aa9ff9-hosts-file\") pod \"node-resolver-f4b5v\" (UID: \"017c2a17-2267-4284-a0ca-d3c513aa9ff9\") " pod="openshift-dns/node-resolver-f4b5v" Jan 26 12:35:47 crc kubenswrapper[4881]: I0126 12:35:47.976475 4881 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/017c2a17-2267-4284-a0ca-d3c513aa9ff9-hosts-file\") pod \"node-resolver-f4b5v\" (UID: \"017c2a17-2267-4284-a0ca-d3c513aa9ff9\") " pod="openshift-dns/node-resolver-f4b5v" Jan 26 12:35:47 crc kubenswrapper[4881]: I0126 12:35:47.997439 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pc482\" (UniqueName: \"kubernetes.io/projected/017c2a17-2267-4284-a0ca-d3c513aa9ff9-kube-api-access-pc482\") pod \"node-resolver-f4b5v\" (UID: \"017c2a17-2267-4284-a0ca-d3c513aa9ff9\") " pod="openshift-dns/node-resolver-f4b5v" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.022183 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 21:11:23.191810863 +0000 UTC Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.072606 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-f4b5v" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.081831 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 12:35:48 crc kubenswrapper[4881]: E0126 12:35:48.081988 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.082051 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:35:48 crc kubenswrapper[4881]: E0126 12:35:48.082193 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.082310 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 12:35:48 crc kubenswrapper[4881]: E0126 12:35:48.082389 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.085745 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 26 12:35:48 crc kubenswrapper[4881]: W0126 12:35:48.086013 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod017c2a17_2267_4284_a0ca_d3c513aa9ff9.slice/crio-79773ae810b742dd949758b89df2d12bec72092b79fae68a43f198a3d0a58da5 WatchSource:0}: Error finding container 79773ae810b742dd949758b89df2d12bec72092b79fae68a43f198a3d0a58da5: Status 404 returned error can't find the container with id 79773ae810b742dd949758b89df2d12bec72092b79fae68a43f198a3d0a58da5 Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.086739 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.088162 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.088936 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.090093 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.090676 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.091300 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.092386 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.093111 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.095720 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.096306 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.099966 4881 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:48Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.101056 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.102213 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.103081 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.103677 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.105054 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.106083 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.106860 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.108426 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.109845 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.110471 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.111917 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.112482 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.113900 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.114633 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.115842 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.116169 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9aa1877-c239-4157-938d-e5c85ff3e76a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e461100b4aeab2a6e88c5c3aeb0802664d0d647247bc73b107be7c6bc4cfc9cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cbb88ac26b9c279bef436acf1a1d1e431e752f2a1ee765a7541082fa8f6c99e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff94d70ea9ad07ebc50c17d2a14b318e9b553bf1ded1e15429e9b318903d4d66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1728a0431ccafdf8aa83b6956348a8a6a41dd94e80f65cb469ef09a86078154\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-che
ck-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1728a0431ccafdf8aa83b6956348a8a6a41dd94e80f65cb469ef09a86078154\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 12:35:40.710806 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 12:35:40.716794 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1303051152/tls.crt::/tmp/serving-cert-1303051152/tls.key\\\\\\\"\\\\nI0126 12:35:46.328409 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 12:35:46.330270 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 12:35:46.330287 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 12:35:46.330308 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 12:35:46.330314 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 12:35:46.335670 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 12:35:46.335689 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 12:35:46.335693 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 12:35:46.335697 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 12:35:46.335700 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 12:35:46.335702 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 12:35:46.335705 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 12:35:46.335779 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 12:35:46.339578 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3de2d8eff73741321cce91c7bfdcab04498cfdb4694e713750466bb2d8b95e5d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f09815b7bc1e9f3afdee67bb736d35ea1c23567f631bc5fd4fcab013ab191647\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f09815b7bc1e9f3afdee67bb736d35ea1c23567f631bc5fd4fcab013ab191647\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:48Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.117586 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.118873 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.120201 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.120825 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 
12:35:48.121920 4881 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.122067 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.124276 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.124951 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.126540 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.128228 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.129825 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.130620 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.132054 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.132610 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:48Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.134707 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.135987 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.137422 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.138424 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.139081 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.139991 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.140791 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.142147 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" 
path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.143242 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.144261 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.144834 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.145561 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.146640 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.147427 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.148963 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.152077 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:48Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.180830 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:48Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.199264 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-f4b5v" event={"ID":"017c2a17-2267-4284-a0ca-d3c513aa9ff9","Type":"ContainerStarted","Data":"79773ae810b742dd949758b89df2d12bec72092b79fae68a43f198a3d0a58da5"} Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.201026 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.204319 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"47b841bd998f47d2aa05f556a43ac6f14f630207aff7305cd0440f0b0642cc49"} Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.204539 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.205684 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"8116d483190921916638bd630fb718840fdc36eb9522aecb7ef0e9bfebe9bb7c"} Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.209903 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:48Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.231161 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79282300691fcea19d9fe7ad9c661474c26ba5e445c1a4f009961641bf59fb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b2921f0cb7ad2096499d1c6c0e5f8f37da3620c8468c9fab08b57b4545e5dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:48Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.247625 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee20042b-95d1-49f2-abea-e339efdb096f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce7427a811572a8f3810d59e072888d97a8f15e2c493297a25418b85e1f9f16d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86d50d955d4fa8b487f6900f78738321e21226e13b7b12c100f0dd029de2fde1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",
\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8138683a9408b1a7542f9f29a3fd0b83a06f10b000c685d9d83b72b23ba4c55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94d3f2c93382cde3d932117276753f004c58e6e52bce611119b79d9e76eb7654\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:48Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.251127 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-fwlbz"] Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.251495 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.252911 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.253094 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.253137 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.253238 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.253862 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.263754 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f4b5v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"017c2a17-2267-4284-a0ca-d3c513aa9ff9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc482\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f4b5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:48Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.284368 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:48Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.315846 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c6n6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c6n6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fwlbz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:48Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.336638 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:48Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.370861 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:48Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.380856 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19-proxy-tls\") pod \"machine-config-daemon-fwlbz\" (UID: \"ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19\") " pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.381086 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19-mcd-auth-proxy-config\") pod \"machine-config-daemon-fwlbz\" (UID: \"ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19\") " pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.381166 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19-rootfs\") pod \"machine-config-daemon-fwlbz\" (UID: \"ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19\") " pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.381229 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2c6n6\" (UniqueName: \"kubernetes.io/projected/ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19-kube-api-access-2c6n6\") pod \"machine-config-daemon-fwlbz\" (UID: \"ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19\") " pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.388652 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79282300691fcea19d9fe7ad9c661474c26ba5e445c1a4f009961641bf59fb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b2921f0cb7ad2096499d1c6c0e5f8f37da3620c8468c9fab08b57b4545e5dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:48Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.412848 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee20042b-95d1-49f2-abea-e339efdb096f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce7427a811572a8f3810d59e072888d97a8f15e2c493297a25418b85e1f9f16d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86d50d955d4fa8b487f6900f78738321e21226e13b7b12c100f0dd029de2fde1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8138683a9408b1a7542f9f29a3fd0b83a06f10b000c685d9d83b72b23ba4c55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94d3f2c93382cde3d932117276753f004c58e6e52bce611119b79d9e76eb7654\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:48Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.430486 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9aa1877-c239-4157-938d-e5c85ff3e76a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e461100b4aeab2a6e88c5c3aeb0802664d0d647247bc73b107be7c6bc4cfc9cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cbb88ac26b9c279bef436acf1a1d1e431e752f2a1ee765a7541082fa8f6c99e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff94d70ea9ad07ebc50c17d2a14b318e9b553bf1ded1e15429e9b318903d4d66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47b841bd998f47d2aa05f556a43ac6f14f630207aff7305cd0440f0b0642cc49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1728a0431ccafdf8aa83b6956348a8a6a41dd94e80f65cb469ef09a86078154\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 12:35:40.710806 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 12:35:40.716794 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1303051152/tls.crt::/tmp/serving-cert-1303051152/tls.key\\\\\\\"\\\\nI0126 12:35:46.328409 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 12:35:46.330270 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 12:35:46.330287 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 12:35:46.330308 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 12:35:46.330314 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 12:35:46.335670 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 12:35:46.335689 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 12:35:46.335693 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 12:35:46.335697 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 12:35:46.335700 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 12:35:46.335702 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 12:35:46.335705 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 12:35:46.335779 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 12:35:46.339578 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3de2d8eff73741321cce91c7bfdcab04498cfdb4694e713750466bb2d8b95e5d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f09815b7bc1e9f3afdee67bb736d35ea1c23567f631bc5fd4fcab013ab191647\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f09815b7bc1e9f3afdee67bb736d35ea1c23567f631bc5fd4fcab013ab191647\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:48Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.447869 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8116d483190921916638bd630fb718840fdc36eb9522aecb7ef0e9bfebe9bb7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:48Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.463785 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:48Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.482351 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19-proxy-tls\") pod \"machine-config-daemon-fwlbz\" (UID: \"ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19\") " pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.482424 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19-mcd-auth-proxy-config\") pod \"machine-config-daemon-fwlbz\" (UID: \"ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19\") " pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.482458 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19-rootfs\") pod \"machine-config-daemon-fwlbz\" (UID: \"ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19\") " pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.482482 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2c6n6\" (UniqueName: \"kubernetes.io/projected/ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19-kube-api-access-2c6n6\") pod \"machine-config-daemon-fwlbz\" (UID: \"ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19\") " pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.482806 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19-rootfs\") pod \"machine-config-daemon-fwlbz\" (UID: \"ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19\") " pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.483633 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19-mcd-auth-proxy-config\") pod \"machine-config-daemon-fwlbz\" (UID: \"ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19\") " pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" Jan 26 
12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.484735 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f4b5v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"017c2a17-2267-4284-a0ca-d3c513aa9ff9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc482\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f4b5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:48Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.486996 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19-proxy-tls\") pod \"machine-config-daemon-fwlbz\" (UID: \"ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19\") " pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.513782 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2c6n6\" (UniqueName: \"kubernetes.io/projected/ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19-kube-api-access-2c6n6\") pod \"machine-config-daemon-fwlbz\" (UID: \"ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19\") " pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.583859 4881 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.670063 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-csrkv"] Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.670380 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-csrkv" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.670682 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-pmwpn"] Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.671819 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-pmwpn" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.672689 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-kbjm9"] Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.673389 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.673570 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.681438 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 26 12:35:48 crc kubenswrapper[4881]: W0126 12:35:48.683041 4881 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovnkube-script-lib": failed to list *v1.ConfigMap: configmaps "ovnkube-script-lib" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Jan 26 12:35:48 crc kubenswrapper[4881]: E0126 12:35:48.683073 4881 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"ovnkube-script-lib\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 26 12:35:48 crc kubenswrapper[4881]: W0126 12:35:48.683615 4881 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl": failed to list *v1.Secret: secrets "ovn-kubernetes-node-dockercfg-pwtwl" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Jan 26 12:35:48 crc kubenswrapper[4881]: E0126 12:35:48.683657 4881 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-pwtwl\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"ovn-kubernetes-node-dockercfg-pwtwl\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 26 12:35:48 crc kubenswrapper[4881]: W0126 12:35:48.683793 4881 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovnkube-config": failed to list *v1.ConfigMap: configmaps "ovnkube-config" is forbidden: User "system:node:crc" cannot list 
resource "configmaps" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Jan 26 12:35:48 crc kubenswrapper[4881]: E0126 12:35:48.683815 4881 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"ovnkube-config\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 26 12:35:48 crc kubenswrapper[4881]: W0126 12:35:48.683851 4881 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert": failed to list *v1.Secret: secrets "ovn-node-metrics-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Jan 26 12:35:48 crc kubenswrapper[4881]: E0126 12:35:48.683862 4881 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"ovn-node-metrics-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.690099 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.690138 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.690219 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 26 12:35:48 crc kubenswrapper[4881]: W0126 12:35:48.690248 4881 reflector.go:561] object-"openshift-ovn-kubernetes"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Jan 26 12:35:48 crc kubenswrapper[4881]: E0126 12:35:48.690313 4881 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 26 12:35:48 crc kubenswrapper[4881]: W0126 12:35:48.690344 4881 reflector.go:561] object-"openshift-ovn-kubernetes"/"env-overrides": failed to list *v1.ConfigMap: configmaps "env-overrides" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Jan 26 12:35:48 crc kubenswrapper[4881]: E0126 12:35:48.690372 4881 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"env-overrides\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"env-overrides\" is forbidden: User 
\"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.690470 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 26 12:35:48 crc kubenswrapper[4881]: W0126 12:35:48.690622 4881 reflector.go:561] object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Jan 26 12:35:48 crc kubenswrapper[4881]: E0126 12:35:48.690840 4881 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.691047 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.718081 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c6n6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c6n6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fwlbz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:48Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.732580 4881 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-26 12:30:47 +0000 UTC, rotation deadline is 2026-11-04 13:47:21.98345463 +0000 UTC Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.732826 4881 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6769h11m33.250632326s for next certificate rotation Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.738336 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee20042b-95d1-49f2-abea-e339efdb096f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce7427a811572a8f3810d59e072888d97a8f15e2c493297a25418b85e1f9f16d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86d50d955d4fa8b487f6900f78738321e21226e13b7b12c100f0dd029de2fde1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8138683a9408b1a7542f9f29a3fd0b83a06f10b000c685d9d83b72b23ba4c55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94d3f2c93382cde3d932117276753f004c58e6e52bce611119b79d9e76eb7654\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:48Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.763668 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9aa1877-c239-4157-938d-e5c85ff3e76a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e461100b4aeab2a6e88c5c3aeb0802664d0d647247bc73b107be7c6bc4cfc9cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cbb88ac26b9c279bef436acf1a1d1e431e752f2a1ee765a7541082fa8f6c99e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff94d70ea9ad07ebc50c17d2a14b318e9b553bf1ded1e15429e9b318903d4d66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47b841bd998f47d2aa05f556a43ac6f14f630207aff7305cd0440f0b0642cc49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1728a0431ccafdf8aa83b6956348a8a6a41dd94e80f65cb469ef09a86078154\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 12:35:40.710806 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 12:35:40.716794 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1303051152/tls.crt::/tmp/serving-cert-1303051152/tls.key\\\\\\\"\\\\nI0126 12:35:46.328409 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 12:35:46.330270 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 12:35:46.330287 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 12:35:46.330308 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 12:35:46.330314 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 12:35:46.335670 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 12:35:46.335689 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 12:35:46.335693 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 12:35:46.335697 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 12:35:46.335700 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 12:35:46.335702 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 12:35:46.335705 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 12:35:46.335779 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 12:35:46.339578 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3de2d8eff73741321cce91c7bfdcab04498cfdb4694e713750466bb2d8b95e5d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f09815b7bc1e9f3afdee67bb736d35ea1c23567f631bc5fd4fcab013ab191647\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f09815b7bc1e9f3afdee67bb736d35ea1c23567f631bc5fd4fcab013ab191647\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:48Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.781887 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8116d483190921916638bd630fb718840fdc36eb9522aecb7ef0e9bfebe9bb7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:48Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.783963 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d24cc7d2-c2db-45ee-b405-fa56157f807c-system-cni-dir\") pod \"multus-csrkv\" (UID: \"d24cc7d2-c2db-45ee-b405-fa56157f807c\") " pod="openshift-multus/multus-csrkv" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.784286 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d272c950-9665-4b60-98a2-20c18d02d5a2-log-socket\") pod \"ovnkube-node-kbjm9\" (UID: \"d272c950-9665-4b60-98a2-20c18d02d5a2\") " pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.784473 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/bb5ecb63-1238-44dc-9c40-b5e5dd7d4847-tuning-conf-dir\") pod \"multus-additional-cni-plugins-pmwpn\" (UID: \"bb5ecb63-1238-44dc-9c40-b5e5dd7d4847\") " pod="openshift-multus/multus-additional-cni-plugins-pmwpn" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.784602 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: 
\"kubernetes.io/host-path/d24cc7d2-c2db-45ee-b405-fa56157f807c-host-run-k8s-cni-cncf-io\") pod \"multus-csrkv\" (UID: \"d24cc7d2-c2db-45ee-b405-fa56157f807c\") " pod="openshift-multus/multus-csrkv" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.784668 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d272c950-9665-4b60-98a2-20c18d02d5a2-systemd-units\") pod \"ovnkube-node-kbjm9\" (UID: \"d272c950-9665-4b60-98a2-20c18d02d5a2\") " pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.784747 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwf7z\" (UniqueName: \"kubernetes.io/projected/bb5ecb63-1238-44dc-9c40-b5e5dd7d4847-kube-api-access-wwf7z\") pod \"multus-additional-cni-plugins-pmwpn\" (UID: \"bb5ecb63-1238-44dc-9c40-b5e5dd7d4847\") " pod="openshift-multus/multus-additional-cni-plugins-pmwpn" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.784813 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d24cc7d2-c2db-45ee-b405-fa56157f807c-host-var-lib-kubelet\") pod \"multus-csrkv\" (UID: \"d24cc7d2-c2db-45ee-b405-fa56157f807c\") " pod="openshift-multus/multus-csrkv" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.784880 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d272c950-9665-4b60-98a2-20c18d02d5a2-run-openvswitch\") pod \"ovnkube-node-kbjm9\" (UID: \"d272c950-9665-4b60-98a2-20c18d02d5a2\") " pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.784945 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/bb5ecb63-1238-44dc-9c40-b5e5dd7d4847-cni-binary-copy\") pod \"multus-additional-cni-plugins-pmwpn\" (UID: \"bb5ecb63-1238-44dc-9c40-b5e5dd7d4847\") " pod="openshift-multus/multus-additional-cni-plugins-pmwpn" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.785008 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d24cc7d2-c2db-45ee-b405-fa56157f807c-host-run-netns\") pod \"multus-csrkv\" (UID: \"d24cc7d2-c2db-45ee-b405-fa56157f807c\") " pod="openshift-multus/multus-csrkv" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.785081 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d272c950-9665-4b60-98a2-20c18d02d5a2-run-systemd\") pod \"ovnkube-node-kbjm9\" (UID: \"d272c950-9665-4b60-98a2-20c18d02d5a2\") " pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.785152 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/bb5ecb63-1238-44dc-9c40-b5e5dd7d4847-cnibin\") pod \"multus-additional-cni-plugins-pmwpn\" (UID: \"bb5ecb63-1238-44dc-9c40-b5e5dd7d4847\") " pod="openshift-multus/multus-additional-cni-plugins-pmwpn" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.785223 4881 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/bb5ecb63-1238-44dc-9c40-b5e5dd7d4847-os-release\") pod \"multus-additional-cni-plugins-pmwpn\" (UID: \"bb5ecb63-1238-44dc-9c40-b5e5dd7d4847\") " pod="openshift-multus/multus-additional-cni-plugins-pmwpn" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.785285 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/d24cc7d2-c2db-45ee-b405-fa56157f807c-cni-binary-copy\") pod \"multus-csrkv\" (UID: \"d24cc7d2-c2db-45ee-b405-fa56157f807c\") " pod="openshift-multus/multus-csrkv" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.785350 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d24cc7d2-c2db-45ee-b405-fa56157f807c-host-var-lib-cni-bin\") pod \"multus-csrkv\" (UID: \"d24cc7d2-c2db-45ee-b405-fa56157f807c\") " pod="openshift-multus/multus-csrkv" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.785416 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/d24cc7d2-c2db-45ee-b405-fa56157f807c-hostroot\") pod \"multus-csrkv\" (UID: \"d24cc7d2-c2db-45ee-b405-fa56157f807c\") " pod="openshift-multus/multus-csrkv" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.785492 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/bb5ecb63-1238-44dc-9c40-b5e5dd7d4847-system-cni-dir\") pod \"multus-additional-cni-plugins-pmwpn\" (UID: \"bb5ecb63-1238-44dc-9c40-b5e5dd7d4847\") " pod="openshift-multus/multus-additional-cni-plugins-pmwpn" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.785627 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/bb5ecb63-1238-44dc-9c40-b5e5dd7d4847-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-pmwpn\" (UID: \"bb5ecb63-1238-44dc-9c40-b5e5dd7d4847\") " pod="openshift-multus/multus-additional-cni-plugins-pmwpn" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.785699 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d272c950-9665-4b60-98a2-20c18d02d5a2-host-slash\") pod \"ovnkube-node-kbjm9\" (UID: \"d272c950-9665-4b60-98a2-20c18d02d5a2\") " pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.785768 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/d24cc7d2-c2db-45ee-b405-fa56157f807c-multus-socket-dir-parent\") pod \"multus-csrkv\" (UID: \"d24cc7d2-c2db-45ee-b405-fa56157f807c\") " pod="openshift-multus/multus-csrkv" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.785837 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/d24cc7d2-c2db-45ee-b405-fa56157f807c-host-run-multus-certs\") pod \"multus-csrkv\" (UID: \"d24cc7d2-c2db-45ee-b405-fa56157f807c\") " 
pod="openshift-multus/multus-csrkv" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.785903 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d272c950-9665-4b60-98a2-20c18d02d5a2-host-run-netns\") pod \"ovnkube-node-kbjm9\" (UID: \"d272c950-9665-4b60-98a2-20c18d02d5a2\") " pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.785962 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d272c950-9665-4b60-98a2-20c18d02d5a2-host-cni-netd\") pod \"ovnkube-node-kbjm9\" (UID: \"d272c950-9665-4b60-98a2-20c18d02d5a2\") " pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.786031 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d272c950-9665-4b60-98a2-20c18d02d5a2-ovnkube-config\") pod \"ovnkube-node-kbjm9\" (UID: \"d272c950-9665-4b60-98a2-20c18d02d5a2\") " pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.786112 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d272c950-9665-4b60-98a2-20c18d02d5a2-etc-openvswitch\") pod \"ovnkube-node-kbjm9\" (UID: \"d272c950-9665-4b60-98a2-20c18d02d5a2\") " pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.786212 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d272c950-9665-4b60-98a2-20c18d02d5a2-node-log\") pod \"ovnkube-node-kbjm9\" (UID: \"d272c950-9665-4b60-98a2-20c18d02d5a2\") " pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.786275 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d272c950-9665-4b60-98a2-20c18d02d5a2-ovn-node-metrics-cert\") pod \"ovnkube-node-kbjm9\" (UID: \"d272c950-9665-4b60-98a2-20c18d02d5a2\") " pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.786346 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/d24cc7d2-c2db-45ee-b405-fa56157f807c-host-var-lib-cni-multus\") pod \"multus-csrkv\" (UID: \"d24cc7d2-c2db-45ee-b405-fa56157f807c\") " pod="openshift-multus/multus-csrkv" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.786428 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7bhq\" (UniqueName: \"kubernetes.io/projected/d24cc7d2-c2db-45ee-b405-fa56157f807c-kube-api-access-x7bhq\") pod \"multus-csrkv\" (UID: \"d24cc7d2-c2db-45ee-b405-fa56157f807c\") " pod="openshift-multus/multus-csrkv" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.786503 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d272c950-9665-4b60-98a2-20c18d02d5a2-var-lib-openvswitch\") pod 
\"ovnkube-node-kbjm9\" (UID: \"d272c950-9665-4b60-98a2-20c18d02d5a2\") " pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.786591 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d272c950-9665-4b60-98a2-20c18d02d5a2-run-ovn\") pod \"ovnkube-node-kbjm9\" (UID: \"d272c950-9665-4b60-98a2-20c18d02d5a2\") " pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.786656 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d272c950-9665-4b60-98a2-20c18d02d5a2-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-kbjm9\" (UID: \"d272c950-9665-4b60-98a2-20c18d02d5a2\") " pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.786738 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d272c950-9665-4b60-98a2-20c18d02d5a2-env-overrides\") pod \"ovnkube-node-kbjm9\" (UID: \"d272c950-9665-4b60-98a2-20c18d02d5a2\") " pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.786914 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/d24cc7d2-c2db-45ee-b405-fa56157f807c-os-release\") pod \"multus-csrkv\" (UID: \"d24cc7d2-c2db-45ee-b405-fa56157f807c\") " pod="openshift-multus/multus-csrkv" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.786993 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/d24cc7d2-c2db-45ee-b405-fa56157f807c-multus-conf-dir\") pod \"multus-csrkv\" (UID: \"d24cc7d2-c2db-45ee-b405-fa56157f807c\") " pod="openshift-multus/multus-csrkv" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.787064 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d272c950-9665-4b60-98a2-20c18d02d5a2-host-run-ovn-kubernetes\") pod \"ovnkube-node-kbjm9\" (UID: \"d272c950-9665-4b60-98a2-20c18d02d5a2\") " pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.787159 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crn6f\" (UniqueName: \"kubernetes.io/projected/d272c950-9665-4b60-98a2-20c18d02d5a2-kube-api-access-crn6f\") pod \"ovnkube-node-kbjm9\" (UID: \"d272c950-9665-4b60-98a2-20c18d02d5a2\") " pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.787256 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d24cc7d2-c2db-45ee-b405-fa56157f807c-multus-cni-dir\") pod \"multus-csrkv\" (UID: \"d24cc7d2-c2db-45ee-b405-fa56157f807c\") " pod="openshift-multus/multus-csrkv" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.787330 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/d272c950-9665-4b60-98a2-20c18d02d5a2-host-kubelet\") pod \"ovnkube-node-kbjm9\" (UID: \"d272c950-9665-4b60-98a2-20c18d02d5a2\") " pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.787403 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/d24cc7d2-c2db-45ee-b405-fa56157f807c-cnibin\") pod \"multus-csrkv\" (UID: \"d24cc7d2-c2db-45ee-b405-fa56157f807c\") " pod="openshift-multus/multus-csrkv" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.787475 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d272c950-9665-4b60-98a2-20c18d02d5a2-host-cni-bin\") pod \"ovnkube-node-kbjm9\" (UID: \"d272c950-9665-4b60-98a2-20c18d02d5a2\") " pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.787566 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d272c950-9665-4b60-98a2-20c18d02d5a2-ovnkube-script-lib\") pod \"ovnkube-node-kbjm9\" (UID: \"d272c950-9665-4b60-98a2-20c18d02d5a2\") " pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.787650 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/d24cc7d2-c2db-45ee-b405-fa56157f807c-multus-daemon-config\") pod \"multus-csrkv\" (UID: \"d24cc7d2-c2db-45ee-b405-fa56157f807c\") " pod="openshift-multus/multus-csrkv" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.787726 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d24cc7d2-c2db-45ee-b405-fa56157f807c-etc-kubernetes\") pod \"multus-csrkv\" (UID: \"d24cc7d2-c2db-45ee-b405-fa56157f807c\") " pod="openshift-multus/multus-csrkv" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.794256 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:48Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.807001 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.810784 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:48Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.824696 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-csrkv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d24cc7d2-c2db-45ee-b405-fa56157f807c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x7bhq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-csrkv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:48Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.838440 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:48Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.855668 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:48Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.869126 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79282300691fcea19d9fe7ad9c661474c26ba5e445c1a4f009961641bf59fb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b2921f0cb7ad2096499d1c6c0e5f8f37da3620c8468c9fab08b57b4545e5dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:48Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.882941 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f4b5v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"017c2a17-2267-4284-a0ca-d3c513aa9ff9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc482\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f4b5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:48Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.889284 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: 
\"kubernetes.io/host-path/d24cc7d2-c2db-45ee-b405-fa56157f807c-multus-conf-dir\") pod \"multus-csrkv\" (UID: \"d24cc7d2-c2db-45ee-b405-fa56157f807c\") " pod="openshift-multus/multus-csrkv" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.889332 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d272c950-9665-4b60-98a2-20c18d02d5a2-host-run-ovn-kubernetes\") pod \"ovnkube-node-kbjm9\" (UID: \"d272c950-9665-4b60-98a2-20c18d02d5a2\") " pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.889355 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crn6f\" (UniqueName: \"kubernetes.io/projected/d272c950-9665-4b60-98a2-20c18d02d5a2-kube-api-access-crn6f\") pod \"ovnkube-node-kbjm9\" (UID: \"d272c950-9665-4b60-98a2-20c18d02d5a2\") " pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.889375 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d24cc7d2-c2db-45ee-b405-fa56157f807c-multus-cni-dir\") pod \"multus-csrkv\" (UID: \"d24cc7d2-c2db-45ee-b405-fa56157f807c\") " pod="openshift-multus/multus-csrkv" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.889395 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d272c950-9665-4b60-98a2-20c18d02d5a2-host-kubelet\") pod \"ovnkube-node-kbjm9\" (UID: \"d272c950-9665-4b60-98a2-20c18d02d5a2\") " pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.889463 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/d24cc7d2-c2db-45ee-b405-fa56157f807c-cnibin\") pod \"multus-csrkv\" (UID: \"d24cc7d2-c2db-45ee-b405-fa56157f807c\") " pod="openshift-multus/multus-csrkv" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.889489 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d272c950-9665-4b60-98a2-20c18d02d5a2-host-cni-bin\") pod \"ovnkube-node-kbjm9\" (UID: \"d272c950-9665-4b60-98a2-20c18d02d5a2\") " pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.889494 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d272c950-9665-4b60-98a2-20c18d02d5a2-host-run-ovn-kubernetes\") pod \"ovnkube-node-kbjm9\" (UID: \"d272c950-9665-4b60-98a2-20c18d02d5a2\") " pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.889615 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/d24cc7d2-c2db-45ee-b405-fa56157f807c-cnibin\") pod \"multus-csrkv\" (UID: \"d24cc7d2-c2db-45ee-b405-fa56157f807c\") " pod="openshift-multus/multus-csrkv" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.889612 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d272c950-9665-4b60-98a2-20c18d02d5a2-host-kubelet\") pod \"ovnkube-node-kbjm9\" (UID: \"d272c950-9665-4b60-98a2-20c18d02d5a2\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.889663 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d272c950-9665-4b60-98a2-20c18d02d5a2-host-cni-bin\") pod \"ovnkube-node-kbjm9\" (UID: \"d272c950-9665-4b60-98a2-20c18d02d5a2\") " pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.889511 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d272c950-9665-4b60-98a2-20c18d02d5a2-ovnkube-script-lib\") pod \"ovnkube-node-kbjm9\" (UID: \"d272c950-9665-4b60-98a2-20c18d02d5a2\") " pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.889754 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/d24cc7d2-c2db-45ee-b405-fa56157f807c-multus-daemon-config\") pod \"multus-csrkv\" (UID: \"d24cc7d2-c2db-45ee-b405-fa56157f807c\") " pod="openshift-multus/multus-csrkv" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.889781 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d24cc7d2-c2db-45ee-b405-fa56157f807c-etc-kubernetes\") pod \"multus-csrkv\" (UID: \"d24cc7d2-c2db-45ee-b405-fa56157f807c\") " pod="openshift-multus/multus-csrkv" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.889782 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d24cc7d2-c2db-45ee-b405-fa56157f807c-multus-cni-dir\") pod \"multus-csrkv\" (UID: \"d24cc7d2-c2db-45ee-b405-fa56157f807c\") " pod="openshift-multus/multus-csrkv" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.889805 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d24cc7d2-c2db-45ee-b405-fa56157f807c-system-cni-dir\") pod \"multus-csrkv\" (UID: \"d24cc7d2-c2db-45ee-b405-fa56157f807c\") " pod="openshift-multus/multus-csrkv" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.889843 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d24cc7d2-c2db-45ee-b405-fa56157f807c-etc-kubernetes\") pod \"multus-csrkv\" (UID: \"d24cc7d2-c2db-45ee-b405-fa56157f807c\") " pod="openshift-multus/multus-csrkv" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.889868 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d272c950-9665-4b60-98a2-20c18d02d5a2-log-socket\") pod \"ovnkube-node-kbjm9\" (UID: \"d272c950-9665-4b60-98a2-20c18d02d5a2\") " pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.889908 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/bb5ecb63-1238-44dc-9c40-b5e5dd7d4847-tuning-conf-dir\") pod \"multus-additional-cni-plugins-pmwpn\" (UID: \"bb5ecb63-1238-44dc-9c40-b5e5dd7d4847\") " pod="openshift-multus/multus-additional-cni-plugins-pmwpn" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.889928 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/d24cc7d2-c2db-45ee-b405-fa56157f807c-host-run-k8s-cni-cncf-io\") pod \"multus-csrkv\" (UID: \"d24cc7d2-c2db-45ee-b405-fa56157f807c\") " pod="openshift-multus/multus-csrkv" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.889946 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d272c950-9665-4b60-98a2-20c18d02d5a2-systemd-units\") pod \"ovnkube-node-kbjm9\" (UID: \"d272c950-9665-4b60-98a2-20c18d02d5a2\") " pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.889956 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d272c950-9665-4b60-98a2-20c18d02d5a2-log-socket\") pod \"ovnkube-node-kbjm9\" (UID: \"d272c950-9665-4b60-98a2-20c18d02d5a2\") " pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.889964 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwf7z\" (UniqueName: \"kubernetes.io/projected/bb5ecb63-1238-44dc-9c40-b5e5dd7d4847-kube-api-access-wwf7z\") pod \"multus-additional-cni-plugins-pmwpn\" (UID: \"bb5ecb63-1238-44dc-9c40-b5e5dd7d4847\") " pod="openshift-multus/multus-additional-cni-plugins-pmwpn" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.889985 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d24cc7d2-c2db-45ee-b405-fa56157f807c-host-var-lib-kubelet\") pod \"multus-csrkv\" (UID: \"d24cc7d2-c2db-45ee-b405-fa56157f807c\") " pod="openshift-multus/multus-csrkv" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.889995 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d272c950-9665-4b60-98a2-20c18d02d5a2-systemd-units\") pod \"ovnkube-node-kbjm9\" (UID: \"d272c950-9665-4b60-98a2-20c18d02d5a2\") " pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.890001 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d272c950-9665-4b60-98a2-20c18d02d5a2-run-openvswitch\") pod \"ovnkube-node-kbjm9\" (UID: \"d272c950-9665-4b60-98a2-20c18d02d5a2\") " pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.890024 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d24cc7d2-c2db-45ee-b405-fa56157f807c-host-run-netns\") pod \"multus-csrkv\" (UID: \"d24cc7d2-c2db-45ee-b405-fa56157f807c\") " pod="openshift-multus/multus-csrkv" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.890026 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/d24cc7d2-c2db-45ee-b405-fa56157f807c-host-run-k8s-cni-cncf-io\") pod \"multus-csrkv\" (UID: \"d24cc7d2-c2db-45ee-b405-fa56157f807c\") " pod="openshift-multus/multus-csrkv" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.890041 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d272c950-9665-4b60-98a2-20c18d02d5a2-run-systemd\") pod 
\"ovnkube-node-kbjm9\" (UID: \"d272c950-9665-4b60-98a2-20c18d02d5a2\") " pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.890059 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d24cc7d2-c2db-45ee-b405-fa56157f807c-host-var-lib-kubelet\") pod \"multus-csrkv\" (UID: \"d24cc7d2-c2db-45ee-b405-fa56157f807c\") " pod="openshift-multus/multus-csrkv" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.890060 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/bb5ecb63-1238-44dc-9c40-b5e5dd7d4847-cni-binary-copy\") pod \"multus-additional-cni-plugins-pmwpn\" (UID: \"bb5ecb63-1238-44dc-9c40-b5e5dd7d4847\") " pod="openshift-multus/multus-additional-cni-plugins-pmwpn" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.890099 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/bb5ecb63-1238-44dc-9c40-b5e5dd7d4847-cnibin\") pod \"multus-additional-cni-plugins-pmwpn\" (UID: \"bb5ecb63-1238-44dc-9c40-b5e5dd7d4847\") " pod="openshift-multus/multus-additional-cni-plugins-pmwpn" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.890135 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/bb5ecb63-1238-44dc-9c40-b5e5dd7d4847-os-release\") pod \"multus-additional-cni-plugins-pmwpn\" (UID: \"bb5ecb63-1238-44dc-9c40-b5e5dd7d4847\") " pod="openshift-multus/multus-additional-cni-plugins-pmwpn" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.890164 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/d24cc7d2-c2db-45ee-b405-fa56157f807c-cni-binary-copy\") pod \"multus-csrkv\" (UID: \"d24cc7d2-c2db-45ee-b405-fa56157f807c\") " pod="openshift-multus/multus-csrkv" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.890181 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d24cc7d2-c2db-45ee-b405-fa56157f807c-host-var-lib-cni-bin\") pod \"multus-csrkv\" (UID: \"d24cc7d2-c2db-45ee-b405-fa56157f807c\") " pod="openshift-multus/multus-csrkv" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.890198 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/d24cc7d2-c2db-45ee-b405-fa56157f807c-hostroot\") pod \"multus-csrkv\" (UID: \"d24cc7d2-c2db-45ee-b405-fa56157f807c\") " pod="openshift-multus/multus-csrkv" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.890215 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/bb5ecb63-1238-44dc-9c40-b5e5dd7d4847-system-cni-dir\") pod \"multus-additional-cni-plugins-pmwpn\" (UID: \"bb5ecb63-1238-44dc-9c40-b5e5dd7d4847\") " pod="openshift-multus/multus-additional-cni-plugins-pmwpn" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.890237 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/bb5ecb63-1238-44dc-9c40-b5e5dd7d4847-tuning-conf-dir\") pod \"multus-additional-cni-plugins-pmwpn\" (UID: \"bb5ecb63-1238-44dc-9c40-b5e5dd7d4847\") " 
pod="openshift-multus/multus-additional-cni-plugins-pmwpn" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.890237 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/bb5ecb63-1238-44dc-9c40-b5e5dd7d4847-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-pmwpn\" (UID: \"bb5ecb63-1238-44dc-9c40-b5e5dd7d4847\") " pod="openshift-multus/multus-additional-cni-plugins-pmwpn" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.890282 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d272c950-9665-4b60-98a2-20c18d02d5a2-host-slash\") pod \"ovnkube-node-kbjm9\" (UID: \"d272c950-9665-4b60-98a2-20c18d02d5a2\") " pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.890306 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/d24cc7d2-c2db-45ee-b405-fa56157f807c-multus-socket-dir-parent\") pod \"multus-csrkv\" (UID: \"d24cc7d2-c2db-45ee-b405-fa56157f807c\") " pod="openshift-multus/multus-csrkv" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.890323 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/d24cc7d2-c2db-45ee-b405-fa56157f807c-host-run-multus-certs\") pod \"multus-csrkv\" (UID: \"d24cc7d2-c2db-45ee-b405-fa56157f807c\") " pod="openshift-multus/multus-csrkv" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.890340 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d272c950-9665-4b60-98a2-20c18d02d5a2-host-run-netns\") pod \"ovnkube-node-kbjm9\" (UID: \"d272c950-9665-4b60-98a2-20c18d02d5a2\") " pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.890355 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d272c950-9665-4b60-98a2-20c18d02d5a2-host-cni-netd\") pod \"ovnkube-node-kbjm9\" (UID: \"d272c950-9665-4b60-98a2-20c18d02d5a2\") " pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.890369 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d272c950-9665-4b60-98a2-20c18d02d5a2-ovnkube-config\") pod \"ovnkube-node-kbjm9\" (UID: \"d272c950-9665-4b60-98a2-20c18d02d5a2\") " pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.890398 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d272c950-9665-4b60-98a2-20c18d02d5a2-etc-openvswitch\") pod \"ovnkube-node-kbjm9\" (UID: \"d272c950-9665-4b60-98a2-20c18d02d5a2\") " pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.890396 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/d24cc7d2-c2db-45ee-b405-fa56157f807c-multus-daemon-config\") pod \"multus-csrkv\" (UID: \"d24cc7d2-c2db-45ee-b405-fa56157f807c\") " pod="openshift-multus/multus-csrkv" Jan 26 12:35:48 crc 
kubenswrapper[4881]: I0126 12:35:48.890436 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d272c950-9665-4b60-98a2-20c18d02d5a2-node-log\") pod \"ovnkube-node-kbjm9\" (UID: \"d272c950-9665-4b60-98a2-20c18d02d5a2\") " pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.890455 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d272c950-9665-4b60-98a2-20c18d02d5a2-ovn-node-metrics-cert\") pod \"ovnkube-node-kbjm9\" (UID: \"d272c950-9665-4b60-98a2-20c18d02d5a2\") " pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.890473 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/d24cc7d2-c2db-45ee-b405-fa56157f807c-host-var-lib-cni-multus\") pod \"multus-csrkv\" (UID: \"d24cc7d2-c2db-45ee-b405-fa56157f807c\") " pod="openshift-multus/multus-csrkv" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.890490 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7bhq\" (UniqueName: \"kubernetes.io/projected/d24cc7d2-c2db-45ee-b405-fa56157f807c-kube-api-access-x7bhq\") pod \"multus-csrkv\" (UID: \"d24cc7d2-c2db-45ee-b405-fa56157f807c\") " pod="openshift-multus/multus-csrkv" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.890506 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d272c950-9665-4b60-98a2-20c18d02d5a2-var-lib-openvswitch\") pod \"ovnkube-node-kbjm9\" (UID: \"d272c950-9665-4b60-98a2-20c18d02d5a2\") " pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.890543 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d272c950-9665-4b60-98a2-20c18d02d5a2-run-ovn\") pod \"ovnkube-node-kbjm9\" (UID: \"d272c950-9665-4b60-98a2-20c18d02d5a2\") " pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.890562 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d272c950-9665-4b60-98a2-20c18d02d5a2-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-kbjm9\" (UID: \"d272c950-9665-4b60-98a2-20c18d02d5a2\") " pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.890580 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d272c950-9665-4b60-98a2-20c18d02d5a2-env-overrides\") pod \"ovnkube-node-kbjm9\" (UID: \"d272c950-9665-4b60-98a2-20c18d02d5a2\") " pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.890600 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/d24cc7d2-c2db-45ee-b405-fa56157f807c-os-release\") pod \"multus-csrkv\" (UID: \"d24cc7d2-c2db-45ee-b405-fa56157f807c\") " pod="openshift-multus/multus-csrkv" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.890816 4881 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/bb5ecb63-1238-44dc-9c40-b5e5dd7d4847-cni-binary-copy\") pod \"multus-additional-cni-plugins-pmwpn\" (UID: \"bb5ecb63-1238-44dc-9c40-b5e5dd7d4847\") " pod="openshift-multus/multus-additional-cni-plugins-pmwpn" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.890816 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/bb5ecb63-1238-44dc-9c40-b5e5dd7d4847-os-release\") pod \"multus-additional-cni-plugins-pmwpn\" (UID: \"bb5ecb63-1238-44dc-9c40-b5e5dd7d4847\") " pod="openshift-multus/multus-additional-cni-plugins-pmwpn" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.890840 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/d24cc7d2-c2db-45ee-b405-fa56157f807c-os-release\") pod \"multus-csrkv\" (UID: \"d24cc7d2-c2db-45ee-b405-fa56157f807c\") " pod="openshift-multus/multus-csrkv" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.890860 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d272c950-9665-4b60-98a2-20c18d02d5a2-host-slash\") pod \"ovnkube-node-kbjm9\" (UID: \"d272c950-9665-4b60-98a2-20c18d02d5a2\") " pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.889947 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d24cc7d2-c2db-45ee-b405-fa56157f807c-system-cni-dir\") pod \"multus-csrkv\" (UID: \"d24cc7d2-c2db-45ee-b405-fa56157f807c\") " pod="openshift-multus/multus-csrkv" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.890875 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d24cc7d2-c2db-45ee-b405-fa56157f807c-host-run-netns\") pod \"multus-csrkv\" (UID: \"d24cc7d2-c2db-45ee-b405-fa56157f807c\") " pod="openshift-multus/multus-csrkv" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.890929 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d272c950-9665-4b60-98a2-20c18d02d5a2-etc-openvswitch\") pod \"ovnkube-node-kbjm9\" (UID: \"d272c950-9665-4b60-98a2-20c18d02d5a2\") " pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.890939 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d272c950-9665-4b60-98a2-20c18d02d5a2-run-systemd\") pod \"ovnkube-node-kbjm9\" (UID: \"d272c950-9665-4b60-98a2-20c18d02d5a2\") " pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.890940 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/d24cc7d2-c2db-45ee-b405-fa56157f807c-multus-conf-dir\") pod \"multus-csrkv\" (UID: \"d24cc7d2-c2db-45ee-b405-fa56157f807c\") " pod="openshift-multus/multus-csrkv" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.890957 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d272c950-9665-4b60-98a2-20c18d02d5a2-var-lib-openvswitch\") pod \"ovnkube-node-kbjm9\" (UID: 
\"d272c950-9665-4b60-98a2-20c18d02d5a2\") " pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.890968 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/d24cc7d2-c2db-45ee-b405-fa56157f807c-host-run-multus-certs\") pod \"multus-csrkv\" (UID: \"d24cc7d2-c2db-45ee-b405-fa56157f807c\") " pod="openshift-multus/multus-csrkv" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.890977 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d272c950-9665-4b60-98a2-20c18d02d5a2-run-ovn\") pod \"ovnkube-node-kbjm9\" (UID: \"d272c950-9665-4b60-98a2-20c18d02d5a2\") " pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.890979 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/d24cc7d2-c2db-45ee-b405-fa56157f807c-hostroot\") pod \"multus-csrkv\" (UID: \"d24cc7d2-c2db-45ee-b405-fa56157f807c\") " pod="openshift-multus/multus-csrkv" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.890994 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/d24cc7d2-c2db-45ee-b405-fa56157f807c-host-var-lib-cni-multus\") pod \"multus-csrkv\" (UID: \"d24cc7d2-c2db-45ee-b405-fa56157f807c\") " pod="openshift-multus/multus-csrkv" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.890984 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d272c950-9665-4b60-98a2-20c18d02d5a2-host-run-netns\") pod \"ovnkube-node-kbjm9\" (UID: \"d272c950-9665-4b60-98a2-20c18d02d5a2\") " pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.891019 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d272c950-9665-4b60-98a2-20c18d02d5a2-host-cni-netd\") pod \"ovnkube-node-kbjm9\" (UID: \"d272c950-9665-4b60-98a2-20c18d02d5a2\") " pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.891001 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/bb5ecb63-1238-44dc-9c40-b5e5dd7d4847-system-cni-dir\") pod \"multus-additional-cni-plugins-pmwpn\" (UID: \"bb5ecb63-1238-44dc-9c40-b5e5dd7d4847\") " pod="openshift-multus/multus-additional-cni-plugins-pmwpn" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.891026 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/bb5ecb63-1238-44dc-9c40-b5e5dd7d4847-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-pmwpn\" (UID: \"bb5ecb63-1238-44dc-9c40-b5e5dd7d4847\") " pod="openshift-multus/multus-additional-cni-plugins-pmwpn" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.891050 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d24cc7d2-c2db-45ee-b405-fa56157f807c-host-var-lib-cni-bin\") pod \"multus-csrkv\" (UID: \"d24cc7d2-c2db-45ee-b405-fa56157f807c\") " pod="openshift-multus/multus-csrkv" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.891058 4881 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d272c950-9665-4b60-98a2-20c18d02d5a2-run-openvswitch\") pod \"ovnkube-node-kbjm9\" (UID: \"d272c950-9665-4b60-98a2-20c18d02d5a2\") " pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.891048 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d272c950-9665-4b60-98a2-20c18d02d5a2-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-kbjm9\" (UID: \"d272c950-9665-4b60-98a2-20c18d02d5a2\") " pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.891023 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/d24cc7d2-c2db-45ee-b405-fa56157f807c-multus-socket-dir-parent\") pod \"multus-csrkv\" (UID: \"d24cc7d2-c2db-45ee-b405-fa56157f807c\") " pod="openshift-multus/multus-csrkv" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.891037 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/bb5ecb63-1238-44dc-9c40-b5e5dd7d4847-cnibin\") pod \"multus-additional-cni-plugins-pmwpn\" (UID: \"bb5ecb63-1238-44dc-9c40-b5e5dd7d4847\") " pod="openshift-multus/multus-additional-cni-plugins-pmwpn" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.891090 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/d24cc7d2-c2db-45ee-b405-fa56157f807c-cni-binary-copy\") pod \"multus-csrkv\" (UID: \"d24cc7d2-c2db-45ee-b405-fa56157f807c\") " pod="openshift-multus/multus-csrkv" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.891109 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d272c950-9665-4b60-98a2-20c18d02d5a2-node-log\") pod \"ovnkube-node-kbjm9\" (UID: \"d272c950-9665-4b60-98a2-20c18d02d5a2\") " pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.893229 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.896706 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c6n6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c6n6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fwlbz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:48Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.906155 4881 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.907733 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.907764 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.907777 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.907881 4881 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.908450 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:48Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.909055 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wwf7z\" (UniqueName: \"kubernetes.io/projected/bb5ecb63-1238-44dc-9c40-b5e5dd7d4847-kube-api-access-wwf7z\") pod \"multus-additional-cni-plugins-pmwpn\" (UID: \"bb5ecb63-1238-44dc-9c40-b5e5dd7d4847\") " pod="openshift-multus/multus-additional-cni-plugins-pmwpn" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.914271 4881 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.914545 4881 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.914651 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7bhq\" (UniqueName: \"kubernetes.io/projected/d24cc7d2-c2db-45ee-b405-fa56157f807c-kube-api-access-x7bhq\") pod \"multus-csrkv\" (UID: \"d24cc7d2-c2db-45ee-b405-fa56157f807c\") " pod="openshift-multus/multus-csrkv" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.915508 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.915556 4881 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.915565 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.915581 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.915590 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:35:48Z","lastTransitionTime":"2026-01-26T12:35:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.921796 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee20042b-95d1-49f2-abea-e339efdb096f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce7427a811572a8f3810d59e072888d97a8f15e2c493297a25418b85e1f9f16d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86d50d955d4fa8b487f6900f78738321e21226e13b7b12c100f0dd029de2fde1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubern
etes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8138683a9408b1a7542f9f29a3fd0b83a06f10b000c685d9d83b72b23ba4c55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94d3f2c93382cde3d932117276753f004c58e6e52bce611119b79d9e76eb7654\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:48Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:48 crc kubenswrapper[4881]: E0126 12:35:48.932587 4881 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeByt
es\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"69f11506-d189-4146-9efa-f9280470e789\\\",\\\"systemUUID\\\":\\\"d
2cf9e5e-9cbb-41d3-9ac1-608b9dce0d9f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:48Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.935132 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9aa1877-c239-4157-938d-e5c85ff3e76a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e461100b4aeab2a6e88c5c3aeb0802664d0d647247bc73b107be7c6bc4cfc9cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cbb88ac26b9c279bef436acf1a1d1e431e752f2a1ee765a7541082fa8f6c99e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff94d70ea9ad07ebc50c17d2a14b318e9b553bf1ded1e15429e9b318903d4d66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserve
r-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47b841bd998f47d2aa05f556a43ac6f14f630207aff7305cd0440f0b0642cc49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1728a0431ccafdf8aa83b6956348a8a6a41dd94e80f65cb469ef09a86078154\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 12:35:40.710806 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 12:35:40.716794 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1303051152/tls.crt::/tmp/serving-cert-1303051152/tls.key\\\\\\\"\\\\nI0126 12:35:46.328409 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 12:35:46.330270 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 12:35:46.330287 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 12:35:46.330308 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 12:35:46.330314 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 12:35:46.335670 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 12:35:46.335689 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 12:35:46.335693 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 12:35:46.335697 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 12:35:46.335700 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 12:35:46.335702 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 12:35:46.335705 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 12:35:46.335779 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 12:35:46.339578 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3de2d8eff73741321cce91c7bfdcab04498cfdb4694e713750466bb2d8b95e5d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f09815b7bc1e9f3afdee67bb736d35ea1c23567f631bc5fd4fcab013ab191647\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f09815b7bc1e9f3afdee67bb736d35ea1c23567f631bc5fd4fcab013ab191647\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:48Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.936132 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.936155 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.936164 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.936180 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.936189 4881 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:35:48Z","lastTransitionTime":"2026-01-26T12:35:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:35:48 crc kubenswrapper[4881]: E0126 12:35:48.949235 4881 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"69f11506-d189-4146-9efa-f9280470e789\\\",\\\"systemUUID\\\":\\\"d2cf9e5e-9cbb-41d3-9ac1-608b9dce0d9f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:48Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.950411 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8116d483190921916638bd630fb718840fdc36eb9522aecb7ef0e9bfebe9bb7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:48Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.952689 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.952722 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.952732 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.952746 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.952757 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:35:48Z","lastTransitionTime":"2026-01-26T12:35:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:35:48 crc kubenswrapper[4881]: E0126 12:35:48.966039 4881 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"69f11506-d189-4146-9efa-f9280470e789\\\",\\\"systemUUID\\\":\\\"d2cf9e5e-9cbb-41d3-9ac1-608b9dce0d9f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:48Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.968919 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d272c950-9665-4b60-98a2-20c18d02d5a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kbjm9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:48Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.969468 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.969529 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.969540 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.969557 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.969569 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:35:48Z","lastTransitionTime":"2026-01-26T12:35:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.980489 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-csrkv" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.985751 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:48Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:48 crc kubenswrapper[4881]: E0126 12:35:48.986607 4881 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\
"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":45063
7738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"69f11506-d189-4146-9efa-f9280470e789\\\",\\\"systemUUID\\\":\\\"d2cf9e5e-9cbb-41d3-9ac1-608b9dce0d9f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:48Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.988630 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-pmwpn" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.992589 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.992626 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.992634 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.992648 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:35:48 crc kubenswrapper[4881]: I0126 12:35:48.992658 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:35:48Z","lastTransitionTime":"2026-01-26T12:35:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:35:48 crc kubenswrapper[4881]: W0126 12:35:48.994902 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd24cc7d2_c2db_45ee_b405_fa56157f807c.slice/crio-d2cdcefd668ab870cc62e887ee585a32cf66fa54dc9b8c5b65728a89d78ff4e6 WatchSource:0}: Error finding container d2cdcefd668ab870cc62e887ee585a32cf66fa54dc9b8c5b65728a89d78ff4e6: Status 404 returned error can't find the container with id d2cdcefd668ab870cc62e887ee585a32cf66fa54dc9b8c5b65728a89d78ff4e6 Jan 26 12:35:49 crc kubenswrapper[4881]: I0126 12:35:49.002712 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-csrkv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d24cc7d2-c2db-45ee-b405-fa56157f807c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x7bhq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-csrkv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:49Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:49 crc kubenswrapper[4881]: E0126 12:35:49.005668 4881 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\
"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":45063
7738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"69f11506-d189-4146-9efa-f9280470e789\\\",\\\"systemUUID\\\":\\\"d2cf9e5e-9cbb-41d3-9ac1-608b9dce0d9f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:49Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:49 crc kubenswrapper[4881]: E0126 12:35:49.006084 4881 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 12:35:49 crc kubenswrapper[4881]: I0126 12:35:49.010560 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:49 crc kubenswrapper[4881]: I0126 12:35:49.010600 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:49 crc kubenswrapper[4881]: I0126 12:35:49.010610 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:49 crc kubenswrapper[4881]: I0126 12:35:49.010626 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:35:49 crc kubenswrapper[4881]: I0126 12:35:49.010655 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:35:49Z","lastTransitionTime":"2026-01-26T12:35:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:35:49 crc kubenswrapper[4881]: I0126 12:35:49.022916 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 09:05:36.792867813 +0000 UTC Jan 26 12:35:49 crc kubenswrapper[4881]: I0126 12:35:49.026288 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pmwpn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb5ecb63-1238-44dc-9c40-b5e5dd7d4847\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\
":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",
\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pmwpn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:49Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:49 crc kubenswrapper[4881]: I0126 12:35:49.042537 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:49Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:49 crc kubenswrapper[4881]: I0126 12:35:49.057665 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:49Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:49 crc kubenswrapper[4881]: I0126 12:35:49.075578 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79282300691fcea19d9fe7ad9c661474c26ba5e445c1a4f009961641bf59fb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b2921f0cb7ad2096499d1c6c0e5f8f37da3620c8468c9fab08b57b4545e5dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:49Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:49 crc kubenswrapper[4881]: I0126 12:35:49.088263 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f4b5v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"017c2a17-2267-4284-a0ca-d3c513aa9ff9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc482\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f4b5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:49Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:49 crc kubenswrapper[4881]: I0126 12:35:49.112980 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:49 crc kubenswrapper[4881]: I0126 12:35:49.113023 
4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:49 crc kubenswrapper[4881]: I0126 12:35:49.113036 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:49 crc kubenswrapper[4881]: I0126 12:35:49.113053 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:35:49 crc kubenswrapper[4881]: I0126 12:35:49.113065 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:35:49Z","lastTransitionTime":"2026-01-26T12:35:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:35:49 crc kubenswrapper[4881]: I0126 12:35:49.209407 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-csrkv" event={"ID":"d24cc7d2-c2db-45ee-b405-fa56157f807c","Type":"ContainerStarted","Data":"e19b81750777bd4a9bb8f7a14d883f0ceb5aa0fc4f1a0798c98616a7fc0cef81"} Jan 26 12:35:49 crc kubenswrapper[4881]: I0126 12:35:49.209496 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-csrkv" event={"ID":"d24cc7d2-c2db-45ee-b405-fa56157f807c","Type":"ContainerStarted","Data":"d2cdcefd668ab870cc62e887ee585a32cf66fa54dc9b8c5b65728a89d78ff4e6"} Jan 26 12:35:49 crc kubenswrapper[4881]: I0126 12:35:49.212499 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-f4b5v" event={"ID":"017c2a17-2267-4284-a0ca-d3c513aa9ff9","Type":"ContainerStarted","Data":"5cea6c5f8bd80cf271df551539957d4c3e60378ff13dc806b3142e8e1453ab95"} Jan 26 12:35:49 crc kubenswrapper[4881]: I0126 12:35:49.215095 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:49 crc kubenswrapper[4881]: I0126 12:35:49.215123 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:49 crc kubenswrapper[4881]: I0126 12:35:49.215131 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:49 crc kubenswrapper[4881]: I0126 12:35:49.215144 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:35:49 crc kubenswrapper[4881]: I0126 12:35:49.215154 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:35:49Z","lastTransitionTime":"2026-01-26T12:35:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:35:49 crc kubenswrapper[4881]: I0126 12:35:49.215432 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-pmwpn" event={"ID":"bb5ecb63-1238-44dc-9c40-b5e5dd7d4847","Type":"ContainerStarted","Data":"b44d257425e96db8e3d2d0c701beab7c39dab21653834ce4aef2dd9d66f00364"} Jan 26 12:35:49 crc kubenswrapper[4881]: I0126 12:35:49.216757 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" event={"ID":"ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19","Type":"ContainerStarted","Data":"02fca4ac1029e236fa1f5c8b73ec4cf6f627aba503ba27d28dd2566e2f9d332a"} Jan 26 12:35:49 crc kubenswrapper[4881]: I0126 12:35:49.216802 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" event={"ID":"ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19","Type":"ContainerStarted","Data":"4e51c1ddc00bfd73fde58f6b2e82757a924047ef05aa76b74d8fcb2de4156ad9"} Jan 26 12:35:49 crc kubenswrapper[4881]: I0126 12:35:49.216831 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" event={"ID":"ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19","Type":"ContainerStarted","Data":"acef4e132502cb2e5b79f61ef39cb090631783297fa347bd6aee3d6a3965248d"} Jan 26 12:35:49 crc kubenswrapper[4881]: I0126 12:35:49.220676 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 26 12:35:49 crc kubenswrapper[4881]: I0126 12:35:49.227873 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee20042b-95d1-49f2-abea-e339efdb096f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce7427a811572a8f3810d59e072888d97a8f15e2c493297a25418b85e1f9f16d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86d50d955d4fa8b487f6900f78738321e21226e13b7b12c100f0dd02
9de2fde1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8138683a9408b1a7542f9f29a3fd0b83a06f10b000c685d9d83b72b23ba4c55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94d3f2c93382cde3d932117276753f004c58e6e52bce611119b79d9e76eb7654\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:49Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:49 crc kubenswrapper[4881]: I0126 12:35:49.239879 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 26 12:35:49 crc kubenswrapper[4881]: I0126 12:35:49.257719 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9aa1877-c239-4157-938d-e5c85ff3e76a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e461100b4aeab2a6e88c5c3aeb0802664d0d647247bc73b107be7c6bc4cfc9cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cbb88ac26b9c279bef436acf1a1d1e431e752f2a1ee765a7541082fa8f6c99e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff94d70ea9ad07ebc50c17d2a14b318e9b553bf1ded1e15429e9b318903d4d66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47b841bd998f47d2aa05f556a43ac6f14f630207aff7305cd0440f0b0642cc49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1728a0431ccafdf8aa83b6956348a8a6a41dd94e80f65cb469ef09a86078154\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 12:35:40.710806 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 12:35:40.716794 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1303051152/tls.crt::/tmp/serving-cert-1303051152/tls.key\\\\\\\"\\\\nI0126 12:35:46.328409 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 12:35:46.330270 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 12:35:46.330287 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 12:35:46.330308 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 12:35:46.330314 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 12:35:46.335670 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 12:35:46.335689 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 12:35:46.335693 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 12:35:46.335697 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 12:35:46.335700 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 12:35:46.335702 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 12:35:46.335705 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 12:35:46.335779 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 12:35:46.339578 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3de2d8eff73741321cce91c7bfdcab04498cfdb4694e713750466bb2d8b95e5d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f09815b7bc1e9f3afdee67bb736d35ea1c23567f631bc5fd4fcab013ab191647\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f09815b7bc1e9f3afdee67bb736d35ea1c23567f631bc5fd4fcab013ab191647\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:49Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:49 crc kubenswrapper[4881]: I0126 12:35:49.273437 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8116d483190921916638bd630fb718840fdc36eb9522aecb7ef0e9bfebe9bb7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:49Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:49 crc kubenswrapper[4881]: I0126 12:35:49.277738 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 26 12:35:49 crc kubenswrapper[4881]: I0126 12:35:49.301774 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:49Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:49 crc kubenswrapper[4881]: I0126 12:35:49.315407 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:49Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:49 crc kubenswrapper[4881]: I0126 12:35:49.316945 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:49 crc kubenswrapper[4881]: I0126 12:35:49.316970 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:49 crc kubenswrapper[4881]: I0126 12:35:49.316978 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:49 crc kubenswrapper[4881]: I0126 12:35:49.316993 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:35:49 crc kubenswrapper[4881]: I0126 12:35:49.317005 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:35:49Z","lastTransitionTime":"2026-01-26T12:35:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:35:49 crc kubenswrapper[4881]: I0126 12:35:49.329578 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-csrkv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d24cc7d2-c2db-45ee-b405-fa56157f807c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e19b81750777bd4a9bb8f7a14d883f0ceb5aa0fc4f1a0798c98616a7fc0cef81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x7bhq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-csrkv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:49Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:49 crc kubenswrapper[4881]: I0126 12:35:49.356954 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d272c950-9665-4b60-98a2-20c18d02d5a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kbjm9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:49Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:49 crc kubenswrapper[4881]: I0126 12:35:49.378484 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:49Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:49 crc kubenswrapper[4881]: I0126 12:35:49.394279 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:49Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:49 crc kubenswrapper[4881]: I0126 12:35:49.414913 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79282300691fcea19d9fe7ad9c661474c26ba5e445c1a4f009961641bf59fb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b2921f0cb7ad2096499d1c6c0e5f8f37da3620c8468c9fab08b57b4545e5dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:49Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:49 crc kubenswrapper[4881]: I0126 12:35:49.419094 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:49 crc kubenswrapper[4881]: I0126 12:35:49.419124 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:49 crc kubenswrapper[4881]: I0126 12:35:49.419133 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:49 crc kubenswrapper[4881]: I0126 12:35:49.419148 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:35:49 crc kubenswrapper[4881]: I0126 12:35:49.419158 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:35:49Z","lastTransitionTime":"2026-01-26T12:35:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:35:49 crc kubenswrapper[4881]: I0126 12:35:49.435460 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f4b5v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"017c2a17-2267-4284-a0ca-d3c513aa9ff9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc482\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f4b5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:49Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:49 crc kubenswrapper[4881]: I0126 12:35:49.468015 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pmwpn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb5ecb63-1238-44dc-9c40-b5e5dd7d4847\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa9308
9f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pmwpn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:49Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:49 crc kubenswrapper[4881]: I0126 12:35:49.480879 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c6n6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c6n6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fwlbz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:49Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:49 crc kubenswrapper[4881]: I0126 12:35:49.493550 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02fca4ac1029e236fa1f5c8b73ec4cf6f627aba503ba27d28dd2566e2f9d332a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c6n6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e51c1ddc00bfd73fde58f6b2e82757a924047ef05aa76b74d8fcb2de4156ad9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c6n6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fwlbz\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:49Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:49 crc kubenswrapper[4881]: I0126 12:35:49.512888 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9aa1877-c239-4157-938d-e5c85ff3e76a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e461100b4aeab2a6e88c5c3aeb0802664d0d647247bc73b107be7c6bc4cfc9cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cbb88ac26b9c279bef436acf1a1d1e431e752f2a1ee765a7541082fa8f6c99e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff94d70ea9ad07ebc50c17d2a14b318e9b553bf1ded1e15429e9b318903d4d66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.i
o/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47b841bd998f47d2aa05f556a43ac6f14f630207aff7305cd0440f0b0642cc49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1728a0431ccafdf8aa83b6956348a8a6a41dd94e80f65cb469ef09a86078154\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 12:35:40.710806 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 12:35:40.716794 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1303051152/tls.crt::/tmp/serving-cert-1303051152/tls.key\\\\\\\"\\\\nI0126 12:35:46.328409 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 12:35:46.330270 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 12:35:46.330287 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 12:35:46.330308 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 12:35:46.330314 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 12:35:46.335670 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 12:35:46.335689 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 12:35:46.335693 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 12:35:46.335697 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 12:35:46.335700 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 12:35:46.335702 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 12:35:46.335705 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 12:35:46.335779 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 12:35:46.339578 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3de2d8eff73741321cce91c7bfdcab04498cfdb4694e713750466bb2d8b95e5d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f09815b7bc1e9f3afdee67bb736d35ea1c23567f631bc5fd4fcab013ab191647\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f09815b7bc1e9f3afdee67bb736d35ea1c23567f631bc5fd4fcab013ab191647\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:49Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:49 crc kubenswrapper[4881]: I0126 12:35:49.514477 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 26 12:35:49 crc kubenswrapper[4881]: I0126 12:35:49.521139 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:49 crc kubenswrapper[4881]: I0126 12:35:49.521169 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:49 crc kubenswrapper[4881]: I0126 12:35:49.521177 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:49 crc kubenswrapper[4881]: I0126 12:35:49.521190 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" 
Jan 26 12:35:49 crc kubenswrapper[4881]: I0126 12:35:49.521200 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:35:49Z","lastTransitionTime":"2026-01-26T12:35:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:35:49 crc kubenswrapper[4881]: I0126 12:35:49.527610 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8116d483190921916638bd630fb718840fdc36eb9522aecb7ef0e9bfebe9bb7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:49Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:49 crc kubenswrapper[4881]: I0126 12:35:49.536983 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:49Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:49 crc kubenswrapper[4881]: I0126 12:35:49.548683 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee20042b-95d1-49f2-abea-e339efdb096f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce7427a811572a8f3810d59e072888d97a8f15e2c493297a25418b85e1f9f16d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86d50d955d4fa8b487f6900f78738321e21226e13b7b12c100f0dd029de2fde1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8138683a9408b1a7542f9f29a3fd0b83a06f10b000c685d9d83b72b23ba4c55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94d3f2c93382cde3d932117276753f004c58e6e52bce611119b79d9e76eb7654\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:49Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:49 crc kubenswrapper[4881]: I0126 12:35:49.572628 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b6f7953-daf2-4ba4-8c10-7de3f4ea07a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4f54285b7646c2fa5da34439d2812a5669f168356446852b9bff249107a7c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79ddc5e5c3c4b0c18219fe6a8297ff2b1646b9621823f529972781f915d81c5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07
b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://533f45172a004ad2bd598849b711b38e823485188ff4163dd0dbc5ae5a28cb24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f7c8799e63e6072f9dc0580f3efbfe330a9f705a689b2785c60f64391881b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c525c27678ba77145459086b6d3d2fee5d471c7393f34215e9ae46d9d72cf4e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://667695541b7d707a83aa0080723518aeae689b140791f0121a4ce6d485f39138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\
\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://667695541b7d707a83aa0080723518aeae689b140791f0121a4ce6d485f39138\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab83101bb56dc3eaf3a0514f60da45b0c5304a491184fe71584d7ed72bc6e59c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab83101bb56dc3eaf3a0514f60da45b0c5304a491184fe71584d7ed72bc6e59c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e52793db25051b2c8eec3dbeaf0547ec04ec438ce426f3dfcd2383792f846644\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e52793db25051b2c8eec3dbeaf0547ec04ec438ce426f3dfcd2383792f846644\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:49Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:49 crc kubenswrapper[4881]: I0126 12:35:49.574757 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 26 12:35:49 crc kubenswrapper[4881]: I0126 12:35:49.581583 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d272c950-9665-4b60-98a2-20c18d02d5a2-ovnkube-script-lib\") pod \"ovnkube-node-kbjm9\" (UID: 
\"d272c950-9665-4b60-98a2-20c18d02d5a2\") " pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" Jan 26 12:35:49 crc kubenswrapper[4881]: I0126 12:35:49.584631 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:49Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:49 crc kubenswrapper[4881]: I0126 12:35:49.600549 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-csrkv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d24cc7d2-c2db-45ee-b405-fa56157f807c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e19b81750777bd4a9bb8f7a14d883f0ceb5aa0fc4f1a0798c98616a7fc0cef81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x7bhq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-csrkv\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:49Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:49 crc kubenswrapper[4881]: I0126 12:35:49.618437 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d272c950-9665-4b60-98a2-20c18d02d5a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kbjm9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:49Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:49 crc kubenswrapper[4881]: I0126 12:35:49.623047 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:49 crc kubenswrapper[4881]: I0126 12:35:49.623076 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:49 crc kubenswrapper[4881]: I0126 
12:35:49.623088 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:49 crc kubenswrapper[4881]: I0126 12:35:49.623102 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:35:49 crc kubenswrapper[4881]: I0126 12:35:49.623112 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:35:49Z","lastTransitionTime":"2026-01-26T12:35:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:35:49 crc kubenswrapper[4881]: I0126 12:35:49.634896 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79282300691fcea19d9fe7ad9c661474c26ba5e445c1a4f009961641bf59fb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b2921f0cb7ad2096499d1c6c0e5f8f37da3620c8468c9fab08b57b4545e5dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:49Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:49 crc kubenswrapper[4881]: I0126 12:35:49.639765 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 26 12:35:49 crc kubenswrapper[4881]: I0126 12:35:49.645683 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f4b5v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"017c2a17-2267-4284-a0ca-d3c513aa9ff9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cea6c5f8bd80cf271df551539957d4c3e60378ff13dc806b3142e8e1453ab95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc482\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f4b5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:49Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:49 crc kubenswrapper[4881]: I0126 12:35:49.661034 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pmwpn" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb5ecb63-1238-44dc-9c40-b5e5dd7d4847\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\
\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPat
h\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pmwpn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:49Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:49 crc kubenswrapper[4881]: I0126 12:35:49.674368 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:49Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:49 crc kubenswrapper[4881]: I0126 12:35:49.691935 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:49Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:49 crc kubenswrapper[4881]: I0126 12:35:49.692016 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 26 12:35:49 crc kubenswrapper[4881]: I0126 12:35:49.725263 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:49 crc kubenswrapper[4881]: I0126 12:35:49.725300 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:49 crc kubenswrapper[4881]: I0126 12:35:49.725310 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:49 crc kubenswrapper[4881]: I0126 12:35:49.725326 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:35:49 crc kubenswrapper[4881]: I0126 12:35:49.725340 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:35:49Z","lastTransitionTime":"2026-01-26T12:35:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:35:49 crc kubenswrapper[4881]: I0126 12:35:49.750932 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 26 12:35:49 crc kubenswrapper[4881]: I0126 12:35:49.761310 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crn6f\" (UniqueName: \"kubernetes.io/projected/d272c950-9665-4b60-98a2-20c18d02d5a2-kube-api-access-crn6f\") pod \"ovnkube-node-kbjm9\" (UID: \"d272c950-9665-4b60-98a2-20c18d02d5a2\") " pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" Jan 26 12:35:49 crc kubenswrapper[4881]: I0126 12:35:49.801891 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:35:49 crc kubenswrapper[4881]: I0126 12:35:49.802038 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 12:35:49 crc kubenswrapper[4881]: I0126 12:35:49.802079 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:35:49 crc kubenswrapper[4881]: E0126 12:35:49.802141 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:35:53.802103728 +0000 UTC m=+26.281413784 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:35:49 crc kubenswrapper[4881]: E0126 12:35:49.802199 4881 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 12:35:49 crc kubenswrapper[4881]: I0126 12:35:49.802266 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:35:49 crc kubenswrapper[4881]: I0126 12:35:49.802328 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 12:35:49 crc kubenswrapper[4881]: E0126 12:35:49.802369 4881 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 12:35:49 crc kubenswrapper[4881]: E0126 12:35:49.802490 4881 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 12:35:49 crc kubenswrapper[4881]: E0126 12:35:49.802510 4881 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 12:35:49 crc kubenswrapper[4881]: E0126 12:35:49.802537 4881 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 12:35:49 crc kubenswrapper[4881]: E0126 12:35:49.802393 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 12:35:53.802376034 +0000 UTC m=+26.281686060 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 12:35:49 crc kubenswrapper[4881]: E0126 12:35:49.802393 4881 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 12:35:49 crc kubenswrapper[4881]: E0126 12:35:49.802576 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 12:35:53.802559979 +0000 UTC m=+26.281870035 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 12:35:49 crc kubenswrapper[4881]: E0126 12:35:49.802588 4881 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 12:35:49 crc kubenswrapper[4881]: E0126 12:35:49.802598 4881 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 12:35:49 crc kubenswrapper[4881]: E0126 12:35:49.802601 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 12:35:53.80259034 +0000 UTC m=+26.281900406 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 12:35:49 crc kubenswrapper[4881]: E0126 12:35:49.802631 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 12:35:53.802619901 +0000 UTC m=+26.281930057 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 12:35:49 crc kubenswrapper[4881]: I0126 12:35:49.827634 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:49 crc kubenswrapper[4881]: I0126 12:35:49.827667 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:49 crc kubenswrapper[4881]: I0126 12:35:49.827678 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:49 crc kubenswrapper[4881]: I0126 12:35:49.827695 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:35:49 crc kubenswrapper[4881]: I0126 12:35:49.827708 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:35:49Z","lastTransitionTime":"2026-01-26T12:35:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:35:49 crc kubenswrapper[4881]: I0126 12:35:49.874368 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 26 12:35:49 crc kubenswrapper[4881]: I0126 12:35:49.882068 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d272c950-9665-4b60-98a2-20c18d02d5a2-env-overrides\") pod \"ovnkube-node-kbjm9\" (UID: \"d272c950-9665-4b60-98a2-20c18d02d5a2\") " pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" Jan 26 12:35:49 crc kubenswrapper[4881]: E0126 12:35:49.891777 4881 configmap.go:193] Couldn't get configMap openshift-ovn-kubernetes/ovnkube-config: failed to sync configmap cache: timed out waiting for the condition Jan 26 12:35:49 crc kubenswrapper[4881]: E0126 12:35:49.891810 4881 secret.go:188] Couldn't get secret openshift-ovn-kubernetes/ovn-node-metrics-cert: failed to sync secret cache: timed out waiting for the condition Jan 26 12:35:49 crc kubenswrapper[4881]: E0126 12:35:49.891877 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d272c950-9665-4b60-98a2-20c18d02d5a2-ovnkube-config podName:d272c950-9665-4b60-98a2-20c18d02d5a2 nodeName:}" failed. No retries permitted until 2026-01-26 12:35:50.391854537 +0000 UTC m=+22.871164643 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "ovnkube-config" (UniqueName: "kubernetes.io/configmap/d272c950-9665-4b60-98a2-20c18d02d5a2-ovnkube-config") pod "ovnkube-node-kbjm9" (UID: "d272c950-9665-4b60-98a2-20c18d02d5a2") : failed to sync configmap cache: timed out waiting for the condition Jan 26 12:35:49 crc kubenswrapper[4881]: E0126 12:35:49.891902 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d272c950-9665-4b60-98a2-20c18d02d5a2-ovn-node-metrics-cert podName:d272c950-9665-4b60-98a2-20c18d02d5a2 nodeName:}" failed. 
No retries permitted until 2026-01-26 12:35:50.391890228 +0000 UTC m=+22.871200364 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "ovn-node-metrics-cert" (UniqueName: "kubernetes.io/secret/d272c950-9665-4b60-98a2-20c18d02d5a2-ovn-node-metrics-cert") pod "ovnkube-node-kbjm9" (UID: "d272c950-9665-4b60-98a2-20c18d02d5a2") : failed to sync secret cache: timed out waiting for the condition Jan 26 12:35:49 crc kubenswrapper[4881]: I0126 12:35:49.930772 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:49 crc kubenswrapper[4881]: I0126 12:35:49.930849 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:49 crc kubenswrapper[4881]: I0126 12:35:49.930863 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:49 crc kubenswrapper[4881]: I0126 12:35:49.930881 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:35:49 crc kubenswrapper[4881]: I0126 12:35:49.930898 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:35:49Z","lastTransitionTime":"2026-01-26T12:35:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:35:50 crc kubenswrapper[4881]: I0126 12:35:50.023708 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 06:09:55.173320095 +0000 UTC Jan 26 12:35:50 crc kubenswrapper[4881]: I0126 12:35:50.034006 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:50 crc kubenswrapper[4881]: I0126 12:35:50.034051 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:50 crc kubenswrapper[4881]: I0126 12:35:50.034062 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:50 crc kubenswrapper[4881]: I0126 12:35:50.034077 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:35:50 crc kubenswrapper[4881]: I0126 12:35:50.034087 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:35:50Z","lastTransitionTime":"2026-01-26T12:35:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:35:50 crc kubenswrapper[4881]: I0126 12:35:50.067528 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 26 12:35:50 crc kubenswrapper[4881]: I0126 12:35:50.081733 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 12:35:50 crc kubenswrapper[4881]: I0126 12:35:50.081787 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 12:35:50 crc kubenswrapper[4881]: I0126 12:35:50.081824 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:35:50 crc kubenswrapper[4881]: E0126 12:35:50.081890 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 12:35:50 crc kubenswrapper[4881]: E0126 12:35:50.081968 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 12:35:50 crc kubenswrapper[4881]: E0126 12:35:50.082136 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 12:35:50 crc kubenswrapper[4881]: I0126 12:35:50.096093 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 26 12:35:50 crc kubenswrapper[4881]: I0126 12:35:50.136998 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:50 crc kubenswrapper[4881]: I0126 12:35:50.137036 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:50 crc kubenswrapper[4881]: I0126 12:35:50.137045 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:50 crc kubenswrapper[4881]: I0126 12:35:50.137059 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:35:50 crc kubenswrapper[4881]: I0126 12:35:50.137072 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:35:50Z","lastTransitionTime":"2026-01-26T12:35:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:35:50 crc kubenswrapper[4881]: I0126 12:35:50.221389 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"9f87073bcd7b2be8285c037c00f3112e1f1c6ed16cd4caf9a06c1a18d7e07c33"} Jan 26 12:35:50 crc kubenswrapper[4881]: I0126 12:35:50.223202 4881 generic.go:334] "Generic (PLEG): container finished" podID="bb5ecb63-1238-44dc-9c40-b5e5dd7d4847" containerID="b54e7d904bec7e9817b662ea1c59ede6589fde79754520dfb1e9fb82864e623d" exitCode=0 Jan 26 12:35:50 crc kubenswrapper[4881]: I0126 12:35:50.223298 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-pmwpn" event={"ID":"bb5ecb63-1238-44dc-9c40-b5e5dd7d4847","Type":"ContainerDied","Data":"b54e7d904bec7e9817b662ea1c59ede6589fde79754520dfb1e9fb82864e623d"} Jan 26 12:35:50 crc kubenswrapper[4881]: E0126 12:35:50.232402 4881 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-crc\" already exists" pod="openshift-etcd/etcd-crc" Jan 26 12:35:50 crc kubenswrapper[4881]: I0126 12:35:50.234469 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f87073bcd7b2be8285c037c00f3112e1f1c6ed16cd4caf9a06c1a18d7e07c33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:50Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:50 crc kubenswrapper[4881]: I0126 12:35:50.239643 4881 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:50 crc kubenswrapper[4881]: I0126 12:35:50.239681 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:50 crc kubenswrapper[4881]: I0126 12:35:50.239689 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:50 crc kubenswrapper[4881]: I0126 12:35:50.239708 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:35:50 crc kubenswrapper[4881]: I0126 12:35:50.239717 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:35:50Z","lastTransitionTime":"2026-01-26T12:35:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:35:50 crc kubenswrapper[4881]: I0126 12:35:50.251494 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-csrkv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d24cc7d2-c2db-45ee-b405-fa56157f807c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e19b81750777bd4a9bb8f7a14d883f0ceb5aa0fc4f1a0798c98616a7fc0cef81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-va
r-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x7bhq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-csrkv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:50Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:50 crc kubenswrapper[4881]: I0126 12:35:50.267708 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d272c950-9665-4b60-98a2-20c18d02d5a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kbjm9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:50Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:50 crc kubenswrapper[4881]: I0126 12:35:50.279560 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:50Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:50 crc kubenswrapper[4881]: I0126 12:35:50.296804 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:50Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:50 crc kubenswrapper[4881]: I0126 12:35:50.309251 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79282300691fcea19d9fe7ad9c661474c26ba5e445c1a4f009961641bf59fb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b2921f0cb7ad2096499d1c6c0e5f8f37da3620c8468c9fab08b57b4545e5dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:50Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:50 crc kubenswrapper[4881]: I0126 12:35:50.318884 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f4b5v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"017c2a17-2267-4284-a0ca-d3c513aa9ff9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cea6c5f8bd80cf271df551539957d4c3e60378ff13dc806b3142e8e1453ab95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc482\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f4b5v\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:50Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:50 crc kubenswrapper[4881]: I0126 12:35:50.331666 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pmwpn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb5ecb63-1238-44dc-9c40-b5e5dd7d4847\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pmwpn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:50Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:50 crc kubenswrapper[4881]: I0126 12:35:50.343973 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:50 crc kubenswrapper[4881]: I0126 12:35:50.344012 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:50 crc kubenswrapper[4881]: I0126 12:35:50.344024 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:50 crc kubenswrapper[4881]: I0126 12:35:50.344042 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:35:50 crc kubenswrapper[4881]: I0126 12:35:50.344054 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:35:50Z","lastTransitionTime":"2026-01-26T12:35:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:35:50 crc kubenswrapper[4881]: I0126 12:35:50.347322 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02fca4ac1029e236fa1f5c8b73ec4cf6f627aba503ba27d28dd2566e2f9d332a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c6n6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e51c1ddc00bfd73fde58f6b2e82757a924047ef05aa76b74d8fcb2de4156ad9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c6n6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fwlbz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:50Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:50 crc kubenswrapper[4881]: I0126 12:35:50.361254 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee20042b-95d1-49f2-abea-e339efdb096f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce7427a811572a8f3810d59e072888d97a8f15e2c493297a25418b85e1f9f16d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86d50d955d4fa8b487f6900f78738321e21226e13b7b12c100f0dd029de2fde1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8138683a9408b1a7542f9f29a3fd0b83a06f10b000c685d9d83b72b23ba4c55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026
-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94d3f2c93382cde3d932117276753f004c58e6e52bce611119b79d9e76eb7654\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:50Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:50 crc kubenswrapper[4881]: I0126 12:35:50.379120 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b6f7953-daf2-4ba4-8c10-7de3f4ea07a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4f54285b7646c2fa5da34439d2812a5669f168356446852b9bff249107a7c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79ddc5e5c3c4b0c18219fe6a8297ff2b1646b9621823f529972781f915d81c5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://533f45172a004ad2bd598849b711b38e823485188ff4163dd0dbc5ae5a28cb24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f7c8799e63e6072f9dc0580f3efbfe330a9f70
5a689b2785c60f64391881b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c525c27678ba77145459086b6d3d2fee5d471c7393f34215e9ae46d9d72cf4e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://667695541b7d707a83aa0080723518aeae689b140791f0121a4ce6d485f39138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://667695541b7d707a83aa0080723518aeae689b140791f0121a4ce6d485f39138\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab83101bb56dc3eaf3a0514f60da45b0c5304a491184fe71584d7ed72bc6e59c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab83101bb56dc3eaf3a0514f60da45b0c5304a491184fe71584d7ed72bc6e59c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e52793db25051b2c8eec3dbeaf0547ec04ec438ce426f3dfcd2383792f846644\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e52793db25051b2c8eec3dbeaf0547ec04ec438ce426f3dfcd2383792f846644\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:50Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:50 crc kubenswrapper[4881]: I0126 12:35:50.395784 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9aa1877-c239-4157-938d-e5c85ff3e76a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e461100b4aeab2a6e88c5c3aeb0802664d0d647247bc73b107be7c6bc4cfc9cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cbb88ac26b9c279bef436acf1a1d1e431e752f2a1ee765a7541082fa8f6c99e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff94d70ea9ad07ebc50c17d2a14b318e9b553bf1ded1e15429e9b318903d4d66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47b841bd998f47d2aa05f556a43ac6f14f630207aff7305cd0440f0b0642cc49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1728a0431ccafdf8aa83b6956348a8a6a41dd94e80f65cb469ef09a86078154\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 12:35:40.710806 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 12:35:40.716794 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1303051152/tls.crt::/tmp/serving-cert-1303051152/tls.key\\\\\\\"\\\\nI0126 12:35:46.328409 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 12:35:46.330270 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 12:35:46.330287 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 12:35:46.330308 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 12:35:46.330314 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 12:35:46.335670 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 12:35:46.335689 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 12:35:46.335693 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 12:35:46.335697 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 12:35:46.335700 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 12:35:46.335702 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 12:35:46.335705 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 12:35:46.335779 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 12:35:46.339578 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3de2d8eff73741321cce91c7bfdcab04498cfdb4694e713750466bb2d8b95e5d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f09815b7bc1e9f3afdee67bb736d35ea1c23567f631bc5fd4fcab013ab191647\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f09815b7bc1e9f3afdee67bb736d35ea1c23567f631bc5fd4fcab013ab191647\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:50Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:50 crc kubenswrapper[4881]: I0126 12:35:50.408544 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8116d483190921916638bd630fb718840fdc36eb9522aecb7ef0e9bfebe9bb7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:50Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:50 crc kubenswrapper[4881]: I0126 12:35:50.409034 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d272c950-9665-4b60-98a2-20c18d02d5a2-ovnkube-config\") pod \"ovnkube-node-kbjm9\" (UID: \"d272c950-9665-4b60-98a2-20c18d02d5a2\") " pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" Jan 26 12:35:50 crc kubenswrapper[4881]: I0126 12:35:50.409109 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d272c950-9665-4b60-98a2-20c18d02d5a2-ovn-node-metrics-cert\") pod \"ovnkube-node-kbjm9\" (UID: \"d272c950-9665-4b60-98a2-20c18d02d5a2\") " pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" Jan 26 12:35:50 crc kubenswrapper[4881]: I0126 12:35:50.410060 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d272c950-9665-4b60-98a2-20c18d02d5a2-ovnkube-config\") pod \"ovnkube-node-kbjm9\" (UID: \"d272c950-9665-4b60-98a2-20c18d02d5a2\") " pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" Jan 26 12:35:50 crc kubenswrapper[4881]: I0126 12:35:50.414342 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d272c950-9665-4b60-98a2-20c18d02d5a2-ovn-node-metrics-cert\") pod \"ovnkube-node-kbjm9\" 
(UID: \"d272c950-9665-4b60-98a2-20c18d02d5a2\") " pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" Jan 26 12:35:50 crc kubenswrapper[4881]: I0126 12:35:50.420992 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:50Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:50 crc kubenswrapper[4881]: I0126 12:35:50.431731 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02fca4ac1029e236fa1f5c8b73ec4cf6f627aba503ba27d28dd2566e2f9d332a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c6n6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e51c1ddc00bfd73fde58f6b2e82757a924047ef05aa76b74d8fcb2de4156ad9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c6n6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fwlbz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:50Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:50 crc kubenswrapper[4881]: I0126 12:35:50.447026 4881 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:50 crc kubenswrapper[4881]: I0126 12:35:50.447076 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:50 crc kubenswrapper[4881]: I0126 12:35:50.447097 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:50 crc kubenswrapper[4881]: I0126 12:35:50.447116 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:35:50 crc kubenswrapper[4881]: I0126 12:35:50.447128 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:35:50Z","lastTransitionTime":"2026-01-26T12:35:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:35:50 crc kubenswrapper[4881]: I0126 12:35:50.447647 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:50Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:50 crc kubenswrapper[4881]: I0126 12:35:50.492583 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" Jan 26 12:35:50 crc kubenswrapper[4881]: I0126 12:35:50.498710 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee20042b-95d1-49f2-abea-e339efdb096f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce7427a811572a8f3810d59e072888d97a8f15e2c493297a25418b85e1f9f16d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86d50d955d4fa8b487f6900f78738321e21226e13b7b12c100f0dd029de2fde1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\
\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8138683a9408b1a7542f9f29a3fd0b83a06f10b000c685d9d83b72b23ba4c55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94d3f2c93382cde3d932117276753f004c58e6e52bce611119b79d9e76eb7654\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:50Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:50 crc kubenswrapper[4881]: W0126 12:35:50.506886 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd272c950_9665_4b60_98a2_20c18d02d5a2.slice/crio-6d18357b4d8f7dd19f67e03cfd005ebef61ac08d9898c2f87bd04958383c210d WatchSource:0}: Error finding container 6d18357b4d8f7dd19f67e03cfd005ebef61ac08d9898c2f87bd04958383c210d: Status 404 returned error can't find the container with id 6d18357b4d8f7dd19f67e03cfd005ebef61ac08d9898c2f87bd04958383c210d Jan 26 12:35:50 crc kubenswrapper[4881]: I0126 12:35:50.535988 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b6f7953-daf2-4ba4-8c10-7de3f4ea07a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4f54285b7646c2fa5da34439d2812a5669f168356446852b9bff249107a7c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79ddc5e5c3c4b0c18219fe6a8297ff2b1646b9621823f529972781f915d81c5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://533f45172a004ad2bd598849b711b38e823485188ff4163dd0dbc5ae5a28cb24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f7c8799e63e6072f9dc0580f3efbfe330a9f70
5a689b2785c60f64391881b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c525c27678ba77145459086b6d3d2fee5d471c7393f34215e9ae46d9d72cf4e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://667695541b7d707a83aa0080723518aeae689b140791f0121a4ce6d485f39138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://667695541b7d707a83aa0080723518aeae689b140791f0121a4ce6d485f39138\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab83101bb56dc3eaf3a0514f60da45b0c5304a491184fe71584d7ed72bc6e59c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab83101bb56dc3eaf3a0514f60da45b0c5304a491184fe71584d7ed72bc6e59c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e52793db25051b2c8eec3dbeaf0547ec04ec438ce426f3dfcd2383792f846644\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e52793db25051b2c8eec3dbeaf0547ec04ec438ce426f3dfcd2383792f846644\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:50Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:50 crc kubenswrapper[4881]: I0126 12:35:50.549146 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:50 crc kubenswrapper[4881]: I0126 12:35:50.549189 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:50 crc kubenswrapper[4881]: I0126 12:35:50.549198 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:50 crc kubenswrapper[4881]: I0126 12:35:50.549213 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:35:50 crc kubenswrapper[4881]: I0126 12:35:50.549223 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:35:50Z","lastTransitionTime":"2026-01-26T12:35:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:35:50 crc kubenswrapper[4881]: I0126 12:35:50.560479 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9aa1877-c239-4157-938d-e5c85ff3e76a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e461100b4aeab2a6e88c5c3aeb0802664d0d647247bc73b107be7c6bc4cfc9cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cbb88ac26b9c279bef436acf1a1d1e431e752f2a1ee765a7541082fa8f6c99e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff94d70ea9ad07ebc50c17d2a14b318e9b553bf1ded1e15429e9b318903d4d66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47b841bd998f47d2aa05f556a43ac6f14f630207aff7305cd0440f0b0642cc49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1728a0431ccafdf8aa83b6956348a8a6a41dd94e80f65cb469ef09a86078154\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 12:35:40.710806 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 12:35:40.716794 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1303051152/tls.crt::/tmp/serving-cert-1303051152/tls.key\\\\\\\"\\\\nI0126 12:35:46.328409 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 12:35:46.330270 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 12:35:46.330287 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 12:35:46.330308 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 12:35:46.330314 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 12:35:46.335670 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 12:35:46.335689 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 12:35:46.335693 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 12:35:46.335697 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 12:35:46.335700 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 12:35:46.335702 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 12:35:46.335705 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 12:35:46.335779 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 12:35:46.339578 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3de2d8eff73741321cce91c7bfdcab04498cfdb4694e713750466bb2d8b95e5d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f09815b7bc1e9f3afdee67bb736d35ea1c23567f631bc5fd4fcab013ab191647\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f09815b7bc1e9f3afdee67bb736d35ea1c23567f631bc5fd4fcab013ab191647\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:50Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:50 crc kubenswrapper[4881]: I0126 12:35:50.601199 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8116d483190921916638bd630fb718840fdc36eb9522aecb7ef0e9bfebe9bb7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:50Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:50 crc kubenswrapper[4881]: I0126 12:35:50.646990 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d272c950-9665-4b60-98a2-20c18d02d5a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kbjm9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:50Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:50 crc kubenswrapper[4881]: 
I0126 12:35:50.652219 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:50 crc kubenswrapper[4881]: I0126 12:35:50.652267 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:50 crc kubenswrapper[4881]: I0126 12:35:50.652279 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:50 crc kubenswrapper[4881]: I0126 12:35:50.652296 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:35:50 crc kubenswrapper[4881]: I0126 12:35:50.652309 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:35:50Z","lastTransitionTime":"2026-01-26T12:35:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:35:50 crc kubenswrapper[4881]: I0126 12:35:50.680130 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f87073bcd7b2be8285c037c00f3112e1f1c6ed16cd4caf9a06c1a18d7e07c33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:50Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:50 crc kubenswrapper[4881]: I0126 12:35:50.721630 4881 status_manager.go:875] "Failed to update status 
for pod" pod="openshift-multus/multus-csrkv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d24cc7d2-c2db-45ee-b405-fa56157f807c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e19b81750777bd4a9bb8f7a14d883f0ceb5aa0fc4f1a0798c98616a7fc0cef81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x7bhq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:48Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-csrkv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:50Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:50 crc kubenswrapper[4881]: I0126 12:35:50.754654 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:50 crc kubenswrapper[4881]: I0126 12:35:50.754703 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:50 crc kubenswrapper[4881]: I0126 12:35:50.754728 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:50 crc kubenswrapper[4881]: I0126 12:35:50.754744 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:35:50 crc kubenswrapper[4881]: I0126 12:35:50.754756 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:35:50Z","lastTransitionTime":"2026-01-26T12:35:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:35:50 crc kubenswrapper[4881]: I0126 12:35:50.762682 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pmwpn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb5ecb63-1238-44dc-9c40-b5e5dd7d4847\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b54e7d904bec7e9817b662ea1c59ede6589fde79754520dfb1e9fb82864e623d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b54e7d904bec7e9817b662ea1c59ede6589fde79754520dfb1e9fb82864e623d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pmwpn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:50Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:50 crc 
kubenswrapper[4881]: I0126 12:35:50.802128 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:50Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:50 crc kubenswrapper[4881]: I0126 12:35:50.842696 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:50Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:50 crc kubenswrapper[4881]: I0126 12:35:50.857085 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:50 crc kubenswrapper[4881]: I0126 12:35:50.857155 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:50 crc kubenswrapper[4881]: I0126 12:35:50.857168 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:50 crc kubenswrapper[4881]: I0126 12:35:50.857192 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:35:50 crc kubenswrapper[4881]: I0126 12:35:50.857207 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:35:50Z","lastTransitionTime":"2026-01-26T12:35:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:35:50 crc kubenswrapper[4881]: I0126 12:35:50.882644 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79282300691fcea19d9fe7ad9c661474c26ba5e445c1a4f009961641bf59fb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b2921f0cb7ad2096499d1c6c0e5f8f37da3620c8468c9fab08b57b4545e5dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:50Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:50 crc kubenswrapper[4881]: I0126 12:35:50.921132 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f4b5v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"017c2a17-2267-4284-a0ca-d3c513aa9ff9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cea6c5f8bd80cf271df551539957d4c3e60378ff13dc806b3142e8e1453ab95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc482\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f4b5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:50Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:50 crc kubenswrapper[4881]: I0126 12:35:50.960971 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:50 crc kubenswrapper[4881]: I0126 12:35:50.961015 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:50 crc kubenswrapper[4881]: I0126 12:35:50.961025 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:50 crc kubenswrapper[4881]: I0126 12:35:50.961044 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:35:50 crc kubenswrapper[4881]: I0126 12:35:50.961054 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:35:50Z","lastTransitionTime":"2026-01-26T12:35:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 12:35:51 crc kubenswrapper[4881]: I0126 12:35:51.025084 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 20:32:05.063831104 +0000 UTC
Jan 26 12:35:51 crc kubenswrapper[4881]: I0126 12:35:51.064317 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 12:35:51 crc kubenswrapper[4881]: I0126 12:35:51.064369 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 12:35:51 crc kubenswrapper[4881]: I0126 12:35:51.064380 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 12:35:51 crc kubenswrapper[4881]: I0126 12:35:51.064400 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 12:35:51 crc kubenswrapper[4881]: I0126 12:35:51.064410 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:35:51Z","lastTransitionTime":"2026-01-26T12:35:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 12:35:51 crc kubenswrapper[4881]: I0126 12:35:51.166948 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 12:35:51 crc kubenswrapper[4881]: I0126 12:35:51.167352 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 12:35:51 crc kubenswrapper[4881]: I0126 12:35:51.167361 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 12:35:51 crc kubenswrapper[4881]: I0126 12:35:51.167378 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 12:35:51 crc kubenswrapper[4881]: I0126 12:35:51.167390 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:35:51Z","lastTransitionTime":"2026-01-26T12:35:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Jan 26 12:35:51 crc kubenswrapper[4881]: I0126 12:35:51.229683 4881 generic.go:334] "Generic (PLEG): container finished" podID="bb5ecb63-1238-44dc-9c40-b5e5dd7d4847" containerID="e58ff1e13a164f50c9664f463b50cbf651ce8e5edac8cbff9a9e42626d8d39ab" exitCode=0 Jan 26 12:35:51 crc kubenswrapper[4881]: I0126 12:35:51.229792 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-pmwpn" event={"ID":"bb5ecb63-1238-44dc-9c40-b5e5dd7d4847","Type":"ContainerDied","Data":"e58ff1e13a164f50c9664f463b50cbf651ce8e5edac8cbff9a9e42626d8d39ab"} Jan 26 12:35:51 crc kubenswrapper[4881]: I0126 12:35:51.231827 4881 generic.go:334] "Generic (PLEG): container finished" podID="d272c950-9665-4b60-98a2-20c18d02d5a2" containerID="0995c1855fbad9dc3799727d7456b17ef9cd900ed1f4241bde85718099ede875" exitCode=0 Jan 26 12:35:51 crc kubenswrapper[4881]: I0126 12:35:51.231915 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" event={"ID":"d272c950-9665-4b60-98a2-20c18d02d5a2","Type":"ContainerDied","Data":"0995c1855fbad9dc3799727d7456b17ef9cd900ed1f4241bde85718099ede875"} Jan 26 12:35:51 crc kubenswrapper[4881]: I0126 12:35:51.231984 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" event={"ID":"d272c950-9665-4b60-98a2-20c18d02d5a2","Type":"ContainerStarted","Data":"6d18357b4d8f7dd19f67e03cfd005ebef61ac08d9898c2f87bd04958383c210d"} Jan 26 12:35:51 crc kubenswrapper[4881]: I0126 12:35:51.242279 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f87073bcd7b2be8285c037c00f3112e1f1c6ed16cd4caf9a06c1a18d7e07c33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:51Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:51 crc kubenswrapper[4881]: I0126 12:35:51.253989 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-csrkv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d24cc7d2-c2db-45ee-b405-fa56157f807c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e19b81750777bd4a9bb8f7a14d883f0ceb5aa0fc4f1a0798c98616a7fc0cef81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access
-x7bhq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-csrkv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:51Z is after 2025-08-24T17:21:41Z"
Jan 26 12:35:51 crc kubenswrapper[4881]: I0126 12:35:51.270283 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 12:35:51 crc kubenswrapper[4881]: I0126 12:35:51.270332 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 12:35:51 crc kubenswrapper[4881]: I0126 12:35:51.270346 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 12:35:51 crc kubenswrapper[4881]: I0126 12:35:51.270365 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 12:35:51 crc kubenswrapper[4881]: I0126 12:35:51.270379 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:35:51Z","lastTransitionTime":"2026-01-26T12:35:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Jan 26 12:35:51 crc kubenswrapper[4881]: I0126 12:35:51.278444 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d272c950-9665-4b60-98a2-20c18d02d5a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"
mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.12
6.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kbjm9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:51Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:51 crc kubenswrapper[4881]: I0126 12:35:51.293255 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:51Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:51 crc kubenswrapper[4881]: I0126 12:35:51.307234 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:51Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:51 crc kubenswrapper[4881]: I0126 12:35:51.324459 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79282300691fcea19d9fe7ad9c661474c26ba5e445c1a4f009961641bf59fb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b2921f0cb7ad2096499d1c6c0e5f8f37da3620c8468c9fab08b57b4545e5dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:51Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:51 crc kubenswrapper[4881]: I0126 12:35:51.335917 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f4b5v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"017c2a17-2267-4284-a0ca-d3c513aa9ff9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cea6c5f8bd80cf271df551539957d4c3e60378ff13dc806b3142e8e1453ab95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc482\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f4b5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:51Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:51 crc kubenswrapper[4881]: I0126 12:35:51.351484 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pmwpn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb5ecb63-1238-44dc-9c40-b5e5dd7d4847\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b54e7d904bec7e9817b662ea1c59ede6589fde79754520dfb1e9fb82864e623d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b54e7d904bec7e9817b662ea1c59ede6589fde79754520dfb1e9fb82864e623d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e58ff1e13a164f50c9664f463b50cbf651ce8e5edac8cbff9a9e42626d8d39ab\\\",\\\"image\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e58ff1e13a164f50c9664f463b50cbf651ce8e5edac8cbff9a9e42626d8d39ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"moun
tPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pmwpn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:51Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:51 crc kubenswrapper[4881]: I0126 12:35:51.364589 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02fca4ac1029e236fa1f5c8b73ec4cf6f627aba503ba27d28dd2566e2f9d332a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c6n6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e51c1ddc00bfd73fde58f6b2e82757a924047ef05aa76b74d8fcb2de4156ad9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c6n6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fwlbz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:51Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:51 crc kubenswrapper[4881]: I0126 12:35:51.372377 4881 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:51 crc kubenswrapper[4881]: I0126 12:35:51.372408 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:51 crc kubenswrapper[4881]: I0126 12:35:51.372419 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:51 crc kubenswrapper[4881]: I0126 12:35:51.372435 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:35:51 crc kubenswrapper[4881]: I0126 12:35:51.372446 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:35:51Z","lastTransitionTime":"2026-01-26T12:35:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:35:51 crc kubenswrapper[4881]: I0126 12:35:51.379540 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee20042b-95d1-49f2-abea-e339efdb096f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce7427a811572a8f3810d59e072888d97a8f15e2c493297a25418b85e1f9f16d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86d50d955d4fa8b487f6900f78738321e21226e13b7b12c100f0dd029de2fde1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\
\\"startedAt\\\":\\\"2026-01-26T12:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8138683a9408b1a7542f9f29a3fd0b83a06f10b000c685d9d83b72b23ba4c55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94d3f2c93382cde3d932117276753f004c58e6e52bce611119b79d9e76eb7654\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:51Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:51 crc kubenswrapper[4881]: I0126 12:35:51.400379 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b6f7953-daf2-4ba4-8c10-7de3f4ea07a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4f54285b7646c2fa5da34439d2812a5669f168356446852b9bff249107a7c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79ddc5e5c3c4b0c18219fe6a8297ff2b1646b9621823f529972781f915d81c5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://533f45172a004ad2bd598849b711b38e823485188ff4163dd0dbc5ae5a28cb24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f7c8799e63e6072f9dc0580f3efbfe330a9f70
5a689b2785c60f64391881b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c525c27678ba77145459086b6d3d2fee5d471c7393f34215e9ae46d9d72cf4e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://667695541b7d707a83aa0080723518aeae689b140791f0121a4ce6d485f39138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://667695541b7d707a83aa0080723518aeae689b140791f0121a4ce6d485f39138\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab83101bb56dc3eaf3a0514f60da45b0c5304a491184fe71584d7ed72bc6e59c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab83101bb56dc3eaf3a0514f60da45b0c5304a491184fe71584d7ed72bc6e59c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e52793db25051b2c8eec3dbeaf0547ec04ec438ce426f3dfcd2383792f846644\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e52793db25051b2c8eec3dbeaf0547ec04ec438ce426f3dfcd2383792f846644\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:51Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:51 crc kubenswrapper[4881]: I0126 12:35:51.413659 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9aa1877-c239-4157-938d-e5c85ff3e76a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e461100b4aeab2a6e88c5c3aeb0802664d0d647247bc73b107be7c6bc4cfc9cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cbb88ac26b9c279bef436acf1a1d1e431e752f2a1ee765a7541082fa8f6c99e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff94d70ea9ad07ebc50c17d2a14b318e9b553bf1ded1e15429e9b318903d4d66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47b841bd998f47d2aa05f556a43ac6f14f630207aff7305cd0440f0b0642cc49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1728a0431ccafdf8aa83b6956348a8a6a41dd94e80f65cb469ef09a86078154\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 12:35:40.710806 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 12:35:40.716794 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1303051152/tls.crt::/tmp/serving-cert-1303051152/tls.key\\\\\\\"\\\\nI0126 12:35:46.328409 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 12:35:46.330270 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 12:35:46.330287 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 12:35:46.330308 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 12:35:46.330314 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 12:35:46.335670 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 12:35:46.335689 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 12:35:46.335693 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 12:35:46.335697 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 12:35:46.335700 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 12:35:46.335702 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 12:35:46.335705 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 12:35:46.335779 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 12:35:46.339578 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3de2d8eff73741321cce91c7bfdcab04498cfdb4694e713750466bb2d8b95e5d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f09815b7bc1e9f3afdee67bb736d35ea1c23567f631bc5fd4fcab013ab191647\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f09815b7bc1e9f3afdee67bb736d35ea1c23567f631bc5fd4fcab013ab191647\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:51Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:51 crc kubenswrapper[4881]: I0126 12:35:51.442448 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8116d483190921916638bd630fb718840fdc36eb9522aecb7ef0e9bfebe9bb7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:51Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:51 crc kubenswrapper[4881]: I0126 12:35:51.475110 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:51 crc kubenswrapper[4881]: I0126 12:35:51.475435 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:51 crc kubenswrapper[4881]: I0126 12:35:51.475715 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:51 crc kubenswrapper[4881]: I0126 12:35:51.475859 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:35:51 crc kubenswrapper[4881]: I0126 12:35:51.475975 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:35:51Z","lastTransitionTime":"2026-01-26T12:35:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:35:51 crc kubenswrapper[4881]: I0126 12:35:51.481850 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:51Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:51 crc kubenswrapper[4881]: I0126 12:35:51.527206 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b6f7953-daf2-4ba4-8c10-7de3f4ea07a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4f54285b7646c2fa5da34439d2812a5669f168356446852b9bff249107a7c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79ddc5e5c3c4b0c18219fe6a8297ff2b1646b9621823f529972781f915d81c5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://533f45172a004ad2bd598849b711b38e823485188ff4163dd0dbc5ae5a28cb24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f7c8799e63e6072f9dc0580f3efbfe330a9f70
5a689b2785c60f64391881b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c525c27678ba77145459086b6d3d2fee5d471c7393f34215e9ae46d9d72cf4e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://667695541b7d707a83aa0080723518aeae689b140791f0121a4ce6d485f39138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://667695541b7d707a83aa0080723518aeae689b140791f0121a4ce6d485f39138\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab83101bb56dc3eaf3a0514f60da45b0c5304a491184fe71584d7ed72bc6e59c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab83101bb56dc3eaf3a0514f60da45b0c5304a491184fe71584d7ed72bc6e59c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e52793db25051b2c8eec3dbeaf0547ec04ec438ce426f3dfcd2383792f846644\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e52793db25051b2c8eec3dbeaf0547ec04ec438ce426f3dfcd2383792f846644\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:51Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:51 crc kubenswrapper[4881]: I0126 12:35:51.562128 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9aa1877-c239-4157-938d-e5c85ff3e76a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e461100b4aeab2a6e88c5c3aeb0802664d0d647247bc73b107be7c6bc4cfc9cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cbb88ac26b9c279bef436acf1a1d1e431e752f2a1ee765a7541082fa8f6c99e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff94d70ea9ad07ebc50c17d2a14b318e9b553bf1ded1e15429e9b318903d4d66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47b841bd998f47d2aa05f556a43ac6f14f630207aff7305cd0440f0b0642cc49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1728a0431ccafdf8aa83b6956348a8a6a41dd94e80f65cb469ef09a86078154\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 12:35:40.710806 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 12:35:40.716794 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1303051152/tls.crt::/tmp/serving-cert-1303051152/tls.key\\\\\\\"\\\\nI0126 12:35:46.328409 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 12:35:46.330270 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 12:35:46.330287 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 12:35:46.330308 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 12:35:46.330314 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 12:35:46.335670 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 12:35:46.335689 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 12:35:46.335693 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 12:35:46.335697 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 12:35:46.335700 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 12:35:46.335702 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 12:35:46.335705 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 12:35:46.335779 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 12:35:46.339578 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3de2d8eff73741321cce91c7bfdcab04498cfdb4694e713750466bb2d8b95e5d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f09815b7bc1e9f3afdee67bb736d35ea1c23567f631bc5fd4fcab013ab191647\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f09815b7bc1e9f3afdee67bb736d35ea1c23567f631bc5fd4fcab013ab191647\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:51Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:51 crc kubenswrapper[4881]: I0126 12:35:51.579156 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:51 crc kubenswrapper[4881]: I0126 12:35:51.579223 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:51 crc kubenswrapper[4881]: I0126 12:35:51.579247 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:51 crc kubenswrapper[4881]: I0126 12:35:51.579280 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:35:51 crc kubenswrapper[4881]: I0126 12:35:51.579301 4881 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:35:51Z","lastTransitionTime":"2026-01-26T12:35:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:35:51 crc kubenswrapper[4881]: I0126 12:35:51.605986 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8116d483190921916638bd630fb718840fdc36eb9522aecb7ef0e9bfebe9bb7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:51Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:51 crc kubenswrapper[4881]: I0126 12:35:51.651099 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:51Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:51 crc kubenswrapper[4881]: I0126 12:35:51.681868 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:51 crc kubenswrapper[4881]: I0126 12:35:51.681907 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:51 crc kubenswrapper[4881]: I0126 12:35:51.681916 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:51 crc kubenswrapper[4881]: I0126 12:35:51.681930 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:35:51 crc kubenswrapper[4881]: I0126 12:35:51.681950 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:35:51Z","lastTransitionTime":"2026-01-26T12:35:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:35:51 crc kubenswrapper[4881]: I0126 12:35:51.684282 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee20042b-95d1-49f2-abea-e339efdb096f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce7427a811572a8f3810d59e072888d97a8f15e2c493297a25418b85e1f9f16d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86d50d955d4fa8b487f6900f78738321e21226e13b7b12c100f0dd029de2fde1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8138683a9408b1a7542f9f29a3fd0b83a06f10b000c685d9d83b72b23ba4c55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94d3f2c93382cde3d932117276753f004c58e6e52bce611119b79d9e76eb7654\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:51Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:51 crc kubenswrapper[4881]: I0126 12:35:51.715872 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-tvrtr"] Jan 26 12:35:51 crc kubenswrapper[4881]: I0126 12:35:51.716227 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-tvrtr" Jan 26 12:35:51 crc kubenswrapper[4881]: I0126 12:35:51.724436 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f87073bcd7b2be8285c037c00f3112e1f1c6ed16cd4caf9a06c1a18d7e07c33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:51Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:51 crc kubenswrapper[4881]: I0126 12:35:51.751967 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 26 12:35:51 crc kubenswrapper[4881]: I0126 12:35:51.752309 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 26 12:35:51 crc kubenswrapper[4881]: I0126 12:35:51.772159 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 26 12:35:51 crc kubenswrapper[4881]: I0126 12:35:51.785851 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:51 crc kubenswrapper[4881]: I0126 12:35:51.785898 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:51 crc kubenswrapper[4881]: I0126 12:35:51.785909 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:51 crc kubenswrapper[4881]: I0126 12:35:51.785925 4881 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeNotReady" Jan 26 12:35:51 crc kubenswrapper[4881]: I0126 12:35:51.785938 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:35:51Z","lastTransitionTime":"2026-01-26T12:35:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:35:51 crc kubenswrapper[4881]: I0126 12:35:51.792368 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 26 12:35:51 crc kubenswrapper[4881]: I0126 12:35:51.824852 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/62ba5262-b5d8-4c86-85db-0993c88afc38-host\") pod \"node-ca-tvrtr\" (UID: \"62ba5262-b5d8-4c86-85db-0993c88afc38\") " pod="openshift-image-registry/node-ca-tvrtr" Jan 26 12:35:51 crc kubenswrapper[4881]: I0126 12:35:51.824901 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/62ba5262-b5d8-4c86-85db-0993c88afc38-serviceca\") pod \"node-ca-tvrtr\" (UID: \"62ba5262-b5d8-4c86-85db-0993c88afc38\") " pod="openshift-image-registry/node-ca-tvrtr" Jan 26 12:35:51 crc kubenswrapper[4881]: I0126 12:35:51.824934 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sj9qc\" (UniqueName: \"kubernetes.io/projected/62ba5262-b5d8-4c86-85db-0993c88afc38-kube-api-access-sj9qc\") pod \"node-ca-tvrtr\" (UID: \"62ba5262-b5d8-4c86-85db-0993c88afc38\") " pod="openshift-image-registry/node-ca-tvrtr" Jan 26 12:35:51 crc kubenswrapper[4881]: I0126 12:35:51.842322 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-csrkv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d24cc7d2-c2db-45ee-b405-fa56157f807c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e19b81750777bd4a9bb8f7a14d883f0ceb5aa0fc4f1a0798c98616a7fc0cef81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x7bhq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-csrkv\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:51Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:51 crc kubenswrapper[4881]: I0126 12:35:51.888279 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:51 crc kubenswrapper[4881]: I0126 12:35:51.888320 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:51 crc kubenswrapper[4881]: I0126 12:35:51.888330 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:51 crc kubenswrapper[4881]: I0126 12:35:51.888357 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:35:51 crc kubenswrapper[4881]: I0126 12:35:51.888369 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:35:51Z","lastTransitionTime":"2026-01-26T12:35:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:35:51 crc kubenswrapper[4881]: I0126 12:35:51.889851 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d272c950-9665-4b60-98a2-20c18d02d5a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0995c1855fbad9dc3799727d7456b17ef9cd900ed1f4241bde85718099ede875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0995c1855fbad9dc3799727d7456b17ef9cd900ed1f4241bde85718099ede875\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kbjm9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:51Z 
is after 2025-08-24T17:21:41Z" Jan 26 12:35:51 crc kubenswrapper[4881]: I0126 12:35:51.921795 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:51Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:51 crc kubenswrapper[4881]: I0126 12:35:51.926092 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/62ba5262-b5d8-4c86-85db-0993c88afc38-host\") pod \"node-ca-tvrtr\" (UID: \"62ba5262-b5d8-4c86-85db-0993c88afc38\") " pod="openshift-image-registry/node-ca-tvrtr" Jan 26 12:35:51 crc kubenswrapper[4881]: I0126 12:35:51.926120 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/62ba5262-b5d8-4c86-85db-0993c88afc38-serviceca\") pod \"node-ca-tvrtr\" (UID: \"62ba5262-b5d8-4c86-85db-0993c88afc38\") " pod="openshift-image-registry/node-ca-tvrtr" Jan 26 12:35:51 crc kubenswrapper[4881]: I0126 12:35:51.926139 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sj9qc\" (UniqueName: \"kubernetes.io/projected/62ba5262-b5d8-4c86-85db-0993c88afc38-kube-api-access-sj9qc\") pod \"node-ca-tvrtr\" (UID: \"62ba5262-b5d8-4c86-85db-0993c88afc38\") " pod="openshift-image-registry/node-ca-tvrtr" Jan 26 12:35:51 crc 
kubenswrapper[4881]: I0126 12:35:51.926313 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/62ba5262-b5d8-4c86-85db-0993c88afc38-host\") pod \"node-ca-tvrtr\" (UID: \"62ba5262-b5d8-4c86-85db-0993c88afc38\") " pod="openshift-image-registry/node-ca-tvrtr" Jan 26 12:35:51 crc kubenswrapper[4881]: I0126 12:35:51.927172 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/62ba5262-b5d8-4c86-85db-0993c88afc38-serviceca\") pod \"node-ca-tvrtr\" (UID: \"62ba5262-b5d8-4c86-85db-0993c88afc38\") " pod="openshift-image-registry/node-ca-tvrtr" Jan 26 12:35:51 crc kubenswrapper[4881]: I0126 12:35:51.969869 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sj9qc\" (UniqueName: \"kubernetes.io/projected/62ba5262-b5d8-4c86-85db-0993c88afc38-kube-api-access-sj9qc\") pod \"node-ca-tvrtr\" (UID: \"62ba5262-b5d8-4c86-85db-0993c88afc38\") " pod="openshift-image-registry/node-ca-tvrtr" Jan 26 12:35:51 crc kubenswrapper[4881]: I0126 12:35:51.981441 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79282300691fcea19d9fe7ad9c661474c26ba5e445c1a4f009961641bf59fb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b2921f0cb7ad2096499d1c6c0e5f8f37da3620c8468c9fab08b57b4545e5dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webho
ok-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:51Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:51 crc kubenswrapper[4881]: I0126 12:35:51.990973 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:51 crc kubenswrapper[4881]: I0126 12:35:51.991025 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:51 crc kubenswrapper[4881]: I0126 12:35:51.991038 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:51 crc kubenswrapper[4881]: I0126 12:35:51.991058 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:35:51 crc kubenswrapper[4881]: I0126 12:35:51.991073 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:35:51Z","lastTransitionTime":"2026-01-26T12:35:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:35:52 crc kubenswrapper[4881]: I0126 12:35:52.018317 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f4b5v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"017c2a17-2267-4284-a0ca-d3c513aa9ff9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cea6c5f8bd80cf271df551539957d4c3e60378ff13dc806b3142e8e1453ab95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc482\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f4b5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:52Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:52 crc kubenswrapper[4881]: I0126 12:35:52.025401 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 05:40:30.792409769 +0000 UTC Jan 26 12:35:52 crc kubenswrapper[4881]: I0126 12:35:52.062854 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pmwpn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb5ecb63-1238-44dc-9c40-b5e5dd7d4847\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b54e7d904bec7e9817b662ea1c59ede6589fde79754520dfb1e9fb82864e623d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b54e7d904bec7e9817b662ea1c59ede6589fde79754520dfb1e9fb82864e623d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e58ff1e13a164f50c9664f463b50cbf651ce8e5edac8cbff9a9e42626d8d39ab\\\",\\\"image\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e58ff1e13a164f50c9664f463b50cbf651ce8e5edac8cbff9a9e42626d8d39ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"moun
tPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pmwpn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:52Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:52 crc kubenswrapper[4881]: I0126 12:35:52.081879 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:35:52 crc kubenswrapper[4881]: I0126 12:35:52.081942 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 12:35:52 crc kubenswrapper[4881]: I0126 12:35:52.081947 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 12:35:52 crc kubenswrapper[4881]: E0126 12:35:52.082038 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 12:35:52 crc kubenswrapper[4881]: E0126 12:35:52.082231 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 12:35:52 crc kubenswrapper[4881]: E0126 12:35:52.082335 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 12:35:52 crc kubenswrapper[4881]: I0126 12:35:52.093966 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:52 crc kubenswrapper[4881]: I0126 12:35:52.094028 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:52 crc kubenswrapper[4881]: I0126 12:35:52.094044 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:52 crc kubenswrapper[4881]: I0126 12:35:52.094064 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:35:52 crc kubenswrapper[4881]: I0126 12:35:52.094078 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:35:52Z","lastTransitionTime":"2026-01-26T12:35:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:35:52 crc kubenswrapper[4881]: I0126 12:35:52.101114 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:52Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:52 crc kubenswrapper[4881]: I0126 12:35:52.135984 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-tvrtr" Jan 26 12:35:52 crc kubenswrapper[4881]: I0126 12:35:52.149119 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02fca4ac1029e236fa1f5c8b73ec4cf6f627aba503ba27d28dd2566e2f9d332a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c6n6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e51c1ddc00bfd73fde58f6b2e82757a924047ef05aa76b74d8fcb2de4156ad9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c6n6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fwlbz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:52Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:52 crc kubenswrapper[4881]: W0126 12:35:52.149879 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod62ba5262_b5d8_4c86_85db_0993c88afc38.slice/crio-801df7697a7af22c28e01975a139e03572a6512c1da1e9a7428ac513fd51bf1b WatchSource:0}: Error finding container 801df7697a7af22c28e01975a139e03572a6512c1da1e9a7428ac513fd51bf1b: Status 404 returned error can't find the container with id 801df7697a7af22c28e01975a139e03572a6512c1da1e9a7428ac513fd51bf1b Jan 26 12:35:52 crc kubenswrapper[4881]: I0126 12:35:52.181072 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
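
Every kubenswrapper line in this capture follows the klog format: severity letter, MMDD timestamp, pid, source file:line, then the message. The W0126 manager.go:1169 warning above is a typical transient example (cadvisor's cgroup watch fired before the new node-ca container was fully registered, hence the 404). A best-effort splitter for lines of this shape; the regex is a sketch tuned to this log, not a general journald parser:

    // klogline.go - splits one kubenswrapper journal line into its klog
    // parts (severity, timestamp, pid, source, message).
    package main

    import (
    	"fmt"
    	"regexp"
    )

    var re = regexp.MustCompile(`^([IWE])(\d{4} \d{2}:\d{2}:\d{2}\.\d+)\s+(\d+)\s+([\w.]+:\d+)\]\s+(.*)$`)

    func main() {
    	line := `W0126 12:35:52.149879 4881 manager.go:1169] Failed to process watch event`
    	m := re.FindStringSubmatch(line)
    	if m == nil {
    		fmt.Println("no match")
    		return
    	}
    	fmt.Printf("severity=%s time=%s pid=%s source=%s msg=%q\n", m[1], m[2], m[3], m[4], m[5])
    }
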
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:52Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:52 crc kubenswrapper[4881]: I0126 12:35:52.196177 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:52 crc kubenswrapper[4881]: I0126 12:35:52.196226 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:52 crc kubenswrapper[4881]: I0126 12:35:52.196242 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:52 crc kubenswrapper[4881]: I0126 12:35:52.196259 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:35:52 crc kubenswrapper[4881]: I0126 12:35:52.196272 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:35:52Z","lastTransitionTime":"2026-01-26T12:35:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:35:52 crc kubenswrapper[4881]: I0126 12:35:52.224400 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79282300691fcea19d9fe7ad9c661474c26ba5e445c1a4f009961641bf59fb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b2921f0cb7ad2096499d1c6c0e5f8f37da3620c8468c9fab08b57b4545e5dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:52Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:52 crc kubenswrapper[4881]: I0126 12:35:52.238008 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" 
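
The "SyncLoop (PLEG)" entries that follow are the pod lifecycle event generator relaying container state changes from CRI-O back into the kubelet sync loop; each event payload carries the pod UID, an event type, and the container or sandbox ID it refers to. A decoder matching the JSON shape shown in the log:

    // plegevent.go - decodes one of the PLEG event payloads below; field
    // names (ID, Type, Data) follow the JSON printed in the log.
    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    type plegEvent struct {
    	ID   string // pod UID
    	Type string // e.g. ContainerStarted, ContainerDied
    	Data string // container or sandbox ID
    }

    func main() {
    	raw := `{"ID":"d272c950-9665-4b60-98a2-20c18d02d5a2","Type":"ContainerStarted","Data":"54beaa25a2174c9f18bf5be7fba4c314e374e2ea05eaee1c5b62dfd7351138f5"}`
    	var e plegEvent
    	if err := json.Unmarshal([]byte(raw), &e); err != nil {
    		panic(err)
    	}
    	fmt.Printf("pod=%s type=%s data=%s\n", e.ID, e.Type, e.Data)
    }
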
event={"ID":"d272c950-9665-4b60-98a2-20c18d02d5a2","Type":"ContainerStarted","Data":"54beaa25a2174c9f18bf5be7fba4c314e374e2ea05eaee1c5b62dfd7351138f5"} Jan 26 12:35:52 crc kubenswrapper[4881]: I0126 12:35:52.238054 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" event={"ID":"d272c950-9665-4b60-98a2-20c18d02d5a2","Type":"ContainerStarted","Data":"525d6dcbf02a6ff1670c1cfff7ea9769a3e6e99d4cbc9f67b4ab9a4838f85355"} Jan 26 12:35:52 crc kubenswrapper[4881]: I0126 12:35:52.238068 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" event={"ID":"d272c950-9665-4b60-98a2-20c18d02d5a2","Type":"ContainerStarted","Data":"87e87c69a0fa95e071cd3453794764480d9ed943e2d6dbe7d08e03e711be8de2"} Jan 26 12:35:52 crc kubenswrapper[4881]: I0126 12:35:52.238103 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" event={"ID":"d272c950-9665-4b60-98a2-20c18d02d5a2","Type":"ContainerStarted","Data":"6fd4c549d12c308cae9727fdb35eec136e95b94990bfca2e3c2baf3cf5724ae3"} Jan 26 12:35:52 crc kubenswrapper[4881]: I0126 12:35:52.239012 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-tvrtr" event={"ID":"62ba5262-b5d8-4c86-85db-0993c88afc38","Type":"ContainerStarted","Data":"801df7697a7af22c28e01975a139e03572a6512c1da1e9a7428ac513fd51bf1b"} Jan 26 12:35:52 crc kubenswrapper[4881]: I0126 12:35:52.241579 4881 generic.go:334] "Generic (PLEG): container finished" podID="bb5ecb63-1238-44dc-9c40-b5e5dd7d4847" containerID="9fca59d7df432a78a62142cd5e47d6a1a62b2c23ff7eec49d6297aacf30b530a" exitCode=0 Jan 26 12:35:52 crc kubenswrapper[4881]: I0126 12:35:52.241621 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-pmwpn" event={"ID":"bb5ecb63-1238-44dc-9c40-b5e5dd7d4847","Type":"ContainerDied","Data":"9fca59d7df432a78a62142cd5e47d6a1a62b2c23ff7eec49d6297aacf30b530a"} Jan 26 12:35:52 crc kubenswrapper[4881]: I0126 12:35:52.259533 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f4b5v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"017c2a17-2267-4284-a0ca-d3c513aa9ff9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cea6c5f8bd80cf271df551539957d4c3e60378ff13dc806b3142e8e1453ab95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc482\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f4b5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:52Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:52 crc kubenswrapper[4881]: I0126 12:35:52.298638 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:52 crc kubenswrapper[4881]: I0126 12:35:52.299235 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:52 crc kubenswrapper[4881]: I0126 12:35:52.299439 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:52 crc kubenswrapper[4881]: I0126 12:35:52.299688 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:35:52 crc kubenswrapper[4881]: I0126 12:35:52.299897 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:35:52Z","lastTransitionTime":"2026-01-26T12:35:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:35:52 crc kubenswrapper[4881]: I0126 12:35:52.307025 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pmwpn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb5ecb63-1238-44dc-9c40-b5e5dd7d4847\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b54e7d904bec7e9817b662ea1c59ede6589fde79754520dfb1e9fb82864e623d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b54e7d904bec7e9817b662ea1c59ede6589fde79754520dfb1e9fb82864e623d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e58ff1e13a164f50c9664f463b50cbf651ce8e5edac8cbff9a9e42626d8d39ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e58ff1e13a164f50c9664f463b50cbf651ce8e5edac8cbff9a9e42626d8d39ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pmwpn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:52Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:52 crc kubenswrapper[4881]: I0126 12:35:52.340890 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tvrtr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"62ba5262-b5d8-4c86-85db-0993c88afc38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:51Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj9qc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tvrtr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:52Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:52 crc kubenswrapper[4881]: I0126 12:35:52.380870 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:52Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:52 crc kubenswrapper[4881]: I0126 12:35:52.402274 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:52 crc kubenswrapper[4881]: I0126 12:35:52.402325 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:52 crc kubenswrapper[4881]: I0126 12:35:52.402336 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:52 crc kubenswrapper[4881]: I0126 12:35:52.402354 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:35:52 crc kubenswrapper[4881]: I0126 12:35:52.402366 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:35:52Z","lastTransitionTime":"2026-01-26T12:35:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:35:52 crc kubenswrapper[4881]: I0126 12:35:52.419497 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02fca4ac1029e236fa1f5c8b73ec4cf6f627aba503ba27d28dd2566e2f9d332a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c6n6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e51c1ddc00bfd73fde58f6b2e82757a924047ef05aa76b74d8fcb2de4156ad9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c6n6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fwlbz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:52Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:52 crc kubenswrapper[4881]: I0126 12:35:52.495525 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b6f7953-daf2-4ba4-8c10-7de3f4ea07a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4f54285b7646c2fa5da34439d2812a5669f168356446852b9bff249107a7c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79ddc5e5c3c4b0c18219fe6a8297ff2b1646b9621823f529972781f915d81c5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://533f45172a004ad2bd598849b711b38e823485188ff4163dd0dbc5ae5a28cb24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"re
startCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f7c8799e63e6072f9dc0580f3efbfe330a9f705a689b2785c60f64391881b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c525c27678ba77145459086b6d3d2fee5d471c7393f34215e9ae46d9d72cf4e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://667695541b7d707a83aa0080723518aeae689b140791f0121a4ce6d485f39138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://667695541b7d707a83aa0080723518aeae689b140791f0121a4ce6d485f39138\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab83101bb56dc3eaf3a0514f60da45b0c5304a491184fe71584d7ed72bc6e59c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state
\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab83101bb56dc3eaf3a0514f60da45b0c5304a491184fe71584d7ed72bc6e59c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e52793db25051b2c8eec3dbeaf0547ec04ec438ce426f3dfcd2383792f846644\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e52793db25051b2c8eec3dbeaf0547ec04ec438ce426f3dfcd2383792f846644\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:52Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:52 crc kubenswrapper[4881]: I0126 12:35:52.504553 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:52 crc kubenswrapper[4881]: I0126 12:35:52.504577 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:52 crc kubenswrapper[4881]: I0126 12:35:52.504586 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:52 crc kubenswrapper[4881]: I0126 12:35:52.504600 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:35:52 crc kubenswrapper[4881]: I0126 12:35:52.504609 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:35:52Z","lastTransitionTime":"2026-01-26T12:35:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:35:52 crc kubenswrapper[4881]: I0126 12:35:52.526241 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9aa1877-c239-4157-938d-e5c85ff3e76a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e461100b4aeab2a6e88c5c3aeb0802664d0d647247bc73b107be7c6bc4cfc9cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cbb88ac26b9c279bef436acf1a1d1e431e752f2a1ee765a7541082fa8f6c99e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff94d70ea9ad07ebc50c17d2a14b318e9b553bf1ded1e15429e9b318903d4d66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47b841bd998f47d2aa05f556a43ac6f14f630207aff7305cd0440f0b0642cc49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1728a0431ccafdf8aa83b6956348a8a6a41dd94e80f65cb469ef09a86078154\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 12:35:40.710806 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 12:35:40.716794 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1303051152/tls.crt::/tmp/serving-cert-1303051152/tls.key\\\\\\\"\\\\nI0126 12:35:46.328409 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 12:35:46.330270 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 12:35:46.330287 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 12:35:46.330308 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 12:35:46.330314 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 12:35:46.335670 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 12:35:46.335689 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 12:35:46.335693 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 12:35:46.335697 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 12:35:46.335700 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 12:35:46.335702 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 12:35:46.335705 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 12:35:46.335779 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 12:35:46.339578 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3de2d8eff73741321cce91c7bfdcab04498cfdb4694e713750466bb2d8b95e5d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f09815b7bc1e9f3afdee67bb736d35ea1c23567f631bc5fd4fcab013ab191647\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f09815b7bc1e9f3afdee67bb736d35ea1c23567f631bc5fd4fcab013ab191647\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:52Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:52 crc kubenswrapper[4881]: I0126 12:35:52.543060 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8116d483190921916638bd630fb718840fdc36eb9522aecb7ef0e9bfebe9bb7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:52Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:52 crc kubenswrapper[4881]: I0126 12:35:52.579751 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:52Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:52 crc kubenswrapper[4881]: I0126 12:35:52.607364 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:52 crc kubenswrapper[4881]: I0126 12:35:52.607393 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:52 crc kubenswrapper[4881]: I0126 12:35:52.607402 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:52 crc kubenswrapper[4881]: I0126 12:35:52.607419 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:35:52 crc kubenswrapper[4881]: I0126 12:35:52.607432 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:35:52Z","lastTransitionTime":"2026-01-26T12:35:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:35:52 crc kubenswrapper[4881]: I0126 12:35:52.621230 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee20042b-95d1-49f2-abea-e339efdb096f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce7427a811572a8f3810d59e072888d97a8f15e2c493297a25418b85e1f9f16d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86d50d955d4fa8b487f6900f78738321e21226e13b7b12c100f0dd029de2fde1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8138683a9408b1a7542f9f29a3fd0b83a06f10b000c685d9d83b72b23ba4c55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94d3f2c93382cde3d932117276753f004c58e6e52bce611119b79d9e76eb7654\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:52Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:52 crc kubenswrapper[4881]: I0126 12:35:52.658677 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f87073bcd7b2be8285c037c00f3112e1f1c6ed16cd4caf9a06c1a18d7e07c33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:52Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:52 crc kubenswrapper[4881]: I0126 12:35:52.713489 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:52 crc kubenswrapper[4881]: I0126 12:35:52.713701 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:52 crc kubenswrapper[4881]: I0126 12:35:52.713761 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:52 crc kubenswrapper[4881]: I0126 12:35:52.713817 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:35:52 crc kubenswrapper[4881]: I0126 12:35:52.713869 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:35:52Z","lastTransitionTime":"2026-01-26T12:35:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:35:52 crc kubenswrapper[4881]: I0126 12:35:52.719307 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-csrkv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d24cc7d2-c2db-45ee-b405-fa56157f807c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e19b81750777bd4a9bb8f7a14d883f0ceb5aa0fc4f1a0798c98616a7fc0cef81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir
\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x7bhq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-csrkv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:52Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:52 crc kubenswrapper[4881]: I0126 12:35:52.754709 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d272c950-9665-4b60-98a2-20c18d02d5a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0995c1855fbad9dc3799727d7456b17ef9cd900ed1f4241bde85718099ede875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0995c1855fbad9dc3799727d7456b17ef9cd900ed1f4241bde85718099ede875\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kbjm9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:52Z 
is after 2025-08-24T17:21:41Z" Jan 26 12:35:52 crc kubenswrapper[4881]: I0126 12:35:52.780606 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee20042b-95d1-49f2-abea-e339efdb096f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce7427a811572a8f3810d59e072888d97a8f15e2c493297a25418b85e1f9f16d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86d50d955d4fa8b487f6900f78738321e21226e13b7b12c100f0dd029de2fde1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8138683a9408b1a7542f9f29a3fd0b83a06f10b000c685d9d83b72b23ba4c55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\
\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94d3f2c93382cde3d932117276753f004c58e6e52bce611119b79d9e76eb7654\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:52Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:52 crc kubenswrapper[4881]: I0126 12:35:52.816332 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:52 crc kubenswrapper[4881]: I0126 12:35:52.816379 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:52 crc kubenswrapper[4881]: I0126 12:35:52.816394 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:52 crc kubenswrapper[4881]: I0126 12:35:52.816412 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:35:52 crc kubenswrapper[4881]: I0126 12:35:52.816441 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:35:52Z","lastTransitionTime":"2026-01-26T12:35:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:35:52 crc kubenswrapper[4881]: I0126 12:35:52.825764 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b6f7953-daf2-4ba4-8c10-7de3f4ea07a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4f54285b7646c2fa5da34439d2812a5669f168356446852b9bff249107a7c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79ddc5e5c3c4b0c18219fe6a8297ff2b1646b9621823f529972781f915d81c5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://533f45172a004ad2bd598849b711b38e823485188ff4163dd0dbc5ae5a28cb24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f7c8799e63e6072f9dc0580f3efbfe330a9f705a689b2785c60f64391881b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c525c27678ba77145459086b6d3d2fee5d471c7393f34215e9ae46d9d72cf4e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://667695541b7d707a83aa0080723518aeae689b140791f0121a4ce6d485f39138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://667695541b7d707a83aa0080723518aeae689b140791f0121a4ce6d485f39138\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab83101bb56dc3eaf3a0514f60da45b0c5304a491184fe71584d7ed72bc6e59c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab83101bb56dc3eaf3a0514f60da45b0c5304a491184fe71584d7ed72bc6e59c\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e52793db25051b2c8eec3dbeaf0547ec04ec438ce426f3dfcd2383792f846644\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e52793db25051b2c8eec3dbeaf0547ec04ec438ce426f3dfcd2383792f846644\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:52Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:52 crc kubenswrapper[4881]: I0126 12:35:52.863332 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9aa1877-c239-4157-938d-e5c85ff3e76a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e461100b4aeab2a6e88c5c3aeb0802664d0d647247bc73b107be7c6bc4cfc9cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cbb88ac26b9c279bef436acf1a1d1e431e752f2a1ee765a7541082fa8f6c99e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff94d70ea9ad07ebc50c17d2a14b318e9b553bf1ded1e15429e9b318903d4d66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47b841bd998f47d2aa05f556a43ac6f14f630207aff7305cd0440f0b0642cc49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1728a0431ccafdf8aa83b6956348a8a6a41dd94e80f65cb469ef09a86078154\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 12:35:40.710806 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 12:35:40.716794 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1303051152/tls.crt::/tmp/serving-cert-1303051152/tls.key\\\\\\\"\\\\nI0126 12:35:46.328409 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 12:35:46.330270 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 12:35:46.330287 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 12:35:46.330308 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 12:35:46.330314 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 12:35:46.335670 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 12:35:46.335689 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 12:35:46.335693 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 12:35:46.335697 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 12:35:46.335700 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 12:35:46.335702 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 12:35:46.335705 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 12:35:46.335779 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 12:35:46.339578 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3de2d8eff73741321cce91c7bfdcab04498cfdb4694e713750466bb2d8b95e5d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f09815b7bc1e9f3afdee67bb736d35ea1c23567f631bc5fd4fcab013ab191647\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f09815b7bc1e9f3afdee67bb736d35ea1c23567f631bc5fd4fcab013ab191647\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:52Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:52 crc kubenswrapper[4881]: I0126 12:35:52.904942 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8116d483190921916638bd630fb718840fdc36eb9522aecb7ef0e9bfebe9bb7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:52Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:52 crc kubenswrapper[4881]: I0126 12:35:52.918868 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:52 crc kubenswrapper[4881]: I0126 12:35:52.918905 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:52 crc kubenswrapper[4881]: I0126 12:35:52.918913 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:52 crc kubenswrapper[4881]: I0126 12:35:52.918927 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:35:52 crc kubenswrapper[4881]: I0126 12:35:52.918937 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:35:52Z","lastTransitionTime":"2026-01-26T12:35:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:35:52 crc kubenswrapper[4881]: I0126 12:35:52.943652 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:52Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:52 crc kubenswrapper[4881]: I0126 12:35:52.982726 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f87073bcd7b2be8285c037c00f3112e1f1c6ed16cd4caf9a06c1a18d7e07c33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:52Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:53 crc kubenswrapper[4881]: I0126 12:35:53.022074 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:53 crc kubenswrapper[4881]: I0126 12:35:53.022136 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:53 crc kubenswrapper[4881]: I0126 12:35:53.022145 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:53 crc kubenswrapper[4881]: I0126 12:35:53.022165 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:35:53 crc kubenswrapper[4881]: I0126 12:35:53.022178 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:35:53Z","lastTransitionTime":"2026-01-26T12:35:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:35:53 crc kubenswrapper[4881]: I0126 12:35:53.023413 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-csrkv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d24cc7d2-c2db-45ee-b405-fa56157f807c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e19b81750777bd4a9bb8f7a14d883f0ceb5aa0fc4f1a0798c98616a7fc0cef81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x7bhq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-csrkv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:53Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:53 crc kubenswrapper[4881]: I0126 12:35:53.025510 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 10:54:11.377956414 +0000 UTC Jan 26 12:35:53 crc kubenswrapper[4881]: I0126 12:35:53.076079 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d272c950-9665-4b60-98a2-20c18d02d5a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0995c1855fbad9dc3799727d7456b17ef9cd900ed1f4241bde85718099ede875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0995c1855fbad9dc3799727d7456b17ef9cd900ed1f4241bde85718099ede875\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kbjm9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:53Z 
is after 2025-08-24T17:21:41Z" Jan 26 12:35:53 crc kubenswrapper[4881]: I0126 12:35:53.102770 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:53Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:53 crc kubenswrapper[4881]: I0126 12:35:53.124482 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:53 crc kubenswrapper[4881]: I0126 12:35:53.124554 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:53 crc kubenswrapper[4881]: I0126 12:35:53.124575 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:53 crc kubenswrapper[4881]: I0126 12:35:53.124601 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:35:53 crc kubenswrapper[4881]: I0126 12:35:53.124618 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:35:53Z","lastTransitionTime":"2026-01-26T12:35:53Z","reason":"KubeletNotReady","message":"container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:35:53 crc kubenswrapper[4881]: I0126 12:35:53.143239 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:53Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:53 crc kubenswrapper[4881]: I0126 12:35:53.180913 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79282300691fcea19d9fe7ad9c661474c26ba5e445c1a4f009961641bf59fb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b2921f0cb7ad2096499d1c6c0e5f8f37da3620c8468c9fab08b57b4545e5dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:53Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:53 crc kubenswrapper[4881]: I0126 12:35:53.223078 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f4b5v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"017c2a17-2267-4284-a0ca-d3c513aa9ff9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cea6c5f8bd80cf271df551539957d4c3e60378ff13dc806b3142e8e1453ab95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc482\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f4b5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:53Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:53 crc kubenswrapper[4881]: I0126 12:35:53.226962 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:53 crc kubenswrapper[4881]: I0126 12:35:53.226991 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:53 crc kubenswrapper[4881]: I0126 12:35:53.227002 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:53 crc kubenswrapper[4881]: I0126 12:35:53.227022 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:35:53 crc kubenswrapper[4881]: I0126 12:35:53.227035 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:35:53Z","lastTransitionTime":"2026-01-26T12:35:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:35:53 crc kubenswrapper[4881]: I0126 12:35:53.247083 4881 generic.go:334] "Generic (PLEG): container finished" podID="bb5ecb63-1238-44dc-9c40-b5e5dd7d4847" containerID="7d5b95e27428d9ca0382054957b1cc19e96fe8bb5622af5f7ebebebec53d2add" exitCode=0 Jan 26 12:35:53 crc kubenswrapper[4881]: I0126 12:35:53.247131 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-pmwpn" event={"ID":"bb5ecb63-1238-44dc-9c40-b5e5dd7d4847","Type":"ContainerDied","Data":"7d5b95e27428d9ca0382054957b1cc19e96fe8bb5622af5f7ebebebec53d2add"} Jan 26 12:35:53 crc kubenswrapper[4881]: I0126 12:35:53.254028 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-tvrtr" event={"ID":"62ba5262-b5d8-4c86-85db-0993c88afc38","Type":"ContainerStarted","Data":"5d6070117a94d139d5fc78fee4b38e1c75537675f27a663dcb2d6c8be285e9b4"} Jan 26 12:35:53 crc kubenswrapper[4881]: I0126 12:35:53.260304 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" event={"ID":"d272c950-9665-4b60-98a2-20c18d02d5a2","Type":"ContainerStarted","Data":"11ae8e0538c02d437ba2c7a5f53004fb5c5b7bf6c232f75b6bd737d2ec4f6a24"} Jan 26 12:35:53 crc kubenswrapper[4881]: I0126 12:35:53.260355 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" event={"ID":"d272c950-9665-4b60-98a2-20c18d02d5a2","Type":"ContainerStarted","Data":"4526fe39791d5692af1a573249bcc2f73f2ede55766ea580867dbb009d247ddf"} Jan 26 12:35:53 crc kubenswrapper[4881]: I0126 12:35:53.267162 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pmwpn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb5ecb63-1238-44dc-9c40-b5e5dd7d4847\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b54e7d904bec7e9817b662ea1c59ede6589fde79754520dfb1e9fb82864e623d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b54e7d904bec7e9817b662ea1c59ede6589fde79754520dfb1e9fb82864e623d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e58ff1e13a164f50c9664f463b50cbf651ce8e5edac8cbff9a9e42626d8d39ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e58ff1e13a164f50c9664f463b50cbf651ce8e5edac8cbff9a9e42626d8d39ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fca59d7df432a78a62142cd5e47d6a1a62b2c23ff7eec49d6297aacf30b530a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fca59d7df432a78a62142cd5e47d6a1a62b2c23ff7eec49d6297aacf30b530a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pmwpn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:53Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:53 crc kubenswrapper[4881]: I0126 12:35:53.301251 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tvrtr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"62ba5262-b5d8-4c86-85db-0993c88afc38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:51Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:51Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj9qc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tvrtr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:53Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:53 
crc kubenswrapper[4881]: I0126 12:35:53.329978 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:53 crc kubenswrapper[4881]: I0126 12:35:53.330011 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:53 crc kubenswrapper[4881]: I0126 12:35:53.330019 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:53 crc kubenswrapper[4881]: I0126 12:35:53.330034 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:35:53 crc kubenswrapper[4881]: I0126 12:35:53.330043 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:35:53Z","lastTransitionTime":"2026-01-26T12:35:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:35:53 crc kubenswrapper[4881]: I0126 12:35:53.339017 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02fca4ac1029e236fa1f5c8b73ec4cf6f627aba503ba27d28dd2566e2f9d332a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c6n6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e51c1ddc00bfd73fde58f6b2e82757a924047ef05aa76b74d8fcb2de4156ad9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\
\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c6n6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fwlbz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:53Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:53 crc kubenswrapper[4881]: I0126 12:35:53.381039 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9aa1877-c239-4157-938d-e5c85ff3e76a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e461100b4aeab2a6e88c5c3aeb0802664d0d647247bc73b107be7c6bc4cfc9cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cbb88ac26b9c279bef436acf1a1d1e431e752f2a1ee765a7541082fa8f6c99e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff94d70ea9ad07ebc50c17d2a14b318e9b553bf1ded1e15429e9b318903d4d66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47b841bd998f47d2aa05f556a43ac6f14f630207aff7305cd0440f0b0642cc49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1728a0431ccafdf8aa83b6956348a8a6a41dd94e80f65cb469ef09a86078154\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 12:35:40.710806 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 12:35:40.716794 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1303051152/tls.crt::/tmp/serving-cert-1303051152/tls.key\\\\\\\"\\\\nI0126 12:35:46.328409 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 12:35:46.330270 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 12:35:46.330287 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 12:35:46.330308 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 12:35:46.330314 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 12:35:46.335670 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 12:35:46.335689 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 12:35:46.335693 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 12:35:46.335697 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 12:35:46.335700 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 12:35:46.335702 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 12:35:46.335705 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 12:35:46.335779 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 12:35:46.339578 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3de2d8eff73741321cce91c7bfdcab04498cfdb4694e713750466bb2d8b95e5d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f09815b7bc1e9f3afdee67bb736d35ea1c23567f631bc5fd4fcab013ab191647\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f09815b7bc1e9f3afdee67bb736d35ea1c23567f631bc5fd4fcab013ab191647\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:53Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:53 crc kubenswrapper[4881]: I0126 12:35:53.421618 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8116d483190921916638bd630fb718840fdc36eb9522aecb7ef0e9bfebe9bb7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:53Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:53 crc kubenswrapper[4881]: I0126 12:35:53.433277 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:53 crc kubenswrapper[4881]: I0126 12:35:53.433308 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:53 crc kubenswrapper[4881]: I0126 12:35:53.433317 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:53 crc kubenswrapper[4881]: I0126 12:35:53.433329 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:35:53 crc kubenswrapper[4881]: I0126 12:35:53.433338 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:35:53Z","lastTransitionTime":"2026-01-26T12:35:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:35:53 crc kubenswrapper[4881]: I0126 12:35:53.459986 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:53Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:53 crc kubenswrapper[4881]: I0126 12:35:53.502424 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee20042b-95d1-49f2-abea-e339efdb096f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce7427a811572a8f3810d59e072888d97a8f15e2c493297a25418b85e1f9f16d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86d50d955d4fa8b487f6900f78738321e21226e13b7b12c100f0dd029de2fde1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8138683a9408b1a7542f9f29a3fd0b83a06f10b000c685d9d83b72b23ba4c55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94d3f2c93382cde3d932117276753f004c58e6e52bce611119b79d9e76eb7654\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:53Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:53 crc kubenswrapper[4881]: I0126 12:35:53.535623 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:53 crc kubenswrapper[4881]: I0126 12:35:53.535664 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:53 crc kubenswrapper[4881]: I0126 12:35:53.535674 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:53 crc kubenswrapper[4881]: I0126 12:35:53.535688 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:35:53 crc kubenswrapper[4881]: I0126 12:35:53.535697 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:35:53Z","lastTransitionTime":"2026-01-26T12:35:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:35:53 crc kubenswrapper[4881]: I0126 12:35:53.551172 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b6f7953-daf2-4ba4-8c10-7de3f4ea07a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4f54285b7646c2fa5da34439d2812a5669f168356446852b9bff249107a7c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79ddc5e5c3c4b0c18219fe6a8297ff2b1646b9621823f529972781f915d81c5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://533f45172a004ad2bd598849b711b38e823485188ff4163dd0dbc5ae5a28cb24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f7c8799e63e6072f9dc0580f3efbfe330a9f705a689b2785c60f64391881b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c525c27678ba77145459086b6d3d2fee5d471c7393f34215e9ae46d9d72cf4e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://667695541b7d707a83aa0080723518aeae689b140791f0121a4ce6d485f39138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://667695541b7d707a83aa0080723518aeae689b140791f0121a4ce6d485f39138\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab83101bb56dc3eaf3a0514f60da45b0c5304a491184fe71584d7ed72bc6e59c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab83101bb56dc3eaf3a0514f60da45b0c5304a491184fe71584d7ed72bc6e59c\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e52793db25051b2c8eec3dbeaf0547ec04ec438ce426f3dfcd2383792f846644\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e52793db25051b2c8eec3dbeaf0547ec04ec438ce426f3dfcd2383792f846644\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:53Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:53 crc kubenswrapper[4881]: I0126 12:35:53.583876 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f87073bcd7b2be8285c037c00f3112e1f1c6ed16cd4caf9a06c1a18d7e07c33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:53Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:53 crc kubenswrapper[4881]: I0126 12:35:53.622361 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-csrkv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d24cc7d2-c2db-45ee-b405-fa56157f807c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e19b81750777bd4a9bb8f7a14d883f0ceb5aa0fc4f1a0798c98616a7fc0cef81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x7bhq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-csrkv\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:53Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:53 crc kubenswrapper[4881]: I0126 12:35:53.637963 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:53 crc kubenswrapper[4881]: I0126 12:35:53.638012 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:53 crc kubenswrapper[4881]: I0126 12:35:53.638045 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:53 crc kubenswrapper[4881]: I0126 12:35:53.638070 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:35:53 crc kubenswrapper[4881]: I0126 12:35:53.638083 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:35:53Z","lastTransitionTime":"2026-01-26T12:35:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:35:53 crc kubenswrapper[4881]: I0126 12:35:53.673643 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d272c950-9665-4b60-98a2-20c18d02d5a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0995c1855fbad9dc3799727d7456b17ef9cd900ed1f4241bde85718099ede875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0995c1855fbad9dc3799727d7456b17ef9cd900ed1f4241bde85718099ede875\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kbjm9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:53Z 
is after 2025-08-24T17:21:41Z" Jan 26 12:35:53 crc kubenswrapper[4881]: I0126 12:35:53.704491 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79282300691fcea19d9fe7ad9c661474c26ba5e445c1a4f009961641bf59fb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b2921f0cb7ad2096499d1c6c0e5f8f37da3620c8468c9fab08b57b4545e5dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:53Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:53 crc kubenswrapper[4881]: I0126 12:35:53.739277 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f4b5v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"017c2a17-2267-4284-a0ca-d3c513aa9ff9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cea6c5f8bd80cf271df551539957d4c3e60378ff13dc806b3142e8e1453ab95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc482\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f4b5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:53Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:53 crc kubenswrapper[4881]: I0126 12:35:53.741343 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:53 crc kubenswrapper[4881]: I0126 12:35:53.741388 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:53 crc kubenswrapper[4881]: I0126 12:35:53.741404 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:53 crc kubenswrapper[4881]: I0126 12:35:53.741429 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:35:53 crc kubenswrapper[4881]: I0126 12:35:53.741446 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:35:53Z","lastTransitionTime":"2026-01-26T12:35:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:35:53 crc kubenswrapper[4881]: I0126 12:35:53.783067 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pmwpn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb5ecb63-1238-44dc-9c40-b5e5dd7d4847\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b54e7d904bec7e9817b662ea1c59ede6589fde79754520dfb1e9fb82864e623d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b54e7d904bec7e9817b662ea1c59ede6589fde79754520dfb1e9fb82864e623d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/
run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e58ff1e13a164f50c9664f463b50cbf651ce8e5edac8cbff9a9e42626d8d39ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e58ff1e13a164f50c9664f463b50cbf651ce8e5edac8cbff9a9e42626d8d39ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fca59d7df432a78a62142cd5e47d6a1a62b2c23ff7eec49d6297aacf30b530a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fca59d7df432a78a62142cd5e47d6a1a62b2c23ff7eec49d6297aacf30b530a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d5b95e27428d9ca0382054957b1cc19e96fe8bb5622af5f7ebebebec53d2add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d5b95e27428d9ca0382054957b1cc19e96fe8bb5622af5f7ebebebec53d2add\\\"
,\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pmwpn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:53Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:53 crc kubenswrapper[4881]: I0126 12:35:53.819173 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tvrtr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"62ba5262-b5d8-4c86-85db-0993c88afc38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d6070117a94d139d5fc78fee4b38e1c75537675f27a663dcb2d6c8be285e9b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj9qc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tvrtr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:53Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:53 crc kubenswrapper[4881]: I0126 12:35:53.842843 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:35:53 crc kubenswrapper[4881]: I0126 12:35:53.842966 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 12:35:53 crc kubenswrapper[4881]: I0126 12:35:53.843015 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: 
\"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 12:35:53 crc kubenswrapper[4881]: I0126 12:35:53.843053 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:35:53 crc kubenswrapper[4881]: E0126 12:35:53.843077 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:36:01.843058959 +0000 UTC m=+34.322368985 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:35:53 crc kubenswrapper[4881]: I0126 12:35:53.843100 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:35:53 crc kubenswrapper[4881]: E0126 12:35:53.843162 4881 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 12:35:53 crc kubenswrapper[4881]: E0126 12:35:53.843167 4881 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 12:35:53 crc kubenswrapper[4881]: E0126 12:35:53.843191 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 12:36:01.843184902 +0000 UTC m=+34.322494928 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 12:35:53 crc kubenswrapper[4881]: E0126 12:35:53.843207 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 12:36:01.843195872 +0000 UTC m=+34.322505908 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 12:35:53 crc kubenswrapper[4881]: E0126 12:35:53.843339 4881 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 12:35:53 crc kubenswrapper[4881]: E0126 12:35:53.843355 4881 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 12:35:53 crc kubenswrapper[4881]: E0126 12:35:53.843414 4881 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 12:35:53 crc kubenswrapper[4881]: E0126 12:35:53.843437 4881 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 12:35:53 crc kubenswrapper[4881]: E0126 12:35:53.843385 4881 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 12:35:53 crc kubenswrapper[4881]: E0126 12:35:53.843585 4881 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 12:35:53 crc kubenswrapper[4881]: E0126 12:35:53.843591 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 12:36:01.843564132 +0000 UTC m=+34.322874198 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 12:35:53 crc kubenswrapper[4881]: E0126 12:35:53.843687 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 12:36:01.843657714 +0000 UTC m=+34.322967830 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 12:35:53 crc kubenswrapper[4881]: I0126 12:35:53.844295 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:53 crc kubenswrapper[4881]: I0126 12:35:53.844344 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:53 crc kubenswrapper[4881]: I0126 12:35:53.844361 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:53 crc kubenswrapper[4881]: I0126 12:35:53.844382 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:35:53 crc kubenswrapper[4881]: I0126 12:35:53.844397 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:35:53Z","lastTransitionTime":"2026-01-26T12:35:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:35:53 crc kubenswrapper[4881]: I0126 12:35:53.864977 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:53Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:53 crc kubenswrapper[4881]: I0126 12:35:53.904169 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:53Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:53 crc kubenswrapper[4881]: I0126 12:35:53.940412 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02fca4ac1029e236fa1f5c8b73ec4cf6f627aba503ba27d28dd2566e2f9d332a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c6n6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e51c1ddc00bfd73fde58f6b2e82757a924047ef05aa76b74d8fcb2de4156ad9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c6n6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fwlbz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:53Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:53 crc kubenswrapper[4881]: I0126 12:35:53.946862 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:53 crc kubenswrapper[4881]: I0126 12:35:53.947089 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:53 crc kubenswrapper[4881]: I0126 12:35:53.947102 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:53 crc kubenswrapper[4881]: I0126 12:35:53.947126 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:35:53 crc kubenswrapper[4881]: I0126 12:35:53.947152 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:35:53Z","lastTransitionTime":"2026-01-26T12:35:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:35:54 crc kubenswrapper[4881]: I0126 12:35:54.026053 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 07:50:51.073277949 +0000 UTC Jan 26 12:35:54 crc kubenswrapper[4881]: I0126 12:35:54.049100 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:54 crc kubenswrapper[4881]: I0126 12:35:54.049139 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:54 crc kubenswrapper[4881]: I0126 12:35:54.049151 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:54 crc kubenswrapper[4881]: I0126 12:35:54.049167 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:35:54 crc kubenswrapper[4881]: I0126 12:35:54.049178 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:35:54Z","lastTransitionTime":"2026-01-26T12:35:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:35:54 crc kubenswrapper[4881]: I0126 12:35:54.082122 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 12:35:54 crc kubenswrapper[4881]: I0126 12:35:54.082131 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 12:35:54 crc kubenswrapper[4881]: E0126 12:35:54.082294 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 12:35:54 crc kubenswrapper[4881]: E0126 12:35:54.082394 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 12:35:54 crc kubenswrapper[4881]: I0126 12:35:54.082131 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:35:54 crc kubenswrapper[4881]: E0126 12:35:54.082537 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 12:35:54 crc kubenswrapper[4881]: I0126 12:35:54.151453 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:54 crc kubenswrapper[4881]: I0126 12:35:54.151509 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:54 crc kubenswrapper[4881]: I0126 12:35:54.151547 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:54 crc kubenswrapper[4881]: I0126 12:35:54.151570 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:35:54 crc kubenswrapper[4881]: I0126 12:35:54.151583 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:35:54Z","lastTransitionTime":"2026-01-26T12:35:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:35:54 crc kubenswrapper[4881]: I0126 12:35:54.253782 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:54 crc kubenswrapper[4881]: I0126 12:35:54.253833 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:54 crc kubenswrapper[4881]: I0126 12:35:54.253851 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:54 crc kubenswrapper[4881]: I0126 12:35:54.253876 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:35:54 crc kubenswrapper[4881]: I0126 12:35:54.253894 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:35:54Z","lastTransitionTime":"2026-01-26T12:35:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:35:54 crc kubenswrapper[4881]: I0126 12:35:54.267795 4881 generic.go:334] "Generic (PLEG): container finished" podID="bb5ecb63-1238-44dc-9c40-b5e5dd7d4847" containerID="45799bb6d037f31b82596651214f8bdf9c13346f1d9953b965a40b55d2588edd" exitCode=0 Jan 26 12:35:54 crc kubenswrapper[4881]: I0126 12:35:54.268002 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-pmwpn" event={"ID":"bb5ecb63-1238-44dc-9c40-b5e5dd7d4847","Type":"ContainerDied","Data":"45799bb6d037f31b82596651214f8bdf9c13346f1d9953b965a40b55d2588edd"} Jan 26 12:35:54 crc kubenswrapper[4881]: I0126 12:35:54.286855 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79282300691fcea19d9fe7ad9c661474c26ba5e445c1a4f009961641bf59fb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b2921f0cb7ad2096499d1c6c0e5f8f37da3620c8468c9fab08b57b4545e5dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:54Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:54 crc kubenswrapper[4881]: I0126 12:35:54.300257 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f4b5v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"017c2a17-2267-4284-a0ca-d3c513aa9ff9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cea6c5f8bd80cf271df551539957d4c3e60378ff13dc806b3142e8e1453ab95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc482\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f4b5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:54Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:54 crc kubenswrapper[4881]: I0126 12:35:54.316564 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pmwpn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb5ecb63-1238-44dc-9c40-b5e5dd7d4847\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b54e7d904bec7e9817b662ea1c59ede6589fde79754520dfb1e9fb82864e623d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b54e7d904bec7e9817b662ea1c59ede6589fde79754520dfb1e9fb82864e623d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e58ff1e13a164f50c9664f463b50cbf651ce8e5edac8cbff9a9e42626d8d39ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e58ff1e13a164f50c9664f463b50cbf651ce8e5edac8cbff9a9e42626d8d39ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fca59d7df432a78a62142cd5e47d6a1a62b2c23ff7eec49d6297aacf30b530a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fca59d7df432a78a62142cd5e47d6a1a62b2c23ff7eec49d6297aacf30b530a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d5b95e27428d9ca0382054957b1cc19e96fe8bb5622af5f7ebebebec53d2add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d5b95e27428d9ca0382054957b1cc19e96fe8bb5622af5f7ebebebec53d2add\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mo
untPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45799bb6d037f31b82596651214f8bdf9c13346f1d9953b965a40b55d2588edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45799bb6d037f31b82596651214f8bdf9c13346f1d9953b965a40b55d2588edd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pmwpn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:54Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:54 crc kubenswrapper[4881]: I0126 12:35:54.327066 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tvrtr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"62ba5262-b5d8-4c86-85db-0993c88afc38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d6070117a94d139d5fc78fee4b38e1c75537675f27a663dcb2d6c8be285e9b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj9qc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tvrtr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:54Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:54 crc kubenswrapper[4881]: I0126 12:35:54.337726 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:54Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:54 crc kubenswrapper[4881]: I0126 12:35:54.353260 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:54Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:54 crc kubenswrapper[4881]: I0126 12:35:54.356701 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:54 crc kubenswrapper[4881]: I0126 12:35:54.356727 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:54 crc kubenswrapper[4881]: I0126 12:35:54.356737 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:54 crc kubenswrapper[4881]: I0126 12:35:54.356752 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:35:54 crc kubenswrapper[4881]: I0126 12:35:54.356762 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:35:54Z","lastTransitionTime":"2026-01-26T12:35:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:35:54 crc kubenswrapper[4881]: I0126 12:35:54.368455 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02fca4ac1029e236fa1f5c8b73ec4cf6f627aba503ba27d28dd2566e2f9d332a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c6n6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e51c1ddc00bfd73fde58f6b2e82757a924047ef05aa76b74d8fcb2de4156ad9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c6n6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fwlbz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:54Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:54 crc kubenswrapper[4881]: I0126 12:35:54.384798 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9aa1877-c239-4157-938d-e5c85ff3e76a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e461100b4aeab2a6e88c5c3aeb0802664d0d647247bc73b107be7c6bc4cfc9cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cbb88ac26b9c279bef436acf1a1d1e431e752f2a1ee765a7541082fa8f6c99e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff94d70ea9ad07ebc50c17d2a14b318e9b553bf1ded1e15429e9b318903d4d66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e2
7753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47b841bd998f47d2aa05f556a43ac6f14f630207aff7305cd0440f0b0642cc49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1728a0431ccafdf8aa83b6956348a8a6a41dd94e80f65cb469ef09a86078154\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 12:35:40.710806 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 12:35:40.716794 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1303051152/tls.crt::/tmp/serving-cert-1303051152/tls.key\\\\\\\"\\\\nI0126 12:35:46.328409 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 12:35:46.330270 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 12:35:46.330287 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 12:35:46.330308 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 12:35:46.330314 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 12:35:46.335670 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 12:35:46.335689 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 12:35:46.335693 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 12:35:46.335697 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 12:35:46.335700 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 12:35:46.335702 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 12:35:46.335705 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 12:35:46.335779 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 12:35:46.339578 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3de2d8eff73741321cce91c7bfdcab04498cfdb4694e713750466bb2d8b95e5d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f09815b7bc1e9f3afdee67bb736d35ea1c23567f631bc5fd4fcab013ab191647\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f09815b7bc1e9f3afdee67bb736d35ea1c23567f631bc5fd4fcab013ab191647\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:54Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:54 crc kubenswrapper[4881]: I0126 12:35:54.399132 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8116d483190921916638bd630fb718840fdc36eb9522aecb7ef0e9bfebe9bb7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:54Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:54 crc kubenswrapper[4881]: I0126 12:35:54.414626 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:54Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:54 crc kubenswrapper[4881]: I0126 12:35:54.427453 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee20042b-95d1-49f2-abea-e339efdb096f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce7427a811572a8f3810d59e072888d97a8f15e2c493297a25418b85e1f9f16d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86d50d955d4fa8b487f6900f78738321e21226e13b7b12c100f0dd029de2fde1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-re
sources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8138683a9408b1a7542f9f29a3fd0b83a06f10b000c685d9d83b72b23ba4c55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94d3f2c93382cde3d932117276753f004c58e6e52bce611119b79d9e76eb7654\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:54Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:54 crc kubenswrapper[4881]: I0126 12:35:54.451420 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b6f7953-daf2-4ba4-8c10-7de3f4ea07a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4f54285b7646c2fa5da34439d2812a5669f168356446852b9bff249107a7c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79ddc5e5c3c4b0c18219fe6a8297ff2b1646b9621823f529972781f915d81c5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://533f45172a004ad2bd598849b711b38e823485188ff4163dd0dbc5ae5a28cb24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f7c8799e63e6072f9dc0580f3efbfe330a9f70
5a689b2785c60f64391881b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c525c27678ba77145459086b6d3d2fee5d471c7393f34215e9ae46d9d72cf4e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://667695541b7d707a83aa0080723518aeae689b140791f0121a4ce6d485f39138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://667695541b7d707a83aa0080723518aeae689b140791f0121a4ce6d485f39138\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab83101bb56dc3eaf3a0514f60da45b0c5304a491184fe71584d7ed72bc6e59c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab83101bb56dc3eaf3a0514f60da45b0c5304a491184fe71584d7ed72bc6e59c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e52793db25051b2c8eec3dbeaf0547ec04ec438ce426f3dfcd2383792f846644\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e52793db25051b2c8eec3dbeaf0547ec04ec438ce426f3dfcd2383792f846644\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:54Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:54 crc kubenswrapper[4881]: I0126 12:35:54.458780 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:54 crc kubenswrapper[4881]: I0126 12:35:54.458811 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:54 crc kubenswrapper[4881]: I0126 12:35:54.458822 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:54 crc kubenswrapper[4881]: I0126 12:35:54.458839 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:35:54 crc kubenswrapper[4881]: I0126 12:35:54.458852 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:35:54Z","lastTransitionTime":"2026-01-26T12:35:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:35:54 crc kubenswrapper[4881]: I0126 12:35:54.465567 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f87073bcd7b2be8285c037c00f3112e1f1c6ed16cd4caf9a06c1a18d7e07c33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:54Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:54 crc kubenswrapper[4881]: I0126 12:35:54.503818 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-csrkv" err="failed to patch status 
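Interleaved with the webhook failures, setters.go keeps flipping the node to NotReady because the network plugin reports no CNI configuration file in /etc/kubernetes/cni/net.d/. A rough, stdlib-only approximation of that readiness probe (not the kubelet's actual code path; the accepted extensions follow common CNI conventions and are an assumption here):

```go
// cnicheck.go - approximation of the check behind "no CNI configuration
// file in /etc/kubernetes/cni/net.d/": the runtime looks for at least one
// CNI config file in the configured directory.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	dir := "/etc/kubernetes/cni/net.d"
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Printf("cannot read %s: %v\n", dir, err)
		os.Exit(1)
	}
	found := 0
	for _, e := range entries {
		if e.IsDir() {
			continue
		}
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json": // commonly accepted CNI config extensions
			fmt.Println("found CNI config:", filepath.Join(dir, e.Name()))
			found++
		}
	}
	if found == 0 {
		// Mirrors the condition that keeps the node NotReady above.
		fmt.Printf("no CNI configuration file in %s. Has your network provider started?\n", dir)
	}
}
```

On this node the directory is likely still empty because the ovnkube-node containers are all stuck in PodInitializing (see the ovnkube-node-kbjm9 record below), so nothing has written an OVN-Kubernetes config yet; the multus-csrkv record continues next.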
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d24cc7d2-c2db-45ee-b405-fa56157f807c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e19b81750777bd4a9bb8f7a14d883f0ceb5aa0fc4f1a0798c98616a7fc0cef81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x7bhq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-csrkv\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:54Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:54 crc kubenswrapper[4881]: I0126 12:35:54.555875 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d272c950-9665-4b60-98a2-20c18d02d5a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release
-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\"
,\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0995c1855fbad9dc3799727d7456b17ef9cd900ed1f4241bde85718099ede875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0995c1855fbad9dc3799727d7456b17ef9cd900ed1f4241bde85718099ede875\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kbjm9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:54Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:54 crc kubenswrapper[4881]: I0126 12:35:54.562244 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:54 crc kubenswrapper[4881]: I0126 12:35:54.562273 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:54 crc kubenswrapper[4881]: I0126 12:35:54.562282 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:54 crc kubenswrapper[4881]: I0126 12:35:54.562296 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:35:54 crc kubenswrapper[4881]: I0126 12:35:54.562305 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:35:54Z","lastTransitionTime":"2026-01-26T12:35:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:35:54 crc kubenswrapper[4881]: I0126 12:35:54.667291 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:54 crc kubenswrapper[4881]: I0126 12:35:54.667326 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:54 crc kubenswrapper[4881]: I0126 12:35:54.667334 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:54 crc kubenswrapper[4881]: I0126 12:35:54.667350 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:35:54 crc kubenswrapper[4881]: I0126 12:35:54.667359 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:35:54Z","lastTransitionTime":"2026-01-26T12:35:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:35:54 crc kubenswrapper[4881]: I0126 12:35:54.770356 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:54 crc kubenswrapper[4881]: I0126 12:35:54.770416 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:54 crc kubenswrapper[4881]: I0126 12:35:54.770433 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:54 crc kubenswrapper[4881]: I0126 12:35:54.770456 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:35:54 crc kubenswrapper[4881]: I0126 12:35:54.770473 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:35:54Z","lastTransitionTime":"2026-01-26T12:35:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:35:54 crc kubenswrapper[4881]: I0126 12:35:54.873757 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:54 crc kubenswrapper[4881]: I0126 12:35:54.873830 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:54 crc kubenswrapper[4881]: I0126 12:35:54.873849 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:54 crc kubenswrapper[4881]: I0126 12:35:54.873873 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:35:54 crc kubenswrapper[4881]: I0126 12:35:54.873890 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:35:54Z","lastTransitionTime":"2026-01-26T12:35:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:35:54 crc kubenswrapper[4881]: I0126 12:35:54.977150 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:54 crc kubenswrapper[4881]: I0126 12:35:54.977232 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:54 crc kubenswrapper[4881]: I0126 12:35:54.977249 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:54 crc kubenswrapper[4881]: I0126 12:35:54.977271 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:35:54 crc kubenswrapper[4881]: I0126 12:35:54.977289 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:35:54Z","lastTransitionTime":"2026-01-26T12:35:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:35:55 crc kubenswrapper[4881]: I0126 12:35:55.026214 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 03:31:59.048415203 +0000 UTC Jan 26 12:35:55 crc kubenswrapper[4881]: I0126 12:35:55.079929 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:55 crc kubenswrapper[4881]: I0126 12:35:55.079983 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:55 crc kubenswrapper[4881]: I0126 12:35:55.080001 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:55 crc kubenswrapper[4881]: I0126 12:35:55.080027 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:35:55 crc kubenswrapper[4881]: I0126 12:35:55.080046 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:35:55Z","lastTransitionTime":"2026-01-26T12:35:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
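The certificate_manager.go line above is client-go's certificate manager reporting its numbers for the kubelet-serving cert: expiration 2026-02-24 05:53:03 UTC, rotation deadline 2025-12-22 03:31:59 UTC. That deadline is already more than a month behind the node's current clock, so rotation is due immediately. Upstream client-go picks the deadline at a random point roughly 70-90% of the way through the certificate's validity window; a sketch of the idea (the NotBefore value below is an assumption, since the log prints only the expiration and the deadline):

```go
// rotation.go - sketch of the jittered rotation deadline reported by
// certificate_manager.go above; fractions follow client-go's scheme.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	// Random fraction in [0.7, 0.9) of the total lifetime.
	jittered := time.Duration(float64(total) * (0.7 + 0.2*rand.Float64()))
	return notBefore.Add(jittered)
}

func main() {
	// NotAfter is taken from the kubelet-serving line in the log; NotBefore
	// is an assumed one-year lifetime, since the log does not print it.
	notAfter, _ := time.Parse("2006-01-02 15:04:05 -0700 MST",
		"2026-02-24 05:53:03 +0000 UTC")
	notBefore := notAfter.Add(-365 * 24 * time.Hour)
	fmt.Println("rotation deadline:", rotationDeadline(notBefore, notAfter).UTC())
}
```

A deadline in the past simply means the manager treats rotation as overdue and requests a new serving certificate on its next pass, which is consistent with the rotation chatter in this window.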
Has your network provider started?"} Jan 26 12:35:55 crc kubenswrapper[4881]: I0126 12:35:55.182103 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:55 crc kubenswrapper[4881]: I0126 12:35:55.182151 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:55 crc kubenswrapper[4881]: I0126 12:35:55.182162 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:55 crc kubenswrapper[4881]: I0126 12:35:55.182180 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:35:55 crc kubenswrapper[4881]: I0126 12:35:55.182191 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:35:55Z","lastTransitionTime":"2026-01-26T12:35:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:35:55 crc kubenswrapper[4881]: I0126 12:35:55.287147 4881 generic.go:334] "Generic (PLEG): container finished" podID="bb5ecb63-1238-44dc-9c40-b5e5dd7d4847" containerID="10bacfe34f01c5de9439ac6023539e6489ec63175e54099942c2fb2fe4250bd6" exitCode=0 Jan 26 12:35:55 crc kubenswrapper[4881]: I0126 12:35:55.287232 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-pmwpn" event={"ID":"bb5ecb63-1238-44dc-9c40-b5e5dd7d4847","Type":"ContainerDied","Data":"10bacfe34f01c5de9439ac6023539e6489ec63175e54099942c2fb2fe4250bd6"} Jan 26 12:35:55 crc kubenswrapper[4881]: I0126 12:35:55.289494 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:55 crc kubenswrapper[4881]: I0126 12:35:55.289588 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:55 crc kubenswrapper[4881]: I0126 12:35:55.289617 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:55 crc kubenswrapper[4881]: I0126 12:35:55.289644 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:35:55 crc kubenswrapper[4881]: I0126 12:35:55.289667 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:35:55Z","lastTransitionTime":"2026-01-26T12:35:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:35:55 crc kubenswrapper[4881]: I0126 12:35:55.303082 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f4b5v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"017c2a17-2267-4284-a0ca-d3c513aa9ff9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cea6c5f8bd80cf271df551539957d4c3e60378ff13dc806b3142e8e1453ab95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc482\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f4b5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:55Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:55 crc kubenswrapper[4881]: I0126 12:35:55.320226 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pmwpn" err="failed to patch status 
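For reference, the condition object that setters.go keeps emitting above is plain Kubernetes NodeCondition JSON. A minimal stdlib-only mirror of its shape (the field set is inferred from what the log prints; the real type lives in k8s.io/api/core/v1 and is not imported here):

```go
// condition.go - mirror of the "Node became not ready" condition above.
package main

import (
	"encoding/json"
	"fmt"
	"time"
)

// NodeCondition carries only the fields visible in the setters.go output.
type NodeCondition struct {
	Type               string    `json:"type"`
	Status             string    `json:"status"`
	LastHeartbeatTime  time.Time `json:"lastHeartbeatTime"`
	LastTransitionTime time.Time `json:"lastTransitionTime"`
	Reason             string    `json:"reason"`
	Message            string    `json:"message"`
}

func main() {
	ts, _ := time.Parse(time.RFC3339, "2026-01-26T12:35:55Z")
	c := NodeCondition{
		Type:               "Ready",
		Status:             "False",
		LastHeartbeatTime:  ts,
		LastTransitionTime: ts,
		Reason:             "KubeletNotReady",
		Message: "container runtime network not ready: NetworkReady=false " +
			"reason:NetworkPluginNotReady message:Network plugin returns error: " +
			"no CNI configuration file in /etc/kubernetes/cni/net.d/. " +
			"Has your network provider started?",
	}
	b, _ := json.Marshal(c)
	fmt.Println(string(b))
}
```

Marshalling this reproduces the logged condition line, modulo the heartbeat timestamps; the multus-additional-cni-plugins patch record continues below.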
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb5ecb63-1238-44dc-9c40-b5e5dd7d4847\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b54e7d904bec7e9817b662ea1c59ede6589fde79754520dfb1e9fb82864e623d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b54e7d904bec7e9817b662ea1c59ede6589fde79754520dfb1e9fb82864e623d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e58ff1e13a164f50c9664f463b50cbf651ce8e5edac8cbff9a9e42626d8d39ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e58ff1e13a164f50c9664f463b50cbf651ce8e5edac8cbff9a9e42626d8d39ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fca59d7df432a78a62142cd5e47d6a1a62b2c23ff7eec49d6297aacf30b530a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fca59d7df432a78a62142cd5e47d6a1a62b2c23ff7eec49d6297aacf30b530a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d5b95e27428d9ca0382054957b1cc19e96fe8bb5622af5f7ebebebec53d2add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d5b95e27428d9ca0382054957b1cc19e96fe8bb5622af5f7ebebebec53d2add\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45799bb6d037f31b82596651214f8bdf9c13346f1d9953b965a40b55d2588edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45799bb6d037f31b82596651214f8bdf9c13346f1d9953b965a40b55d2588edd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10bacfe34f01c5de9439ac6023539e6489ec63175e54099942c2fb2fe4250bd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10bacfe34f01c5de9439ac6023539e6489ec63175e54099942c2fb2fe4250bd6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pmwpn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:55Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:55 crc kubenswrapper[4881]: I0126 12:35:55.337389 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tvrtr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"62ba5262-b5d8-4c86-85db-0993c88afc38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d6070117a94d139d5fc78fee4b38e1c75537675f27a663dcb2d6c8be285e9b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj9qc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tvrtr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:55Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:55 crc kubenswrapper[4881]: I0126 12:35:55.359111 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:55Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:55 crc kubenswrapper[4881]: I0126 12:35:55.379830 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:55Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:55 crc kubenswrapper[4881]: I0126 12:35:55.393385 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:55 crc kubenswrapper[4881]: I0126 12:35:55.393440 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:55 crc kubenswrapper[4881]: I0126 12:35:55.393457 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:55 crc kubenswrapper[4881]: I0126 12:35:55.393480 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:35:55 crc kubenswrapper[4881]: I0126 12:35:55.393496 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:35:55Z","lastTransitionTime":"2026-01-26T12:35:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:35:55 crc kubenswrapper[4881]: I0126 12:35:55.395246 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79282300691fcea19d9fe7ad9c661474c26ba5e445c1a4f009961641bf59fb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b2921f0cb7ad2096499d1c6c0e5f8f37da3620c8468c9fab08b57b4545e5dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:55Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:55 crc kubenswrapper[4881]: I0126 12:35:55.407917 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02fca4ac1029e236fa1f5c8b73ec4cf6f627aba503ba27d28dd2566e2f9d332a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c6n6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e51c1ddc00bfd73fde58f6b2e82757a924047ef05aa76b74d8fcb2de4156ad9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c6n6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fwlbz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:55Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:55 crc kubenswrapper[4881]: I0126 12:35:55.422349 4881 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8116d483190921916638bd630fb718840fdc36eb9522aecb7ef0e9bfebe9bb7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:55Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:55 crc kubenswrapper[4881]: I0126 12:35:55.438938 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:55Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:55 crc kubenswrapper[4881]: I0126 12:35:55.455064 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee20042b-95d1-49f2-abea-e339efdb096f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce7427a811572a8f3810d59e072888d97a8f15e2c493297a25418b85e1f9f16d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86d50d955d4fa8b487f6900f78738321e21226e13b7b12c100f0dd029de2fde1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1
220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8138683a9408b1a7542f9f29a3fd0b83a06f10b000c685d9d83b72b23ba4c55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94d3f2c93382cde3d932117276753f004c58e6e52bce611119b79d9e76eb7654\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:55Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:55 crc kubenswrapper[4881]: I0126 12:35:55.475859 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b6f7953-daf2-4ba4-8c10-7de3f4ea07a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4f54285b7646c2fa5da34439d2812a5669f168356446852b9bff249107a7c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79ddc5e5c3c4b0c18219fe6a8297ff2b1646b9621823f529972781f915d81c5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://533f45172a004ad2bd598849b711b38e823485188ff4163dd0dbc5ae5a28cb24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f7c8799e63e6072f9dc0580f3efbfe330a9f70
5a689b2785c60f64391881b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c525c27678ba77145459086b6d3d2fee5d471c7393f34215e9ae46d9d72cf4e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://667695541b7d707a83aa0080723518aeae689b140791f0121a4ce6d485f39138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://667695541b7d707a83aa0080723518aeae689b140791f0121a4ce6d485f39138\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab83101bb56dc3eaf3a0514f60da45b0c5304a491184fe71584d7ed72bc6e59c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab83101bb56dc3eaf3a0514f60da45b0c5304a491184fe71584d7ed72bc6e59c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e52793db25051b2c8eec3dbeaf0547ec04ec438ce426f3dfcd2383792f846644\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e52793db25051b2c8eec3dbeaf0547ec04ec438ce426f3dfcd2383792f846644\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:55Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:55 crc kubenswrapper[4881]: I0126 12:35:55.490258 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9aa1877-c239-4157-938d-e5c85ff3e76a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e461100b4aeab2a6e88c5c3aeb0802664d0d647247bc73b107be7c6bc4cfc9cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cbb88ac26b9c279bef436acf1a1d1e431e752f2a1ee765a7541082fa8f6c99e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff94d70ea9ad07ebc50c17d2a14b318e9b553bf1ded1e15429e9b318903d4d66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47b841bd998f47d2aa05f556a43ac6f14f630207aff7305cd0440f0b0642cc49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1728a0431ccafdf8aa83b6956348a8a6a41dd94e80f65cb469ef09a86078154\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 12:35:40.710806 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 12:35:40.716794 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1303051152/tls.crt::/tmp/serving-cert-1303051152/tls.key\\\\\\\"\\\\nI0126 12:35:46.328409 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 12:35:46.330270 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 12:35:46.330287 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 12:35:46.330308 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 12:35:46.330314 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 12:35:46.335670 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 12:35:46.335689 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 12:35:46.335693 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 12:35:46.335697 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 12:35:46.335700 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 12:35:46.335702 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 12:35:46.335705 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 12:35:46.335779 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 12:35:46.339578 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3de2d8eff73741321cce91c7bfdcab04498cfdb4694e713750466bb2d8b95e5d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f09815b7bc1e9f3afdee67bb736d35ea1c23567f631bc5fd4fcab013ab191647\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f09815b7bc1e9f3afdee67bb736d35ea1c23567f631bc5fd4fcab013ab191647\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:55Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:55 crc kubenswrapper[4881]: I0126 12:35:55.496928 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:55 crc kubenswrapper[4881]: I0126 12:35:55.496971 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:55 crc kubenswrapper[4881]: I0126 12:35:55.496983 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:55 crc kubenswrapper[4881]: I0126 12:35:55.497001 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:35:55 crc kubenswrapper[4881]: I0126 12:35:55.497014 4881 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:35:55Z","lastTransitionTime":"2026-01-26T12:35:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:35:55 crc kubenswrapper[4881]: I0126 12:35:55.507828 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-csrkv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d24cc7d2-c2db-45ee-b405-fa56157f807c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e19b81750777bd4a9bb8f7a14d883f0ceb5aa0fc4f1a0798c98616a7fc0cef81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube
rnetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x7bhq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-csrkv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:55Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:55 crc kubenswrapper[4881]: I0126 12:35:55.525537 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d272c950-9665-4b60-98a2-20c18d02d5a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0995c1855fbad9dc3799727d7456b17ef9cd900ed1f4241bde85718099ede875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0995c1855fbad9dc3799727d7456b17ef9cd900ed1f4241bde85718099ede875\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kbjm9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:55Z 
is after 2025-08-24T17:21:41Z" Jan 26 12:35:55 crc kubenswrapper[4881]: I0126 12:35:55.537944 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f87073bcd7b2be8285c037c00f3112e1f1c6ed16cd4caf9a06c1a18d7e07c33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:55Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:55 crc kubenswrapper[4881]: I0126 12:35:55.599369 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:55 crc kubenswrapper[4881]: I0126 12:35:55.599400 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:55 crc kubenswrapper[4881]: I0126 12:35:55.599411 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:55 crc kubenswrapper[4881]: I0126 12:35:55.599425 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:35:55 crc kubenswrapper[4881]: I0126 12:35:55.599435 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:35:55Z","lastTransitionTime":"2026-01-26T12:35:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
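Every "failed calling webhook \"pod.network-node-identity.openshift.io\"" status-patch failure in this excerpt shares one root cause: the webhook's serving certificate expired at 2025-08-24T17:21:41Z, while the node clock reads 2026-01-26, so the kubelet's TLS handshake is rejected before any patch can be applied. A minimal sketch for confirming this directly from the node, assuming the 127.0.0.1:9743 endpoint quoted in the log is reachable and that the third-party cryptography package is installed (an illustrative probe, not the kubelet's own code path):

    # Fetch the webhook's TLS certificate without chain verification and
    # compare its validity window against the current time, mirroring the
    # x509 check that fails in the log entries above.
    import ssl
    from datetime import datetime, timezone
    from cryptography import x509

    HOST, PORT = "127.0.0.1", 9743  # endpoint taken from the webhook URL in the log

    # get_server_certificate() skips verification, so the handshake still
    # completes against an expired certificate.
    pem = ssl.get_server_certificate((HOST, PORT))
    cert = x509.load_pem_x509_certificate(pem.encode())

    now = datetime.now(timezone.utc)
    # On cryptography releases older than 42.0, use the naive
    # not_valid_before / not_valid_after attributes instead.
    print("not before:", cert.not_valid_before_utc)
    print("not after: ", cert.not_valid_after_utc)
    print("expired:   ", now > cert.not_valid_after_utc)

If the NotAfter timestamp lies in the past, rotating the network-node-identity webhook certificate (or letting the cluster's certificate recovery run) is what unblocks these patches; the log line itself already contains both timestamps needed for the comparison.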
Has your network provider started?"} Jan 26 12:35:55 crc kubenswrapper[4881]: I0126 12:35:55.701624 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:55 crc kubenswrapper[4881]: I0126 12:35:55.701666 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:55 crc kubenswrapper[4881]: I0126 12:35:55.701677 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:55 crc kubenswrapper[4881]: I0126 12:35:55.701691 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:35:55 crc kubenswrapper[4881]: I0126 12:35:55.701701 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:35:55Z","lastTransitionTime":"2026-01-26T12:35:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:35:55 crc kubenswrapper[4881]: I0126 12:35:55.804678 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:55 crc kubenswrapper[4881]: I0126 12:35:55.804735 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:55 crc kubenswrapper[4881]: I0126 12:35:55.804784 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:55 crc kubenswrapper[4881]: I0126 12:35:55.804804 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:35:55 crc kubenswrapper[4881]: I0126 12:35:55.804818 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:35:55Z","lastTransitionTime":"2026-01-26T12:35:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:35:55 crc kubenswrapper[4881]: I0126 12:35:55.907747 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:55 crc kubenswrapper[4881]: I0126 12:35:55.907809 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:55 crc kubenswrapper[4881]: I0126 12:35:55.907826 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:55 crc kubenswrapper[4881]: I0126 12:35:55.907846 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:35:55 crc kubenswrapper[4881]: I0126 12:35:55.907865 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:35:55Z","lastTransitionTime":"2026-01-26T12:35:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:35:56 crc kubenswrapper[4881]: I0126 12:35:56.011433 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:56 crc kubenswrapper[4881]: I0126 12:35:56.011488 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:56 crc kubenswrapper[4881]: I0126 12:35:56.011505 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:56 crc kubenswrapper[4881]: I0126 12:35:56.011562 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:35:56 crc kubenswrapper[4881]: I0126 12:35:56.011580 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:35:56Z","lastTransitionTime":"2026-01-26T12:35:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:35:56 crc kubenswrapper[4881]: I0126 12:35:56.026882 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 18:31:38.813372148 +0000 UTC Jan 26 12:35:56 crc kubenswrapper[4881]: I0126 12:35:56.082481 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 12:35:56 crc kubenswrapper[4881]: I0126 12:35:56.082675 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 12:35:56 crc kubenswrapper[4881]: I0126 12:35:56.082738 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:35:56 crc kubenswrapper[4881]: E0126 12:35:56.082764 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 12:35:56 crc kubenswrapper[4881]: E0126 12:35:56.082828 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 12:35:56 crc kubenswrapper[4881]: E0126 12:35:56.082913 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
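The interleaved "Node became not ready" conditions are a separate symptom: NetworkReady stays false until a CNI plugin writes a network configuration into /etc/kubernetes/cni/net.d/, and the ovnkube-node pod that would do so is still in PodInitializing. A rough stand-in for that readiness probe, assuming the directory from the log message and the usual libcni file extensions (the real check lives in CRI-O/libcni and is more involved):

    # Approximate the "is there a CNI config yet?" condition that gates
    # NetworkReady in the log. Any matching file counts as ready here,
    # which is a simplification of the real CRI-O/libcni logic.
    import glob
    import os

    CNI_DIR = "/etc/kubernetes/cni/net.d"  # path quoted in the log message

    def cni_config_present(d: str = CNI_DIR) -> bool:
        patterns = ("*.conf", "*.conflist", "*.json")
        return any(glob.glob(os.path.join(d, p)) for p in patterns)

    print("NetworkReady (approx):", cni_config_present())

Once ovnkube-controller starts and drops its configuration there, the kubelet's next sync flips the condition and the "no CNI configuration file" messages stop.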
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 12:35:56 crc kubenswrapper[4881]: I0126 12:35:56.114108 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:56 crc kubenswrapper[4881]: I0126 12:35:56.114147 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:56 crc kubenswrapper[4881]: I0126 12:35:56.114159 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:56 crc kubenswrapper[4881]: I0126 12:35:56.114177 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:35:56 crc kubenswrapper[4881]: I0126 12:35:56.114189 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:35:56Z","lastTransitionTime":"2026-01-26T12:35:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:35:56 crc kubenswrapper[4881]: I0126 12:35:56.217353 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:56 crc kubenswrapper[4881]: I0126 12:35:56.217411 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:56 crc kubenswrapper[4881]: I0126 12:35:56.217428 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:56 crc kubenswrapper[4881]: I0126 12:35:56.217451 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:35:56 crc kubenswrapper[4881]: I0126 12:35:56.217468 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:35:56Z","lastTransitionTime":"2026-01-26T12:35:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:35:56 crc kubenswrapper[4881]: I0126 12:35:56.294814 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-pmwpn" event={"ID":"bb5ecb63-1238-44dc-9c40-b5e5dd7d4847","Type":"ContainerStarted","Data":"428d01799872a00b628c81145d5fbeac8707bad0afa7563e16bc44686fd7c18d"} Jan 26 12:35:56 crc kubenswrapper[4881]: I0126 12:35:56.302247 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" event={"ID":"d272c950-9665-4b60-98a2-20c18d02d5a2","Type":"ContainerStarted","Data":"c9aaca9d54edc83ad154b9cc876c0f2990f7949b4966e052314e947d0cbaf2f5"} Jan 26 12:35:56 crc kubenswrapper[4881]: I0126 12:35:56.319407 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d272c950-9665-4b60-98a2-20c18d02d5a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0995c1855fbad9dc3799727d7456b17ef9cd900ed1f4241bde85718099ede875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0995c1855fbad9dc3799727d7456b17ef9cd900ed1f4241bde85718099ede875\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kbjm9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:56Z 
is after 2025-08-24T17:21:41Z" Jan 26 12:35:56 crc kubenswrapper[4881]: I0126 12:35:56.320003 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:56 crc kubenswrapper[4881]: I0126 12:35:56.320033 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:56 crc kubenswrapper[4881]: I0126 12:35:56.320045 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:56 crc kubenswrapper[4881]: I0126 12:35:56.320063 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:35:56 crc kubenswrapper[4881]: I0126 12:35:56.320074 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:35:56Z","lastTransitionTime":"2026-01-26T12:35:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:35:56 crc kubenswrapper[4881]: I0126 12:35:56.338794 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f87073bcd7b2be8285c037c00f3112e1f1c6ed16cd4caf9a06c1a18d7e07c33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:56Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:56 crc kubenswrapper[4881]: 
I0126 12:35:56.357299 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-csrkv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d24cc7d2-c2db-45ee-b405-fa56157f807c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e19b81750777bd4a9bb8f7a14d883f0ceb5aa0fc4f1a0798c98616a7fc0cef81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x7bhq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\"
:\\\"2026-01-26T12:35:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-csrkv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:56Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:56 crc kubenswrapper[4881]: I0126 12:35:56.375298 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pmwpn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb5ecb63-1238-44dc-9c40-b5e5dd7d4847\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://428d01799872a00b628c81145d5fbeac8707bad0afa7563e16bc44686fd7c18d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b54e7d904bec7e9817b662ea1c59ede6589fde79754520dfb1e9fb82864e623d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b54e7d904bec7e9817b662ea1c59ede6589fde79754520dfb1e9fb82864e623d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e58ff1e13a164f50c9664f463b50cbf651ce8e5edac8cbff9a9e42626d8d39ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e58ff1e13a164f50c9664f463b50cbf651ce8e5edac8cbff9a9e42626d8d39ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fca59d7df432a78a62142cd5e47d6a1a62b2c23ff7eec49d6297aacf30b530a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fca59d7df432a78a62142cd5e47d6a1a62b2c23ff7eec49d6297aacf30b530a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d5b95e27428d9ca0382054957b1cc19e96fe8bb5622af5f7ebebebec53d2add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated
\\\":{\\\"containerID\\\":\\\"cri-o://7d5b95e27428d9ca0382054957b1cc19e96fe8bb5622af5f7ebebebec53d2add\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45799bb6d037f31b82596651214f8bdf9c13346f1d9953b965a40b55d2588edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45799bb6d037f31b82596651214f8bdf9c13346f1d9953b965a40b55d2588edd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10bacfe34f01c5de9439ac6023539e6489ec63175e54099942c2fb2fe4250bd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10bacfe34f01c5de9439ac6023539e6489ec63175e54099942c2fb2fe4250bd6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pmwpn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:56Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:56 crc kubenswrapper[4881]: I0126 12:35:56.388454 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tvrtr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"62ba5262-b5d8-4c86-85db-0993c88afc38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d6070117a94d139d5fc78fee4b38e1c75537675f27a663dcb2d6c8be285e9b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj9qc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tvrtr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:56Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:56 crc kubenswrapper[4881]: I0126 12:35:56.402325 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:56Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:56 crc kubenswrapper[4881]: I0126 12:35:56.414306 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:56Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:56 crc kubenswrapper[4881]: I0126 12:35:56.422879 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:56 crc kubenswrapper[4881]: I0126 12:35:56.422941 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:56 crc kubenswrapper[4881]: I0126 12:35:56.422953 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:56 crc kubenswrapper[4881]: I0126 12:35:56.422982 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:35:56 crc kubenswrapper[4881]: I0126 12:35:56.422997 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:35:56Z","lastTransitionTime":"2026-01-26T12:35:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:35:56 crc kubenswrapper[4881]: I0126 12:35:56.425608 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79282300691fcea19d9fe7ad9c661474c26ba5e445c1a4f009961641bf59fb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b2921f0cb7ad2096499d1c6c0e5f8f37da3620c8468c9fab08b57b4545e5dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:56Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:56 crc kubenswrapper[4881]: I0126 12:35:56.437016 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f4b5v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"017c2a17-2267-4284-a0ca-d3c513aa9ff9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cea6c5f8bd80cf271df551539957d4c3e60378ff13dc806b3142e8e1453ab95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc482\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f4b5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:56Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:56 crc kubenswrapper[4881]: I0126 12:35:56.450219 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02fca4ac1029e236fa1f5c8b73ec4cf6f627aba503ba27d28dd2566e2f9d332a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c6n6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e51c1ddc00bfd73fde58f6b2e82757a924047ef05aa76b74d8fcb2de4156ad9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c6n6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fwlbz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:56Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:56 crc kubenswrapper[4881]: I0126 12:35:56.465552 4881 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:56Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:56 crc kubenswrapper[4881]: I0126 12:35:56.481586 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee20042b-95d1-49f2-abea-e339efdb096f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce7427a811572a8f3810d59e072888d97a8f15e2c493297a25418b85e1f9f16d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86d50d955d4fa8b487f6900f78738321e21226e13b7b12c100f0dd029de2fde1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8138683a9408b1a7542f9f29a3fd0b83a06f10b000c685d9d83b72b23ba4c55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94d3f2c93382cde3d932117276753f004c58e6e52bce611119b79d9e76eb7654\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:56Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:56 crc kubenswrapper[4881]: I0126 12:35:56.510431 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b6f7953-daf2-4ba4-8c10-7de3f4ea07a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4f54285b7646c2fa5da34439d2812a5669f168356446852b9bff249107a7c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79ddc5e5c3c4b0c18219fe6a8297ff2b1646b9621823f529972781f915d81c5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07
b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://533f45172a004ad2bd598849b711b38e823485188ff4163dd0dbc5ae5a28cb24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f7c8799e63e6072f9dc0580f3efbfe330a9f705a689b2785c60f64391881b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c525c27678ba77145459086b6d3d2fee5d471c7393f34215e9ae46d9d72cf4e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://667695541b7d707a83aa0080723518aeae689b140791f0121a4ce6d485f39138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\
\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://667695541b7d707a83aa0080723518aeae689b140791f0121a4ce6d485f39138\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab83101bb56dc3eaf3a0514f60da45b0c5304a491184fe71584d7ed72bc6e59c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab83101bb56dc3eaf3a0514f60da45b0c5304a491184fe71584d7ed72bc6e59c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e52793db25051b2c8eec3dbeaf0547ec04ec438ce426f3dfcd2383792f846644\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e52793db25051b2c8eec3dbeaf0547ec04ec438ce426f3dfcd2383792f846644\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:56Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:56 crc kubenswrapper[4881]: I0126 12:35:56.525824 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:56 crc kubenswrapper[4881]: I0126 12:35:56.525875 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:56 crc kubenswrapper[4881]: I0126 12:35:56.525892 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 26 12:35:56 crc kubenswrapper[4881]: I0126 12:35:56.525915 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:35:56 crc kubenswrapper[4881]: I0126 12:35:56.525930 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:35:56Z","lastTransitionTime":"2026-01-26T12:35:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:35:56 crc kubenswrapper[4881]: I0126 12:35:56.528332 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9aa1877-c239-4157-938d-e5c85ff3e76a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e461100b4aeab2a6e88c5c3aeb0802664d0d647247bc73b107be7c6bc4cfc9cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cbb88ac26b9c279bef436acf1a1d1e431e752f2a1ee765a7541082fa8f6c99e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-
01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff94d70ea9ad07ebc50c17d2a14b318e9b553bf1ded1e15429e9b318903d4d66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47b841bd998f47d2aa05f556a43ac6f14f630207aff7305cd0440f0b0642cc49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1728a0431ccafdf8aa83b6956348a8a6a41dd94e80f65cb469ef09a86078154\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 12:35:40.710806 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 12:35:40.716794 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1303051152/tls.crt::/tmp/serving-cert-1303051152/tls.key\\\\\\\"\\\\nI0126 12:35:46.328409 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 12:35:46.330270 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 12:35:46.330287 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 12:35:46.330308 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 12:35:46.330314 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 12:35:46.335670 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 12:35:46.335689 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 12:35:46.335693 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 12:35:46.335697 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 12:35:46.335700 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 12:35:46.335702 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 12:35:46.335705 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 12:35:46.335779 1 
genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 12:35:46.339578 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3de2d8eff73741321cce91c7bfdcab04498cfdb4694e713750466bb2d8b95e5d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f09815b7bc1e9f3afdee67bb736d35ea1c23567f631bc5fd4fcab013ab191647\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f09815b7bc1e9f3afdee67bb736d35ea1c23567f631bc5fd4fcab013ab191647\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:56Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:56 crc kubenswrapper[4881]: I0126 12:35:56.546858 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8116d483190921916638bd630fb718840fdc36eb9522aecb7ef0e9bfebe9bb7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:56Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:56 crc kubenswrapper[4881]: I0126 12:35:56.629846 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:56 crc kubenswrapper[4881]: I0126 12:35:56.629908 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:56 crc kubenswrapper[4881]: I0126 12:35:56.629925 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:56 crc kubenswrapper[4881]: I0126 12:35:56.629950 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:35:56 crc kubenswrapper[4881]: I0126 12:35:56.629969 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:35:56Z","lastTransitionTime":"2026-01-26T12:35:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:35:56 crc kubenswrapper[4881]: I0126 12:35:56.732845 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:56 crc kubenswrapper[4881]: I0126 12:35:56.732926 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:56 crc kubenswrapper[4881]: I0126 12:35:56.732950 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:56 crc kubenswrapper[4881]: I0126 12:35:56.732981 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:35:56 crc kubenswrapper[4881]: I0126 12:35:56.733005 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:35:56Z","lastTransitionTime":"2026-01-26T12:35:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:35:56 crc kubenswrapper[4881]: I0126 12:35:56.835760 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:56 crc kubenswrapper[4881]: I0126 12:35:56.835806 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:56 crc kubenswrapper[4881]: I0126 12:35:56.835825 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:56 crc kubenswrapper[4881]: I0126 12:35:56.835847 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:35:56 crc kubenswrapper[4881]: I0126 12:35:56.835864 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:35:56Z","lastTransitionTime":"2026-01-26T12:35:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:35:56 crc kubenswrapper[4881]: I0126 12:35:56.939011 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:56 crc kubenswrapper[4881]: I0126 12:35:56.939075 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:56 crc kubenswrapper[4881]: I0126 12:35:56.939093 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:56 crc kubenswrapper[4881]: I0126 12:35:56.939118 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:35:56 crc kubenswrapper[4881]: I0126 12:35:56.939138 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:35:56Z","lastTransitionTime":"2026-01-26T12:35:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:35:57 crc kubenswrapper[4881]: I0126 12:35:57.027072 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 01:28:35.227116275 +0000 UTC Jan 26 12:35:57 crc kubenswrapper[4881]: I0126 12:35:57.042174 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:57 crc kubenswrapper[4881]: I0126 12:35:57.042237 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:57 crc kubenswrapper[4881]: I0126 12:35:57.042254 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:57 crc kubenswrapper[4881]: I0126 12:35:57.042277 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:35:57 crc kubenswrapper[4881]: I0126 12:35:57.042295 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:35:57Z","lastTransitionTime":"2026-01-26T12:35:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:35:57 crc kubenswrapper[4881]: I0126 12:35:57.145421 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:57 crc kubenswrapper[4881]: I0126 12:35:57.145486 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:57 crc kubenswrapper[4881]: I0126 12:35:57.145503 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:57 crc kubenswrapper[4881]: I0126 12:35:57.145567 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:35:57 crc kubenswrapper[4881]: I0126 12:35:57.145584 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:35:57Z","lastTransitionTime":"2026-01-26T12:35:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:35:57 crc kubenswrapper[4881]: I0126 12:35:57.249579 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:57 crc kubenswrapper[4881]: I0126 12:35:57.249647 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:57 crc kubenswrapper[4881]: I0126 12:35:57.249673 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:57 crc kubenswrapper[4881]: I0126 12:35:57.249701 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:35:57 crc kubenswrapper[4881]: I0126 12:35:57.249723 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:35:57Z","lastTransitionTime":"2026-01-26T12:35:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:35:57 crc kubenswrapper[4881]: I0126 12:35:57.353376 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:57 crc kubenswrapper[4881]: I0126 12:35:57.353651 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:57 crc kubenswrapper[4881]: I0126 12:35:57.353739 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:57 crc kubenswrapper[4881]: I0126 12:35:57.353864 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:35:57 crc kubenswrapper[4881]: I0126 12:35:57.353960 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:35:57Z","lastTransitionTime":"2026-01-26T12:35:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:35:57 crc kubenswrapper[4881]: I0126 12:35:57.456622 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:57 crc kubenswrapper[4881]: I0126 12:35:57.456929 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:57 crc kubenswrapper[4881]: I0126 12:35:57.456956 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:57 crc kubenswrapper[4881]: I0126 12:35:57.456986 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:35:57 crc kubenswrapper[4881]: I0126 12:35:57.457009 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:35:57Z","lastTransitionTime":"2026-01-26T12:35:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:35:57 crc kubenswrapper[4881]: I0126 12:35:57.560107 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:57 crc kubenswrapper[4881]: I0126 12:35:57.560153 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:57 crc kubenswrapper[4881]: I0126 12:35:57.560165 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:57 crc kubenswrapper[4881]: I0126 12:35:57.560185 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:35:57 crc kubenswrapper[4881]: I0126 12:35:57.560200 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:35:57Z","lastTransitionTime":"2026-01-26T12:35:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:35:57 crc kubenswrapper[4881]: I0126 12:35:57.663029 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:57 crc kubenswrapper[4881]: I0126 12:35:57.663087 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:57 crc kubenswrapper[4881]: I0126 12:35:57.663109 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:57 crc kubenswrapper[4881]: I0126 12:35:57.663137 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:35:57 crc kubenswrapper[4881]: I0126 12:35:57.663162 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:35:57Z","lastTransitionTime":"2026-01-26T12:35:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:35:57 crc kubenswrapper[4881]: I0126 12:35:57.767450 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:57 crc kubenswrapper[4881]: I0126 12:35:57.767500 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:57 crc kubenswrapper[4881]: I0126 12:35:57.767538 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:57 crc kubenswrapper[4881]: I0126 12:35:57.767561 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:35:57 crc kubenswrapper[4881]: I0126 12:35:57.767578 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:35:57Z","lastTransitionTime":"2026-01-26T12:35:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:35:57 crc kubenswrapper[4881]: I0126 12:35:57.869927 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:57 crc kubenswrapper[4881]: I0126 12:35:57.869959 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:57 crc kubenswrapper[4881]: I0126 12:35:57.869968 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:57 crc kubenswrapper[4881]: I0126 12:35:57.869982 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:35:57 crc kubenswrapper[4881]: I0126 12:35:57.869992 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:35:57Z","lastTransitionTime":"2026-01-26T12:35:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:35:57 crc kubenswrapper[4881]: I0126 12:35:57.978136 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:57 crc kubenswrapper[4881]: I0126 12:35:57.978186 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:57 crc kubenswrapper[4881]: I0126 12:35:57.978198 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:57 crc kubenswrapper[4881]: I0126 12:35:57.978216 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:35:57 crc kubenswrapper[4881]: I0126 12:35:57.978228 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:35:57Z","lastTransitionTime":"2026-01-26T12:35:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:35:58 crc kubenswrapper[4881]: I0126 12:35:58.036487 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 09:59:53.555535279 +0000 UTC Jan 26 12:35:58 crc kubenswrapper[4881]: I0126 12:35:58.081781 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 12:35:58 crc kubenswrapper[4881]: E0126 12:35:58.081933 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 12:35:58 crc kubenswrapper[4881]: I0126 12:35:58.082599 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 12:35:58 crc kubenswrapper[4881]: I0126 12:35:58.082657 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:35:58 crc kubenswrapper[4881]: E0126 12:35:58.082779 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 12:35:58 crc kubenswrapper[4881]: E0126 12:35:58.082884 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 12:35:58 crc kubenswrapper[4881]: I0126 12:35:58.082990 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:58 crc kubenswrapper[4881]: I0126 12:35:58.083037 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:58 crc kubenswrapper[4881]: I0126 12:35:58.083053 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:58 crc kubenswrapper[4881]: I0126 12:35:58.083079 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:35:58 crc kubenswrapper[4881]: I0126 12:35:58.083098 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:35:58Z","lastTransitionTime":"2026-01-26T12:35:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:35:58 crc kubenswrapper[4881]: I0126 12:35:58.105928 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:58Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:58 crc kubenswrapper[4881]: I0126 12:35:58.126792 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:58Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:58 crc kubenswrapper[4881]: I0126 12:35:58.144735 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79282300691fcea19d9fe7ad9c661474c26ba5e445c1a4f009961641bf59fb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b2921f0cb7ad2096499d1c6c0e5f8f37da3620c8468c9fab08b57b4545e5dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:58Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:58 crc kubenswrapper[4881]: I0126 12:35:58.155911 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f4b5v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"017c2a17-2267-4284-a0ca-d3c513aa9ff9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cea6c5f8bd80cf271df551539957d4c3e60378ff13dc806b3142e8e1453ab95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc482\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f4b5v\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:58Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:58 crc kubenswrapper[4881]: I0126 12:35:58.176832 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pmwpn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb5ecb63-1238-44dc-9c40-b5e5dd7d4847\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://428d01799872a00b628c81145d5fbeac8707bad0afa7563e16bc44686fd7c18d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b54e7d904bec7e9817b662ea1c59ede6589fde79754520dfb1e9fb82864e623d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b54e7d904bec7e9817b662ea1c59ede6589fde79754520dfb1e9fb82864e623d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e58ff1e13a164f50c9664f463b50cbf651ce8e5edac8cbff9a9e42626d8d39ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e58ff1e13a164f50c9664f463b50cbf651ce8e5edac8cbff9a9e42626d8d39ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fca59d7df432a78a62142cd5e47d6a1a62b2c23ff7eec49d6297aacf30b530a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fca59d7df432a78a62142cd5e47d6a1a62b2c23ff7eec49d6297aacf30b530a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d5b95e27428d9ca0382054957b1cc19e96fe8bb5622af5f7ebebebec53d2add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d5b95e27428d9ca0382054957b1cc19e96fe8bb5622af5f7ebebebec53d2add\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-01-26T12:35:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45799bb6d037f31b82596651214f8bdf9c13346f1d9953b965a40b55d2588edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45799bb6d037f31b82596651214f8bdf9c13346f1d9953b965a40b55d2588edd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10bacfe34f01c5de9439ac6023539e6489ec63175e54099942c2fb2fe4250bd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10bacfe34f01c5de9439ac6023539e6489ec63175e54099942c2fb2fe4250bd6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pmwpn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T12:35:58Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:58 crc kubenswrapper[4881]: I0126 12:35:58.185427 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:58 crc kubenswrapper[4881]: I0126 12:35:58.185470 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:58 crc kubenswrapper[4881]: I0126 12:35:58.185487 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:58 crc kubenswrapper[4881]: I0126 12:35:58.185510 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:35:58 crc kubenswrapper[4881]: I0126 12:35:58.185563 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:35:58Z","lastTransitionTime":"2026-01-26T12:35:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:35:58 crc kubenswrapper[4881]: I0126 12:35:58.196554 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tvrtr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"62ba5262-b5d8-4c86-85db-0993c88afc38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d6070117a94d139d5fc78fee4b38e1c75537675f27a663dcb2d6c8be285e9b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj9qc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\
"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tvrtr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:58Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:58 crc kubenswrapper[4881]: I0126 12:35:58.217837 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02fca4ac1029e236fa1f5c8b73ec4cf6f627aba503ba27d28dd2566e2f9d332a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c6n6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e51c1ddc00bfd73fde58f6b2e82757a924047ef05aa76b74d8fcb2de4156ad9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c6n6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\
\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fwlbz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:58Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:58 crc kubenswrapper[4881]: I0126 12:35:58.237070 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee20042b-95d1-49f2-abea-e339efdb096f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce7427a811572a8f3810d59e072888d97a8f15e2c493297a25418b85e1f9f16d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86d50d955d4fa8b487f6900f78738321e21226e13b7b12c100f0dd029de2fde1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8138683a9408b1a7542f9f29a3fd0b83a06f10b000c685d9d83b72b23ba4c55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5
d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94d3f2c93382cde3d932117276753f004c58e6e52bce611119b79d9e76eb7654\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:58Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:58 crc kubenswrapper[4881]: I0126 12:35:58.263021 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b6f7953-daf2-4ba4-8c10-7de3f4ea07a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4f54285b7646c2fa5da34439d2812a5669f168356446852b9bff249107a7c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79ddc5e5c3c4b0c18219fe6a8297ff2b1646b9621823f529972781f915d81c5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://533f45172a004ad2bd598849b711b38e823485188ff4163dd0dbc5ae5a28cb24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f7c8799e63e6072f9dc0580f3efbfe330a9f70
5a689b2785c60f64391881b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c525c27678ba77145459086b6d3d2fee5d471c7393f34215e9ae46d9d72cf4e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://667695541b7d707a83aa0080723518aeae689b140791f0121a4ce6d485f39138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://667695541b7d707a83aa0080723518aeae689b140791f0121a4ce6d485f39138\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab83101bb56dc3eaf3a0514f60da45b0c5304a491184fe71584d7ed72bc6e59c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab83101bb56dc3eaf3a0514f60da45b0c5304a491184fe71584d7ed72bc6e59c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e52793db25051b2c8eec3dbeaf0547ec04ec438ce426f3dfcd2383792f846644\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e52793db25051b2c8eec3dbeaf0547ec04ec438ce426f3dfcd2383792f846644\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:58Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:58 crc kubenswrapper[4881]: I0126 12:35:58.282741 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9aa1877-c239-4157-938d-e5c85ff3e76a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e461100b4aeab2a6e88c5c3aeb0802664d0d647247bc73b107be7c6bc4cfc9cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cbb88ac26b9c279bef436acf1a1d1e431e752f2a1ee765a7541082fa8f6c99e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff94d70ea9ad07ebc50c17d2a14b318e9b553bf1ded1e15429e9b318903d4d66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47b841bd998f47d2aa05f556a43ac6f14f630207aff7305cd0440f0b0642cc49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1728a0431ccafdf8aa83b6956348a8a6a41dd94e80f65cb469ef09a86078154\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 12:35:40.710806 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 12:35:40.716794 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1303051152/tls.crt::/tmp/serving-cert-1303051152/tls.key\\\\\\\"\\\\nI0126 12:35:46.328409 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 12:35:46.330270 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 12:35:46.330287 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 12:35:46.330308 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 12:35:46.330314 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 12:35:46.335670 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 12:35:46.335689 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 12:35:46.335693 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 12:35:46.335697 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 12:35:46.335700 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 12:35:46.335702 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 12:35:46.335705 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 12:35:46.335779 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 12:35:46.339578 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3de2d8eff73741321cce91c7bfdcab04498cfdb4694e713750466bb2d8b95e5d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f09815b7bc1e9f3afdee67bb736d35ea1c23567f631bc5fd4fcab013ab191647\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f09815b7bc1e9f3afdee67bb736d35ea1c23567f631bc5fd4fcab013ab191647\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:58Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:58 crc kubenswrapper[4881]: I0126 12:35:58.287567 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:58 crc kubenswrapper[4881]: I0126 12:35:58.287599 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:58 crc kubenswrapper[4881]: I0126 12:35:58.287608 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:58 crc kubenswrapper[4881]: I0126 12:35:58.287621 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:35:58 crc kubenswrapper[4881]: I0126 12:35:58.287631 4881 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:35:58Z","lastTransitionTime":"2026-01-26T12:35:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:35:58 crc kubenswrapper[4881]: I0126 12:35:58.299748 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8116d483190921916638bd630fb718840fdc36eb9522aecb7ef0e9bfebe9bb7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:58Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:58 crc kubenswrapper[4881]: I0126 12:35:58.312356 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" event={"ID":"d272c950-9665-4b60-98a2-20c18d02d5a2","Type":"ContainerStarted","Data":"45e8fd54bf831d00321a165260c412e8d0d9aa9f9b49c15dbce69bd7e35e8e17"} Jan 26 12:35:58 crc kubenswrapper[4881]: I0126 12:35:58.312726 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" Jan 26 12:35:58 crc kubenswrapper[4881]: I0126 12:35:58.312763 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" Jan 26 12:35:58 crc kubenswrapper[4881]: I0126 12:35:58.315189 4881 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:58Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:58 crc kubenswrapper[4881]: I0126 12:35:58.327129 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f87073bcd7b2be8285c037c00f3112e1f1c6ed16cd4caf9a06c1a18d7e07c33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:58Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:58 crc kubenswrapper[4881]: I0126 12:35:58.338152 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" Jan 26 12:35:58 crc kubenswrapper[4881]: I0126 12:35:58.341481 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-csrkv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d24cc7d2-c2db-45ee-b405-fa56157f807c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e19b81750777bd4a9bb8f7a14d883f0ceb5aa0fc4f1a0798c98616a7fc0cef81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x7bhq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-csrkv\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:58Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:58 crc kubenswrapper[4881]: I0126 12:35:58.365000 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d272c950-9665-4b60-98a2-20c18d02d5a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release
-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\"
,\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0995c1855fbad9dc3799727d7456b17ef9cd900ed1f4241bde85718099ede875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0995c1855fbad9dc3799727d7456b17ef9cd900ed1f4241bde85718099ede875\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kbjm9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:58Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:58 crc kubenswrapper[4881]: I0126 12:35:58.376501 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tvrtr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"62ba5262-b5d8-4c86-85db-0993c88afc38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d6070117a94d139d5fc78fee4b38e1c75537675f27a663dcb2d6c8be285e9b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj9qc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tvrtr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:58Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:58 crc kubenswrapper[4881]: I0126 12:35:58.389189 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:58Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:58 crc kubenswrapper[4881]: I0126 12:35:58.390194 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:58 crc kubenswrapper[4881]: I0126 12:35:58.390227 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:58 crc kubenswrapper[4881]: I0126 12:35:58.390242 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:58 crc kubenswrapper[4881]: I0126 12:35:58.390259 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:35:58 crc kubenswrapper[4881]: I0126 12:35:58.390271 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:35:58Z","lastTransitionTime":"2026-01-26T12:35:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:35:58 crc kubenswrapper[4881]: I0126 12:35:58.404169 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:58Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:58 crc kubenswrapper[4881]: I0126 12:35:58.417391 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79282300691fcea19d9fe7ad9c661474c26ba5e445c1a4f009961641bf59fb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b2921f0cb7ad2096499d1c6c0e5f8f37da3620c8468c9fab08b57b4545e5dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:58Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:58 crc kubenswrapper[4881]: I0126 12:35:58.428785 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f4b5v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"017c2a17-2267-4284-a0ca-d3c513aa9ff9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cea6c5f8bd80cf271df551539957d4c3e60378ff13dc806b3142e8e1453ab95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc482\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f4b5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:58Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:58 crc kubenswrapper[4881]: I0126 12:35:58.445082 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pmwpn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb5ecb63-1238-44dc-9c40-b5e5dd7d4847\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://428d01799872a00b628c81145d5fbeac8707bad0afa7563e16bc44686fd7c18d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b54e7d904bec7e9817b662ea1c59ede6589fde79754520dfb1e9fb82864e623d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b54e7d904bec7e9817b662ea1c59ede6589fde79754520dfb1e9fb82864e623d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e58ff1e13a164f50c9664f463b50cbf651ce8e5edac8cbff9a9e42626d8d39ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e58ff1e13a164f50c9664f463b50cbf651ce8e5edac8cbff9a9e42626d8d39ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fca59d7df432a78a62142cd5e47d6a1a62b2c23ff7eec49d6297aacf30b530a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fca59d7df432a78a62142cd5e47d6a1a62b2c23ff7eec49d6297aacf30b530a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d5b95e27428d9ca0382054957b1cc19e96fe8bb5622af5f7ebebebec53d2add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d5b95e27428d9ca0382054957b1cc19e96fe8bb5622af5f7ebebebec53d2add\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45799bb6d037f31b82596651214f8bdf9c13346f1d9953b965a40b55d2588edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45799bb6d037f31b82596651214f8bdf9c13346f1d9953b965a40b55d2588edd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10bacfe34f01c5de9439ac6023539e6489ec63175e54099942c2fb2fe4250bd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10bacfe34f01c5de9439ac6023539e6489ec63175e54099942c2fb2fe4250bd6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pmwpn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:58Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:58 crc kubenswrapper[4881]: I0126 12:35:58.459428 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02fca4ac1029e236fa1f5c8b73ec4cf6f627aba503ba27d28dd2566e2f9d332a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c6n6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e51c1ddc00bfd73fde58f6b2e82757a924047ef05aa76b74d8fcb2de4156ad9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c6n6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fwlbz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:58Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:58 crc kubenswrapper[4881]: I0126 12:35:58.478498 4881 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee20042b-95d1-49f2-abea-e339efdb096f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce7427a811572a8f3810d59e072888d97a8f15e2c493297a25418b85e1f9f16d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86d50d955d4fa8b487f6900f78738321e21226e13b7b12c100f0dd029de2fde1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8138683a9408b1a7542f9f29a3fd0b83a06f10b000c685d9d83b72b23ba4c55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94d3f2c93382cde3d9321172767
53f004c58e6e52bce611119b79d9e76eb7654\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:58Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:58 crc kubenswrapper[4881]: I0126 12:35:58.494673 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:58 crc kubenswrapper[4881]: I0126 12:35:58.494725 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:58 crc kubenswrapper[4881]: I0126 12:35:58.494738 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:58 crc kubenswrapper[4881]: I0126 12:35:58.494761 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:35:58 crc kubenswrapper[4881]: I0126 12:35:58.494777 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:35:58Z","lastTransitionTime":"2026-01-26T12:35:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:35:58 crc kubenswrapper[4881]: I0126 12:35:58.502924 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b6f7953-daf2-4ba4-8c10-7de3f4ea07a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4f54285b7646c2fa5da34439d2812a5669f168356446852b9bff249107a7c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79ddc5e5c3c4b0c18219fe6a8297ff2b1646b9621823f529972781f915d81c5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://533f45172a004ad2bd598849b711b38e823485188ff4163dd0dbc5ae5a28cb24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f7c8799e63e6072f9dc0580f3efbfe330a9f705a689b2785c60f64391881b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c525c27678ba77145459086b6d3d2fee5d471c7393f34215e9ae46d9d72cf4e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://667695541b7d707a83aa0080723518aeae689b140791f0121a4ce6d485f39138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://667695541b7d707a83aa0080723518aeae689b140791f0121a4ce6d485f39138\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab83101bb56dc3eaf3a0514f60da45b0c5304a491184fe71584d7ed72bc6e59c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab83101bb56dc3eaf3a0514f60da45b0c5304a491184fe71584d7ed72bc6e59c\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e52793db25051b2c8eec3dbeaf0547ec04ec438ce426f3dfcd2383792f846644\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e52793db25051b2c8eec3dbeaf0547ec04ec438ce426f3dfcd2383792f846644\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:58Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:58 crc kubenswrapper[4881]: I0126 12:35:58.522939 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9aa1877-c239-4157-938d-e5c85ff3e76a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e461100b4aeab2a6e88c5c3aeb0802664d0d647247bc73b107be7c6bc4cfc9cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cbb88ac26b9c279bef436acf1a1d1e431e752f2a1ee765a7541082fa8f6c99e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff94d70ea9ad07ebc50c17d2a14b318e9b553bf1ded1e15429e9b318903d4d66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47b841bd998f47d2aa05f556a43ac6f14f630207aff7305cd0440f0b0642cc49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1728a0431ccafdf8aa83b6956348a8a6a41dd94e80f65cb469ef09a86078154\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 12:35:40.710806 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 12:35:40.716794 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1303051152/tls.crt::/tmp/serving-cert-1303051152/tls.key\\\\\\\"\\\\nI0126 12:35:46.328409 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 12:35:46.330270 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 12:35:46.330287 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 12:35:46.330308 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 12:35:46.330314 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 12:35:46.335670 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 12:35:46.335689 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 12:35:46.335693 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 12:35:46.335697 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 12:35:46.335700 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 12:35:46.335702 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 12:35:46.335705 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 12:35:46.335779 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 12:35:46.339578 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3de2d8eff73741321cce91c7bfdcab04498cfdb4694e713750466bb2d8b95e5d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f09815b7bc1e9f3afdee67bb736d35ea1c23567f631bc5fd4fcab013ab191647\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f09815b7bc1e9f3afdee67bb736d35ea1c23567f631bc5fd4fcab013ab191647\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:58Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:58 crc kubenswrapper[4881]: I0126 12:35:58.545663 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8116d483190921916638bd630fb718840fdc36eb9522aecb7ef0e9bfebe9bb7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:58Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:58 crc kubenswrapper[4881]: I0126 12:35:58.560620 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:58Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:58 crc kubenswrapper[4881]: I0126 12:35:58.573790 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f87073bcd7b2be8285c037c00f3112e1f1c6ed16cd4caf9a06c1a18d7e07c33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:58Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:58 crc kubenswrapper[4881]: I0126 12:35:58.587110 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-csrkv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d24cc7d2-c2db-45ee-b405-fa56157f807c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e19b81750777bd4a9bb8f7a14d883f0ceb5aa0fc4f1a0798c98616a7fc0cef81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x7bhq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-csrkv\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:58Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:58 crc kubenswrapper[4881]: I0126 12:35:58.597217 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:58 crc kubenswrapper[4881]: I0126 12:35:58.597285 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:58 crc kubenswrapper[4881]: I0126 12:35:58.597301 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:58 crc kubenswrapper[4881]: I0126 12:35:58.597324 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:35:58 crc kubenswrapper[4881]: I0126 12:35:58.597338 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:35:58Z","lastTransitionTime":"2026-01-26T12:35:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:35:58 crc kubenswrapper[4881]: I0126 12:35:58.612677 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d272c950-9665-4b60-98a2-20c18d02d5a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://525d6dcbf02a6ff1670c1cfff7ea9769a3e6e99d4cbc9f67b4ab9a4838f85355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54beaa25a2174c9f18bf5be7fba4c314e374e2ea05eaee1c5b62dfd7351138f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11ae8e0538c02d437ba2c7a5f53004fb5c5b7bf6c232f75b6bd737d2ec4f6a24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4526fe39791d5692af1a573249bcc2f73f2ede55766ea580867dbb009d247ddf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e87c69a0fa95e071cd3453794764480d9ed943e2d6dbe7d08e03e711be8de2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fd4c549d12c308cae9727fdb35eec136e95b94990bfca2e3c2baf3cf5724ae3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45e8fd54bf831d00321a165260c412e8d0d9aa9f
9b49c15dbce69bd7e35e8e17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9aaca9d54edc83ad154b9cc876c0f2990f7949b4966e052314e947d0cbaf2f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0995c1855fbad9dc3799727d7456b17ef9cd900ed1f4241bde85718099ede875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0995c1855fbad9dc3799727d7456b17ef9cd900ed1f4241bde85718099ede875\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kbjm9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:58Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:58 crc kubenswrapper[4881]: I0126 12:35:58.699358 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:58 crc kubenswrapper[4881]: I0126 12:35:58.699432 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:58 crc kubenswrapper[4881]: I0126 12:35:58.699450 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:58 crc kubenswrapper[4881]: I0126 12:35:58.699476 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:35:58 crc kubenswrapper[4881]: I0126 12:35:58.699495 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:35:58Z","lastTransitionTime":"2026-01-26T12:35:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:35:58 crc kubenswrapper[4881]: I0126 12:35:58.801859 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:58 crc kubenswrapper[4881]: I0126 12:35:58.801898 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:58 crc kubenswrapper[4881]: I0126 12:35:58.801910 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:58 crc kubenswrapper[4881]: I0126 12:35:58.801926 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:35:58 crc kubenswrapper[4881]: I0126 12:35:58.801937 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:35:58Z","lastTransitionTime":"2026-01-26T12:35:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:35:58 crc kubenswrapper[4881]: I0126 12:35:58.905974 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:58 crc kubenswrapper[4881]: I0126 12:35:58.906043 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:58 crc kubenswrapper[4881]: I0126 12:35:58.906068 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:58 crc kubenswrapper[4881]: I0126 12:35:58.906099 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:35:58 crc kubenswrapper[4881]: I0126 12:35:58.906122 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:35:58Z","lastTransitionTime":"2026-01-26T12:35:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:35:59 crc kubenswrapper[4881]: I0126 12:35:59.009093 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:59 crc kubenswrapper[4881]: I0126 12:35:59.009143 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:59 crc kubenswrapper[4881]: I0126 12:35:59.009156 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:59 crc kubenswrapper[4881]: I0126 12:35:59.009177 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:35:59 crc kubenswrapper[4881]: I0126 12:35:59.009192 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:35:59Z","lastTransitionTime":"2026-01-26T12:35:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:35:59 crc kubenswrapper[4881]: I0126 12:35:59.036950 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 02:00:36.28909498 +0000 UTC Jan 26 12:35:59 crc kubenswrapper[4881]: I0126 12:35:59.111263 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:59 crc kubenswrapper[4881]: I0126 12:35:59.111306 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:59 crc kubenswrapper[4881]: I0126 12:35:59.111326 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:59 crc kubenswrapper[4881]: I0126 12:35:59.111346 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:35:59 crc kubenswrapper[4881]: I0126 12:35:59.111358 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:35:59Z","lastTransitionTime":"2026-01-26T12:35:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:35:59 crc kubenswrapper[4881]: I0126 12:35:59.149585 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:59 crc kubenswrapper[4881]: I0126 12:35:59.149677 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:59 crc kubenswrapper[4881]: I0126 12:35:59.149696 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:59 crc kubenswrapper[4881]: I0126 12:35:59.149720 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:35:59 crc kubenswrapper[4881]: I0126 12:35:59.149737 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:35:59Z","lastTransitionTime":"2026-01-26T12:35:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:35:59 crc kubenswrapper[4881]: E0126 12:35:59.163898 4881 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:35:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:35:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:35:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:35:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"69f11506-d189-4146-9efa-f9280470e789\\\",\\\"systemUUID\\\":\\\"d2cf9e5e-9cbb-41d3-9ac1-608b9dce0d9f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:59Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:59 crc kubenswrapper[4881]: I0126 12:35:59.167506 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:59 crc kubenswrapper[4881]: I0126 12:35:59.167575 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 12:35:59 crc kubenswrapper[4881]: I0126 12:35:59.167591 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:59 crc kubenswrapper[4881]: I0126 12:35:59.167614 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:35:59 crc kubenswrapper[4881]: I0126 12:35:59.167630 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:35:59Z","lastTransitionTime":"2026-01-26T12:35:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:35:59 crc kubenswrapper[4881]: E0126 12:35:59.179315 4881 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:35:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:35:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:35:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:35:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"69f11506-d189-4146-9efa-f9280470e789\\\",\\\"systemUUID\\\":\\\"d2cf9e5e-9cbb-41d3-9ac1-608b9dce0d9f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:59Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:59 crc kubenswrapper[4881]: I0126 12:35:59.183277 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:59 crc kubenswrapper[4881]: I0126 12:35:59.183331 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 12:35:59 crc kubenswrapper[4881]: I0126 12:35:59.183348 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:59 crc kubenswrapper[4881]: I0126 12:35:59.183371 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:35:59 crc kubenswrapper[4881]: I0126 12:35:59.183390 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:35:59Z","lastTransitionTime":"2026-01-26T12:35:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:35:59 crc kubenswrapper[4881]: E0126 12:35:59.202929 4881 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:35:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:35:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:35:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:35:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"69f11506-d189-4146-9efa-f9280470e789\\\",\\\"systemUUID\\\":\\\"d2cf9e5e-9cbb-41d3-9ac1-608b9dce0d9f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:59Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:59 crc kubenswrapper[4881]: I0126 12:35:59.206578 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:59 crc kubenswrapper[4881]: I0126 12:35:59.206621 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 12:35:59 crc kubenswrapper[4881]: I0126 12:35:59.206633 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:59 crc kubenswrapper[4881]: I0126 12:35:59.206650 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:35:59 crc kubenswrapper[4881]: I0126 12:35:59.206661 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:35:59Z","lastTransitionTime":"2026-01-26T12:35:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:35:59 crc kubenswrapper[4881]: E0126 12:35:59.220419 4881 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:35:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:35:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:35:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:35:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"69f11506-d189-4146-9efa-f9280470e789\\\",\\\"systemUUID\\\":\\\"d2cf9e5e-9cbb-41d3-9ac1-608b9dce0d9f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:59Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:59 crc kubenswrapper[4881]: I0126 12:35:59.224693 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:59 crc kubenswrapper[4881]: I0126 12:35:59.224718 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 12:35:59 crc kubenswrapper[4881]: I0126 12:35:59.224731 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:59 crc kubenswrapper[4881]: I0126 12:35:59.224745 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:35:59 crc kubenswrapper[4881]: I0126 12:35:59.224755 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:35:59Z","lastTransitionTime":"2026-01-26T12:35:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:35:59 crc kubenswrapper[4881]: E0126 12:35:59.237416 4881 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:35:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:35:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:35:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:35:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"69f11506-d189-4146-9efa-f9280470e789\\\",\\\"systemUUID\\\":\\\"d2cf9e5e-9cbb-41d3-9ac1-608b9dce0d9f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:59Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:59 crc kubenswrapper[4881]: E0126 12:35:59.237579 4881 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 12:35:59 crc kubenswrapper[4881]: I0126 12:35:59.239208 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 26 12:35:59 crc kubenswrapper[4881]: I0126 12:35:59.239230 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:59 crc kubenswrapper[4881]: I0126 12:35:59.239239 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:59 crc kubenswrapper[4881]: I0126 12:35:59.239254 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:35:59 crc kubenswrapper[4881]: I0126 12:35:59.239264 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:35:59Z","lastTransitionTime":"2026-01-26T12:35:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:35:59 crc kubenswrapper[4881]: I0126 12:35:59.322849 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" Jan 26 12:35:59 crc kubenswrapper[4881]: I0126 12:35:59.341735 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:59 crc kubenswrapper[4881]: I0126 12:35:59.341789 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:59 crc kubenswrapper[4881]: I0126 12:35:59.341801 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:59 crc kubenswrapper[4881]: I0126 12:35:59.341822 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:35:59 crc kubenswrapper[4881]: I0126 12:35:59.341835 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:35:59Z","lastTransitionTime":"2026-01-26T12:35:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:35:59 crc kubenswrapper[4881]: I0126 12:35:59.395485 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" Jan 26 12:35:59 crc kubenswrapper[4881]: I0126 12:35:59.411682 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee20042b-95d1-49f2-abea-e339efdb096f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce7427a811572a8f3810d59e072888d97a8f15e2c493297a25418b85e1f9f16d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86d50d955d4fa8b487f6900f78738321e21226e13b7b12c100f0dd029de2fde1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8138683a9408b1a7542f9f29a3fd0b83a06f10b000c685d9d83b72b23ba4c55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\
":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94d3f2c93382cde3d932117276753f004c58e6e52bce611119b79d9e76eb7654\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:59Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:59 crc kubenswrapper[4881]: I0126 12:35:59.440670 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b6f7953-daf2-4ba4-8c10-7de3f4ea07a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4f54285b7646c2fa5da34439d2812a5669f168356446852b9bff249107a7c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79ddc5e5c3c4b0c18219fe6a8297ff2b1646b9621823f529972781f915d81c5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://533f45172a004ad2bd598849b711b38e823485188ff4163dd0dbc5ae5a28cb24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f7c8799e63e6072f9dc0580f3efbfe330a9f70
5a689b2785c60f64391881b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c525c27678ba77145459086b6d3d2fee5d471c7393f34215e9ae46d9d72cf4e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://667695541b7d707a83aa0080723518aeae689b140791f0121a4ce6d485f39138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://667695541b7d707a83aa0080723518aeae689b140791f0121a4ce6d485f39138\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab83101bb56dc3eaf3a0514f60da45b0c5304a491184fe71584d7ed72bc6e59c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab83101bb56dc3eaf3a0514f60da45b0c5304a491184fe71584d7ed72bc6e59c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e52793db25051b2c8eec3dbeaf0547ec04ec438ce426f3dfcd2383792f846644\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e52793db25051b2c8eec3dbeaf0547ec04ec438ce426f3dfcd2383792f846644\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:59Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:59 crc kubenswrapper[4881]: I0126 12:35:59.444034 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:59 crc kubenswrapper[4881]: I0126 12:35:59.444245 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:59 crc kubenswrapper[4881]: I0126 12:35:59.444327 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:59 crc kubenswrapper[4881]: I0126 12:35:59.444416 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:35:59 crc kubenswrapper[4881]: I0126 12:35:59.444529 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:35:59Z","lastTransitionTime":"2026-01-26T12:35:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:35:59 crc kubenswrapper[4881]: I0126 12:35:59.458597 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9aa1877-c239-4157-938d-e5c85ff3e76a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e461100b4aeab2a6e88c5c3aeb0802664d0d647247bc73b107be7c6bc4cfc9cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cbb88ac26b9c279bef436acf1a1d1e431e752f2a1ee765a7541082fa8f6c99e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff94d70ea9ad07ebc50c17d2a14b318e9b553bf1ded1e15429e9b318903d4d66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47b841bd998f47d2aa05f556a43ac6f14f630207aff7305cd0440f0b0642cc49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1728a0431ccafdf8aa83b6956348a8a6a41dd94e80f65cb469ef09a86078154\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 12:35:40.710806 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 12:35:40.716794 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1303051152/tls.crt::/tmp/serving-cert-1303051152/tls.key\\\\\\\"\\\\nI0126 12:35:46.328409 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 12:35:46.330270 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 12:35:46.330287 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 12:35:46.330308 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 12:35:46.330314 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 12:35:46.335670 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 12:35:46.335689 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 12:35:46.335693 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 12:35:46.335697 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 12:35:46.335700 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 12:35:46.335702 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 12:35:46.335705 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 12:35:46.335779 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 12:35:46.339578 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3de2d8eff73741321cce91c7bfdcab04498cfdb4694e713750466bb2d8b95e5d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f09815b7bc1e9f3afdee67bb736d35ea1c23567f631bc5fd4fcab013ab191647\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f09815b7bc1e9f3afdee67bb736d35ea1c23567f631bc5fd4fcab013ab191647\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:59Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:59 crc kubenswrapper[4881]: I0126 12:35:59.477078 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8116d483190921916638bd630fb718840fdc36eb9522aecb7ef0e9bfebe9bb7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:59Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:59 crc kubenswrapper[4881]: I0126 12:35:59.498588 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:59Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:59 crc kubenswrapper[4881]: I0126 12:35:59.513771 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f87073bcd7b2be8285c037c00f3112e1f1c6ed16cd4caf9a06c1a18d7e07c33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:59Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:59 crc kubenswrapper[4881]: I0126 12:35:59.529250 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-csrkv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d24cc7d2-c2db-45ee-b405-fa56157f807c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e19b81750777bd4a9bb8f7a14d883f0ceb5aa0fc4f1a0798c98616a7fc0cef81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x7bhq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-csrkv\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:59Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:59 crc kubenswrapper[4881]: I0126 12:35:59.546738 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:59 crc kubenswrapper[4881]: I0126 12:35:59.546813 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:59 crc kubenswrapper[4881]: I0126 12:35:59.546837 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:59 crc kubenswrapper[4881]: I0126 12:35:59.546869 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:35:59 crc kubenswrapper[4881]: I0126 12:35:59.546893 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:35:59Z","lastTransitionTime":"2026-01-26T12:35:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:35:59 crc kubenswrapper[4881]: I0126 12:35:59.552934 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d272c950-9665-4b60-98a2-20c18d02d5a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://525d6dcbf02a6ff1670c1cfff7ea9769a3e6e99d4cbc9f67b4ab9a4838f85355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54beaa25a2174c9f18bf5be7fba4c314e374e2ea05eaee1c5b62dfd7351138f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11ae8e0538c02d437ba2c7a5f53004fb5c5b7bf6c232f75b6bd737d2ec4f6a24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4526fe39791d5692af1a573249bcc2f73f2ede55766ea580867dbb009d247ddf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e87c69a0fa95e071cd3453794764480d9ed943e2d6dbe7d08e03e711be8de2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fd4c549d12c308cae9727fdb35eec136e95b94990bfca2e3c2baf3cf5724ae3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45e8fd54bf831d00321a165260c412e8d0d9aa9f
9b49c15dbce69bd7e35e8e17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9aaca9d54edc83ad154b9cc876c0f2990f7949b4966e052314e947d0cbaf2f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0995c1855fbad9dc3799727d7456b17ef9cd900ed1f4241bde85718099ede875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0995c1855fbad9dc3799727d7456b17ef9cd900ed1f4241bde85718099ede875\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kbjm9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:59Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:59 crc kubenswrapper[4881]: I0126 12:35:59.572743 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:59Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:59 crc kubenswrapper[4881]: I0126 12:35:59.590728 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:59Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:59 crc kubenswrapper[4881]: I0126 12:35:59.616042 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79282300691fcea19d9fe7ad9c661474c26ba5e445c1a4f009961641bf59fb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b2921f0cb7ad2096499d1c6c0e5f8f37da3620c8468c9fab08b57b4545e5dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:59Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:59 crc kubenswrapper[4881]: I0126 12:35:59.633288 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f4b5v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"017c2a17-2267-4284-a0ca-d3c513aa9ff9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cea6c5f8bd80cf271df551539957d4c3e60378ff13dc806b3142e8e1453ab95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc482\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f4b5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:59Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:59 crc kubenswrapper[4881]: I0126 12:35:59.650013 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:59 crc kubenswrapper[4881]: I0126 
12:35:59.650069 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:59 crc kubenswrapper[4881]: I0126 12:35:59.650079 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:59 crc kubenswrapper[4881]: I0126 12:35:59.650100 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:35:59 crc kubenswrapper[4881]: I0126 12:35:59.650112 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:35:59Z","lastTransitionTime":"2026-01-26T12:35:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:35:59 crc kubenswrapper[4881]: I0126 12:35:59.653761 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pmwpn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb5ecb63-1238-44dc-9c40-b5e5dd7d4847\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://428d01799872a00b628c81145d5fbeac8707bad0afa7563e16bc44686fd7c18d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b54e7d904bec7e9817b662ea1c59ede6589fde79754520dfb1e9fb82864e623d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b54e7d904bec7e9817b662ea1c59ede6589fde79754520dfb1e9fb82864e623d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e58ff1e13a164f50c9664f463b50cbf651ce8e5edac8cbff9a9e42626d8d39ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e58ff1e13a164f50c9664f463b50cbf651ce8e5edac8cbff9a9e42626d8d39ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fca59d7df432a78a62142cd5e47d6a1a62b2c23ff7eec49d6297aacf30b530a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fca59d7df432a78a62142cd5e47d6a1a62b2c23ff7eec49d6297aacf30b530a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d5b95e27428d9ca0382054957b1cc19e96fe8bb5622af5f7ebebebec53d2add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d5b95e27428d9ca0382054957b1cc19e96fe8bb5622af5f7ebebebec53d2add\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45799bb6d037f31b82596651214f8bdf9c13346f1d9953b965a40b55d2588edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45799bb6d037f31b82596651214f8bdf9c13346f1d9953b965a40b55d2588edd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10bacfe34f01c5de9439ac6023539e6489ec63175e54099942c2fb2fe4250bd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10bacfe34f01c5de9439ac6023539e6489ec63175e54099942c2fb2fe4250bd6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/
net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pmwpn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:59Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:59 crc kubenswrapper[4881]: I0126 12:35:59.678636 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tvrtr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"62ba5262-b5d8-4c86-85db-0993c88afc38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d6070117a94d139d5fc78fee4b38e1c75537675f27a663dcb2d6c8be285e9b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj9qc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tvrtr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:35:59Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:59 crc 
kubenswrapper[4881]: I0126 12:35:59.691929 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02fca4ac1029e236fa1f5c8b73ec4cf6f627aba503ba27d28dd2566e2f9d332a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c6n6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e51c1ddc00bfd73fde58f6b2e82757a924047ef05aa76b74d8fcb2de4156ad9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c6n6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fwlbz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-01-26T12:35:59Z is after 2025-08-24T17:21:41Z" Jan 26 12:35:59 crc kubenswrapper[4881]: I0126 12:35:59.753432 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:59 crc kubenswrapper[4881]: I0126 12:35:59.753494 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:59 crc kubenswrapper[4881]: I0126 12:35:59.753511 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:59 crc kubenswrapper[4881]: I0126 12:35:59.753554 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:35:59 crc kubenswrapper[4881]: I0126 12:35:59.753571 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:35:59Z","lastTransitionTime":"2026-01-26T12:35:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:35:59 crc kubenswrapper[4881]: I0126 12:35:59.856840 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:59 crc kubenswrapper[4881]: I0126 12:35:59.857885 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:59 crc kubenswrapper[4881]: I0126 12:35:59.857907 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:59 crc kubenswrapper[4881]: I0126 12:35:59.857927 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:35:59 crc kubenswrapper[4881]: I0126 12:35:59.857942 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:35:59Z","lastTransitionTime":"2026-01-26T12:35:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:35:59 crc kubenswrapper[4881]: I0126 12:35:59.961487 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:35:59 crc kubenswrapper[4881]: I0126 12:35:59.961539 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:35:59 crc kubenswrapper[4881]: I0126 12:35:59.961548 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:35:59 crc kubenswrapper[4881]: I0126 12:35:59.961563 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:35:59 crc kubenswrapper[4881]: I0126 12:35:59.961573 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:35:59Z","lastTransitionTime":"2026-01-26T12:35:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:00 crc kubenswrapper[4881]: I0126 12:36:00.037559 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 02:53:37.802364961 +0000 UTC Jan 26 12:36:00 crc kubenswrapper[4881]: I0126 12:36:00.064663 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:00 crc kubenswrapper[4881]: I0126 12:36:00.064698 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:00 crc kubenswrapper[4881]: I0126 12:36:00.064711 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:00 crc kubenswrapper[4881]: I0126 12:36:00.064726 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:00 crc kubenswrapper[4881]: I0126 12:36:00.064736 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:00Z","lastTransitionTime":"2026-01-26T12:36:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:00 crc kubenswrapper[4881]: I0126 12:36:00.084234 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 12:36:00 crc kubenswrapper[4881]: I0126 12:36:00.084302 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 12:36:00 crc kubenswrapper[4881]: I0126 12:36:00.084391 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:36:00 crc kubenswrapper[4881]: E0126 12:36:00.084784 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 12:36:00 crc kubenswrapper[4881]: E0126 12:36:00.085038 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 12:36:00 crc kubenswrapper[4881]: E0126 12:36:00.084998 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 12:36:00 crc kubenswrapper[4881]: I0126 12:36:00.167581 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:00 crc kubenswrapper[4881]: I0126 12:36:00.167626 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:00 crc kubenswrapper[4881]: I0126 12:36:00.167636 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:00 crc kubenswrapper[4881]: I0126 12:36:00.167653 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:00 crc kubenswrapper[4881]: I0126 12:36:00.167662 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:00Z","lastTransitionTime":"2026-01-26T12:36:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:00 crc kubenswrapper[4881]: I0126 12:36:00.271190 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:00 crc kubenswrapper[4881]: I0126 12:36:00.271236 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:00 crc kubenswrapper[4881]: I0126 12:36:00.271250 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:00 crc kubenswrapper[4881]: I0126 12:36:00.271274 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:00 crc kubenswrapper[4881]: I0126 12:36:00.271291 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:00Z","lastTransitionTime":"2026-01-26T12:36:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:00 crc kubenswrapper[4881]: I0126 12:36:00.375271 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:00 crc kubenswrapper[4881]: I0126 12:36:00.375320 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:00 crc kubenswrapper[4881]: I0126 12:36:00.375331 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:00 crc kubenswrapper[4881]: I0126 12:36:00.375351 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:00 crc kubenswrapper[4881]: I0126 12:36:00.375364 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:00Z","lastTransitionTime":"2026-01-26T12:36:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:00 crc kubenswrapper[4881]: I0126 12:36:00.478255 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:00 crc kubenswrapper[4881]: I0126 12:36:00.478302 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:00 crc kubenswrapper[4881]: I0126 12:36:00.478319 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:00 crc kubenswrapper[4881]: I0126 12:36:00.478347 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:00 crc kubenswrapper[4881]: I0126 12:36:00.478365 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:00Z","lastTransitionTime":"2026-01-26T12:36:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:00 crc kubenswrapper[4881]: I0126 12:36:00.583552 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:00 crc kubenswrapper[4881]: I0126 12:36:00.583801 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:00 crc kubenswrapper[4881]: I0126 12:36:00.583915 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:00 crc kubenswrapper[4881]: I0126 12:36:00.584001 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:00 crc kubenswrapper[4881]: I0126 12:36:00.584081 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:00Z","lastTransitionTime":"2026-01-26T12:36:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:00 crc kubenswrapper[4881]: I0126 12:36:00.687289 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:00 crc kubenswrapper[4881]: I0126 12:36:00.687415 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:00 crc kubenswrapper[4881]: I0126 12:36:00.687442 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:00 crc kubenswrapper[4881]: I0126 12:36:00.687474 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:00 crc kubenswrapper[4881]: I0126 12:36:00.687499 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:00Z","lastTransitionTime":"2026-01-26T12:36:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:00 crc kubenswrapper[4881]: I0126 12:36:00.791797 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:00 crc kubenswrapper[4881]: I0126 12:36:00.791858 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:00 crc kubenswrapper[4881]: I0126 12:36:00.791871 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:00 crc kubenswrapper[4881]: I0126 12:36:00.791892 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:00 crc kubenswrapper[4881]: I0126 12:36:00.791912 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:00Z","lastTransitionTime":"2026-01-26T12:36:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:00 crc kubenswrapper[4881]: I0126 12:36:00.894856 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:00 crc kubenswrapper[4881]: I0126 12:36:00.894919 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:00 crc kubenswrapper[4881]: I0126 12:36:00.894940 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:00 crc kubenswrapper[4881]: I0126 12:36:00.894968 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:00 crc kubenswrapper[4881]: I0126 12:36:00.894987 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:00Z","lastTransitionTime":"2026-01-26T12:36:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:00 crc kubenswrapper[4881]: I0126 12:36:00.998145 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:00 crc kubenswrapper[4881]: I0126 12:36:00.998195 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:00 crc kubenswrapper[4881]: I0126 12:36:00.998205 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:00 crc kubenswrapper[4881]: I0126 12:36:00.998250 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:00 crc kubenswrapper[4881]: I0126 12:36:00.998266 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:00Z","lastTransitionTime":"2026-01-26T12:36:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:01 crc kubenswrapper[4881]: I0126 12:36:01.038100 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 20:36:51.02609206 +0000 UTC Jan 26 12:36:01 crc kubenswrapper[4881]: I0126 12:36:01.101837 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:01 crc kubenswrapper[4881]: I0126 12:36:01.101928 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:01 crc kubenswrapper[4881]: I0126 12:36:01.101957 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:01 crc kubenswrapper[4881]: I0126 12:36:01.101997 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:01 crc kubenswrapper[4881]: I0126 12:36:01.102027 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:01Z","lastTransitionTime":"2026-01-26T12:36:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:01 crc kubenswrapper[4881]: I0126 12:36:01.206107 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:01 crc kubenswrapper[4881]: I0126 12:36:01.206158 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:01 crc kubenswrapper[4881]: I0126 12:36:01.206177 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:01 crc kubenswrapper[4881]: I0126 12:36:01.206196 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:01 crc kubenswrapper[4881]: I0126 12:36:01.206210 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:01Z","lastTransitionTime":"2026-01-26T12:36:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:01 crc kubenswrapper[4881]: I0126 12:36:01.310334 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:01 crc kubenswrapper[4881]: I0126 12:36:01.310735 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:01 crc kubenswrapper[4881]: I0126 12:36:01.310877 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:01 crc kubenswrapper[4881]: I0126 12:36:01.311013 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:01 crc kubenswrapper[4881]: I0126 12:36:01.311239 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:01Z","lastTransitionTime":"2026-01-26T12:36:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:01 crc kubenswrapper[4881]: I0126 12:36:01.415943 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:01 crc kubenswrapper[4881]: I0126 12:36:01.416007 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:01 crc kubenswrapper[4881]: I0126 12:36:01.416027 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:01 crc kubenswrapper[4881]: I0126 12:36:01.416063 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:01 crc kubenswrapper[4881]: I0126 12:36:01.416081 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:01Z","lastTransitionTime":"2026-01-26T12:36:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:01 crc kubenswrapper[4881]: I0126 12:36:01.519882 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:01 crc kubenswrapper[4881]: I0126 12:36:01.519946 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:01 crc kubenswrapper[4881]: I0126 12:36:01.519964 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:01 crc kubenswrapper[4881]: I0126 12:36:01.519989 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:01 crc kubenswrapper[4881]: I0126 12:36:01.520009 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:01Z","lastTransitionTime":"2026-01-26T12:36:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:01 crc kubenswrapper[4881]: I0126 12:36:01.623118 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:01 crc kubenswrapper[4881]: I0126 12:36:01.623205 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:01 crc kubenswrapper[4881]: I0126 12:36:01.623224 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:01 crc kubenswrapper[4881]: I0126 12:36:01.623297 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:01 crc kubenswrapper[4881]: I0126 12:36:01.623317 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:01Z","lastTransitionTime":"2026-01-26T12:36:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:01 crc kubenswrapper[4881]: I0126 12:36:01.725586 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:01 crc kubenswrapper[4881]: I0126 12:36:01.725634 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:01 crc kubenswrapper[4881]: I0126 12:36:01.725645 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:01 crc kubenswrapper[4881]: I0126 12:36:01.725666 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:01 crc kubenswrapper[4881]: I0126 12:36:01.725682 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:01Z","lastTransitionTime":"2026-01-26T12:36:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:01 crc kubenswrapper[4881]: I0126 12:36:01.829287 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:01 crc kubenswrapper[4881]: I0126 12:36:01.829351 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:01 crc kubenswrapper[4881]: I0126 12:36:01.829366 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:01 crc kubenswrapper[4881]: I0126 12:36:01.829391 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:01 crc kubenswrapper[4881]: I0126 12:36:01.829406 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:01Z","lastTransitionTime":"2026-01-26T12:36:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:01 crc kubenswrapper[4881]: I0126 12:36:01.862062 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:36:01 crc kubenswrapper[4881]: I0126 12:36:01.862374 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:36:01 crc kubenswrapper[4881]: I0126 12:36:01.862418 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:36:01 crc kubenswrapper[4881]: I0126 12:36:01.862455 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 12:36:01 crc kubenswrapper[4881]: I0126 12:36:01.862495 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 12:36:01 crc kubenswrapper[4881]: E0126 12:36:01.862587 4881 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 12:36:01 crc kubenswrapper[4881]: E0126 12:36:01.862656 4881 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 12:36:01 crc kubenswrapper[4881]: E0126 12:36:01.862695 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 12:36:17.862665643 +0000 UTC m=+50.341975669 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 12:36:01 crc kubenswrapper[4881]: E0126 12:36:01.862676 4881 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 12:36:01 crc kubenswrapper[4881]: E0126 12:36:01.862786 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 12:36:17.862747195 +0000 UTC m=+50.342057451 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 12:36:01 crc kubenswrapper[4881]: E0126 12:36:01.862790 4881 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 12:36:01 crc kubenswrapper[4881]: E0126 12:36:01.862828 4881 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 12:36:01 crc kubenswrapper[4881]: E0126 12:36:01.862835 4881 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 12:36:01 crc kubenswrapper[4881]: E0126 12:36:01.862876 4881 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 12:36:01 crc kubenswrapper[4881]: E0126 12:36:01.862898 4881 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 12:36:01 crc kubenswrapper[4881]: E0126 12:36:01.862913 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 12:36:17.862893068 +0000 UTC m=+50.342203344 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 12:36:01 crc kubenswrapper[4881]: E0126 12:36:01.863057 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:36:17.863037802 +0000 UTC m=+50.342348038 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:36:01 crc kubenswrapper[4881]: E0126 12:36:01.863079 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 12:36:17.863067282 +0000 UTC m=+50.342377548 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 12:36:01 crc kubenswrapper[4881]: I0126 12:36:01.932288 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:01 crc kubenswrapper[4881]: I0126 12:36:01.932357 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:01 crc kubenswrapper[4881]: I0126 12:36:01.932373 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:01 crc kubenswrapper[4881]: I0126 12:36:01.932393 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:01 crc kubenswrapper[4881]: I0126 12:36:01.932408 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:01Z","lastTransitionTime":"2026-01-26T12:36:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:02 crc kubenswrapper[4881]: I0126 12:36:02.035503 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:02 crc kubenswrapper[4881]: I0126 12:36:02.035607 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:02 crc kubenswrapper[4881]: I0126 12:36:02.035620 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:02 crc kubenswrapper[4881]: I0126 12:36:02.035644 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:02 crc kubenswrapper[4881]: I0126 12:36:02.035659 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:02Z","lastTransitionTime":"2026-01-26T12:36:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:02 crc kubenswrapper[4881]: I0126 12:36:02.039059 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 14:56:20.15815703 +0000 UTC Jan 26 12:36:02 crc kubenswrapper[4881]: I0126 12:36:02.082148 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 12:36:02 crc kubenswrapper[4881]: I0126 12:36:02.082309 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 12:36:02 crc kubenswrapper[4881]: E0126 12:36:02.082502 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 12:36:02 crc kubenswrapper[4881]: I0126 12:36:02.082619 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:36:02 crc kubenswrapper[4881]: E0126 12:36:02.082659 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 12:36:02 crc kubenswrapper[4881]: E0126 12:36:02.082994 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 12:36:02 crc kubenswrapper[4881]: I0126 12:36:02.140954 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:02 crc kubenswrapper[4881]: I0126 12:36:02.141023 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:02 crc kubenswrapper[4881]: I0126 12:36:02.141044 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:02 crc kubenswrapper[4881]: I0126 12:36:02.141076 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:02 crc kubenswrapper[4881]: I0126 12:36:02.141095 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:02Z","lastTransitionTime":"2026-01-26T12:36:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:02 crc kubenswrapper[4881]: I0126 12:36:02.244113 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:02 crc kubenswrapper[4881]: I0126 12:36:02.244170 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:02 crc kubenswrapper[4881]: I0126 12:36:02.244187 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:02 crc kubenswrapper[4881]: I0126 12:36:02.244202 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:02 crc kubenswrapper[4881]: I0126 12:36:02.244232 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:02Z","lastTransitionTime":"2026-01-26T12:36:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:02 crc kubenswrapper[4881]: I0126 12:36:02.330360 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-kbjm9_d272c950-9665-4b60-98a2-20c18d02d5a2/ovnkube-controller/0.log" Jan 26 12:36:02 crc kubenswrapper[4881]: I0126 12:36:02.333910 4881 generic.go:334] "Generic (PLEG): container finished" podID="d272c950-9665-4b60-98a2-20c18d02d5a2" containerID="45e8fd54bf831d00321a165260c412e8d0d9aa9f9b49c15dbce69bd7e35e8e17" exitCode=1 Jan 26 12:36:02 crc kubenswrapper[4881]: I0126 12:36:02.333965 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" event={"ID":"d272c950-9665-4b60-98a2-20c18d02d5a2","Type":"ContainerDied","Data":"45e8fd54bf831d00321a165260c412e8d0d9aa9f9b49c15dbce69bd7e35e8e17"} Jan 26 12:36:02 crc kubenswrapper[4881]: I0126 12:36:02.334833 4881 scope.go:117] "RemoveContainer" containerID="45e8fd54bf831d00321a165260c412e8d0d9aa9f9b49c15dbce69bd7e35e8e17" Jan 26 12:36:02 crc kubenswrapper[4881]: I0126 12:36:02.346586 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:02 crc kubenswrapper[4881]: I0126 12:36:02.346626 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:02 crc kubenswrapper[4881]: I0126 12:36:02.346638 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:02 crc kubenswrapper[4881]: I0126 12:36:02.346660 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:02 crc kubenswrapper[4881]: I0126 12:36:02.346674 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:02Z","lastTransitionTime":"2026-01-26T12:36:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:02 crc kubenswrapper[4881]: I0126 12:36:02.356742 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:02Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:02 crc kubenswrapper[4881]: I0126 12:36:02.375034 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:02Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:02 crc kubenswrapper[4881]: I0126 12:36:02.388323 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7m2c5"] Jan 26 12:36:02 crc kubenswrapper[4881]: I0126 12:36:02.388813 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7m2c5" Jan 26 12:36:02 crc kubenswrapper[4881]: I0126 12:36:02.391160 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 26 12:36:02 crc kubenswrapper[4881]: I0126 12:36:02.391425 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 26 12:36:02 crc kubenswrapper[4881]: I0126 12:36:02.396756 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79282300691fcea19d9fe7ad9c661474c26ba5e445c1a4f009961641bf59fb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b2921f0cb7ad2096499d1c6c0e5f8f37da3620c8468c9fab08b57b4545e5dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:02Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:02 crc kubenswrapper[4881]: I0126 12:36:02.415053 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f4b5v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"017c2a17-2267-4284-a0ca-d3c513aa9ff9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cea6c5f8bd80cf271df551539957d4c3e60378ff13dc806b3142e8e1453ab95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc482\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f4b5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:02Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:02 crc kubenswrapper[4881]: I0126 12:36:02.434280 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pmwpn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb5ecb63-1238-44dc-9c40-b5e5dd7d4847\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://428d01799872a00b628c81145d5fbeac8707bad0afa7563e16bc44686fd7c18d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b54e7d904bec7e9817b662ea1c59ede6589fde79754520dfb1e9fb82864e623d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b54e7d904bec7e9817b662ea1c59ede6589fde79754520dfb1e9fb82864e623d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e58ff1e13a164f50c9664f463b50cbf651ce8e5edac8cbff9a9e42626d8d39ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e58ff1e13a164f50c9664f463b50cbf651ce8e5edac8cbff9a9e42626d8d39ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fca59d7df432a78a62142cd5e47d6a1a62b2c23ff7eec49d6297aacf30b530a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fca59d7df432a78a62142cd5e47d6a1a62b2c23ff7eec49d6297aacf30b530a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d5b95e27428d9ca0382054957b1cc19e96fe8bb5622af5f7ebebebec53d2add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d5b95e27428d9ca0382054957b1cc19e96fe8bb5622af5f7ebebebec53d2add\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45799bb6d037f31b82596651214f8bdf9c13346f1d9953b965a40b55d2588edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45799bb6d037f31b82596651214f8bdf9c13346f1d9953b965a40b55d2588edd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10bacfe34f01c5de9439ac6023539e6489ec63175e54099942c2fb2fe4250bd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10bacfe34f01c5de9439ac6023539e6489ec63175e54099942c2fb2fe4250bd6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pmwpn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:02Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:02 crc kubenswrapper[4881]: I0126 12:36:02.448024 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tvrtr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"62ba5262-b5d8-4c86-85db-0993c88afc38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d6070117a94d139d5fc78fee4b38e1c75537675f27a663dcb2d6c8be285e9b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj9qc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tvrtr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:02Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:02 crc kubenswrapper[4881]: I0126 12:36:02.450239 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:02 crc kubenswrapper[4881]: I0126 12:36:02.450284 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:02 crc kubenswrapper[4881]: I0126 12:36:02.450297 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:02 crc kubenswrapper[4881]: I0126 12:36:02.450318 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:02 crc kubenswrapper[4881]: I0126 12:36:02.450330 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:02Z","lastTransitionTime":"2026-01-26T12:36:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:02 crc kubenswrapper[4881]: I0126 12:36:02.459710 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02fca4ac1029e236fa1f5c8b73ec4cf6f627aba503ba27d28dd2566e2f9d332a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c6n6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e51c1ddc00bfd73fde58f6b2e82757a924047ef05aa76b74d8fcb2de4156ad9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c6n6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fwlbz\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:02Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:02 crc kubenswrapper[4881]: I0126 12:36:02.468464 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/939b570a-38ce-49f8-8518-1ab500c4e449-env-overrides\") pod \"ovnkube-control-plane-749d76644c-7m2c5\" (UID: \"939b570a-38ce-49f8-8518-1ab500c4e449\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7m2c5" Jan 26 12:36:02 crc kubenswrapper[4881]: I0126 12:36:02.468602 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/939b570a-38ce-49f8-8518-1ab500c4e449-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-7m2c5\" (UID: \"939b570a-38ce-49f8-8518-1ab500c4e449\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7m2c5" Jan 26 12:36:02 crc kubenswrapper[4881]: I0126 12:36:02.468651 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/939b570a-38ce-49f8-8518-1ab500c4e449-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-7m2c5\" (UID: \"939b570a-38ce-49f8-8518-1ab500c4e449\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7m2c5" Jan 26 12:36:02 crc kubenswrapper[4881]: I0126 12:36:02.468682 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwjlp\" (UniqueName: \"kubernetes.io/projected/939b570a-38ce-49f8-8518-1ab500c4e449-kube-api-access-xwjlp\") pod \"ovnkube-control-plane-749d76644c-7m2c5\" (UID: \"939b570a-38ce-49f8-8518-1ab500c4e449\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7m2c5" Jan 26 12:36:02 crc kubenswrapper[4881]: I0126 12:36:02.475949 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee20042b-95d1-49f2-abea-e339efdb096f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce7427a811572a8f3810d59e072888d97a8f15e2c493297a25418b85e1f9f16d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86d50d955d4fa8b487f6900f78738321e21226e13b7b12c100f0dd029de2fde1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8138683a9408b1a7542f9f29a3fd0b83a06f10b000c685d9d83b72b23ba4c55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94d3f2c93382cde3d932117276753f004c58e6e52bce611119b79d9e76eb7654\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:02Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:02 crc kubenswrapper[4881]: I0126 12:36:02.504904 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b6f7953-daf2-4ba4-8c10-7de3f4ea07a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4f54285b7646c2fa5da34439d2812a5669f168356446852b9bff249107a7c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79ddc5e5c3c4b0c18219fe6a8297ff2b1646b9621823f529972781f915d81c5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07
b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://533f45172a004ad2bd598849b711b38e823485188ff4163dd0dbc5ae5a28cb24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f7c8799e63e6072f9dc0580f3efbfe330a9f705a689b2785c60f64391881b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c525c27678ba77145459086b6d3d2fee5d471c7393f34215e9ae46d9d72cf4e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://667695541b7d707a83aa0080723518aeae689b140791f0121a4ce6d485f39138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\
\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://667695541b7d707a83aa0080723518aeae689b140791f0121a4ce6d485f39138\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab83101bb56dc3eaf3a0514f60da45b0c5304a491184fe71584d7ed72bc6e59c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab83101bb56dc3eaf3a0514f60da45b0c5304a491184fe71584d7ed72bc6e59c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e52793db25051b2c8eec3dbeaf0547ec04ec438ce426f3dfcd2383792f846644\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e52793db25051b2c8eec3dbeaf0547ec04ec438ce426f3dfcd2383792f846644\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:02Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:02 crc kubenswrapper[4881]: I0126 12:36:02.523054 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9aa1877-c239-4157-938d-e5c85ff3e76a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e461100b4aeab2a6e88c5c3aeb0802664d0d647247bc73b107be7c6bc4cfc9cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cbb88ac26b9c279bef436acf1a1d1e431e752f2a1ee765a7541082fa8f6c99e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff94d70ea9ad07ebc50c17d2a14b318e9b553bf1ded1e15429e9b318903d4d66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47b841bd998f47d2aa05f556a43ac6f14f630207aff7305cd0440f0b0642cc49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1728a0431ccafdf8aa83b6956348a8a6a41dd94e80f65cb469ef09a86078154\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 12:35:40.710806 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 12:35:40.716794 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1303051152/tls.crt::/tmp/serving-cert-1303051152/tls.key\\\\\\\"\\\\nI0126 12:35:46.328409 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 12:35:46.330270 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 12:35:46.330287 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 12:35:46.330308 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 12:35:46.330314 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 12:35:46.335670 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 12:35:46.335689 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 12:35:46.335693 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 12:35:46.335697 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 12:35:46.335700 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 12:35:46.335702 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 12:35:46.335705 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 12:35:46.335779 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 12:35:46.339578 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3de2d8eff73741321cce91c7bfdcab04498cfdb4694e713750466bb2d8b95e5d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f09815b7bc1e9f3afdee67bb736d35ea1c23567f631bc5fd4fcab013ab191647\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f09815b7bc1e9f3afdee67bb736d35ea1c23567f631bc5fd4fcab013ab191647\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:02Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:02 crc kubenswrapper[4881]: I0126 12:36:02.539199 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8116d483190921916638bd630fb718840fdc36eb9522aecb7ef0e9bfebe9bb7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:02Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:02 crc kubenswrapper[4881]: I0126 12:36:02.553067 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:02Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:02 crc kubenswrapper[4881]: I0126 12:36:02.554376 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:02 crc kubenswrapper[4881]: I0126 12:36:02.554426 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:02 crc kubenswrapper[4881]: I0126 12:36:02.554438 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:02 crc kubenswrapper[4881]: I0126 12:36:02.554458 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:02 crc kubenswrapper[4881]: I0126 12:36:02.554470 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:02Z","lastTransitionTime":"2026-01-26T12:36:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:02 crc kubenswrapper[4881]: I0126 12:36:02.566458 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f87073bcd7b2be8285c037c00f3112e1f1c6ed16cd4caf9a06c1a18d7e07c33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:02Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:02 crc kubenswrapper[4881]: I0126 12:36:02.569390 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/939b570a-38ce-49f8-8518-1ab500c4e449-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-7m2c5\" (UID: \"939b570a-38ce-49f8-8518-1ab500c4e449\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7m2c5" Jan 26 12:36:02 crc kubenswrapper[4881]: I0126 12:36:02.569437 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwjlp\" (UniqueName: \"kubernetes.io/projected/939b570a-38ce-49f8-8518-1ab500c4e449-kube-api-access-xwjlp\") pod \"ovnkube-control-plane-749d76644c-7m2c5\" (UID: \"939b570a-38ce-49f8-8518-1ab500c4e449\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7m2c5" Jan 26 12:36:02 crc kubenswrapper[4881]: I0126 12:36:02.569493 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/939b570a-38ce-49f8-8518-1ab500c4e449-env-overrides\") pod \"ovnkube-control-plane-749d76644c-7m2c5\" (UID: 
\"939b570a-38ce-49f8-8518-1ab500c4e449\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7m2c5" Jan 26 12:36:02 crc kubenswrapper[4881]: I0126 12:36:02.569532 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/939b570a-38ce-49f8-8518-1ab500c4e449-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-7m2c5\" (UID: \"939b570a-38ce-49f8-8518-1ab500c4e449\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7m2c5" Jan 26 12:36:02 crc kubenswrapper[4881]: I0126 12:36:02.570176 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/939b570a-38ce-49f8-8518-1ab500c4e449-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-7m2c5\" (UID: \"939b570a-38ce-49f8-8518-1ab500c4e449\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7m2c5" Jan 26 12:36:02 crc kubenswrapper[4881]: I0126 12:36:02.570950 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/939b570a-38ce-49f8-8518-1ab500c4e449-env-overrides\") pod \"ovnkube-control-plane-749d76644c-7m2c5\" (UID: \"939b570a-38ce-49f8-8518-1ab500c4e449\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7m2c5" Jan 26 12:36:02 crc kubenswrapper[4881]: I0126 12:36:02.577112 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/939b570a-38ce-49f8-8518-1ab500c4e449-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-7m2c5\" (UID: \"939b570a-38ce-49f8-8518-1ab500c4e449\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7m2c5" Jan 26 12:36:02 crc kubenswrapper[4881]: I0126 12:36:02.590922 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-csrkv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d24cc7d2-c2db-45ee-b405-fa56157f807c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e19b81750777bd4a9bb8f7a14d883f0ceb5aa0fc4f1a0798c98616a7fc0cef81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x7bhq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-csrkv\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:02Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:02 crc kubenswrapper[4881]: I0126 12:36:02.604631 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwjlp\" (UniqueName: \"kubernetes.io/projected/939b570a-38ce-49f8-8518-1ab500c4e449-kube-api-access-xwjlp\") pod \"ovnkube-control-plane-749d76644c-7m2c5\" (UID: \"939b570a-38ce-49f8-8518-1ab500c4e449\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7m2c5" Jan 26 12:36:02 crc kubenswrapper[4881]: I0126 12:36:02.615132 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d272c950-9665-4b60-98a2-20c18d02d5a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://525d6dcbf02a6ff1670c1cfff7ea9769a3e6e99d4cbc9f67b4ab9a4838f85355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54beaa25a2174c9f18bf5be7fba4c314e374e2ea05eaee1c5b62dfd7351138f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11ae8e0538c02d437ba2c7a5f53004fb5c5b7bf6c232f75b6bd737d2ec4f6a24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4526fe39791d5692af1a573249bcc2f73f2ede55766ea580867dbb009d247ddf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e87c69a0fa95e071cd3453794764480d9ed943e2d6dbe7d08e03e711be8de2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fd4c549d12c308cae9727fdb35eec136e95b94990bfca2e3c2baf3cf5724ae3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45e8fd54bf831d00321a165260c412e8d0d9aa9f
9b49c15dbce69bd7e35e8e17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45e8fd54bf831d00321a165260c412e8d0d9aa9f9b49c15dbce69bd7e35e8e17\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T12:36:01Z\\\",\\\"message\\\":\\\"9753 6178 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0126 12:36:00.500610 6178 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 12:36:00.500646 6178 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0126 12:36:00.500656 6178 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0126 12:36:00.500694 6178 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0126 12:36:00.500706 6178 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0126 12:36:00.500737 6178 factory.go:656] Stopping watch factory\\\\nI0126 12:36:00.500740 6178 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 12:36:00.500756 6178 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0126 12:36:00.500811 6178 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0126 12:36:00.500822 6178 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0126 12:36:00.500828 6178 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0126 12:36:00.500834 6178 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0126 12:36:00.500841 6178 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0126 12:36:00.500939 6178 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from 
k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9aaca9d54edc83ad154b9cc876c0f2990f7949b4966e052314e947d0cbaf2f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0995c1855fbad9dc3799727d7456b17ef9cd900ed1f4241bde85718099ede875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0995c1855fbad9dc3799727d7456b17ef9cd900ed1f4241bde85718099ede875\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kbjm9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:02Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:02 crc kubenswrapper[4881]: I0126 12:36:02.635421 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d272c950-9665-4b60-98a2-20c18d02d5a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://525d6dcbf02a6ff1670c1cfff7ea9769a3e6e99d4cbc9f67b4ab9a4838f85355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54beaa25a2174c9f18bf5be7fba4c314e374e2ea05eaee1c5b62dfd7351138f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11ae8e0538c02d437ba2c7a5f53004fb5c5b7bf6c232f75b6bd737d2ec4f6a24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4526fe39791d5692af1a573249bcc2f73f2ede55766ea580867dbb009d247ddf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e87c69a0fa95e071cd3453794764480d9ed943e2d6dbe7d08e03e711be8de2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fd4c549d12c308cae9727fdb35eec136e95b94990bfca2e3c2baf3cf5724ae3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45e8fd54bf831d00321a165260c412e8d0d9aa9f
9b49c15dbce69bd7e35e8e17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45e8fd54bf831d00321a165260c412e8d0d9aa9f9b49c15dbce69bd7e35e8e17\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T12:36:01Z\\\",\\\"message\\\":\\\"9753 6178 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0126 12:36:00.500610 6178 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 12:36:00.500646 6178 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0126 12:36:00.500656 6178 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0126 12:36:00.500694 6178 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0126 12:36:00.500706 6178 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0126 12:36:00.500737 6178 factory.go:656] Stopping watch factory\\\\nI0126 12:36:00.500740 6178 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 12:36:00.500756 6178 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0126 12:36:00.500811 6178 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0126 12:36:00.500822 6178 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0126 12:36:00.500828 6178 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0126 12:36:00.500834 6178 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0126 12:36:00.500841 6178 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0126 12:36:00.500939 6178 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from 
k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9aaca9d54edc83ad154b9cc876c0f2990f7949b4966e052314e947d0cbaf2f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0995c1855fbad9dc3799727d7456b17ef9cd900ed1f4241bde85718099ede875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0995c1855fbad9dc3799727d7456b17ef9cd900ed1f4241bde85718099ede875\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kbjm9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:02Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:02 crc kubenswrapper[4881]: I0126 12:36:02.649460 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f87073bcd7b2be8285c037c00f3112e1f1c6ed16cd4caf9a06c1a18d7e07c33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:02Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:02 crc kubenswrapper[4881]: I0126 12:36:02.656982 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:02 crc kubenswrapper[4881]: I0126 12:36:02.657023 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:02 crc kubenswrapper[4881]: I0126 12:36:02.657032 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:02 crc kubenswrapper[4881]: I0126 12:36:02.657051 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:02 crc kubenswrapper[4881]: I0126 12:36:02.657061 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:02Z","lastTransitionTime":"2026-01-26T12:36:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:02 crc kubenswrapper[4881]: I0126 12:36:02.668826 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-csrkv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d24cc7d2-c2db-45ee-b405-fa56157f807c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e19b81750777bd4a9bb8f7a14d883f0ceb5aa0fc4f1a0798c98616a7fc0cef81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"moun
tPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x7bhq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-csrkv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:02Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:02 crc kubenswrapper[4881]: I0126 12:36:02.688326 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pmwpn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb5ecb63-1238-44dc-9c40-b5e5dd7d4847\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://428d01799872a00b628c81145d5fbeac8707bad0afa7563e16bc44686fd7c18d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b54e7d904bec7e9817b662ea1c59ede6589fde79754520dfb1e9fb82864e623d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b54e7d904bec7e9817b662ea1c59ede6589fde79754520dfb1e9fb82864e623d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e58ff1e13a164f50c9664f463b50cbf651ce8e5edac8cbff9a9e42626d8d39ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e58ff1e13a164f50c9664f463b50cbf651ce8e5edac8cbff9a9e42626d8d39ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fca59d7df432a78a62142cd5e47d6a1a62b2c23ff7eec49d6297aacf30b530a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fca59d7df432a78a62142cd5e47d6a1a62b2c23ff7eec49d6297aacf30b530a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d5b95e27428d9ca0382054957b1cc19e96fe8bb5622af5f7ebebebec53d2add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d5b95e27428d9ca0382054957b1cc19e96fe8bb5622af5f7ebebebec53d2add\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45799bb6d037f31b82596651214f8bdf9c13346f1d9953b965a40b55d2588edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45799bb6d037f31b82596651214f8bdf9c13346f1d9953b965a40b55d2588edd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10bacfe34f01c5de9439ac6023539e6489ec63175e54099942c2fb2fe4250bd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10bacfe34f01c5de9439ac6023539e6489ec63175e54099942c2fb2fe4250bd6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pmwpn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:02Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:02 crc kubenswrapper[4881]: I0126 12:36:02.702192 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tvrtr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"62ba5262-b5d8-4c86-85db-0993c88afc38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d6070117a94d139d5fc78fee4b38e1c75537675f27a663dcb2d6c8be285e9b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj9qc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tvrtr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:02Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:02 crc kubenswrapper[4881]: I0126 12:36:02.716019 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7m2c5" Jan 26 12:36:02 crc kubenswrapper[4881]: I0126 12:36:02.724346 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:02Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:02 crc kubenswrapper[4881]: W0126 12:36:02.740137 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod939b570a_38ce_49f8_8518_1ab500c4e449.slice/crio-7a758712991eb6f84f5754e520417db8eb5b079b4844d6a250061571c6d072e8 WatchSource:0}: Error finding container 7a758712991eb6f84f5754e520417db8eb5b079b4844d6a250061571c6d072e8: Status 404 returned error can't find the container with id 7a758712991eb6f84f5754e520417db8eb5b079b4844d6a250061571c6d072e8 Jan 26 12:36:02 crc kubenswrapper[4881]: I0126 12:36:02.816807 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:02 crc kubenswrapper[4881]: I0126 12:36:02.816857 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:02 crc 
kubenswrapper[4881]: I0126 12:36:02.816869 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:02 crc kubenswrapper[4881]: I0126 12:36:02.816887 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:02 crc kubenswrapper[4881]: I0126 12:36:02.816907 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:02Z","lastTransitionTime":"2026-01-26T12:36:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:02 crc kubenswrapper[4881]: I0126 12:36:02.816913 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:02Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:02 crc kubenswrapper[4881]: I0126 12:36:02.835993 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79282300691fcea19d9fe7ad9c661474c26ba5e445c1a4f009961641bf59fb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b2921f0cb7ad2096499d1c6c0e5f8f37da3620c8468c9fab08b57b4545e5dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:02Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:02 crc kubenswrapper[4881]: I0126 12:36:02.856099 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f4b5v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"017c2a17-2267-4284-a0ca-d3c513aa9ff9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cea6c5f8bd80cf271df551539957d4c3e60378ff13dc806b3142e8e1453ab95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc482\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f4b5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:02Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:02 crc kubenswrapper[4881]: I0126 12:36:02.875242 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02fca4ac1029e236fa1f5c8b73ec4cf6f627aba503ba27d28dd2566e2f9d332a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c6n6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e51c1ddc00bfd73fde58f6b2e82757a924047ef05aa76b74d8fcb2de4156ad9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c6n6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fwlbz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:02Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:02 crc kubenswrapper[4881]: I0126 12:36:02.896476 4881 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7m2c5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"939b570a-38ce-49f8-8518-1ab500c4e449\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwjlp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwjlp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:36:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7m2c5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:02Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:02 crc kubenswrapper[4881]: I0126 12:36:02.910822 4881 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:02Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:02 crc kubenswrapper[4881]: I0126 12:36:02.921361 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:02 crc kubenswrapper[4881]: I0126 12:36:02.921410 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:02 crc kubenswrapper[4881]: I0126 12:36:02.921424 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:02 crc kubenswrapper[4881]: I0126 12:36:02.921444 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:02 crc kubenswrapper[4881]: I0126 12:36:02.921459 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:02Z","lastTransitionTime":"2026-01-26T12:36:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:02 crc kubenswrapper[4881]: I0126 12:36:02.925256 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee20042b-95d1-49f2-abea-e339efdb096f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce7427a811572a8f3810d59e072888d97a8f15e2c493297a25418b85e1f9f16d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86d50d955d4fa8b487f6900f78738321e21226e13b7b12c100f0dd029de2fde1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8138683a9408b1a7542f9f29a3fd0b83a06f10b000c685d9d83b72b23ba4c55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94d3f2c93382cde3d932117276753f004c58e6e52bce611119b79d9e76eb7654\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:02Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:02 crc kubenswrapper[4881]: I0126 12:36:02.946180 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b6f7953-daf2-4ba4-8c10-7de3f4ea07a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4f54285b7646c2fa5da34439d2812a5669f168356446852b9bff249107a7c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79ddc5e5c3c4b0c18219fe6a8297ff2b1646b9621823f529972781f915d81c5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://533f45172a004ad2bd598849b711b38e823485188ff4163dd0dbc5ae5a28cb24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f7c8799e63e6072f9dc0580f3efbfe330a9f70
5a689b2785c60f64391881b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c525c27678ba77145459086b6d3d2fee5d471c7393f34215e9ae46d9d72cf4e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://667695541b7d707a83aa0080723518aeae689b140791f0121a4ce6d485f39138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://667695541b7d707a83aa0080723518aeae689b140791f0121a4ce6d485f39138\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab83101bb56dc3eaf3a0514f60da45b0c5304a491184fe71584d7ed72bc6e59c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab83101bb56dc3eaf3a0514f60da45b0c5304a491184fe71584d7ed72bc6e59c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e52793db25051b2c8eec3dbeaf0547ec04ec438ce426f3dfcd2383792f846644\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e52793db25051b2c8eec3dbeaf0547ec04ec438ce426f3dfcd2383792f846644\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:02Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:02 crc kubenswrapper[4881]: I0126 12:36:02.964554 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9aa1877-c239-4157-938d-e5c85ff3e76a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e461100b4aeab2a6e88c5c3aeb0802664d0d647247bc73b107be7c6bc4cfc9cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cbb88ac26b9c279bef436acf1a1d1e431e752f2a1ee765a7541082fa8f6c99e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff94d70ea9ad07ebc50c17d2a14b318e9b553bf1ded1e15429e9b318903d4d66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47b841bd998f47d2aa05f556a43ac6f14f630207aff7305cd0440f0b0642cc49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1728a0431ccafdf8aa83b6956348a8a6a41dd94e80f65cb469ef09a86078154\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 12:35:40.710806 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 12:35:40.716794 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1303051152/tls.crt::/tmp/serving-cert-1303051152/tls.key\\\\\\\"\\\\nI0126 12:35:46.328409 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 12:35:46.330270 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 12:35:46.330287 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 12:35:46.330308 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 12:35:46.330314 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 12:35:46.335670 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 12:35:46.335689 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 12:35:46.335693 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 12:35:46.335697 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 12:35:46.335700 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 12:35:46.335702 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 12:35:46.335705 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 12:35:46.335779 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 12:35:46.339578 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3de2d8eff73741321cce91c7bfdcab04498cfdb4694e713750466bb2d8b95e5d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f09815b7bc1e9f3afdee67bb736d35ea1c23567f631bc5fd4fcab013ab191647\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f09815b7bc1e9f3afdee67bb736d35ea1c23567f631bc5fd4fcab013ab191647\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:02Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:02 crc kubenswrapper[4881]: I0126 12:36:02.981916 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8116d483190921916638bd630fb718840fdc36eb9522aecb7ef0e9bfebe9bb7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:02Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:03 crc kubenswrapper[4881]: I0126 12:36:03.024061 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:03 crc kubenswrapper[4881]: I0126 12:36:03.024117 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:03 crc kubenswrapper[4881]: I0126 12:36:03.024133 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:03 crc kubenswrapper[4881]: I0126 12:36:03.024154 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:03 crc kubenswrapper[4881]: I0126 12:36:03.024166 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:03Z","lastTransitionTime":"2026-01-26T12:36:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:03 crc kubenswrapper[4881]: I0126 12:36:03.039504 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 20:55:02.123121386 +0000 UTC Jan 26 12:36:03 crc kubenswrapper[4881]: I0126 12:36:03.128684 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:03 crc kubenswrapper[4881]: I0126 12:36:03.128778 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:03 crc kubenswrapper[4881]: I0126 12:36:03.128799 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:03 crc kubenswrapper[4881]: I0126 12:36:03.128830 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:03 crc kubenswrapper[4881]: I0126 12:36:03.128859 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:03Z","lastTransitionTime":"2026-01-26T12:36:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:03 crc kubenswrapper[4881]: I0126 12:36:03.207691 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-5zct6"] Jan 26 12:36:03 crc kubenswrapper[4881]: I0126 12:36:03.208143 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5zct6" Jan 26 12:36:03 crc kubenswrapper[4881]: E0126 12:36:03.208214 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5zct6" podUID="640554c2-37e2-425f-b182-aa9b9d6fa4d8" Jan 26 12:36:03 crc kubenswrapper[4881]: I0126 12:36:03.216410 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rl97t\" (UniqueName: \"kubernetes.io/projected/640554c2-37e2-425f-b182-aa9b9d6fa4d8-kube-api-access-rl97t\") pod \"network-metrics-daemon-5zct6\" (UID: \"640554c2-37e2-425f-b182-aa9b9d6fa4d8\") " pod="openshift-multus/network-metrics-daemon-5zct6" Jan 26 12:36:03 crc kubenswrapper[4881]: I0126 12:36:03.216541 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/640554c2-37e2-425f-b182-aa9b9d6fa4d8-metrics-certs\") pod \"network-metrics-daemon-5zct6\" (UID: \"640554c2-37e2-425f-b182-aa9b9d6fa4d8\") " pod="openshift-multus/network-metrics-daemon-5zct6" Jan 26 12:36:03 crc kubenswrapper[4881]: I0126 12:36:03.232100 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b6f7953-daf2-4ba4-8c10-7de3f4ea07a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4f54285b7646c2fa5da34439d2812a5669f168356446852b9bff249107a7c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79ddc5e5c3c4b0c18219fe6a8297ff2b1646b9621823f529972781f915d81c5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:31Z\\\"}},\\\"volume
Mounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://533f45172a004ad2bd598849b711b38e823485188ff4163dd0dbc5ae5a28cb24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f7c8799e63e6072f9dc0580f3efbfe330a9f705a689b2785c60f64391881b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c525c27678ba77145459086b6d3d2fee5d471c7393f34215e9ae46d9d72cf4e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://667695541b7d707a83aa0080723518aeae689b140791f0121a4ce6d485f39138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://667695541b7d707a83aa0080723518aeae689b140791f0121a4ce6d485f39138\\\",\\\"exitCode\\\":0,\\\"f
inishedAt\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab83101bb56dc3eaf3a0514f60da45b0c5304a491184fe71584d7ed72bc6e59c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab83101bb56dc3eaf3a0514f60da45b0c5304a491184fe71584d7ed72bc6e59c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e52793db25051b2c8eec3dbeaf0547ec04ec438ce426f3dfcd2383792f846644\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e52793db25051b2c8eec3dbeaf0547ec04ec438ce426f3dfcd2383792f846644\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:03Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:03 crc kubenswrapper[4881]: I0126 12:36:03.232677 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:03 crc kubenswrapper[4881]: I0126 12:36:03.232738 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:03 crc kubenswrapper[4881]: I0126 12:36:03.232751 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:03 crc kubenswrapper[4881]: I0126 12:36:03.232773 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:03 crc kubenswrapper[4881]: I0126 12:36:03.232789 4881 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:03Z","lastTransitionTime":"2026-01-26T12:36:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:03 crc kubenswrapper[4881]: I0126 12:36:03.246234 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9aa1877-c239-4157-938d-e5c85ff3e76a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e461100b4aeab2a6e88c5c3aeb0802664d0d647247bc73b107be7c6bc4cfc9cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cbb88ac26b9c279bef436acf1a1d1e431e752f2a1ee765a7541082fa8f6c99e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff94d70ea9ad07ebc50c17d2a14b318e9b553bf1ded1e15429e9b318903d4d66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apis
erver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47b841bd998f47d2aa05f556a43ac6f14f630207aff7305cd0440f0b0642cc49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1728a0431ccafdf8aa83b6956348a8a6a41dd94e80f65cb469ef09a86078154\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 12:35:40.710806 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 12:35:40.716794 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1303051152/tls.crt::/tmp/serving-cert-1303051152/tls.key\\\\\\\"\\\\nI0126 12:35:46.328409 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 12:35:46.330270 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 12:35:46.330287 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 12:35:46.330308 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 12:35:46.330314 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 12:35:46.335670 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 12:35:46.335689 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 12:35:46.335693 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 12:35:46.335697 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 12:35:46.335700 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 12:35:46.335702 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 12:35:46.335705 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 12:35:46.335779 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 12:35:46.339578 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3de2d8eff73741321cce91c7bfdcab04498cfdb4694e713750466bb2d8b95e5d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f09815b7bc1e9f3afdee67bb736d35ea1c23567f631bc5fd4fcab013ab191647\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f09815b7bc1e9f3afdee67bb736d35ea1c23567f631bc5fd4fcab013ab191647\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:03Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:03 crc kubenswrapper[4881]: I0126 12:36:03.260511 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8116d483190921916638bd630fb718840fdc36eb9522aecb7ef0e9bfebe9bb7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:03Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:03 crc kubenswrapper[4881]: I0126 12:36:03.277987 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:03Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:03 crc kubenswrapper[4881]: I0126 12:36:03.291188 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee20042b-95d1-49f2-abea-e339efdb096f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce7427a811572a8f3810d59e072888d97a8f15e2c493297a25418b85e1f9f16d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86d50d955d4fa8b487f6900f78738321e21226e13b7b12c100f0dd029de2fde1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-re
sources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8138683a9408b1a7542f9f29a3fd0b83a06f10b000c685d9d83b72b23ba4c55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94d3f2c93382cde3d932117276753f004c58e6e52bce611119b79d9e76eb7654\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:03Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:03 crc kubenswrapper[4881]: I0126 12:36:03.301903 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f87073bcd7b2be8285c037c00f3112e1f1c6ed16cd4caf9a06c1a18d7e07c33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:03Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:03 crc kubenswrapper[4881]: I0126 12:36:03.314006 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-csrkv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d24cc7d2-c2db-45ee-b405-fa56157f807c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e19b81750777bd4a9bb8f7a14d883f0ceb5aa0fc4f1a0798c98616a7fc0cef81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x7bhq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-csrkv\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:03Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:03 crc kubenswrapper[4881]: I0126 12:36:03.317846 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rl97t\" (UniqueName: \"kubernetes.io/projected/640554c2-37e2-425f-b182-aa9b9d6fa4d8-kube-api-access-rl97t\") pod \"network-metrics-daemon-5zct6\" (UID: \"640554c2-37e2-425f-b182-aa9b9d6fa4d8\") " pod="openshift-multus/network-metrics-daemon-5zct6" Jan 26 12:36:03 crc kubenswrapper[4881]: I0126 12:36:03.317881 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/640554c2-37e2-425f-b182-aa9b9d6fa4d8-metrics-certs\") pod \"network-metrics-daemon-5zct6\" (UID: \"640554c2-37e2-425f-b182-aa9b9d6fa4d8\") " pod="openshift-multus/network-metrics-daemon-5zct6" Jan 26 12:36:03 crc kubenswrapper[4881]: E0126 12:36:03.318002 4881 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 12:36:03 crc kubenswrapper[4881]: E0126 12:36:03.318051 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/640554c2-37e2-425f-b182-aa9b9d6fa4d8-metrics-certs podName:640554c2-37e2-425f-b182-aa9b9d6fa4d8 nodeName:}" failed. No retries permitted until 2026-01-26 12:36:03.818037279 +0000 UTC m=+36.297347305 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/640554c2-37e2-425f-b182-aa9b9d6fa4d8-metrics-certs") pod "network-metrics-daemon-5zct6" (UID: "640554c2-37e2-425f-b182-aa9b9d6fa4d8") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 12:36:03 crc kubenswrapper[4881]: I0126 12:36:03.331511 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d272c950-9665-4b60-98a2-20c18d02d5a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://525d6dcbf02a6ff1670c1cfff7ea9769a3e6e99d4cbc9f67b4ab9a4838f85355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54beaa25a2174c9f18bf5be7fba4c314e374e2ea05eaee1c5b62dfd7351138f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11ae8e0538c02d437ba2c7a5f53004fb5c5b7bf6c232f75b6bd737d2ec4f6a24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4526fe39791d5692af1a573249bcc2f73f2ede55766ea580867dbb009d247ddf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e87c69a0fa95e071cd3453794764480d9ed943e2d6dbe7d08e03e711be8de2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fd4c549d12c308cae9727fdb35eec136e95b94990bfca2e3c2baf3cf5724ae3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45e8fd54bf831d00321a165260c412e8d0d9aa9f
9b49c15dbce69bd7e35e8e17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45e8fd54bf831d00321a165260c412e8d0d9aa9f9b49c15dbce69bd7e35e8e17\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T12:36:01Z\\\",\\\"message\\\":\\\"9753 6178 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0126 12:36:00.500610 6178 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 12:36:00.500646 6178 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0126 12:36:00.500656 6178 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0126 12:36:00.500694 6178 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0126 12:36:00.500706 6178 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0126 12:36:00.500737 6178 factory.go:656] Stopping watch factory\\\\nI0126 12:36:00.500740 6178 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 12:36:00.500756 6178 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0126 12:36:00.500811 6178 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0126 12:36:00.500822 6178 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0126 12:36:00.500828 6178 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0126 12:36:00.500834 6178 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0126 12:36:00.500841 6178 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0126 12:36:00.500939 6178 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from 
k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9aaca9d54edc83ad154b9cc876c0f2990f7949b4966e052314e947d0cbaf2f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0995c1855fbad9dc3799727d7456b17ef9cd900ed1f4241bde85718099ede875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0995c1855fbad9dc3799727d7456b17ef9cd900ed1f4241bde85718099ede875\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kbjm9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:03Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:03 crc kubenswrapper[4881]: I0126 12:36:03.334829 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:03 crc kubenswrapper[4881]: I0126 12:36:03.334885 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:03 crc kubenswrapper[4881]: I0126 12:36:03.334898 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:03 crc kubenswrapper[4881]: I0126 12:36:03.334920 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:03 crc kubenswrapper[4881]: I0126 12:36:03.334934 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:03Z","lastTransitionTime":"2026-01-26T12:36:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:03 crc kubenswrapper[4881]: I0126 12:36:03.339041 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rl97t\" (UniqueName: \"kubernetes.io/projected/640554c2-37e2-425f-b182-aa9b9d6fa4d8-kube-api-access-rl97t\") pod \"network-metrics-daemon-5zct6\" (UID: \"640554c2-37e2-425f-b182-aa9b9d6fa4d8\") " pod="openshift-multus/network-metrics-daemon-5zct6" Jan 26 12:36:03 crc kubenswrapper[4881]: I0126 12:36:03.339053 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7m2c5" event={"ID":"939b570a-38ce-49f8-8518-1ab500c4e449","Type":"ContainerStarted","Data":"7d4049c90948b57aad819839a207210919dce46514ca41bf1afdaed9b04a409a"} Jan 26 12:36:03 crc kubenswrapper[4881]: I0126 12:36:03.339142 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7m2c5" event={"ID":"939b570a-38ce-49f8-8518-1ab500c4e449","Type":"ContainerStarted","Data":"5367bd65eab3f1d86fce660f0126a0b2e6d4f1f7f803f3bfd38a3293bdff7267"} Jan 26 12:36:03 crc kubenswrapper[4881]: I0126 12:36:03.339178 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7m2c5" event={"ID":"939b570a-38ce-49f8-8518-1ab500c4e449","Type":"ContainerStarted","Data":"7a758712991eb6f84f5754e520417db8eb5b079b4844d6a250061571c6d072e8"} Jan 26 12:36:03 crc kubenswrapper[4881]: I0126 12:36:03.341157 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-kbjm9_d272c950-9665-4b60-98a2-20c18d02d5a2/ovnkube-controller/0.log" Jan 26 12:36:03 crc kubenswrapper[4881]: I0126 12:36:03.343438 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5zct6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"640554c2-37e2-425f-b182-aa9b9d6fa4d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl97t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl97t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:36:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5zct6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:03Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:03 crc kubenswrapper[4881]: I0126 12:36:03.345340 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" event={"ID":"d272c950-9665-4b60-98a2-20c18d02d5a2","Type":"ContainerStarted","Data":"4e23aeae9c9b1d09117c59f533d4c28521dd8f1685ee3b4547bb751fdb7c8631"} Jan 26 12:36:03 crc kubenswrapper[4881]: I0126 12:36:03.345969 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" Jan 26 12:36:03 crc kubenswrapper[4881]: I0126 12:36:03.358639 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:03Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:03 crc kubenswrapper[4881]: I0126 12:36:03.372745 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79282300691fcea19d9fe7ad9c661474c26ba5e445c1a4f009961641bf59fb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b2921f0cb7ad2096499d1c6c0e5f8f37da3620c8468c9fab08b57b4545e5dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:03Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:03 crc kubenswrapper[4881]: I0126 12:36:03.383190 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f4b5v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"017c2a17-2267-4284-a0ca-d3c513aa9ff9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cea6c5f8bd80cf271df551539957d4c3e60378ff13dc806b3142e8e1453ab95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc482\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f4b5v\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:03Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:03 crc kubenswrapper[4881]: I0126 12:36:03.399508 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pmwpn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb5ecb63-1238-44dc-9c40-b5e5dd7d4847\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://428d01799872a00b628c81145d5fbeac8707bad0afa7563e16bc44686fd7c18d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b54e7d904bec7e9817b662ea1c59ede6589fde79754520dfb1e9fb82864e623d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b54e7d904bec7e9817b662ea1c59ede6589fde79754520dfb1e9fb82864e623d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e58ff1e13a164f50c9664f463b50cbf651ce8e5edac8cbff9a9e42626d8d39ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e58ff1e13a164f50c9664f463b50cbf651ce8e5edac8cbff9a9e42626d8d39ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fca59d7df432a78a62142cd5e47d6a1a62b2c23ff7eec49d6297aacf30b530a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fca59d7df432a78a62142cd5e47d6a1a62b2c23ff7eec49d6297aacf30b530a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d5b95e27428d9ca0382054957b1cc19e96fe8bb5622af5f7ebebebec53d2add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d5b95e27428d9ca0382054957b1cc19e96fe8bb5622af5f7ebebebec53d2add\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-01-26T12:35:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45799bb6d037f31b82596651214f8bdf9c13346f1d9953b965a40b55d2588edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45799bb6d037f31b82596651214f8bdf9c13346f1d9953b965a40b55d2588edd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10bacfe34f01c5de9439ac6023539e6489ec63175e54099942c2fb2fe4250bd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10bacfe34f01c5de9439ac6023539e6489ec63175e54099942c2fb2fe4250bd6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pmwpn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T12:36:03Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:03 crc kubenswrapper[4881]: I0126 12:36:03.412838 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tvrtr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"62ba5262-b5d8-4c86-85db-0993c88afc38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d6070117a94d139d5fc78fee4b38e1c75537675f27a663dcb2d6c8be285e9b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj9qc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tvrtr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:03Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:03 crc kubenswrapper[4881]: I0126 12:36:03.427111 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:03Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:03 crc kubenswrapper[4881]: I0126 12:36:03.437652 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:03 crc kubenswrapper[4881]: I0126 12:36:03.437710 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:03 crc kubenswrapper[4881]: I0126 12:36:03.437731 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:03 crc kubenswrapper[4881]: I0126 12:36:03.437763 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:03 crc kubenswrapper[4881]: I0126 12:36:03.437786 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:03Z","lastTransitionTime":"2026-01-26T12:36:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:03 crc kubenswrapper[4881]: I0126 12:36:03.442695 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7m2c5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"939b570a-38ce-49f8-8518-1ab500c4e449\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwjlp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwjlp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:36:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7m2c5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-01-26T12:36:03Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:03 crc kubenswrapper[4881]: I0126 12:36:03.458066 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02fca4ac1029e236fa1f5c8b73ec4cf6f627aba503ba27d28dd2566e2f9d332a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c6n6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e51c1ddc00bfd73fde58f6b2e82757a924047ef05aa76b74d8fcb2de4156ad9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c6n6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fwlbz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:03Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:03 crc kubenswrapper[4881]: I0126 12:36:03.475904 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee20042b-95d1-49f2-abea-e339efdb096f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce7427a811572a8f3810d59e072888d97a8f15e2c493297a25418b85e1f9f16d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86d50d955d4fa8b487f6900f78738321e21226e13b7b12c100f0dd029de2fde1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8138683a9408b1a7542f9f29a3fd0b83a06f10b000c685d9d83b72b23ba4c55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026
-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94d3f2c93382cde3d932117276753f004c58e6e52bce611119b79d9e76eb7654\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:03Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:03 crc kubenswrapper[4881]: I0126 12:36:03.506450 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b6f7953-daf2-4ba4-8c10-7de3f4ea07a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4f54285b7646c2fa5da34439d2812a5669f168356446852b9bff249107a7c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79ddc5e5c3c4b0c18219fe6a8297ff2b1646b9621823f529972781f915d81c5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://533f45172a004ad2bd598849b711b38e823485188ff4163dd0dbc5ae5a28cb24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f7c8799e63e6072f9dc0580f3efbfe330a9f70
5a689b2785c60f64391881b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c525c27678ba77145459086b6d3d2fee5d471c7393f34215e9ae46d9d72cf4e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://667695541b7d707a83aa0080723518aeae689b140791f0121a4ce6d485f39138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://667695541b7d707a83aa0080723518aeae689b140791f0121a4ce6d485f39138\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab83101bb56dc3eaf3a0514f60da45b0c5304a491184fe71584d7ed72bc6e59c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab83101bb56dc3eaf3a0514f60da45b0c5304a491184fe71584d7ed72bc6e59c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e52793db25051b2c8eec3dbeaf0547ec04ec438ce426f3dfcd2383792f846644\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e52793db25051b2c8eec3dbeaf0547ec04ec438ce426f3dfcd2383792f846644\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:03Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:03 crc kubenswrapper[4881]: I0126 12:36:03.530156 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9aa1877-c239-4157-938d-e5c85ff3e76a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e461100b4aeab2a6e88c5c3aeb0802664d0d647247bc73b107be7c6bc4cfc9cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cbb88ac26b9c279bef436acf1a1d1e431e752f2a1ee765a7541082fa8f6c99e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff94d70ea9ad07ebc50c17d2a14b318e9b553bf1ded1e15429e9b318903d4d66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47b841bd998f47d2aa05f556a43ac6f14f630207aff7305cd0440f0b0642cc49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1728a0431ccafdf8aa83b6956348a8a6a41dd94e80f65cb469ef09a86078154\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 12:35:40.710806 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 12:35:40.716794 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1303051152/tls.crt::/tmp/serving-cert-1303051152/tls.key\\\\\\\"\\\\nI0126 12:35:46.328409 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 12:35:46.330270 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 12:35:46.330287 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 12:35:46.330308 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 12:35:46.330314 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 12:35:46.335670 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 12:35:46.335689 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 12:35:46.335693 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 12:35:46.335697 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 12:35:46.335700 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 12:35:46.335702 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 12:35:46.335705 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 12:35:46.335779 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 12:35:46.339578 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3de2d8eff73741321cce91c7bfdcab04498cfdb4694e713750466bb2d8b95e5d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f09815b7bc1e9f3afdee67bb736d35ea1c23567f631bc5fd4fcab013ab191647\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f09815b7bc1e9f3afdee67bb736d35ea1c23567f631bc5fd4fcab013ab191647\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:03Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:03 crc kubenswrapper[4881]: I0126 12:36:03.540046 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:03 crc kubenswrapper[4881]: I0126 12:36:03.540117 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:03 crc kubenswrapper[4881]: I0126 12:36:03.540139 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:03 crc kubenswrapper[4881]: I0126 12:36:03.540175 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:03 crc kubenswrapper[4881]: I0126 12:36:03.540196 4881 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:03Z","lastTransitionTime":"2026-01-26T12:36:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:03 crc kubenswrapper[4881]: I0126 12:36:03.549452 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8116d483190921916638bd630fb718840fdc36eb9522aecb7ef0e9bfebe9bb7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:03Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:03 crc kubenswrapper[4881]: I0126 12:36:03.567162 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:03Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:03 crc kubenswrapper[4881]: I0126 12:36:03.581162 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5zct6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"640554c2-37e2-425f-b182-aa9b9d6fa4d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl97t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl97t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:36:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5zct6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:03Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:03 crc kubenswrapper[4881]: I0126 12:36:03.597166 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f87073bcd7b2be8285c037c00f3112e1f1c6ed16cd4caf9a06c1a18d7e07c33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:03Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:03 crc kubenswrapper[4881]: I0126 12:36:03.611430 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-csrkv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d24cc7d2-c2db-45ee-b405-fa56157f807c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e19b81750777bd4a9bb8f7a14d883f0ceb5aa0fc4f1a0798c98616a7fc0cef81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x7bhq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-csrkv\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:03Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:03 crc kubenswrapper[4881]: I0126 12:36:03.639335 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d272c950-9665-4b60-98a2-20c18d02d5a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://525d6dcbf02a6ff1670c1cfff7ea9769a3e6e99d4cbc9f67b4ab9a4838f85355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54beaa25a2174c9f18bf5be7fba4c314e374e2ea05eaee1c5b62dfd7351138f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11ae8e0538c02d437ba2c7a5f53004fb5c5b7bf6c232f75b6bd737d2ec4f6a24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4526fe39791d5692af1a573249bcc2f73f2ede55766ea580867dbb009d247ddf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e87c69a0fa95e071cd3453794764480d9ed943e2d6dbe7d08e03e711be8de2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fd4c
549d12c308cae9727fdb35eec136e95b94990bfca2e3c2baf3cf5724ae3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e23aeae9c9b1d09117c59f533d4c28521dd8f1685ee3b4547bb751fdb7c8631\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45e8fd54bf831d00321a165260c412e8d0d9aa9f9b49c15dbce69bd7e35e8e17\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T12:36:01Z\\\",\\\"message\\\":\\\"9753 6178 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0126 12:36:00.500610 6178 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 12:36:00.500646 6178 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0126 12:36:00.500656 6178 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0126 12:36:00.500694 6178 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0126 12:36:00.500706 6178 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0126 12:36:00.500737 6178 factory.go:656] Stopping watch factory\\\\nI0126 12:36:00.500740 6178 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 12:36:00.500756 6178 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0126 12:36:00.500811 6178 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0126 12:36:00.500822 6178 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0126 12:36:00.500828 6178 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0126 12:36:00.500834 6178 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0126 12:36:00.500841 6178 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0126 12:36:00.500939 6178 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from 
k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:57Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:36:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9aaca9d54edc83ad154b9cc876c0f2990f7949b4966e052314e947d0cbaf2f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initConta
inerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0995c1855fbad9dc3799727d7456b17ef9cd900ed1f4241bde85718099ede875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0995c1855fbad9dc3799727d7456b17ef9cd900ed1f4241bde85718099ede875\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kbjm9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:03Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:03 crc kubenswrapper[4881]: I0126 12:36:03.643672 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:03 crc kubenswrapper[4881]: I0126 12:36:03.643806 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:03 crc kubenswrapper[4881]: I0126 12:36:03.643879 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:03 crc kubenswrapper[4881]: I0126 12:36:03.643987 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:03 crc kubenswrapper[4881]: I0126 12:36:03.644051 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:03Z","lastTransitionTime":"2026-01-26T12:36:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:03 crc kubenswrapper[4881]: I0126 12:36:03.657766 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tvrtr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"62ba5262-b5d8-4c86-85db-0993c88afc38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d6070117a94d139d5fc78fee4b38e1c75537675f27a663dcb2d6c8be285e9b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj9qc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tvrtr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:03Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:03 crc kubenswrapper[4881]: I0126 12:36:03.683055 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:03Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:03 crc kubenswrapper[4881]: I0126 12:36:03.699192 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:03Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:03 crc kubenswrapper[4881]: I0126 12:36:03.712913 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79282300691fcea19d9fe7ad9c661474c26ba5e445c1a4f009961641bf59fb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b2921f0cb7ad2096499d1c6c0e5f8f37da3620c8468c9fab08b57b4545e5dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:03Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:03 crc kubenswrapper[4881]: I0126 12:36:03.723019 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f4b5v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"017c2a17-2267-4284-a0ca-d3c513aa9ff9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cea6c5f8bd80cf271df551539957d4c3e60378ff13dc806b3142e8e1453ab95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc482\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f4b5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:03Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:03 crc kubenswrapper[4881]: I0126 12:36:03.738228 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pmwpn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb5ecb63-1238-44dc-9c40-b5e5dd7d4847\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://428d01799872a00b628c81145d5fbeac8707bad0afa7563e16bc44686fd7c18d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b54e7d904bec7e9817b662ea1c59ede6589fde79754520dfb1e9fb82864e623d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b54e7d904bec7e9817b662ea1c59ede6589fde79754520dfb1e9fb82864e623d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e58ff1e13a164f50c9664f463b50cbf651ce8e5edac8cbff9a9e42626d8d39ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e58ff1e13a164f50c9664f463b50cbf651ce8e5edac8cbff9a9e42626d8d39ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fca59d7df432a78a62142cd5e47d6a1a62b2c23ff7eec49d6297aacf30b530a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fca59d7df432a78a62142cd5e47d6a1a62b2c23ff7eec49d6297aacf30b530a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d5b95e27428d9ca0382054957b1cc19e96fe8bb5622af5f7ebebebec53d2add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d5b95e27428d9ca0382054957b1cc19e96fe8bb5622af5f7ebebebec53d2add\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45799bb6d037f31b82596651214f8bdf9c13346f1d9953b965a40b55d2588edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45799bb6d037f31b82596651214f8bdf9c13346f1d9953b965a40b55d2588edd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10bacfe34f01c5de9439ac6023539e6489ec63175e54099942c2fb2fe4250bd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10bacfe34f01c5de9439ac6023539e6489ec63175e54099942c2fb2fe4250bd6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pmwpn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:03Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:03 crc kubenswrapper[4881]: I0126 12:36:03.746730 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:03 crc kubenswrapper[4881]: I0126 12:36:03.746769 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:03 crc 
kubenswrapper[4881]: I0126 12:36:03.746783 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:03 crc kubenswrapper[4881]: I0126 12:36:03.746799 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:03 crc kubenswrapper[4881]: I0126 12:36:03.746813 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:03Z","lastTransitionTime":"2026-01-26T12:36:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:03 crc kubenswrapper[4881]: I0126 12:36:03.754695 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02fca4ac1029e236fa1f5c8b73ec4cf6f627aba503ba27d28dd2566e2f9d332a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c6n6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e51c1ddc00bfd73fde58f6b2e82757a924047ef05aa76b74d8fcb2de4156ad9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:48Z\\
\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c6n6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fwlbz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:03Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:03 crc kubenswrapper[4881]: I0126 12:36:03.771279 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7m2c5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"939b570a-38ce-49f8-8518-1ab500c4e449\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5367bd65eab3f1d86fce660f0126a0b2e6d4f1f7f803f3bfd38a3293bdff7267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:36:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwjlp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d4049c90948b57aad819839a207210919dce46514ca41bf1afdaed9b04a409a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:36:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwjlp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:36:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7m2c5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:03Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:03 crc kubenswrapper[4881]: I0126 12:36:03.823275 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/640554c2-37e2-425f-b182-aa9b9d6fa4d8-metrics-certs\") pod \"network-metrics-daemon-5zct6\" (UID: \"640554c2-37e2-425f-b182-aa9b9d6fa4d8\") " pod="openshift-multus/network-metrics-daemon-5zct6" Jan 26 12:36:03 crc kubenswrapper[4881]: E0126 12:36:03.823493 4881 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 12:36:03 crc kubenswrapper[4881]: E0126 12:36:03.823601 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/640554c2-37e2-425f-b182-aa9b9d6fa4d8-metrics-certs podName:640554c2-37e2-425f-b182-aa9b9d6fa4d8 nodeName:}" failed. No retries permitted until 2026-01-26 12:36:04.82357907 +0000 UTC m=+37.302889136 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/640554c2-37e2-425f-b182-aa9b9d6fa4d8-metrics-certs") pod "network-metrics-daemon-5zct6" (UID: "640554c2-37e2-425f-b182-aa9b9d6fa4d8") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 12:36:03 crc kubenswrapper[4881]: I0126 12:36:03.849917 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:03 crc kubenswrapper[4881]: I0126 12:36:03.849954 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:03 crc kubenswrapper[4881]: I0126 12:36:03.849966 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:03 crc kubenswrapper[4881]: I0126 12:36:03.849983 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:03 crc kubenswrapper[4881]: I0126 12:36:03.849995 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:03Z","lastTransitionTime":"2026-01-26T12:36:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:03 crc kubenswrapper[4881]: I0126 12:36:03.952591 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:03 crc kubenswrapper[4881]: I0126 12:36:03.952620 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:03 crc kubenswrapper[4881]: I0126 12:36:03.952628 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:03 crc kubenswrapper[4881]: I0126 12:36:03.952641 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:03 crc kubenswrapper[4881]: I0126 12:36:03.952650 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:03Z","lastTransitionTime":"2026-01-26T12:36:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:04 crc kubenswrapper[4881]: I0126 12:36:04.039775 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 18:55:38.664237852 +0000 UTC Jan 26 12:36:04 crc kubenswrapper[4881]: I0126 12:36:04.056236 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:04 crc kubenswrapper[4881]: I0126 12:36:04.056279 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:04 crc kubenswrapper[4881]: I0126 12:36:04.056297 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:04 crc kubenswrapper[4881]: I0126 12:36:04.056320 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:04 crc kubenswrapper[4881]: I0126 12:36:04.056339 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:04Z","lastTransitionTime":"2026-01-26T12:36:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:04 crc kubenswrapper[4881]: I0126 12:36:04.081925 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 12:36:04 crc kubenswrapper[4881]: E0126 12:36:04.082149 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 12:36:04 crc kubenswrapper[4881]: I0126 12:36:04.082781 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 12:36:04 crc kubenswrapper[4881]: I0126 12:36:04.082912 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:36:04 crc kubenswrapper[4881]: E0126 12:36:04.083041 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 12:36:04 crc kubenswrapper[4881]: E0126 12:36:04.083151 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 12:36:04 crc kubenswrapper[4881]: I0126 12:36:04.160695 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:04 crc kubenswrapper[4881]: I0126 12:36:04.160878 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:04 crc kubenswrapper[4881]: I0126 12:36:04.160907 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:04 crc kubenswrapper[4881]: I0126 12:36:04.160941 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:04 crc kubenswrapper[4881]: I0126 12:36:04.160976 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:04Z","lastTransitionTime":"2026-01-26T12:36:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:04 crc kubenswrapper[4881]: I0126 12:36:04.265140 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:04 crc kubenswrapper[4881]: I0126 12:36:04.265649 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:04 crc kubenswrapper[4881]: I0126 12:36:04.265667 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:04 crc kubenswrapper[4881]: I0126 12:36:04.265696 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:04 crc kubenswrapper[4881]: I0126 12:36:04.265717 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:04Z","lastTransitionTime":"2026-01-26T12:36:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:04 crc kubenswrapper[4881]: I0126 12:36:04.368409 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:04 crc kubenswrapper[4881]: I0126 12:36:04.368488 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:04 crc kubenswrapper[4881]: I0126 12:36:04.368504 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:04 crc kubenswrapper[4881]: I0126 12:36:04.368563 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:04 crc kubenswrapper[4881]: I0126 12:36:04.368587 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:04Z","lastTransitionTime":"2026-01-26T12:36:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:04 crc kubenswrapper[4881]: I0126 12:36:04.471321 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:04 crc kubenswrapper[4881]: I0126 12:36:04.471385 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:04 crc kubenswrapper[4881]: I0126 12:36:04.471407 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:04 crc kubenswrapper[4881]: I0126 12:36:04.471435 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:04 crc kubenswrapper[4881]: I0126 12:36:04.471462 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:04Z","lastTransitionTime":"2026-01-26T12:36:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:04 crc kubenswrapper[4881]: I0126 12:36:04.567689 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 12:36:04 crc kubenswrapper[4881]: I0126 12:36:04.573974 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:04 crc kubenswrapper[4881]: I0126 12:36:04.574020 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:04 crc kubenswrapper[4881]: I0126 12:36:04.574032 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:04 crc kubenswrapper[4881]: I0126 12:36:04.574063 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:04 crc kubenswrapper[4881]: I0126 12:36:04.574078 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:04Z","lastTransitionTime":"2026-01-26T12:36:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:04 crc kubenswrapper[4881]: I0126 12:36:04.584868 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f87073bcd7b2be8285c037c00f3112e1f1c6ed16cd4caf9a06c1a18d7e07c33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": 
tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:04Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:04 crc kubenswrapper[4881]: I0126 12:36:04.601422 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-csrkv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d24cc7d2-c2db-45ee-b405-fa56157f807c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e19b81750777bd4a9bb8f7a14d883f0ceb5aa0fc4f1a0798c98616a7fc0cef81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x7bhq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\"
,\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-csrkv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:04Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:04 crc kubenswrapper[4881]: I0126 12:36:04.631937 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d272c950-9665-4b60-98a2-20c18d02d5a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://525d6dcbf02a6ff1670c1cfff7ea9769a3e6e99d4cbc9f67b4ab9a4838f85355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54beaa25a2174c9f18bf5be7fba4c314e374e2ea05eaee1c5b62dfd7351138f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11ae8e0538c02d437ba2c7a5f53004fb5c5b7bf6c232f75b6bd737d2ec4f6a24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4526fe39791d5692af1a573249bcc2f73f2ede55766ea580867dbb009d247ddf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e87c69a0fa95e071cd3453794764480d9ed943e2d6dbe7d08e03e711be8de2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fd4c549d12c308cae9727fdb35eec136e95b94990bfca2e3c2baf3cf5724ae3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e23aeae9c9b1d09117c59f533d4c28521dd8f16
85ee3b4547bb751fdb7c8631\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45e8fd54bf831d00321a165260c412e8d0d9aa9f9b49c15dbce69bd7e35e8e17\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T12:36:01Z\\\",\\\"message\\\":\\\"9753 6178 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0126 12:36:00.500610 6178 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 12:36:00.500646 6178 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0126 12:36:00.500656 6178 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0126 12:36:00.500694 6178 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0126 12:36:00.500706 6178 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0126 12:36:00.500737 6178 factory.go:656] Stopping watch factory\\\\nI0126 12:36:00.500740 6178 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 12:36:00.500756 6178 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0126 12:36:00.500811 6178 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0126 12:36:00.500822 6178 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0126 12:36:00.500828 6178 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0126 12:36:00.500834 6178 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0126 12:36:00.500841 6178 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0126 12:36:00.500939 6178 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from 
k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:57Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:36:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9aaca9d54edc83ad154b9cc876c0f2990f7949b4966e052314e947d0cbaf2f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initConta
inerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0995c1855fbad9dc3799727d7456b17ef9cd900ed1f4241bde85718099ede875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0995c1855fbad9dc3799727d7456b17ef9cd900ed1f4241bde85718099ede875\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kbjm9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:04Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:04 crc kubenswrapper[4881]: I0126 12:36:04.647896 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5zct6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"640554c2-37e2-425f-b182-aa9b9d6fa4d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl97t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl97t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:36:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5zct6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:04Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:04 crc kubenswrapper[4881]: I0126 12:36:04.666933 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79282300691fcea19d9fe7ad9c661474c26ba5e445c1a4f009961641bf59fb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b2921f0cb7ad2096499d1c6c0e5f8f37da3620c8468c9fab08b57b4545e5dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:04Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:04 crc kubenswrapper[4881]: I0126 12:36:04.679399 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:04 crc kubenswrapper[4881]: I0126 12:36:04.679470 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:04 crc kubenswrapper[4881]: I0126 12:36:04.679492 4881 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 26 12:36:04 crc kubenswrapper[4881]: I0126 12:36:04.679551 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:04 crc kubenswrapper[4881]: I0126 12:36:04.679574 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:04Z","lastTransitionTime":"2026-01-26T12:36:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:04 crc kubenswrapper[4881]: I0126 12:36:04.685924 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f4b5v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"017c2a17-2267-4284-a0ca-d3c513aa9ff9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cea6c5f8bd80cf271df551539957d4c3e60378ff13dc806b3142e8e1453ab95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc482\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f4b5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:04Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:04 crc kubenswrapper[4881]: I0126 12:36:04.711596 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pmwpn" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb5ecb63-1238-44dc-9c40-b5e5dd7d4847\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://428d01799872a00b628c81145d5fbeac8707bad0afa7563e16bc44686fd7c18d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b54e7d904bec7e9817b662ea1c59ede6589fde79754520dfb1e9fb82864e623d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b54e7d904bec7e9817b662ea1c59ede6589fde79754520dfb1e9fb82864e623d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e58ff1e13a164f50c9664f463b50cbf651ce8e5edac8cbff9a9e42626d8d39ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:68
7fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e58ff1e13a164f50c9664f463b50cbf651ce8e5edac8cbff9a9e42626d8d39ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fca59d7df432a78a62142cd5e47d6a1a62b2c23ff7eec49d6297aacf30b530a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fca59d7df432a78a62142cd5e47d6a1a62b2c23ff7eec49d6297aacf30b530a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d5b95e27428d9ca0382054957b1cc19e96fe8bb5622af5f7ebebebec53d2add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d5b95e27428d9ca0382054957b1cc19e96fe8bb5622af5f7ebebebec53d2add\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"m
ountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45799bb6d037f31b82596651214f8bdf9c13346f1d9953b965a40b55d2588edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45799bb6d037f31b82596651214f8bdf9c13346f1d9953b965a40b55d2588edd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10bacfe34f01c5de9439ac6023539e6489ec63175e54099942c2fb2fe4250bd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10bacfe34f01c5de9439ac6023539e6489ec63175e54099942c2fb2fe4250bd6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pmwpn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:04Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:04 crc kubenswrapper[4881]: I0126 12:36:04.728602 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tvrtr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"62ba5262-b5d8-4c86-85db-0993c88afc38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d6070117a94d139d5fc78fee4b38e1c75537675f27a663dcb2d6c8be285e9b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj9qc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tvrtr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:04Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:04 crc kubenswrapper[4881]: I0126 12:36:04.751945 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:04Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:04 crc kubenswrapper[4881]: I0126 12:36:04.772623 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:04Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:04 crc kubenswrapper[4881]: I0126 12:36:04.781462 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:04 crc kubenswrapper[4881]: I0126 12:36:04.781556 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:04 crc kubenswrapper[4881]: I0126 12:36:04.781583 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:04 crc kubenswrapper[4881]: I0126 12:36:04.781608 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:04 crc kubenswrapper[4881]: I0126 12:36:04.781624 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:04Z","lastTransitionTime":"2026-01-26T12:36:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:04 crc kubenswrapper[4881]: I0126 12:36:04.788635 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02fca4ac1029e236fa1f5c8b73ec4cf6f627aba503ba27d28dd2566e2f9d332a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c6n6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e51c1ddc00bfd73fde58f6b2e82757a924047ef05aa76b74d8fcb2de4156ad9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c6n6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fwlbz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:04Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:04 crc kubenswrapper[4881]: I0126 12:36:04.802895 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7m2c5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"939b570a-38ce-49f8-8518-1ab500c4e449\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5367bd65eab3f1d86fce660f0126a0b2e6d4f1f7f803f3bfd38a3293bdff7267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:36:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwjlp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d4049c90948b57aad819839a207210919dce46514ca41bf1afdaed9b04a409a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:36:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwjlp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:36:
02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7m2c5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:04Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:04 crc kubenswrapper[4881]: I0126 12:36:04.821320 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9aa1877-c239-4157-938d-e5c85ff3e76a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e461100b4aeab2a6e88c5c3aeb0802664d0d647247bc73b107be7c6bc4cfc9cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cbb88ac26b9c279bef436acf1a1d1e431e752f2a1ee765a7541082fa8f6c99e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff94d70ea9ad07ebc50c17d2a14b318e9b553bf1ded1e15429e9b318903d4d66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7
462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47b841bd998f47d2aa05f556a43ac6f14f630207aff7305cd0440f0b0642cc49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1728a0431ccafdf8aa83b6956348a8a6a41dd94e80f65cb469ef09a86078154\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 12:35:40.710806 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 12:35:40.716794 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1303051152/tls.crt::/tmp/serving-cert-1303051152/tls.key\\\\\\\"\\\\nI0126 12:35:46.328409 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 12:35:46.330270 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 12:35:46.330287 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 12:35:46.330308 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 12:35:46.330314 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 12:35:46.335670 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 12:35:46.335689 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 12:35:46.335693 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 12:35:46.335697 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 12:35:46.335700 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 12:35:46.335702 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 12:35:46.335705 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 12:35:46.335779 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 12:35:46.339578 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3de2d8eff73741321cce91c7bfdcab04498cfdb4694e713750466bb2d8b95e5d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f09815b7bc1e9f3afdee67bb736d35ea1c23567f631bc5fd4fcab013ab191647\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f09815b7bc1e9f3afdee67bb736d35ea1c23567f631bc5fd4fcab013ab191647\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:04Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:04 crc kubenswrapper[4881]: I0126 12:36:04.833605 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/640554c2-37e2-425f-b182-aa9b9d6fa4d8-metrics-certs\") pod \"network-metrics-daemon-5zct6\" (UID: \"640554c2-37e2-425f-b182-aa9b9d6fa4d8\") " pod="openshift-multus/network-metrics-daemon-5zct6" Jan 26 12:36:04 crc kubenswrapper[4881]: E0126 12:36:04.833773 4881 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 12:36:04 crc kubenswrapper[4881]: E0126 12:36:04.833873 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/640554c2-37e2-425f-b182-aa9b9d6fa4d8-metrics-certs podName:640554c2-37e2-425f-b182-aa9b9d6fa4d8 nodeName:}" 
failed. No retries permitted until 2026-01-26 12:36:06.833851682 +0000 UTC m=+39.313161798 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/640554c2-37e2-425f-b182-aa9b9d6fa4d8-metrics-certs") pod "network-metrics-daemon-5zct6" (UID: "640554c2-37e2-425f-b182-aa9b9d6fa4d8") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 12:36:04 crc kubenswrapper[4881]: I0126 12:36:04.838225 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8116d483190921916638bd630fb718840fdc36eb9522aecb7ef0e9bfebe9bb7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:04Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:04 crc kubenswrapper[4881]: I0126 12:36:04.856986 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:04Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:04 crc kubenswrapper[4881]: I0126 12:36:04.870209 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee20042b-95d1-49f2-abea-e339efdb096f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce7427a811572a8f3810d59e072888d97a8f15e2c493297a25418b85e1f9f16d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86d50d955d4fa8b487f6900f78738321e21226e13b7b12c100f0dd029de2fde1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8138683a9408b1a7542f9f29a3fd0b83a06f10b000c685d9d83b72b23ba4c55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94d3f2c93382cde3d932117276753f004c58e6e52bce611119b79d9e76eb7654\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:04Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:04 crc kubenswrapper[4881]: I0126 12:36:04.884994 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:04 crc kubenswrapper[4881]: I0126 12:36:04.885034 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:04 crc kubenswrapper[4881]: I0126 12:36:04.885043 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:04 crc kubenswrapper[4881]: I0126 12:36:04.885061 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:04 crc kubenswrapper[4881]: I0126 12:36:04.885074 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:04Z","lastTransitionTime":"2026-01-26T12:36:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:04 crc kubenswrapper[4881]: I0126 12:36:04.905082 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b6f7953-daf2-4ba4-8c10-7de3f4ea07a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4f54285b7646c2fa5da34439d2812a5669f168356446852b9bff249107a7c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79ddc5e5c3c4b0c18219fe6a8297ff2b1646b9621823f529972781f915d81c5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://533f45172a004ad2bd598849b711b38e823485188ff4163dd0dbc5ae5a28cb24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f7c8799e63e6072f9dc0580f3efbfe330a9f705a689b2785c60f64391881b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c525c27678ba77145459086b6d3d2fee5d471c7393f34215e9ae46d9d72cf4e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://667695541b7d707a83aa0080723518aeae689b140791f0121a4ce6d485f39138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://667695541b7d707a83aa0080723518aeae689b140791f0121a4ce6d485f39138\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab83101bb56dc3eaf3a0514f60da45b0c5304a491184fe71584d7ed72bc6e59c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab83101bb56dc3eaf3a0514f60da45b0c5304a491184fe71584d7ed72bc6e59c\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e52793db25051b2c8eec3dbeaf0547ec04ec438ce426f3dfcd2383792f846644\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e52793db25051b2c8eec3dbeaf0547ec04ec438ce426f3dfcd2383792f846644\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:04Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:04 crc kubenswrapper[4881]: I0126 12:36:04.988449 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:04 crc kubenswrapper[4881]: I0126 12:36:04.988611 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:04 crc kubenswrapper[4881]: I0126 12:36:04.988715 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:04 crc kubenswrapper[4881]: I0126 12:36:04.988799 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:04 crc kubenswrapper[4881]: I0126 12:36:04.988821 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:04Z","lastTransitionTime":"2026-01-26T12:36:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:05 crc kubenswrapper[4881]: I0126 12:36:05.040792 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 19:38:35.401296911 +0000 UTC Jan 26 12:36:05 crc kubenswrapper[4881]: I0126 12:36:05.082540 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-5zct6" Jan 26 12:36:05 crc kubenswrapper[4881]: E0126 12:36:05.082699 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5zct6" podUID="640554c2-37e2-425f-b182-aa9b9d6fa4d8" Jan 26 12:36:05 crc kubenswrapper[4881]: I0126 12:36:05.092200 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:05 crc kubenswrapper[4881]: I0126 12:36:05.092255 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:05 crc kubenswrapper[4881]: I0126 12:36:05.092272 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:05 crc kubenswrapper[4881]: I0126 12:36:05.092295 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:05 crc kubenswrapper[4881]: I0126 12:36:05.092312 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:05Z","lastTransitionTime":"2026-01-26T12:36:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:05 crc kubenswrapper[4881]: I0126 12:36:05.195192 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:05 crc kubenswrapper[4881]: I0126 12:36:05.195243 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:05 crc kubenswrapper[4881]: I0126 12:36:05.195255 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:05 crc kubenswrapper[4881]: I0126 12:36:05.195273 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:05 crc kubenswrapper[4881]: I0126 12:36:05.195285 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:05Z","lastTransitionTime":"2026-01-26T12:36:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:05 crc kubenswrapper[4881]: I0126 12:36:05.298350 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:05 crc kubenswrapper[4881]: I0126 12:36:05.298424 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:05 crc kubenswrapper[4881]: I0126 12:36:05.298440 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:05 crc kubenswrapper[4881]: I0126 12:36:05.298460 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:05 crc kubenswrapper[4881]: I0126 12:36:05.298499 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:05Z","lastTransitionTime":"2026-01-26T12:36:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:05 crc kubenswrapper[4881]: I0126 12:36:05.359854 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-kbjm9_d272c950-9665-4b60-98a2-20c18d02d5a2/ovnkube-controller/1.log" Jan 26 12:36:05 crc kubenswrapper[4881]: I0126 12:36:05.360710 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-kbjm9_d272c950-9665-4b60-98a2-20c18d02d5a2/ovnkube-controller/0.log" Jan 26 12:36:05 crc kubenswrapper[4881]: I0126 12:36:05.366773 4881 generic.go:334] "Generic (PLEG): container finished" podID="d272c950-9665-4b60-98a2-20c18d02d5a2" containerID="4e23aeae9c9b1d09117c59f533d4c28521dd8f1685ee3b4547bb751fdb7c8631" exitCode=1 Jan 26 12:36:05 crc kubenswrapper[4881]: I0126 12:36:05.366826 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" event={"ID":"d272c950-9665-4b60-98a2-20c18d02d5a2","Type":"ContainerDied","Data":"4e23aeae9c9b1d09117c59f533d4c28521dd8f1685ee3b4547bb751fdb7c8631"} Jan 26 12:36:05 crc kubenswrapper[4881]: I0126 12:36:05.366896 4881 scope.go:117] "RemoveContainer" containerID="45e8fd54bf831d00321a165260c412e8d0d9aa9f9b49c15dbce69bd7e35e8e17" Jan 26 12:36:05 crc kubenswrapper[4881]: I0126 12:36:05.367820 4881 scope.go:117] "RemoveContainer" containerID="4e23aeae9c9b1d09117c59f533d4c28521dd8f1685ee3b4547bb751fdb7c8631" Jan 26 12:36:05 crc kubenswrapper[4881]: E0126 12:36:05.367973 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-kbjm9_openshift-ovn-kubernetes(d272c950-9665-4b60-98a2-20c18d02d5a2)\"" pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" podUID="d272c950-9665-4b60-98a2-20c18d02d5a2" Jan 26 12:36:05 crc kubenswrapper[4881]: I0126 12:36:05.389240 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02fca4ac1029e236fa1f5c8b73ec4cf6f627aba503ba27d28dd2566e2f9d332a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c6n6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e51c1ddc00bfd73fde58f6b2e82757a924047ef05aa76b74d8fcb2de4156ad9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c6n6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fwlbz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:05Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:05 crc kubenswrapper[4881]: I0126 12:36:05.401684 4881 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:05 crc kubenswrapper[4881]: I0126 12:36:05.401931 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:05 crc kubenswrapper[4881]: I0126 12:36:05.402059 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:05 crc kubenswrapper[4881]: I0126 12:36:05.402196 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:05 crc kubenswrapper[4881]: I0126 12:36:05.402458 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:05Z","lastTransitionTime":"2026-01-26T12:36:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:05 crc kubenswrapper[4881]: I0126 12:36:05.403911 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7m2c5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"939b570a-38ce-49f8-8518-1ab500c4e449\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5367bd65eab3f1d86fce660f0126a0b2e6d4f1f7f803f3bfd38a3293bdff7267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:36:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwjlp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d4049c90948b57aad819839a207210919dce46514ca41bf1afdaed9b04a409a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:36:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwjlp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:36:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7m2c5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:05Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:05 crc kubenswrapper[4881]: I0126 12:36:05.418317 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee20042b-95d1-49f2-abea-e339efdb096f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce7427a811572a8f3810d59e072888d97a8f15e2c493297a25418b85e1f9f16d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86d50d955d4fa8b487f6900f78738321e21226e13b7b12c100f0dd029de2fde1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee12
20d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8138683a9408b1a7542f9f29a3fd0b83a06f10b000c685d9d83b72b23ba4c55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94d3f2c93382cde3d932117276753f004c58e6e52bce611119b79d9e76eb7654\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:05Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:05 crc kubenswrapper[4881]: I0126 12:36:05.442140 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b6f7953-daf2-4ba4-8c10-7de3f4ea07a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4f54285b7646c2fa5da34439d2812a5669f168356446852b9bff249107a7c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79ddc5e5c3c4b0c18219fe6a8297ff2b1646b9621823f529972781f915d81c5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://533f45172a004ad2bd598849b711b38e823485188ff4163dd0dbc5ae5a28cb24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f7c8799e63e6072f9dc0580f3efbfe330a9f70
5a689b2785c60f64391881b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c525c27678ba77145459086b6d3d2fee5d471c7393f34215e9ae46d9d72cf4e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://667695541b7d707a83aa0080723518aeae689b140791f0121a4ce6d485f39138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://667695541b7d707a83aa0080723518aeae689b140791f0121a4ce6d485f39138\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab83101bb56dc3eaf3a0514f60da45b0c5304a491184fe71584d7ed72bc6e59c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab83101bb56dc3eaf3a0514f60da45b0c5304a491184fe71584d7ed72bc6e59c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e52793db25051b2c8eec3dbeaf0547ec04ec438ce426f3dfcd2383792f846644\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e52793db25051b2c8eec3dbeaf0547ec04ec438ce426f3dfcd2383792f846644\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:05Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:05 crc kubenswrapper[4881]: I0126 12:36:05.457098 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9aa1877-c239-4157-938d-e5c85ff3e76a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e461100b4aeab2a6e88c5c3aeb0802664d0d647247bc73b107be7c6bc4cfc9cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cbb88ac26b9c279bef436acf1a1d1e4
31e752f2a1ee765a7541082fa8f6c99e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff94d70ea9ad07ebc50c17d2a14b318e9b553bf1ded1e15429e9b318903d4d66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47b841bd998f47d2aa05f556a43ac6f14f630207aff7305cd0440f0b0642cc49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1728a0431ccafdf8aa83b6956348a8a6a41dd94e80f65cb469ef09a86078154\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 12:35:40.710806 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 12:35:40.716794 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1303051152/tls.crt::/tmp/serving-cert-1303051152/tls.key\\\\\\\"\\\\nI0126 12:35:46.328409 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 12:35:46.330270 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 12:35:46.330287 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 12:35:46.330308 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 12:35:46.330314 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 12:35:46.335670 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 12:35:46.335689 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 12:35:46.335693 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 12:35:46.335697 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 12:35:46.335700 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 12:35:46.335702 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 12:35:46.335705 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 12:35:46.335779 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 12:35:46.339578 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3de2d8eff73741321cce91c7bfdcab04498cfdb4694e713750466bb2d8b95e5d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f09815b7bc1e9f3afdee67bb736d35ea1c23567f631bc5fd4fcab013ab191647\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f09815b7bc1e9f3afdee67bb736d35ea1c23567f631bc5fd4fcab013ab191647\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:05Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:05 crc kubenswrapper[4881]: I0126 12:36:05.470732 4881 status_manager.go:875] "Failed 
to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8116d483190921916638bd630fb718840fdc36eb9522aecb7ef0e9bfebe9bb7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:05Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:05 crc kubenswrapper[4881]: I0126 12:36:05.483618 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:05Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:05 crc kubenswrapper[4881]: I0126 12:36:05.495602 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5zct6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"640554c2-37e2-425f-b182-aa9b9d6fa4d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl97t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl97t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:36:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5zct6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:05Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:05 crc kubenswrapper[4881]: I0126 12:36:05.505823 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:05 crc kubenswrapper[4881]: I0126 12:36:05.505898 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:05 crc kubenswrapper[4881]: I0126 12:36:05.505908 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:05 crc kubenswrapper[4881]: I0126 12:36:05.505927 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:05 crc kubenswrapper[4881]: I0126 12:36:05.505941 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:05Z","lastTransitionTime":"2026-01-26T12:36:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:05 crc kubenswrapper[4881]: I0126 12:36:05.509214 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f87073bcd7b2be8285c037c00f3112e1f1c6ed16cd4caf9a06c1a18d7e07c33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:05Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:05 crc kubenswrapper[4881]: I0126 12:36:05.525116 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-csrkv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d24cc7d2-c2db-45ee-b405-fa56157f807c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e19b81750777bd4a9bb8f7a14d883f0ceb5aa0fc4f1a0798c98616a7fc0cef81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x7bhq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-csrkv\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:05Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:05 crc kubenswrapper[4881]: I0126 12:36:05.548651 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d272c950-9665-4b60-98a2-20c18d02d5a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://525d6dcbf02a6ff1670c1cfff7ea9769a3e6e99d4cbc9f67b4ab9a4838f85355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54beaa25a2174c9f18bf5be7fba4c314e374e2ea05eaee1c5b62dfd7351138f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11ae8e0538c02d437ba2c7a5f53004fb5c5b7bf6c232f75b6bd737d2ec4f6a24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4526fe39791d5692af1a573249bcc2f73f2ede55766ea580867dbb009d247ddf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e87c69a0fa95e071cd3453794764480d9ed943e2d6dbe7d08e03e711be8de2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fd4c
549d12c308cae9727fdb35eec136e95b94990bfca2e3c2baf3cf5724ae3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e23aeae9c9b1d09117c59f533d4c28521dd8f1685ee3b4547bb751fdb7c8631\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45e8fd54bf831d00321a165260c412e8d0d9aa9f9b49c15dbce69bd7e35e8e17\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T12:36:01Z\\\",\\\"message\\\":\\\"9753 6178 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0126 12:36:00.500610 6178 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 12:36:00.500646 6178 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0126 12:36:00.500656 6178 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0126 12:36:00.500694 6178 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0126 12:36:00.500706 6178 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0126 12:36:00.500737 6178 factory.go:656] Stopping watch factory\\\\nI0126 12:36:00.500740 6178 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 12:36:00.500756 6178 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0126 12:36:00.500811 6178 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0126 12:36:00.500822 6178 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0126 12:36:00.500828 6178 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0126 12:36:00.500834 6178 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0126 12:36:00.500841 6178 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0126 12:36:00.500939 6178 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from 
k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:57Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e23aeae9c9b1d09117c59f533d4c28521dd8f1685ee3b4547bb751fdb7c8631\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T12:36:04Z\\\",\\\"message\\\":\\\"bz\\\\nI0126 12:36:03.243192 6326 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-fwlbz in node crc\\\\nI0126 12:36:03.243201 6326 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-fwlbz after 0 failed attempt(s)\\\\nI0126 12:36:03.243195 6326 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 12:36:03.243228 6326 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0126 12:36:03.243261 6326 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0126 12:36:03.243316 6326 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T12:36:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9aaca9d54edc83ad154b9cc876c0f2990f7949b4966e052314e947d0cbaf2f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0995c1855fbad9dc3799727d7456b17ef9cd900ed1f4241bde85718099ede875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d
1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0995c1855fbad9dc3799727d7456b17ef9cd900ed1f4241bde85718099ede875\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kbjm9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:05Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:05 crc kubenswrapper[4881]: I0126 12:36:05.562443 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tvrtr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"62ba5262-b5d8-4c86-85db-0993c88afc38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d6070117a94d139d5fc78fee4b38e1c75537675f27a663dcb2d6c8be285e9b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj9qc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.16
8.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tvrtr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:05Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:05 crc kubenswrapper[4881]: I0126 12:36:05.575927 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:05Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:05 crc kubenswrapper[4881]: I0126 12:36:05.587927 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:05Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:05 crc kubenswrapper[4881]: I0126 12:36:05.602147 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79282300691fcea19d9fe7ad9c661474c26ba5e445c1a4f009961641bf59fb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b2921f0cb7ad2096499d1c6c0e5f8f37da3620c8468c9fab08b57b4545e5dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:05Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:05 crc kubenswrapper[4881]: I0126 12:36:05.608447 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:05 crc kubenswrapper[4881]: I0126 12:36:05.608485 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:05 crc kubenswrapper[4881]: I0126 12:36:05.608498 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:05 crc kubenswrapper[4881]: I0126 12:36:05.608531 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:05 crc kubenswrapper[4881]: I0126 12:36:05.608541 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:05Z","lastTransitionTime":"2026-01-26T12:36:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:05 crc kubenswrapper[4881]: I0126 12:36:05.613568 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f4b5v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"017c2a17-2267-4284-a0ca-d3c513aa9ff9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cea6c5f8bd80cf271df551539957d4c3e60378ff13dc806b3142e8e1453ab95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc482\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f4b5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:05Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:05 crc kubenswrapper[4881]: I0126 12:36:05.636126 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pmwpn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb5ecb63-1238-44dc-9c40-b5e5dd7d4847\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://428d01799872a00b628c81145d5fbeac8707bad0afa7563e16bc44686fd7c18d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b54e7d904bec7e9817b662ea1c59ede6589fde79754520dfb1e9fb82864e623d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b54e7d904bec7e9817b662ea1c59ede6589fde79754520dfb1e9fb82864e623d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e58ff1e13a164f50c9664f463b50cbf651ce8e5edac8cbff9a9e42626d8d39ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e58ff1e13a164f50c9664f463b50cbf651ce8e5edac8cbff9a9e42626d8d39ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fca59d7df432a78a62142cd5e47d6a1a62b2c23ff7eec49d6297aacf30b530a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fca59d7df432a78a62142cd5e47d6a1a62b2c23ff7eec49d6297aacf30b530a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d5b95e27428d9ca0382054957b1cc19e96fe8bb5622af5f7ebebebec53d2add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d5b95e27428d9ca0382054957b1cc19e96fe8bb5622af5f7ebebebec53d2add\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45799bb6d037f31b82596651214f8bdf9c13346f1d9953b965a40b55d2588edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45799bb6d037f31b82596651214f8bdf9c13346f1d9953b965a40b55d2588edd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10bacfe34f01c5de9439ac6023539e6489ec63175e54099942c2fb2fe4250bd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10bacfe34f01c5de9439ac6023539e6489ec63175e54099942c2fb2fe4250bd6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pmwpn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:05Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:05 crc kubenswrapper[4881]: I0126 12:36:05.711129 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:05 crc kubenswrapper[4881]: I0126 12:36:05.711267 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:05 crc 
kubenswrapper[4881]: I0126 12:36:05.711290 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:05 crc kubenswrapper[4881]: I0126 12:36:05.711314 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:05 crc kubenswrapper[4881]: I0126 12:36:05.711331 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:05Z","lastTransitionTime":"2026-01-26T12:36:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:05 crc kubenswrapper[4881]: I0126 12:36:05.815364 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:05 crc kubenswrapper[4881]: I0126 12:36:05.815436 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:05 crc kubenswrapper[4881]: I0126 12:36:05.815448 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:05 crc kubenswrapper[4881]: I0126 12:36:05.815479 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:05 crc kubenswrapper[4881]: I0126 12:36:05.815495 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:05Z","lastTransitionTime":"2026-01-26T12:36:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:05 crc kubenswrapper[4881]: I0126 12:36:05.919181 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:05 crc kubenswrapper[4881]: I0126 12:36:05.919269 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:05 crc kubenswrapper[4881]: I0126 12:36:05.919295 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:05 crc kubenswrapper[4881]: I0126 12:36:05.919330 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:05 crc kubenswrapper[4881]: I0126 12:36:05.919422 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:05Z","lastTransitionTime":"2026-01-26T12:36:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:06 crc kubenswrapper[4881]: I0126 12:36:06.021976 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:06 crc kubenswrapper[4881]: I0126 12:36:06.022042 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:06 crc kubenswrapper[4881]: I0126 12:36:06.022062 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:06 crc kubenswrapper[4881]: I0126 12:36:06.022090 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:06 crc kubenswrapper[4881]: I0126 12:36:06.022113 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:06Z","lastTransitionTime":"2026-01-26T12:36:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:06 crc kubenswrapper[4881]: I0126 12:36:06.041632 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 03:27:31.992139546 +0000 UTC Jan 26 12:36:06 crc kubenswrapper[4881]: I0126 12:36:06.081546 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 12:36:06 crc kubenswrapper[4881]: I0126 12:36:06.081608 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 12:36:06 crc kubenswrapper[4881]: I0126 12:36:06.081781 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:36:06 crc kubenswrapper[4881]: E0126 12:36:06.081895 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 12:36:06 crc kubenswrapper[4881]: E0126 12:36:06.082129 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 12:36:06 crc kubenswrapper[4881]: E0126 12:36:06.083048 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 12:36:06 crc kubenswrapper[4881]: I0126 12:36:06.125576 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:06 crc kubenswrapper[4881]: I0126 12:36:06.125716 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:06 crc kubenswrapper[4881]: I0126 12:36:06.125739 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:06 crc kubenswrapper[4881]: I0126 12:36:06.125767 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:06 crc kubenswrapper[4881]: I0126 12:36:06.125788 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:06Z","lastTransitionTime":"2026-01-26T12:36:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:06 crc kubenswrapper[4881]: I0126 12:36:06.229853 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:06 crc kubenswrapper[4881]: I0126 12:36:06.229925 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:06 crc kubenswrapper[4881]: I0126 12:36:06.229943 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:06 crc kubenswrapper[4881]: I0126 12:36:06.229974 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:06 crc kubenswrapper[4881]: I0126 12:36:06.230039 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:06Z","lastTransitionTime":"2026-01-26T12:36:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:06 crc kubenswrapper[4881]: I0126 12:36:06.333019 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:06 crc kubenswrapper[4881]: I0126 12:36:06.333067 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:06 crc kubenswrapper[4881]: I0126 12:36:06.333083 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:06 crc kubenswrapper[4881]: I0126 12:36:06.333105 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:06 crc kubenswrapper[4881]: I0126 12:36:06.333122 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:06Z","lastTransitionTime":"2026-01-26T12:36:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:06 crc kubenswrapper[4881]: I0126 12:36:06.374235 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-kbjm9_d272c950-9665-4b60-98a2-20c18d02d5a2/ovnkube-controller/1.log" Jan 26 12:36:06 crc kubenswrapper[4881]: I0126 12:36:06.436993 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:06 crc kubenswrapper[4881]: I0126 12:36:06.437082 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:06 crc kubenswrapper[4881]: I0126 12:36:06.437100 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:06 crc kubenswrapper[4881]: I0126 12:36:06.437123 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:06 crc kubenswrapper[4881]: I0126 12:36:06.437140 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:06Z","lastTransitionTime":"2026-01-26T12:36:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:06 crc kubenswrapper[4881]: I0126 12:36:06.540293 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:06 crc kubenswrapper[4881]: I0126 12:36:06.540343 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:06 crc kubenswrapper[4881]: I0126 12:36:06.540355 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:06 crc kubenswrapper[4881]: I0126 12:36:06.540374 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:06 crc kubenswrapper[4881]: I0126 12:36:06.540386 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:06Z","lastTransitionTime":"2026-01-26T12:36:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:06 crc kubenswrapper[4881]: I0126 12:36:06.643572 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:06 crc kubenswrapper[4881]: I0126 12:36:06.643652 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:06 crc kubenswrapper[4881]: I0126 12:36:06.643673 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:06 crc kubenswrapper[4881]: I0126 12:36:06.643700 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:06 crc kubenswrapper[4881]: I0126 12:36:06.643718 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:06Z","lastTransitionTime":"2026-01-26T12:36:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:06 crc kubenswrapper[4881]: I0126 12:36:06.746591 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:06 crc kubenswrapper[4881]: I0126 12:36:06.746657 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:06 crc kubenswrapper[4881]: I0126 12:36:06.746675 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:06 crc kubenswrapper[4881]: I0126 12:36:06.746701 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:06 crc kubenswrapper[4881]: I0126 12:36:06.746718 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:06Z","lastTransitionTime":"2026-01-26T12:36:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:06 crc kubenswrapper[4881]: I0126 12:36:06.849985 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:06 crc kubenswrapper[4881]: I0126 12:36:06.850057 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:06 crc kubenswrapper[4881]: I0126 12:36:06.850075 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:06 crc kubenswrapper[4881]: I0126 12:36:06.850101 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:06 crc kubenswrapper[4881]: I0126 12:36:06.850123 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:06Z","lastTransitionTime":"2026-01-26T12:36:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:06 crc kubenswrapper[4881]: I0126 12:36:06.855628 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/640554c2-37e2-425f-b182-aa9b9d6fa4d8-metrics-certs\") pod \"network-metrics-daemon-5zct6\" (UID: \"640554c2-37e2-425f-b182-aa9b9d6fa4d8\") " pod="openshift-multus/network-metrics-daemon-5zct6" Jan 26 12:36:06 crc kubenswrapper[4881]: E0126 12:36:06.855822 4881 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 12:36:06 crc kubenswrapper[4881]: E0126 12:36:06.855915 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/640554c2-37e2-425f-b182-aa9b9d6fa4d8-metrics-certs podName:640554c2-37e2-425f-b182-aa9b9d6fa4d8 nodeName:}" failed. No retries permitted until 2026-01-26 12:36:10.855883931 +0000 UTC m=+43.335193987 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/640554c2-37e2-425f-b182-aa9b9d6fa4d8-metrics-certs") pod "network-metrics-daemon-5zct6" (UID: "640554c2-37e2-425f-b182-aa9b9d6fa4d8") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 12:36:06 crc kubenswrapper[4881]: I0126 12:36:06.953743 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:06 crc kubenswrapper[4881]: I0126 12:36:06.953808 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:06 crc kubenswrapper[4881]: I0126 12:36:06.953825 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:06 crc kubenswrapper[4881]: I0126 12:36:06.953850 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:06 crc kubenswrapper[4881]: I0126 12:36:06.953869 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:06Z","lastTransitionTime":"2026-01-26T12:36:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:07 crc kubenswrapper[4881]: I0126 12:36:07.041894 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 00:46:40.439862244 +0000 UTC Jan 26 12:36:07 crc kubenswrapper[4881]: I0126 12:36:07.057068 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:07 crc kubenswrapper[4881]: I0126 12:36:07.057141 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:07 crc kubenswrapper[4881]: I0126 12:36:07.057166 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:07 crc kubenswrapper[4881]: I0126 12:36:07.057194 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:07 crc kubenswrapper[4881]: I0126 12:36:07.057218 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:07Z","lastTransitionTime":"2026-01-26T12:36:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:07 crc kubenswrapper[4881]: I0126 12:36:07.082372 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5zct6" Jan 26 12:36:07 crc kubenswrapper[4881]: E0126 12:36:07.082763 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5zct6" podUID="640554c2-37e2-425f-b182-aa9b9d6fa4d8" Jan 26 12:36:07 crc kubenswrapper[4881]: I0126 12:36:07.160052 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:07 crc kubenswrapper[4881]: I0126 12:36:07.160122 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:07 crc kubenswrapper[4881]: I0126 12:36:07.160140 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:07 crc kubenswrapper[4881]: I0126 12:36:07.160167 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:07 crc kubenswrapper[4881]: I0126 12:36:07.160190 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:07Z","lastTransitionTime":"2026-01-26T12:36:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:07 crc kubenswrapper[4881]: I0126 12:36:07.263701 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:07 crc kubenswrapper[4881]: I0126 12:36:07.263761 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:07 crc kubenswrapper[4881]: I0126 12:36:07.263778 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:07 crc kubenswrapper[4881]: I0126 12:36:07.263801 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:07 crc kubenswrapper[4881]: I0126 12:36:07.263818 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:07Z","lastTransitionTime":"2026-01-26T12:36:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:07 crc kubenswrapper[4881]: I0126 12:36:07.366911 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:07 crc kubenswrapper[4881]: I0126 12:36:07.366973 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:07 crc kubenswrapper[4881]: I0126 12:36:07.366990 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:07 crc kubenswrapper[4881]: I0126 12:36:07.367014 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:07 crc kubenswrapper[4881]: I0126 12:36:07.367032 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:07Z","lastTransitionTime":"2026-01-26T12:36:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:07 crc kubenswrapper[4881]: I0126 12:36:07.470016 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:07 crc kubenswrapper[4881]: I0126 12:36:07.470080 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:07 crc kubenswrapper[4881]: I0126 12:36:07.470102 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:07 crc kubenswrapper[4881]: I0126 12:36:07.470134 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:07 crc kubenswrapper[4881]: I0126 12:36:07.470156 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:07Z","lastTransitionTime":"2026-01-26T12:36:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:07 crc kubenswrapper[4881]: I0126 12:36:07.573399 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:07 crc kubenswrapper[4881]: I0126 12:36:07.573457 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:07 crc kubenswrapper[4881]: I0126 12:36:07.573473 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:07 crc kubenswrapper[4881]: I0126 12:36:07.573497 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:07 crc kubenswrapper[4881]: I0126 12:36:07.573551 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:07Z","lastTransitionTime":"2026-01-26T12:36:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:07 crc kubenswrapper[4881]: I0126 12:36:07.677178 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:07 crc kubenswrapper[4881]: I0126 12:36:07.677438 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:07 crc kubenswrapper[4881]: I0126 12:36:07.677571 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:07 crc kubenswrapper[4881]: I0126 12:36:07.677650 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:07 crc kubenswrapper[4881]: I0126 12:36:07.677738 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:07Z","lastTransitionTime":"2026-01-26T12:36:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:07 crc kubenswrapper[4881]: I0126 12:36:07.781321 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:07 crc kubenswrapper[4881]: I0126 12:36:07.781818 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:07 crc kubenswrapper[4881]: I0126 12:36:07.781964 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:07 crc kubenswrapper[4881]: I0126 12:36:07.782155 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:07 crc kubenswrapper[4881]: I0126 12:36:07.782289 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:07Z","lastTransitionTime":"2026-01-26T12:36:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:07 crc kubenswrapper[4881]: I0126 12:36:07.886825 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:07 crc kubenswrapper[4881]: I0126 12:36:07.886884 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:07 crc kubenswrapper[4881]: I0126 12:36:07.886895 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:07 crc kubenswrapper[4881]: I0126 12:36:07.886918 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:07 crc kubenswrapper[4881]: I0126 12:36:07.886932 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:07Z","lastTransitionTime":"2026-01-26T12:36:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:07 crc kubenswrapper[4881]: I0126 12:36:07.988764 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:07 crc kubenswrapper[4881]: I0126 12:36:07.988813 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:07 crc kubenswrapper[4881]: I0126 12:36:07.988824 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:07 crc kubenswrapper[4881]: I0126 12:36:07.988841 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:07 crc kubenswrapper[4881]: I0126 12:36:07.988854 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:07Z","lastTransitionTime":"2026-01-26T12:36:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:08 crc kubenswrapper[4881]: I0126 12:36:08.042464 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 14:25:36.304372873 +0000 UTC Jan 26 12:36:08 crc kubenswrapper[4881]: I0126 12:36:08.102001 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 12:36:08 crc kubenswrapper[4881]: I0126 12:36:08.102156 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 12:36:08 crc kubenswrapper[4881]: E0126 12:36:08.102209 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 12:36:08 crc kubenswrapper[4881]: E0126 12:36:08.102343 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 12:36:08 crc kubenswrapper[4881]: I0126 12:36:08.102446 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:36:08 crc kubenswrapper[4881]: E0126 12:36:08.102597 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 12:36:08 crc kubenswrapper[4881]: I0126 12:36:08.104806 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:08 crc kubenswrapper[4881]: I0126 12:36:08.104878 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:08 crc kubenswrapper[4881]: I0126 12:36:08.104894 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:08 crc kubenswrapper[4881]: I0126 12:36:08.104937 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:08 crc kubenswrapper[4881]: I0126 12:36:08.104952 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:08Z","lastTransitionTime":"2026-01-26T12:36:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:08 crc kubenswrapper[4881]: I0126 12:36:08.124020 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02fca4ac1029e236fa1f5c8b73ec4cf6f627aba503ba27d28dd2566e2f9d332a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c6n6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e51c1ddc00bfd73fde58f6b2e82757a924047ef05aa76b74d8fcb2de4156ad9\\\",
\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c6n6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fwlbz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:08Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:08 crc kubenswrapper[4881]: I0126 12:36:08.136391 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7m2c5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"939b570a-38ce-49f8-8518-1ab500c4e449\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5367bd65eab3f1d86fce660f0126a0b2e6d4f1f7f803f3bfd38a3293bdff7267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:36:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwjlp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\
\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d4049c90948b57aad819839a207210919dce46514ca41bf1afdaed9b04a409a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:36:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwjlp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:36:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7m2c5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:08Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:08 crc kubenswrapper[4881]: I0126 12:36:08.153925 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee20042b-95d1-49f2-abea-e339efdb096f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce7427a811572a8f3810d59e072888d97a8f15e2c493297a25418b85e1f9f16d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86d50d955d4fa8b487f6900f78738321e21226e13b7b12c100f0dd029de2fde1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8138683a9408b1a7542f9f29a3fd0b83a06f10b000c685d9d83b72b23ba4c55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94d3f2c93382cde3d932117276753f004c58e6e52bce611119b79d9e76eb7654\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:08Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:08 crc kubenswrapper[4881]: I0126 12:36:08.180808 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b6f7953-daf2-4ba4-8c10-7de3f4ea07a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4f54285b7646c2fa5da34439d2812a5669f168356446852b9bff249107a7c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79ddc5e5c3c4b0c18219fe6a8297ff2b1646b9621823f529972781f915d81c5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07
b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://533f45172a004ad2bd598849b711b38e823485188ff4163dd0dbc5ae5a28cb24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f7c8799e63e6072f9dc0580f3efbfe330a9f705a689b2785c60f64391881b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c525c27678ba77145459086b6d3d2fee5d471c7393f34215e9ae46d9d72cf4e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://667695541b7d707a83aa0080723518aeae689b140791f0121a4ce6d485f39138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\
\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://667695541b7d707a83aa0080723518aeae689b140791f0121a4ce6d485f39138\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab83101bb56dc3eaf3a0514f60da45b0c5304a491184fe71584d7ed72bc6e59c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab83101bb56dc3eaf3a0514f60da45b0c5304a491184fe71584d7ed72bc6e59c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e52793db25051b2c8eec3dbeaf0547ec04ec438ce426f3dfcd2383792f846644\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e52793db25051b2c8eec3dbeaf0547ec04ec438ce426f3dfcd2383792f846644\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:08Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:08 crc kubenswrapper[4881]: I0126 12:36:08.203289 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9aa1877-c239-4157-938d-e5c85ff3e76a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e461100b4aeab2a6e88c5c3aeb0802664d0d647247bc73b107be7c6bc4cfc9cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cbb88ac26b9c279bef436acf1a1d1e431e752f2a1ee765a7541082fa8f6c99e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff94d70ea9ad07ebc50c17d2a14b318e9b553bf1ded1e15429e9b318903d4d66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47b841bd998f47d2aa05f556a43ac6f14f630207aff7305cd0440f0b0642cc49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1728a0431ccafdf8aa83b6956348a8a6a41dd94e80f65cb469ef09a86078154\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 12:35:40.710806 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 12:35:40.716794 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1303051152/tls.crt::/tmp/serving-cert-1303051152/tls.key\\\\\\\"\\\\nI0126 12:35:46.328409 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 12:35:46.330270 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 12:35:46.330287 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 12:35:46.330308 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 12:35:46.330314 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 12:35:46.335670 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 12:35:46.335689 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 12:35:46.335693 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 12:35:46.335697 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 12:35:46.335700 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 12:35:46.335702 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 12:35:46.335705 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 12:35:46.335779 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 12:35:46.339578 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3de2d8eff73741321cce91c7bfdcab04498cfdb4694e713750466bb2d8b95e5d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f09815b7bc1e9f3afdee67bb736d35ea1c23567f631bc5fd4fcab013ab191647\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f09815b7bc1e9f3afdee67bb736d35ea1c23567f631bc5fd4fcab013ab191647\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:08Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:08 crc kubenswrapper[4881]: I0126 12:36:08.207593 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:08 crc kubenswrapper[4881]: I0126 12:36:08.207628 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:08 crc kubenswrapper[4881]: I0126 12:36:08.207646 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:08 crc kubenswrapper[4881]: I0126 12:36:08.207663 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:08 crc kubenswrapper[4881]: I0126 12:36:08.207675 4881 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:08Z","lastTransitionTime":"2026-01-26T12:36:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:08 crc kubenswrapper[4881]: I0126 12:36:08.224223 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8116d483190921916638bd630fb718840fdc36eb9522aecb7ef0e9bfebe9bb7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:08Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:08 crc kubenswrapper[4881]: I0126 12:36:08.242147 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:08Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:08 crc kubenswrapper[4881]: I0126 12:36:08.258880 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f87073bcd7b2be8285c037c00f3112e1f1c6ed16cd4caf9a06c1a18d7e07c33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:08Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:08 crc kubenswrapper[4881]: I0126 12:36:08.277588 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-csrkv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d24cc7d2-c2db-45ee-b405-fa56157f807c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e19b81750777bd4a9bb8f7a14d883f0ceb5aa0fc4f1a0798c98616a7fc0cef81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x7bhq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-csrkv\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:08Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:08 crc kubenswrapper[4881]: I0126 12:36:08.298115 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d272c950-9665-4b60-98a2-20c18d02d5a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://525d6dcbf02a6ff1670c1cfff7ea9769a3e6e99d4cbc9f67b4ab9a4838f85355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54beaa25a2174c9f18bf5be7fba4c314e374e2ea05eaee1c5b62dfd7351138f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11ae8e0538c02d437ba2c7a5f53004fb5c5b7bf6c232f75b6bd737d2ec4f6a24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4526fe39791d5692af1a573249bcc2f73f2ede55766ea580867dbb009d247ddf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e87c69a0fa95e071cd3453794764480d9ed943e2d6dbe7d08e03e711be8de2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fd4c
549d12c308cae9727fdb35eec136e95b94990bfca2e3c2baf3cf5724ae3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e23aeae9c9b1d09117c59f533d4c28521dd8f1685ee3b4547bb751fdb7c8631\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45e8fd54bf831d00321a165260c412e8d0d9aa9f9b49c15dbce69bd7e35e8e17\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T12:36:01Z\\\",\\\"message\\\":\\\"9753 6178 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0126 12:36:00.500610 6178 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 12:36:00.500646 6178 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0126 12:36:00.500656 6178 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0126 12:36:00.500694 6178 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0126 12:36:00.500706 6178 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0126 12:36:00.500737 6178 factory.go:656] Stopping watch factory\\\\nI0126 12:36:00.500740 6178 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 12:36:00.500756 6178 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0126 12:36:00.500811 6178 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0126 12:36:00.500822 6178 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0126 12:36:00.500828 6178 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0126 12:36:00.500834 6178 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0126 12:36:00.500841 6178 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0126 12:36:00.500939 6178 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from 
k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:57Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e23aeae9c9b1d09117c59f533d4c28521dd8f1685ee3b4547bb751fdb7c8631\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T12:36:04Z\\\",\\\"message\\\":\\\"bz\\\\nI0126 12:36:03.243192 6326 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-fwlbz in node crc\\\\nI0126 12:36:03.243201 6326 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-fwlbz after 0 failed attempt(s)\\\\nI0126 12:36:03.243195 6326 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 12:36:03.243228 6326 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0126 12:36:03.243261 6326 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0126 12:36:03.243316 6326 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T12:36:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9aaca9d54edc83ad154b9cc876c0f2990f7949b4966e052314e947d0cbaf2f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0995c1855fbad9dc3799727d7456b17ef9cd900ed1f4241bde85718099ede875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d
1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0995c1855fbad9dc3799727d7456b17ef9cd900ed1f4241bde85718099ede875\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kbjm9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:08Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:08 crc kubenswrapper[4881]: I0126 12:36:08.310154 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:08 crc kubenswrapper[4881]: I0126 12:36:08.310187 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:08 crc kubenswrapper[4881]: I0126 12:36:08.310199 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:08 crc kubenswrapper[4881]: I0126 12:36:08.310219 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:08 crc kubenswrapper[4881]: I0126 12:36:08.310231 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:08Z","lastTransitionTime":"2026-01-26T12:36:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:08 crc kubenswrapper[4881]: I0126 12:36:08.318751 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5zct6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"640554c2-37e2-425f-b182-aa9b9d6fa4d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl97t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl97t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:36:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5zct6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:08Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:08 crc kubenswrapper[4881]: I0126 12:36:08.334877 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:08Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:08 crc kubenswrapper[4881]: I0126 12:36:08.349318 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:08Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:08 crc kubenswrapper[4881]: I0126 12:36:08.361023 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79282300691fcea19d9fe7ad9c661474c26ba5e445c1a4f009961641bf59fb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b2921f0cb7ad2096499d1c6c0e5f8f37da3620c8468c9fab08b57b4545e5dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:08Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:08 crc kubenswrapper[4881]: I0126 12:36:08.372071 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f4b5v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"017c2a17-2267-4284-a0ca-d3c513aa9ff9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cea6c5f8bd80cf271df551539957d4c3e60378ff13dc806b3142e8e1453ab95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc482\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f4b5v\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:08Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:08 crc kubenswrapper[4881]: I0126 12:36:08.393379 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pmwpn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb5ecb63-1238-44dc-9c40-b5e5dd7d4847\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://428d01799872a00b628c81145d5fbeac8707bad0afa7563e16bc44686fd7c18d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b54e7d904bec7e9817b662ea1c59ede6589fde79754520dfb1e9fb82864e623d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b54e7d904bec7e9817b662ea1c59ede6589fde79754520dfb1e9fb82864e623d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e58ff1e13a164f50c9664f463b50cbf651ce8e5edac8cbff9a9e42626d8d39ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e58ff1e13a164f50c9664f463b50cbf651ce8e5edac8cbff9a9e42626d8d39ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fca59d7df432a78a62142cd5e47d6a1a62b2c23ff7eec49d6297aacf30b530a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fca59d7df432a78a62142cd5e47d6a1a62b2c23ff7eec49d6297aacf30b530a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d5b95e27428d9ca0382054957b1cc19e96fe8bb5622af5f7ebebebec53d2add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d5b95e27428d9ca0382054957b1cc19e96fe8bb5622af5f7ebebebec53d2add\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-01-26T12:35:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45799bb6d037f31b82596651214f8bdf9c13346f1d9953b965a40b55d2588edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45799bb6d037f31b82596651214f8bdf9c13346f1d9953b965a40b55d2588edd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10bacfe34f01c5de9439ac6023539e6489ec63175e54099942c2fb2fe4250bd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10bacfe34f01c5de9439ac6023539e6489ec63175e54099942c2fb2fe4250bd6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pmwpn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T12:36:08Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:08 crc kubenswrapper[4881]: I0126 12:36:08.405868 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tvrtr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"62ba5262-b5d8-4c86-85db-0993c88afc38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d6070117a94d139d5fc78fee4b38e1c75537675f27a663dcb2d6c8be285e9b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj9qc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tvrtr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:08Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:08 crc kubenswrapper[4881]: I0126 12:36:08.412669 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:08 crc kubenswrapper[4881]: I0126 12:36:08.412704 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:08 crc kubenswrapper[4881]: I0126 12:36:08.412718 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:08 crc kubenswrapper[4881]: I0126 12:36:08.412733 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:08 crc kubenswrapper[4881]: I0126 
12:36:08.412744 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:08Z","lastTransitionTime":"2026-01-26T12:36:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:08 crc kubenswrapper[4881]: I0126 12:36:08.515968 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:08 crc kubenswrapper[4881]: I0126 12:36:08.516028 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:08 crc kubenswrapper[4881]: I0126 12:36:08.516046 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:08 crc kubenswrapper[4881]: I0126 12:36:08.516073 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:08 crc kubenswrapper[4881]: I0126 12:36:08.516090 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:08Z","lastTransitionTime":"2026-01-26T12:36:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:08 crc kubenswrapper[4881]: I0126 12:36:08.619501 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:08 crc kubenswrapper[4881]: I0126 12:36:08.619648 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:08 crc kubenswrapper[4881]: I0126 12:36:08.619668 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:08 crc kubenswrapper[4881]: I0126 12:36:08.619699 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:08 crc kubenswrapper[4881]: I0126 12:36:08.619721 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:08Z","lastTransitionTime":"2026-01-26T12:36:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:08 crc kubenswrapper[4881]: I0126 12:36:08.723197 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:08 crc kubenswrapper[4881]: I0126 12:36:08.723262 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:08 crc kubenswrapper[4881]: I0126 12:36:08.723282 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:08 crc kubenswrapper[4881]: I0126 12:36:08.723308 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:08 crc kubenswrapper[4881]: I0126 12:36:08.723330 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:08Z","lastTransitionTime":"2026-01-26T12:36:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:08 crc kubenswrapper[4881]: I0126 12:36:08.826580 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:08 crc kubenswrapper[4881]: I0126 12:36:08.826656 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:08 crc kubenswrapper[4881]: I0126 12:36:08.826691 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:08 crc kubenswrapper[4881]: I0126 12:36:08.826737 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:08 crc kubenswrapper[4881]: I0126 12:36:08.826761 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:08Z","lastTransitionTime":"2026-01-26T12:36:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:08 crc kubenswrapper[4881]: I0126 12:36:08.929634 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:08 crc kubenswrapper[4881]: I0126 12:36:08.929705 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:08 crc kubenswrapper[4881]: I0126 12:36:08.929718 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:08 crc kubenswrapper[4881]: I0126 12:36:08.929742 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:08 crc kubenswrapper[4881]: I0126 12:36:08.929757 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:08Z","lastTransitionTime":"2026-01-26T12:36:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:09 crc kubenswrapper[4881]: I0126 12:36:09.033282 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:09 crc kubenswrapper[4881]: I0126 12:36:09.033832 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:09 crc kubenswrapper[4881]: I0126 12:36:09.034034 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:09 crc kubenswrapper[4881]: I0126 12:36:09.034229 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:09 crc kubenswrapper[4881]: I0126 12:36:09.034358 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:09Z","lastTransitionTime":"2026-01-26T12:36:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:09 crc kubenswrapper[4881]: I0126 12:36:09.043025 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 05:31:29.191481873 +0000 UTC Jan 26 12:36:09 crc kubenswrapper[4881]: I0126 12:36:09.084070 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5zct6" Jan 26 12:36:09 crc kubenswrapper[4881]: E0126 12:36:09.084368 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5zct6" podUID="640554c2-37e2-425f-b182-aa9b9d6fa4d8" Jan 26 12:36:09 crc kubenswrapper[4881]: I0126 12:36:09.137507 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:09 crc kubenswrapper[4881]: I0126 12:36:09.137589 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:09 crc kubenswrapper[4881]: I0126 12:36:09.137601 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:09 crc kubenswrapper[4881]: I0126 12:36:09.137625 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:09 crc kubenswrapper[4881]: I0126 12:36:09.137640 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:09Z","lastTransitionTime":"2026-01-26T12:36:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:09 crc kubenswrapper[4881]: I0126 12:36:09.240629 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:09 crc kubenswrapper[4881]: I0126 12:36:09.240731 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:09 crc kubenswrapper[4881]: I0126 12:36:09.240756 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:09 crc kubenswrapper[4881]: I0126 12:36:09.240784 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:09 crc kubenswrapper[4881]: I0126 12:36:09.240802 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:09Z","lastTransitionTime":"2026-01-26T12:36:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:09 crc kubenswrapper[4881]: I0126 12:36:09.343383 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:09 crc kubenswrapper[4881]: I0126 12:36:09.343418 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:09 crc kubenswrapper[4881]: I0126 12:36:09.343428 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:09 crc kubenswrapper[4881]: I0126 12:36:09.343444 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:09 crc kubenswrapper[4881]: I0126 12:36:09.343455 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:09Z","lastTransitionTime":"2026-01-26T12:36:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:09 crc kubenswrapper[4881]: I0126 12:36:09.403179 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:09 crc kubenswrapper[4881]: I0126 12:36:09.403656 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:09 crc kubenswrapper[4881]: I0126 12:36:09.403818 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:09 crc kubenswrapper[4881]: I0126 12:36:09.403961 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:09 crc kubenswrapper[4881]: I0126 12:36:09.404078 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:09Z","lastTransitionTime":"2026-01-26T12:36:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:09 crc kubenswrapper[4881]: E0126 12:36:09.423760 4881 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:36:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:36:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:36:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:36:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"69f11506-d189-4146-9efa-f9280470e789\\\",\\\"systemUUID\\\":\\\"d2cf9e5e-9cbb-41d3-9ac1-608b9dce0d9f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:09Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:09 crc kubenswrapper[4881]: I0126 12:36:09.429074 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:09 crc kubenswrapper[4881]: I0126 12:36:09.429140 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 12:36:09 crc kubenswrapper[4881]: I0126 12:36:09.429215 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:09 crc kubenswrapper[4881]: I0126 12:36:09.429242 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:09 crc kubenswrapper[4881]: I0126 12:36:09.429315 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:09Z","lastTransitionTime":"2026-01-26T12:36:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:09 crc kubenswrapper[4881]: E0126 12:36:09.447188 4881 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:36:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:36:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:36:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:36:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"69f11506-d189-4146-9efa-f9280470e789\\\",\\\"systemUUID\\\":\\\"d2cf9e5e-9cbb-41d3-9ac1-608b9dce0d9f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:09Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:09 crc kubenswrapper[4881]: I0126 12:36:09.452358 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:09 crc kubenswrapper[4881]: I0126 12:36:09.452408 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 12:36:09 crc kubenswrapper[4881]: I0126 12:36:09.452430 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:09 crc kubenswrapper[4881]: I0126 12:36:09.452461 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:09 crc kubenswrapper[4881]: I0126 12:36:09.452480 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:09Z","lastTransitionTime":"2026-01-26T12:36:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:09 crc kubenswrapper[4881]: E0126 12:36:09.469501 4881 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:36:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:36:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:36:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:36:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"69f11506-d189-4146-9efa-f9280470e789\\\",\\\"systemUUID\\\":\\\"d2cf9e5e-9cbb-41d3-9ac1-608b9dce0d9f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:09Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:09 crc kubenswrapper[4881]: I0126 12:36:09.475677 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:09 crc kubenswrapper[4881]: I0126 12:36:09.475745 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 12:36:09 crc kubenswrapper[4881]: I0126 12:36:09.475766 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:09 crc kubenswrapper[4881]: I0126 12:36:09.475847 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:09 crc kubenswrapper[4881]: I0126 12:36:09.475879 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:09Z","lastTransitionTime":"2026-01-26T12:36:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:09 crc kubenswrapper[4881]: E0126 12:36:09.497869 4881 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:36:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:36:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:36:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:36:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"69f11506-d189-4146-9efa-f9280470e789\\\",\\\"systemUUID\\\":\\\"d2cf9e5e-9cbb-41d3-9ac1-608b9dce0d9f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:09Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:09 crc kubenswrapper[4881]: I0126 12:36:09.503320 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:09 crc kubenswrapper[4881]: I0126 12:36:09.503380 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 12:36:09 crc kubenswrapper[4881]: I0126 12:36:09.503400 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:09 crc kubenswrapper[4881]: I0126 12:36:09.503430 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:09 crc kubenswrapper[4881]: I0126 12:36:09.503449 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:09Z","lastTransitionTime":"2026-01-26T12:36:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:09 crc kubenswrapper[4881]: E0126 12:36:09.523628 4881 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:36:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:36:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:36:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:36:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"69f11506-d189-4146-9efa-f9280470e789\\\",\\\"systemUUID\\\":\\\"d2cf9e5e-9cbb-41d3-9ac1-608b9dce0d9f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:09Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:09 crc kubenswrapper[4881]: E0126 12:36:09.524044 4881 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 12:36:09 crc kubenswrapper[4881]: I0126 12:36:09.526370 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 26 12:36:09 crc kubenswrapper[4881]: I0126 12:36:09.526480 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:09 crc kubenswrapper[4881]: I0126 12:36:09.526559 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:09 crc kubenswrapper[4881]: I0126 12:36:09.526626 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:09 crc kubenswrapper[4881]: I0126 12:36:09.526687 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:09Z","lastTransitionTime":"2026-01-26T12:36:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:09 crc kubenswrapper[4881]: I0126 12:36:09.629357 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:09 crc kubenswrapper[4881]: I0126 12:36:09.629415 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:09 crc kubenswrapper[4881]: I0126 12:36:09.629433 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:09 crc kubenswrapper[4881]: I0126 12:36:09.629458 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:09 crc kubenswrapper[4881]: I0126 12:36:09.629476 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:09Z","lastTransitionTime":"2026-01-26T12:36:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:09 crc kubenswrapper[4881]: I0126 12:36:09.733585 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:09 crc kubenswrapper[4881]: I0126 12:36:09.733655 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:09 crc kubenswrapper[4881]: I0126 12:36:09.733673 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:09 crc kubenswrapper[4881]: I0126 12:36:09.733700 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:09 crc kubenswrapper[4881]: I0126 12:36:09.733717 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:09Z","lastTransitionTime":"2026-01-26T12:36:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:09 crc kubenswrapper[4881]: I0126 12:36:09.837566 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:09 crc kubenswrapper[4881]: I0126 12:36:09.837651 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:09 crc kubenswrapper[4881]: I0126 12:36:09.837692 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:09 crc kubenswrapper[4881]: I0126 12:36:09.837727 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:09 crc kubenswrapper[4881]: I0126 12:36:09.837749 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:09Z","lastTransitionTime":"2026-01-26T12:36:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:09 crc kubenswrapper[4881]: I0126 12:36:09.939680 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:09 crc kubenswrapper[4881]: I0126 12:36:09.939721 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:09 crc kubenswrapper[4881]: I0126 12:36:09.939734 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:09 crc kubenswrapper[4881]: I0126 12:36:09.939757 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:09 crc kubenswrapper[4881]: I0126 12:36:09.939770 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:09Z","lastTransitionTime":"2026-01-26T12:36:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:10 crc kubenswrapper[4881]: I0126 12:36:10.042687 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:10 crc kubenswrapper[4881]: I0126 12:36:10.042764 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:10 crc kubenswrapper[4881]: I0126 12:36:10.042790 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:10 crc kubenswrapper[4881]: I0126 12:36:10.042822 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:10 crc kubenswrapper[4881]: I0126 12:36:10.042846 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:10Z","lastTransitionTime":"2026-01-26T12:36:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:10 crc kubenswrapper[4881]: I0126 12:36:10.043255 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 08:29:45.966336521 +0000 UTC Jan 26 12:36:10 crc kubenswrapper[4881]: I0126 12:36:10.081749 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 12:36:10 crc kubenswrapper[4881]: I0126 12:36:10.081848 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 12:36:10 crc kubenswrapper[4881]: E0126 12:36:10.081946 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 12:36:10 crc kubenswrapper[4881]: I0126 12:36:10.081960 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:36:10 crc kubenswrapper[4881]: E0126 12:36:10.082068 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 12:36:10 crc kubenswrapper[4881]: E0126 12:36:10.082191 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 12:36:10 crc kubenswrapper[4881]: I0126 12:36:10.146589 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:10 crc kubenswrapper[4881]: I0126 12:36:10.146661 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:10 crc kubenswrapper[4881]: I0126 12:36:10.146687 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:10 crc kubenswrapper[4881]: I0126 12:36:10.146718 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:10 crc kubenswrapper[4881]: I0126 12:36:10.146740 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:10Z","lastTransitionTime":"2026-01-26T12:36:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:10 crc kubenswrapper[4881]: I0126 12:36:10.250360 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:10 crc kubenswrapper[4881]: I0126 12:36:10.250433 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:10 crc kubenswrapper[4881]: I0126 12:36:10.250473 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:10 crc kubenswrapper[4881]: I0126 12:36:10.250509 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:10 crc kubenswrapper[4881]: I0126 12:36:10.250580 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:10Z","lastTransitionTime":"2026-01-26T12:36:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:10 crc kubenswrapper[4881]: I0126 12:36:10.353610 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:10 crc kubenswrapper[4881]: I0126 12:36:10.353654 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:10 crc kubenswrapper[4881]: I0126 12:36:10.353666 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:10 crc kubenswrapper[4881]: I0126 12:36:10.353683 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:10 crc kubenswrapper[4881]: I0126 12:36:10.353695 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:10Z","lastTransitionTime":"2026-01-26T12:36:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:10 crc kubenswrapper[4881]: I0126 12:36:10.456878 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:10 crc kubenswrapper[4881]: I0126 12:36:10.456940 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:10 crc kubenswrapper[4881]: I0126 12:36:10.456957 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:10 crc kubenswrapper[4881]: I0126 12:36:10.456981 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:10 crc kubenswrapper[4881]: I0126 12:36:10.457000 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:10Z","lastTransitionTime":"2026-01-26T12:36:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:10 crc kubenswrapper[4881]: I0126 12:36:10.560259 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:10 crc kubenswrapper[4881]: I0126 12:36:10.560571 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:10 crc kubenswrapper[4881]: I0126 12:36:10.560792 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:10 crc kubenswrapper[4881]: I0126 12:36:10.560887 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:10 crc kubenswrapper[4881]: I0126 12:36:10.560965 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:10Z","lastTransitionTime":"2026-01-26T12:36:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:10 crc kubenswrapper[4881]: I0126 12:36:10.664019 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:10 crc kubenswrapper[4881]: I0126 12:36:10.664608 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:10 crc kubenswrapper[4881]: I0126 12:36:10.664635 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:10 crc kubenswrapper[4881]: I0126 12:36:10.664660 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:10 crc kubenswrapper[4881]: I0126 12:36:10.664677 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:10Z","lastTransitionTime":"2026-01-26T12:36:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:10 crc kubenswrapper[4881]: I0126 12:36:10.768278 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:10 crc kubenswrapper[4881]: I0126 12:36:10.768358 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:10 crc kubenswrapper[4881]: I0126 12:36:10.768380 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:10 crc kubenswrapper[4881]: I0126 12:36:10.768408 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:10 crc kubenswrapper[4881]: I0126 12:36:10.768430 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:10Z","lastTransitionTime":"2026-01-26T12:36:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:10 crc kubenswrapper[4881]: I0126 12:36:10.870987 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:10 crc kubenswrapper[4881]: I0126 12:36:10.871036 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:10 crc kubenswrapper[4881]: I0126 12:36:10.871045 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:10 crc kubenswrapper[4881]: I0126 12:36:10.871059 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:10 crc kubenswrapper[4881]: I0126 12:36:10.871068 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:10Z","lastTransitionTime":"2026-01-26T12:36:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:10 crc kubenswrapper[4881]: I0126 12:36:10.910178 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/640554c2-37e2-425f-b182-aa9b9d6fa4d8-metrics-certs\") pod \"network-metrics-daemon-5zct6\" (UID: \"640554c2-37e2-425f-b182-aa9b9d6fa4d8\") " pod="openshift-multus/network-metrics-daemon-5zct6" Jan 26 12:36:10 crc kubenswrapper[4881]: E0126 12:36:10.910655 4881 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 12:36:10 crc kubenswrapper[4881]: E0126 12:36:10.910757 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/640554c2-37e2-425f-b182-aa9b9d6fa4d8-metrics-certs podName:640554c2-37e2-425f-b182-aa9b9d6fa4d8 nodeName:}" failed. No retries permitted until 2026-01-26 12:36:18.910733198 +0000 UTC m=+51.390043254 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/640554c2-37e2-425f-b182-aa9b9d6fa4d8-metrics-certs") pod "network-metrics-daemon-5zct6" (UID: "640554c2-37e2-425f-b182-aa9b9d6fa4d8") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 12:36:10 crc kubenswrapper[4881]: I0126 12:36:10.974278 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:10 crc kubenswrapper[4881]: I0126 12:36:10.974359 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:10 crc kubenswrapper[4881]: I0126 12:36:10.974383 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:10 crc kubenswrapper[4881]: I0126 12:36:10.974415 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:10 crc kubenswrapper[4881]: I0126 12:36:10.974441 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:10Z","lastTransitionTime":"2026-01-26T12:36:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:11 crc kubenswrapper[4881]: I0126 12:36:11.044221 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 20:29:44.031211702 +0000 UTC Jan 26 12:36:11 crc kubenswrapper[4881]: I0126 12:36:11.077547 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:11 crc kubenswrapper[4881]: I0126 12:36:11.077595 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:11 crc kubenswrapper[4881]: I0126 12:36:11.077609 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:11 crc kubenswrapper[4881]: I0126 12:36:11.077625 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:11 crc kubenswrapper[4881]: I0126 12:36:11.077637 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:11Z","lastTransitionTime":"2026-01-26T12:36:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:11 crc kubenswrapper[4881]: I0126 12:36:11.082200 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5zct6" Jan 26 12:36:11 crc kubenswrapper[4881]: E0126 12:36:11.082456 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5zct6" podUID="640554c2-37e2-425f-b182-aa9b9d6fa4d8" Jan 26 12:36:11 crc kubenswrapper[4881]: I0126 12:36:11.180786 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:11 crc kubenswrapper[4881]: I0126 12:36:11.180871 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:11 crc kubenswrapper[4881]: I0126 12:36:11.180893 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:11 crc kubenswrapper[4881]: I0126 12:36:11.180923 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:11 crc kubenswrapper[4881]: I0126 12:36:11.180943 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:11Z","lastTransitionTime":"2026-01-26T12:36:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:11 crc kubenswrapper[4881]: I0126 12:36:11.284478 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:11 crc kubenswrapper[4881]: I0126 12:36:11.284556 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:11 crc kubenswrapper[4881]: I0126 12:36:11.284572 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:11 crc kubenswrapper[4881]: I0126 12:36:11.284592 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:11 crc kubenswrapper[4881]: I0126 12:36:11.284605 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:11Z","lastTransitionTime":"2026-01-26T12:36:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:11 crc kubenswrapper[4881]: I0126 12:36:11.388099 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:11 crc kubenswrapper[4881]: I0126 12:36:11.388182 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:11 crc kubenswrapper[4881]: I0126 12:36:11.388193 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:11 crc kubenswrapper[4881]: I0126 12:36:11.388215 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:11 crc kubenswrapper[4881]: I0126 12:36:11.388232 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:11Z","lastTransitionTime":"2026-01-26T12:36:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:11 crc kubenswrapper[4881]: I0126 12:36:11.491384 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:11 crc kubenswrapper[4881]: I0126 12:36:11.491461 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:11 crc kubenswrapper[4881]: I0126 12:36:11.491473 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:11 crc kubenswrapper[4881]: I0126 12:36:11.491501 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:11 crc kubenswrapper[4881]: I0126 12:36:11.491537 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:11Z","lastTransitionTime":"2026-01-26T12:36:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:11 crc kubenswrapper[4881]: I0126 12:36:11.595203 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:11 crc kubenswrapper[4881]: I0126 12:36:11.595272 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:11 crc kubenswrapper[4881]: I0126 12:36:11.595289 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:11 crc kubenswrapper[4881]: I0126 12:36:11.595318 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:11 crc kubenswrapper[4881]: I0126 12:36:11.595340 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:11Z","lastTransitionTime":"2026-01-26T12:36:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:11 crc kubenswrapper[4881]: I0126 12:36:11.698395 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:11 crc kubenswrapper[4881]: I0126 12:36:11.698462 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:11 crc kubenswrapper[4881]: I0126 12:36:11.698476 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:11 crc kubenswrapper[4881]: I0126 12:36:11.698504 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:11 crc kubenswrapper[4881]: I0126 12:36:11.698533 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:11Z","lastTransitionTime":"2026-01-26T12:36:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:11 crc kubenswrapper[4881]: I0126 12:36:11.802106 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:11 crc kubenswrapper[4881]: I0126 12:36:11.802196 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:11 crc kubenswrapper[4881]: I0126 12:36:11.802214 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:11 crc kubenswrapper[4881]: I0126 12:36:11.802245 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:11 crc kubenswrapper[4881]: I0126 12:36:11.802269 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:11Z","lastTransitionTime":"2026-01-26T12:36:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:11 crc kubenswrapper[4881]: I0126 12:36:11.905497 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:11 crc kubenswrapper[4881]: I0126 12:36:11.905581 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:11 crc kubenswrapper[4881]: I0126 12:36:11.905591 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:11 crc kubenswrapper[4881]: I0126 12:36:11.905612 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:11 crc kubenswrapper[4881]: I0126 12:36:11.905626 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:11Z","lastTransitionTime":"2026-01-26T12:36:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:12 crc kubenswrapper[4881]: I0126 12:36:12.008501 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:12 crc kubenswrapper[4881]: I0126 12:36:12.008614 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:12 crc kubenswrapper[4881]: I0126 12:36:12.008630 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:12 crc kubenswrapper[4881]: I0126 12:36:12.008654 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:12 crc kubenswrapper[4881]: I0126 12:36:12.008672 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:12Z","lastTransitionTime":"2026-01-26T12:36:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:12 crc kubenswrapper[4881]: I0126 12:36:12.045272 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 17:29:03.92770407 +0000 UTC Jan 26 12:36:12 crc kubenswrapper[4881]: I0126 12:36:12.082817 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 12:36:12 crc kubenswrapper[4881]: I0126 12:36:12.082935 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 12:36:12 crc kubenswrapper[4881]: E0126 12:36:12.083051 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 12:36:12 crc kubenswrapper[4881]: I0126 12:36:12.083227 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:36:12 crc kubenswrapper[4881]: E0126 12:36:12.083309 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 12:36:12 crc kubenswrapper[4881]: E0126 12:36:12.083495 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 12:36:12 crc kubenswrapper[4881]: I0126 12:36:12.112186 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:12 crc kubenswrapper[4881]: I0126 12:36:12.112242 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:12 crc kubenswrapper[4881]: I0126 12:36:12.112255 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:12 crc kubenswrapper[4881]: I0126 12:36:12.112274 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:12 crc kubenswrapper[4881]: I0126 12:36:12.112291 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:12Z","lastTransitionTime":"2026-01-26T12:36:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:12 crc kubenswrapper[4881]: I0126 12:36:12.215629 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:12 crc kubenswrapper[4881]: I0126 12:36:12.215722 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:12 crc kubenswrapper[4881]: I0126 12:36:12.215739 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:12 crc kubenswrapper[4881]: I0126 12:36:12.215767 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:12 crc kubenswrapper[4881]: I0126 12:36:12.215779 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:12Z","lastTransitionTime":"2026-01-26T12:36:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:12 crc kubenswrapper[4881]: I0126 12:36:12.318333 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:12 crc kubenswrapper[4881]: I0126 12:36:12.318430 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:12 crc kubenswrapper[4881]: I0126 12:36:12.318456 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:12 crc kubenswrapper[4881]: I0126 12:36:12.318493 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:12 crc kubenswrapper[4881]: I0126 12:36:12.318593 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:12Z","lastTransitionTime":"2026-01-26T12:36:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:12 crc kubenswrapper[4881]: I0126 12:36:12.421601 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:12 crc kubenswrapper[4881]: I0126 12:36:12.421660 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:12 crc kubenswrapper[4881]: I0126 12:36:12.421673 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:12 crc kubenswrapper[4881]: I0126 12:36:12.421701 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:12 crc kubenswrapper[4881]: I0126 12:36:12.421717 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:12Z","lastTransitionTime":"2026-01-26T12:36:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:12 crc kubenswrapper[4881]: I0126 12:36:12.524853 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:12 crc kubenswrapper[4881]: I0126 12:36:12.525266 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:12 crc kubenswrapper[4881]: I0126 12:36:12.525465 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:12 crc kubenswrapper[4881]: I0126 12:36:12.525762 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:12 crc kubenswrapper[4881]: I0126 12:36:12.525987 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:12Z","lastTransitionTime":"2026-01-26T12:36:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:12 crc kubenswrapper[4881]: I0126 12:36:12.630238 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:12 crc kubenswrapper[4881]: I0126 12:36:12.630335 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:12 crc kubenswrapper[4881]: I0126 12:36:12.630365 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:12 crc kubenswrapper[4881]: I0126 12:36:12.630399 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:12 crc kubenswrapper[4881]: I0126 12:36:12.630424 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:12Z","lastTransitionTime":"2026-01-26T12:36:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:12 crc kubenswrapper[4881]: I0126 12:36:12.733478 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:12 crc kubenswrapper[4881]: I0126 12:36:12.733912 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:12 crc kubenswrapper[4881]: I0126 12:36:12.734102 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:12 crc kubenswrapper[4881]: I0126 12:36:12.734305 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:12 crc kubenswrapper[4881]: I0126 12:36:12.734447 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:12Z","lastTransitionTime":"2026-01-26T12:36:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:12 crc kubenswrapper[4881]: I0126 12:36:12.837754 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:12 crc kubenswrapper[4881]: I0126 12:36:12.837815 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:12 crc kubenswrapper[4881]: I0126 12:36:12.837829 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:12 crc kubenswrapper[4881]: I0126 12:36:12.837855 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:12 crc kubenswrapper[4881]: I0126 12:36:12.837871 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:12Z","lastTransitionTime":"2026-01-26T12:36:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:12 crc kubenswrapper[4881]: I0126 12:36:12.940783 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:12 crc kubenswrapper[4881]: I0126 12:36:12.941164 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:12 crc kubenswrapper[4881]: I0126 12:36:12.941261 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:12 crc kubenswrapper[4881]: I0126 12:36:12.941354 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:12 crc kubenswrapper[4881]: I0126 12:36:12.941448 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:12Z","lastTransitionTime":"2026-01-26T12:36:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:13 crc kubenswrapper[4881]: I0126 12:36:13.044342 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:13 crc kubenswrapper[4881]: I0126 12:36:13.044409 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:13 crc kubenswrapper[4881]: I0126 12:36:13.044427 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:13 crc kubenswrapper[4881]: I0126 12:36:13.044452 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:13 crc kubenswrapper[4881]: I0126 12:36:13.044476 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:13Z","lastTransitionTime":"2026-01-26T12:36:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:13 crc kubenswrapper[4881]: I0126 12:36:13.045508 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 00:22:54.795864075 +0000 UTC Jan 26 12:36:13 crc kubenswrapper[4881]: I0126 12:36:13.081856 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5zct6" Jan 26 12:36:13 crc kubenswrapper[4881]: E0126 12:36:13.082085 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5zct6" podUID="640554c2-37e2-425f-b182-aa9b9d6fa4d8" Jan 26 12:36:13 crc kubenswrapper[4881]: I0126 12:36:13.147261 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:13 crc kubenswrapper[4881]: I0126 12:36:13.147335 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:13 crc kubenswrapper[4881]: I0126 12:36:13.147360 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:13 crc kubenswrapper[4881]: I0126 12:36:13.147429 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:13 crc kubenswrapper[4881]: I0126 12:36:13.147459 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:13Z","lastTransitionTime":"2026-01-26T12:36:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:13 crc kubenswrapper[4881]: I0126 12:36:13.250573 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:13 crc kubenswrapper[4881]: I0126 12:36:13.250632 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:13 crc kubenswrapper[4881]: I0126 12:36:13.250642 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:13 crc kubenswrapper[4881]: I0126 12:36:13.250674 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:13 crc kubenswrapper[4881]: I0126 12:36:13.250692 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:13Z","lastTransitionTime":"2026-01-26T12:36:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:13 crc kubenswrapper[4881]: I0126 12:36:13.353939 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:13 crc kubenswrapper[4881]: I0126 12:36:13.354077 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:13 crc kubenswrapper[4881]: I0126 12:36:13.354110 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:13 crc kubenswrapper[4881]: I0126 12:36:13.354143 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:13 crc kubenswrapper[4881]: I0126 12:36:13.354167 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:13Z","lastTransitionTime":"2026-01-26T12:36:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:13 crc kubenswrapper[4881]: I0126 12:36:13.457787 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:13 crc kubenswrapper[4881]: I0126 12:36:13.457844 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:13 crc kubenswrapper[4881]: I0126 12:36:13.457860 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:13 crc kubenswrapper[4881]: I0126 12:36:13.457881 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:13 crc kubenswrapper[4881]: I0126 12:36:13.457896 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:13Z","lastTransitionTime":"2026-01-26T12:36:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:13 crc kubenswrapper[4881]: I0126 12:36:13.561314 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:13 crc kubenswrapper[4881]: I0126 12:36:13.561361 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:13 crc kubenswrapper[4881]: I0126 12:36:13.561374 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:13 crc kubenswrapper[4881]: I0126 12:36:13.561394 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:13 crc kubenswrapper[4881]: I0126 12:36:13.561407 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:13Z","lastTransitionTime":"2026-01-26T12:36:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:13 crc kubenswrapper[4881]: I0126 12:36:13.664812 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:13 crc kubenswrapper[4881]: I0126 12:36:13.665164 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:13 crc kubenswrapper[4881]: I0126 12:36:13.665305 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:13 crc kubenswrapper[4881]: I0126 12:36:13.665544 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:13 crc kubenswrapper[4881]: I0126 12:36:13.665639 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:13Z","lastTransitionTime":"2026-01-26T12:36:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:13 crc kubenswrapper[4881]: I0126 12:36:13.768866 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:13 crc kubenswrapper[4881]: I0126 12:36:13.769171 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:13 crc kubenswrapper[4881]: I0126 12:36:13.769273 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:13 crc kubenswrapper[4881]: I0126 12:36:13.769341 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:13 crc kubenswrapper[4881]: I0126 12:36:13.769410 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:13Z","lastTransitionTime":"2026-01-26T12:36:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:13 crc kubenswrapper[4881]: I0126 12:36:13.872195 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:13 crc kubenswrapper[4881]: I0126 12:36:13.872562 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:13 crc kubenswrapper[4881]: I0126 12:36:13.872757 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:13 crc kubenswrapper[4881]: I0126 12:36:13.872898 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:13 crc kubenswrapper[4881]: I0126 12:36:13.873024 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:13Z","lastTransitionTime":"2026-01-26T12:36:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:13 crc kubenswrapper[4881]: I0126 12:36:13.975773 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:13 crc kubenswrapper[4881]: I0126 12:36:13.975854 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:13 crc kubenswrapper[4881]: I0126 12:36:13.975874 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:13 crc kubenswrapper[4881]: I0126 12:36:13.975901 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:13 crc kubenswrapper[4881]: I0126 12:36:13.975920 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:13Z","lastTransitionTime":"2026-01-26T12:36:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:14 crc kubenswrapper[4881]: I0126 12:36:14.045701 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 17:27:07.512455665 +0000 UTC Jan 26 12:36:14 crc kubenswrapper[4881]: I0126 12:36:14.078751 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:14 crc kubenswrapper[4881]: I0126 12:36:14.078844 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:14 crc kubenswrapper[4881]: I0126 12:36:14.078868 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:14 crc kubenswrapper[4881]: I0126 12:36:14.078907 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:14 crc kubenswrapper[4881]: I0126 12:36:14.078934 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:14Z","lastTransitionTime":"2026-01-26T12:36:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:14 crc kubenswrapper[4881]: I0126 12:36:14.082156 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 12:36:14 crc kubenswrapper[4881]: I0126 12:36:14.082215 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 12:36:14 crc kubenswrapper[4881]: I0126 12:36:14.082263 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:36:14 crc kubenswrapper[4881]: E0126 12:36:14.082375 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 12:36:14 crc kubenswrapper[4881]: E0126 12:36:14.082611 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 12:36:14 crc kubenswrapper[4881]: E0126 12:36:14.082836 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 12:36:14 crc kubenswrapper[4881]: I0126 12:36:14.182085 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:14 crc kubenswrapper[4881]: I0126 12:36:14.182169 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:14 crc kubenswrapper[4881]: I0126 12:36:14.182193 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:14 crc kubenswrapper[4881]: I0126 12:36:14.182228 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:14 crc kubenswrapper[4881]: I0126 12:36:14.182248 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:14Z","lastTransitionTime":"2026-01-26T12:36:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:14 crc kubenswrapper[4881]: I0126 12:36:14.285613 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:14 crc kubenswrapper[4881]: I0126 12:36:14.285669 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:14 crc kubenswrapper[4881]: I0126 12:36:14.285687 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:14 crc kubenswrapper[4881]: I0126 12:36:14.285712 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:14 crc kubenswrapper[4881]: I0126 12:36:14.285731 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:14Z","lastTransitionTime":"2026-01-26T12:36:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:14 crc kubenswrapper[4881]: I0126 12:36:14.389168 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:14 crc kubenswrapper[4881]: I0126 12:36:14.389349 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:14 crc kubenswrapper[4881]: I0126 12:36:14.389378 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:14 crc kubenswrapper[4881]: I0126 12:36:14.389412 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:14 crc kubenswrapper[4881]: I0126 12:36:14.389435 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:14Z","lastTransitionTime":"2026-01-26T12:36:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:14 crc kubenswrapper[4881]: I0126 12:36:14.492092 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:14 crc kubenswrapper[4881]: I0126 12:36:14.492147 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:14 crc kubenswrapper[4881]: I0126 12:36:14.492166 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:14 crc kubenswrapper[4881]: I0126 12:36:14.492190 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:14 crc kubenswrapper[4881]: I0126 12:36:14.492209 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:14Z","lastTransitionTime":"2026-01-26T12:36:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:14 crc kubenswrapper[4881]: I0126 12:36:14.594746 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:14 crc kubenswrapper[4881]: I0126 12:36:14.595119 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:14 crc kubenswrapper[4881]: I0126 12:36:14.595271 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:14 crc kubenswrapper[4881]: I0126 12:36:14.595437 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:14 crc kubenswrapper[4881]: I0126 12:36:14.595616 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:14Z","lastTransitionTime":"2026-01-26T12:36:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:14 crc kubenswrapper[4881]: I0126 12:36:14.699469 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:14 crc kubenswrapper[4881]: I0126 12:36:14.699889 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:14 crc kubenswrapper[4881]: I0126 12:36:14.700053 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:14 crc kubenswrapper[4881]: I0126 12:36:14.700282 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:14 crc kubenswrapper[4881]: I0126 12:36:14.700460 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:14Z","lastTransitionTime":"2026-01-26T12:36:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:14 crc kubenswrapper[4881]: I0126 12:36:14.803696 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:14 crc kubenswrapper[4881]: I0126 12:36:14.803774 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:14 crc kubenswrapper[4881]: I0126 12:36:14.803826 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:14 crc kubenswrapper[4881]: I0126 12:36:14.803849 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:14 crc kubenswrapper[4881]: I0126 12:36:14.803865 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:14Z","lastTransitionTime":"2026-01-26T12:36:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:14 crc kubenswrapper[4881]: I0126 12:36:14.907386 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:14 crc kubenswrapper[4881]: I0126 12:36:14.907913 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:14 crc kubenswrapper[4881]: I0126 12:36:14.908121 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:14 crc kubenswrapper[4881]: I0126 12:36:14.908419 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:14 crc kubenswrapper[4881]: I0126 12:36:14.908763 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:14Z","lastTransitionTime":"2026-01-26T12:36:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:15 crc kubenswrapper[4881]: I0126 12:36:15.012170 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:15 crc kubenswrapper[4881]: I0126 12:36:15.012236 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:15 crc kubenswrapper[4881]: I0126 12:36:15.012259 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:15 crc kubenswrapper[4881]: I0126 12:36:15.012289 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:15 crc kubenswrapper[4881]: I0126 12:36:15.012311 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:15Z","lastTransitionTime":"2026-01-26T12:36:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:15 crc kubenswrapper[4881]: I0126 12:36:15.046250 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 11:07:27.45022769 +0000 UTC Jan 26 12:36:15 crc kubenswrapper[4881]: I0126 12:36:15.081854 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5zct6" Jan 26 12:36:15 crc kubenswrapper[4881]: E0126 12:36:15.082065 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5zct6" podUID="640554c2-37e2-425f-b182-aa9b9d6fa4d8" Jan 26 12:36:15 crc kubenswrapper[4881]: I0126 12:36:15.116354 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:15 crc kubenswrapper[4881]: I0126 12:36:15.116410 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:15 crc kubenswrapper[4881]: I0126 12:36:15.116427 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:15 crc kubenswrapper[4881]: I0126 12:36:15.116453 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:15 crc kubenswrapper[4881]: I0126 12:36:15.116470 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:15Z","lastTransitionTime":"2026-01-26T12:36:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:15 crc kubenswrapper[4881]: I0126 12:36:15.220001 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:15 crc kubenswrapper[4881]: I0126 12:36:15.220135 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:15 crc kubenswrapper[4881]: I0126 12:36:15.220155 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:15 crc kubenswrapper[4881]: I0126 12:36:15.220180 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:15 crc kubenswrapper[4881]: I0126 12:36:15.220198 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:15Z","lastTransitionTime":"2026-01-26T12:36:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:15 crc kubenswrapper[4881]: I0126 12:36:15.325034 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:15 crc kubenswrapper[4881]: I0126 12:36:15.325096 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:15 crc kubenswrapper[4881]: I0126 12:36:15.325125 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:15 crc kubenswrapper[4881]: I0126 12:36:15.325176 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:15 crc kubenswrapper[4881]: I0126 12:36:15.325201 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:15Z","lastTransitionTime":"2026-01-26T12:36:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:15 crc kubenswrapper[4881]: I0126 12:36:15.426872 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:15 crc kubenswrapper[4881]: I0126 12:36:15.426920 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:15 crc kubenswrapper[4881]: I0126 12:36:15.426931 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:15 crc kubenswrapper[4881]: I0126 12:36:15.426945 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:15 crc kubenswrapper[4881]: I0126 12:36:15.426954 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:15Z","lastTransitionTime":"2026-01-26T12:36:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:15 crc kubenswrapper[4881]: I0126 12:36:15.530059 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:15 crc kubenswrapper[4881]: I0126 12:36:15.530104 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:15 crc kubenswrapper[4881]: I0126 12:36:15.530118 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:15 crc kubenswrapper[4881]: I0126 12:36:15.530137 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:15 crc kubenswrapper[4881]: I0126 12:36:15.530154 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:15Z","lastTransitionTime":"2026-01-26T12:36:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:15 crc kubenswrapper[4881]: I0126 12:36:15.632791 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:15 crc kubenswrapper[4881]: I0126 12:36:15.632830 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:15 crc kubenswrapper[4881]: I0126 12:36:15.632840 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:15 crc kubenswrapper[4881]: I0126 12:36:15.632862 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:15 crc kubenswrapper[4881]: I0126 12:36:15.632877 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:15Z","lastTransitionTime":"2026-01-26T12:36:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:15 crc kubenswrapper[4881]: I0126 12:36:15.736753 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:15 crc kubenswrapper[4881]: I0126 12:36:15.736791 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:15 crc kubenswrapper[4881]: I0126 12:36:15.736800 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:15 crc kubenswrapper[4881]: I0126 12:36:15.736814 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:15 crc kubenswrapper[4881]: I0126 12:36:15.736825 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:15Z","lastTransitionTime":"2026-01-26T12:36:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:15 crc kubenswrapper[4881]: I0126 12:36:15.839851 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:15 crc kubenswrapper[4881]: I0126 12:36:15.840010 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:15 crc kubenswrapper[4881]: I0126 12:36:15.840036 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:15 crc kubenswrapper[4881]: I0126 12:36:15.840065 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:15 crc kubenswrapper[4881]: I0126 12:36:15.840092 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:15Z","lastTransitionTime":"2026-01-26T12:36:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:15 crc kubenswrapper[4881]: I0126 12:36:15.944248 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:15 crc kubenswrapper[4881]: I0126 12:36:15.944320 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:15 crc kubenswrapper[4881]: I0126 12:36:15.944334 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:15 crc kubenswrapper[4881]: I0126 12:36:15.944352 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:15 crc kubenswrapper[4881]: I0126 12:36:15.944373 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:15Z","lastTransitionTime":"2026-01-26T12:36:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:16 crc kubenswrapper[4881]: I0126 12:36:16.047068 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 04:52:50.653900984 +0000 UTC Jan 26 12:36:16 crc kubenswrapper[4881]: I0126 12:36:16.047701 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:16 crc kubenswrapper[4881]: I0126 12:36:16.047768 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:16 crc kubenswrapper[4881]: I0126 12:36:16.047792 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:16 crc kubenswrapper[4881]: I0126 12:36:16.047820 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:16 crc kubenswrapper[4881]: I0126 12:36:16.047843 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:16Z","lastTransitionTime":"2026-01-26T12:36:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:16 crc kubenswrapper[4881]: I0126 12:36:16.081725 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:36:16 crc kubenswrapper[4881]: I0126 12:36:16.081766 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 12:36:16 crc kubenswrapper[4881]: E0126 12:36:16.081946 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 12:36:16 crc kubenswrapper[4881]: I0126 12:36:16.082026 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 12:36:16 crc kubenswrapper[4881]: E0126 12:36:16.082184 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 12:36:16 crc kubenswrapper[4881]: E0126 12:36:16.082513 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 12:36:16 crc kubenswrapper[4881]: I0126 12:36:16.151118 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:16 crc kubenswrapper[4881]: I0126 12:36:16.151220 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:16 crc kubenswrapper[4881]: I0126 12:36:16.151282 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:16 crc kubenswrapper[4881]: I0126 12:36:16.151307 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:16 crc kubenswrapper[4881]: I0126 12:36:16.151364 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:16Z","lastTransitionTime":"2026-01-26T12:36:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:16 crc kubenswrapper[4881]: I0126 12:36:16.254608 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:16 crc kubenswrapper[4881]: I0126 12:36:16.254673 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:16 crc kubenswrapper[4881]: I0126 12:36:16.254690 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:16 crc kubenswrapper[4881]: I0126 12:36:16.254713 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:16 crc kubenswrapper[4881]: I0126 12:36:16.254730 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:16Z","lastTransitionTime":"2026-01-26T12:36:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:16 crc kubenswrapper[4881]: I0126 12:36:16.358169 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:16 crc kubenswrapper[4881]: I0126 12:36:16.358227 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:16 crc kubenswrapper[4881]: I0126 12:36:16.358246 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:16 crc kubenswrapper[4881]: I0126 12:36:16.358268 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:16 crc kubenswrapper[4881]: I0126 12:36:16.358285 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:16Z","lastTransitionTime":"2026-01-26T12:36:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:16 crc kubenswrapper[4881]: I0126 12:36:16.461731 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:16 crc kubenswrapper[4881]: I0126 12:36:16.461777 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:16 crc kubenswrapper[4881]: I0126 12:36:16.461792 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:16 crc kubenswrapper[4881]: I0126 12:36:16.461812 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:16 crc kubenswrapper[4881]: I0126 12:36:16.461828 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:16Z","lastTransitionTime":"2026-01-26T12:36:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:16 crc kubenswrapper[4881]: I0126 12:36:16.565186 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:16 crc kubenswrapper[4881]: I0126 12:36:16.565217 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:16 crc kubenswrapper[4881]: I0126 12:36:16.565227 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:16 crc kubenswrapper[4881]: I0126 12:36:16.565244 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:16 crc kubenswrapper[4881]: I0126 12:36:16.565255 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:16Z","lastTransitionTime":"2026-01-26T12:36:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:16 crc kubenswrapper[4881]: I0126 12:36:16.668237 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:16 crc kubenswrapper[4881]: I0126 12:36:16.668321 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:16 crc kubenswrapper[4881]: I0126 12:36:16.668341 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:16 crc kubenswrapper[4881]: I0126 12:36:16.668372 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:16 crc kubenswrapper[4881]: I0126 12:36:16.668393 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:16Z","lastTransitionTime":"2026-01-26T12:36:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:16 crc kubenswrapper[4881]: I0126 12:36:16.780058 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:16 crc kubenswrapper[4881]: I0126 12:36:16.780099 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:16 crc kubenswrapper[4881]: I0126 12:36:16.780110 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:16 crc kubenswrapper[4881]: I0126 12:36:16.780125 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:16 crc kubenswrapper[4881]: I0126 12:36:16.780135 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:16Z","lastTransitionTime":"2026-01-26T12:36:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:16 crc kubenswrapper[4881]: I0126 12:36:16.882844 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:16 crc kubenswrapper[4881]: I0126 12:36:16.882913 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:16 crc kubenswrapper[4881]: I0126 12:36:16.882928 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:16 crc kubenswrapper[4881]: I0126 12:36:16.882946 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:16 crc kubenswrapper[4881]: I0126 12:36:16.882980 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:16Z","lastTransitionTime":"2026-01-26T12:36:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:16 crc kubenswrapper[4881]: I0126 12:36:16.985336 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:16 crc kubenswrapper[4881]: I0126 12:36:16.985397 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:16 crc kubenswrapper[4881]: I0126 12:36:16.985406 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:16 crc kubenswrapper[4881]: I0126 12:36:16.985422 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:16 crc kubenswrapper[4881]: I0126 12:36:16.985433 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:16Z","lastTransitionTime":"2026-01-26T12:36:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:17 crc kubenswrapper[4881]: I0126 12:36:17.047841 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 01:30:22.72811366 +0000 UTC Jan 26 12:36:17 crc kubenswrapper[4881]: I0126 12:36:17.082233 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5zct6" Jan 26 12:36:17 crc kubenswrapper[4881]: E0126 12:36:17.082537 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5zct6" podUID="640554c2-37e2-425f-b182-aa9b9d6fa4d8" Jan 26 12:36:17 crc kubenswrapper[4881]: I0126 12:36:17.088440 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:17 crc kubenswrapper[4881]: I0126 12:36:17.088486 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:17 crc kubenswrapper[4881]: I0126 12:36:17.088499 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:17 crc kubenswrapper[4881]: I0126 12:36:17.088542 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:17 crc kubenswrapper[4881]: I0126 12:36:17.088557 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:17Z","lastTransitionTime":"2026-01-26T12:36:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:17 crc kubenswrapper[4881]: I0126 12:36:17.191186 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:17 crc kubenswrapper[4881]: I0126 12:36:17.191259 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:17 crc kubenswrapper[4881]: I0126 12:36:17.191276 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:17 crc kubenswrapper[4881]: I0126 12:36:17.191299 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:17 crc kubenswrapper[4881]: I0126 12:36:17.191317 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:17Z","lastTransitionTime":"2026-01-26T12:36:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:17 crc kubenswrapper[4881]: I0126 12:36:17.293774 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:17 crc kubenswrapper[4881]: I0126 12:36:17.293829 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:17 crc kubenswrapper[4881]: I0126 12:36:17.293845 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:17 crc kubenswrapper[4881]: I0126 12:36:17.293867 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:17 crc kubenswrapper[4881]: I0126 12:36:17.293885 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:17Z","lastTransitionTime":"2026-01-26T12:36:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:17 crc kubenswrapper[4881]: I0126 12:36:17.396965 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:17 crc kubenswrapper[4881]: I0126 12:36:17.397061 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:17 crc kubenswrapper[4881]: I0126 12:36:17.397084 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:17 crc kubenswrapper[4881]: I0126 12:36:17.397162 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:17 crc kubenswrapper[4881]: I0126 12:36:17.397190 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:17Z","lastTransitionTime":"2026-01-26T12:36:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:17 crc kubenswrapper[4881]: I0126 12:36:17.500792 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:17 crc kubenswrapper[4881]: I0126 12:36:17.500859 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:17 crc kubenswrapper[4881]: I0126 12:36:17.500881 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:17 crc kubenswrapper[4881]: I0126 12:36:17.500908 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:17 crc kubenswrapper[4881]: I0126 12:36:17.500926 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:17Z","lastTransitionTime":"2026-01-26T12:36:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:17 crc kubenswrapper[4881]: I0126 12:36:17.604503 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:17 crc kubenswrapper[4881]: I0126 12:36:17.604609 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:17 crc kubenswrapper[4881]: I0126 12:36:17.604628 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:17 crc kubenswrapper[4881]: I0126 12:36:17.604652 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:17 crc kubenswrapper[4881]: I0126 12:36:17.604671 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:17Z","lastTransitionTime":"2026-01-26T12:36:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:17 crc kubenswrapper[4881]: I0126 12:36:17.706898 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:17 crc kubenswrapper[4881]: I0126 12:36:17.706945 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:17 crc kubenswrapper[4881]: I0126 12:36:17.706958 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:17 crc kubenswrapper[4881]: I0126 12:36:17.706976 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:17 crc kubenswrapper[4881]: I0126 12:36:17.706989 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:17Z","lastTransitionTime":"2026-01-26T12:36:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:17 crc kubenswrapper[4881]: I0126 12:36:17.809802 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:17 crc kubenswrapper[4881]: I0126 12:36:17.809848 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:17 crc kubenswrapper[4881]: I0126 12:36:17.809859 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:17 crc kubenswrapper[4881]: I0126 12:36:17.809875 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:17 crc kubenswrapper[4881]: I0126 12:36:17.809889 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:17Z","lastTransitionTime":"2026-01-26T12:36:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:17 crc kubenswrapper[4881]: I0126 12:36:17.891213 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:36:17 crc kubenswrapper[4881]: I0126 12:36:17.891342 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:36:17 crc kubenswrapper[4881]: I0126 12:36:17.891372 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 12:36:17 crc kubenswrapper[4881]: E0126 12:36:17.891430 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:36:49.89139798 +0000 UTC m=+82.370708046 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:36:17 crc kubenswrapper[4881]: E0126 12:36:17.891504 4881 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 12:36:17 crc kubenswrapper[4881]: I0126 12:36:17.891486 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 12:36:17 crc kubenswrapper[4881]: E0126 12:36:17.891557 4881 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 12:36:17 crc kubenswrapper[4881]: E0126 12:36:17.891568 4881 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 12:36:17 crc kubenswrapper[4881]: E0126 12:36:17.891588 4881 configmap.go:193] Couldn't get configMap 
openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 12:36:17 crc kubenswrapper[4881]: I0126 12:36:17.891593 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:36:17 crc kubenswrapper[4881]: E0126 12:36:17.891666 4881 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 12:36:17 crc kubenswrapper[4881]: E0126 12:36:17.891609 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 12:36:49.891596345 +0000 UTC m=+82.370906371 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 12:36:17 crc kubenswrapper[4881]: E0126 12:36:17.891763 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 12:36:49.891748348 +0000 UTC m=+82.371058474 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 12:36:17 crc kubenswrapper[4881]: E0126 12:36:17.891779 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 12:36:49.891771599 +0000 UTC m=+82.371081745 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 12:36:17 crc kubenswrapper[4881]: E0126 12:36:17.891847 4881 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 12:36:17 crc kubenswrapper[4881]: E0126 12:36:17.892572 4881 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 12:36:17 crc kubenswrapper[4881]: E0126 12:36:17.892585 4881 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 12:36:17 crc kubenswrapper[4881]: E0126 12:36:17.892619 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 12:36:49.892610829 +0000 UTC m=+82.371920855 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 12:36:17 crc kubenswrapper[4881]: I0126 12:36:17.912302 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:17 crc kubenswrapper[4881]: I0126 12:36:17.912560 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:17 crc kubenswrapper[4881]: I0126 12:36:17.912664 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:17 crc kubenswrapper[4881]: I0126 12:36:17.912741 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:17 crc kubenswrapper[4881]: I0126 12:36:17.912814 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:17Z","lastTransitionTime":"2026-01-26T12:36:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:18 crc kubenswrapper[4881]: I0126 12:36:18.015635 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:18 crc kubenswrapper[4881]: I0126 12:36:18.015700 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:18 crc kubenswrapper[4881]: I0126 12:36:18.015717 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:18 crc kubenswrapper[4881]: I0126 12:36:18.015740 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:18 crc kubenswrapper[4881]: I0126 12:36:18.015755 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:18Z","lastTransitionTime":"2026-01-26T12:36:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:18 crc kubenswrapper[4881]: I0126 12:36:18.048129 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 20:59:12.490738748 +0000 UTC Jan 26 12:36:18 crc kubenswrapper[4881]: I0126 12:36:18.082085 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 12:36:18 crc kubenswrapper[4881]: I0126 12:36:18.082185 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 12:36:18 crc kubenswrapper[4881]: E0126 12:36:18.082266 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 12:36:18 crc kubenswrapper[4881]: I0126 12:36:18.082344 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:36:18 crc kubenswrapper[4881]: E0126 12:36:18.082447 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 12:36:18 crc kubenswrapper[4881]: E0126 12:36:18.082614 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 12:36:18 crc kubenswrapper[4881]: I0126 12:36:18.097944 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02fca4ac1029e236fa1f5c8b73ec4cf6f627aba503ba27d28dd2566e2f9d332a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c6n6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e51c1ddc00bfd73fde58f6b2e82757a924047ef05aa76b74d8fcb2de4156ad9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c6n6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fwlbz\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:18Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:18 crc kubenswrapper[4881]: I0126 12:36:18.112923 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7m2c5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"939b570a-38ce-49f8-8518-1ab500c4e449\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5367bd65eab3f1d86fce660f0126a0b2e6d4f1f7f803f3bfd38a3293bdff7267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:36:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwjlp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d4049c90948b57aad819839a207210919dce46514ca41bf1afdaed9b04a409a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:36:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwjlp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:36:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7m2c5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:18Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:18 crc kubenswrapper[4881]: I0126 12:36:18.120826 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:18 crc kubenswrapper[4881]: I0126 12:36:18.120899 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:18 crc kubenswrapper[4881]: I0126 12:36:18.120911 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:18 crc kubenswrapper[4881]: I0126 12:36:18.121277 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:18 crc kubenswrapper[4881]: I0126 12:36:18.121310 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:18Z","lastTransitionTime":"2026-01-26T12:36:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:18 crc kubenswrapper[4881]: I0126 12:36:18.133775 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee20042b-95d1-49f2-abea-e339efdb096f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce7427a811572a8f3810d59e072888d97a8f15e2c493297a25418b85e1f9f16d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86d50d955d4fa8b487f6900f78738321e21226e13b7b12c100f0dd029de2fde1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8138683a9408b1a7542f9f29a3fd0b83a06f10b000c685d9d83b72b23ba4c55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94d3f2c93382cde3d932117276753f004c58e6e52bce611119b79d9e76eb7654\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:18Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:18 crc kubenswrapper[4881]: I0126 12:36:18.163238 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b6f7953-daf2-4ba4-8c10-7de3f4ea07a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4f54285b7646c2fa5da34439d2812a5669f168356446852b9bff249107a7c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79ddc5e5c3c4b0c18219fe6a8297ff2b1646b9621823f529972781f915d81c5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07
b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://533f45172a004ad2bd598849b711b38e823485188ff4163dd0dbc5ae5a28cb24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f7c8799e63e6072f9dc0580f3efbfe330a9f705a689b2785c60f64391881b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c525c27678ba77145459086b6d3d2fee5d471c7393f34215e9ae46d9d72cf4e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://667695541b7d707a83aa0080723518aeae689b140791f0121a4ce6d485f39138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\
\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://667695541b7d707a83aa0080723518aeae689b140791f0121a4ce6d485f39138\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab83101bb56dc3eaf3a0514f60da45b0c5304a491184fe71584d7ed72bc6e59c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab83101bb56dc3eaf3a0514f60da45b0c5304a491184fe71584d7ed72bc6e59c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e52793db25051b2c8eec3dbeaf0547ec04ec438ce426f3dfcd2383792f846644\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e52793db25051b2c8eec3dbeaf0547ec04ec438ce426f3dfcd2383792f846644\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:18Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:18 crc kubenswrapper[4881]: I0126 12:36:18.184689 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9aa1877-c239-4157-938d-e5c85ff3e76a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e461100b4aeab2a6e88c5c3aeb0802664d0d647247bc73b107be7c6bc4cfc9cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cbb88ac26b9c279bef436acf1a1d1e431e752f2a1ee765a7541082fa8f6c99e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff94d70ea9ad07ebc50c17d2a14b318e9b553bf1ded1e15429e9b318903d4d66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47b841bd998f47d2aa05f556a43ac6f14f630207aff7305cd0440f0b0642cc49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1728a0431ccafdf8aa83b6956348a8a6a41dd94e80f65cb469ef09a86078154\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 12:35:40.710806 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 12:35:40.716794 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1303051152/tls.crt::/tmp/serving-cert-1303051152/tls.key\\\\\\\"\\\\nI0126 12:35:46.328409 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 12:35:46.330270 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 12:35:46.330287 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 12:35:46.330308 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 12:35:46.330314 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 12:35:46.335670 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 12:35:46.335689 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 12:35:46.335693 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 12:35:46.335697 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 12:35:46.335700 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 12:35:46.335702 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 12:35:46.335705 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 12:35:46.335779 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 12:35:46.339578 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3de2d8eff73741321cce91c7bfdcab04498cfdb4694e713750466bb2d8b95e5d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f09815b7bc1e9f3afdee67bb736d35ea1c23567f631bc5fd4fcab013ab191647\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f09815b7bc1e9f3afdee67bb736d35ea1c23567f631bc5fd4fcab013ab191647\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:18Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:18 crc kubenswrapper[4881]: I0126 12:36:18.207876 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8116d483190921916638bd630fb718840fdc36eb9522aecb7ef0e9bfebe9bb7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:18Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:18 crc kubenswrapper[4881]: I0126 12:36:18.225849 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:18 crc kubenswrapper[4881]: I0126 12:36:18.225906 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:18 crc kubenswrapper[4881]: I0126 12:36:18.225919 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:18 crc kubenswrapper[4881]: I0126 12:36:18.225938 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:18 crc kubenswrapper[4881]: I0126 12:36:18.225950 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:18Z","lastTransitionTime":"2026-01-26T12:36:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:18 crc kubenswrapper[4881]: I0126 12:36:18.227126 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:18Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:18 crc kubenswrapper[4881]: I0126 12:36:18.244743 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f87073bcd7b2be8285c037c00f3112e1f1c6ed16cd4caf9a06c1a18d7e07c33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:18Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:18 crc kubenswrapper[4881]: I0126 12:36:18.266131 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-csrkv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d24cc7d2-c2db-45ee-b405-fa56157f807c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e19b81750777bd4a9bb8f7a14d883f0ceb5aa0fc4f1a0798c98616a7fc0cef81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x7bhq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-csrkv\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:18Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:18 crc kubenswrapper[4881]: I0126 12:36:18.298930 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d272c950-9665-4b60-98a2-20c18d02d5a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://525d6dcbf02a6ff1670c1cfff7ea9769a3e6e99d4cbc9f67b4ab9a4838f85355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54beaa25a2174c9f18bf5be7fba4c314e374e2ea05eaee1c5b62dfd7351138f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11ae8e0538c02d437ba2c7a5f53004fb5c5b7bf6c232f75b6bd737d2ec4f6a24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4526fe39791d5692af1a573249bcc2f73f2ede55766ea580867dbb009d247ddf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e87c69a0fa95e071cd3453794764480d9ed943e2d6dbe7d08e03e711be8de2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fd4c
549d12c308cae9727fdb35eec136e95b94990bfca2e3c2baf3cf5724ae3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e23aeae9c9b1d09117c59f533d4c28521dd8f1685ee3b4547bb751fdb7c8631\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45e8fd54bf831d00321a165260c412e8d0d9aa9f9b49c15dbce69bd7e35e8e17\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T12:36:01Z\\\",\\\"message\\\":\\\"9753 6178 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0126 12:36:00.500610 6178 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 12:36:00.500646 6178 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0126 12:36:00.500656 6178 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0126 12:36:00.500694 6178 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0126 12:36:00.500706 6178 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0126 12:36:00.500737 6178 factory.go:656] Stopping watch factory\\\\nI0126 12:36:00.500740 6178 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 12:36:00.500756 6178 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0126 12:36:00.500811 6178 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0126 12:36:00.500822 6178 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0126 12:36:00.500828 6178 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0126 12:36:00.500834 6178 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0126 12:36:00.500841 6178 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0126 12:36:00.500939 6178 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from 
k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:57Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e23aeae9c9b1d09117c59f533d4c28521dd8f1685ee3b4547bb751fdb7c8631\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T12:36:04Z\\\",\\\"message\\\":\\\"bz\\\\nI0126 12:36:03.243192 6326 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-fwlbz in node crc\\\\nI0126 12:36:03.243201 6326 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-fwlbz after 0 failed attempt(s)\\\\nI0126 12:36:03.243195 6326 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 12:36:03.243228 6326 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0126 12:36:03.243261 6326 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0126 12:36:03.243316 6326 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T12:36:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9aaca9d54edc83ad154b9cc876c0f2990f7949b4966e052314e947d0cbaf2f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0995c1855fbad9dc3799727d7456b17ef9cd900ed1f4241bde85718099ede875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d
1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0995c1855fbad9dc3799727d7456b17ef9cd900ed1f4241bde85718099ede875\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kbjm9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:18Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:18 crc kubenswrapper[4881]: I0126 12:36:18.316313 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5zct6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"640554c2-37e2-425f-b182-aa9b9d6fa4d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl97t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl97t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:36:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5zct6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:18Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:18 crc kubenswrapper[4881]: I0126 12:36:18.328489 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:18 crc kubenswrapper[4881]: I0126 12:36:18.328562 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:18 crc kubenswrapper[4881]: I0126 12:36:18.328575 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:18 crc kubenswrapper[4881]: I0126 12:36:18.328597 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:18 crc kubenswrapper[4881]: I0126 12:36:18.328611 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:18Z","lastTransitionTime":"2026-01-26T12:36:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:18 crc kubenswrapper[4881]: I0126 12:36:18.335506 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:18Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:18 crc kubenswrapper[4881]: I0126 12:36:18.354551 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:18Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:18 crc kubenswrapper[4881]: I0126 12:36:18.376149 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79282300691fcea19d9fe7ad9c661474c26ba5e445c1a4f009961641bf59fb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b2921f0cb7ad2096499d1c6c0e5f8f37da3620c8468c9fab08b57b4545e5dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:18Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:18 crc kubenswrapper[4881]: I0126 12:36:18.392987 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f4b5v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"017c2a17-2267-4284-a0ca-d3c513aa9ff9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cea6c5f8bd80cf271df551539957d4c3e60378ff13dc806b3142e8e1453ab95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc482\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f4b5v\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:18Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:18 crc kubenswrapper[4881]: I0126 12:36:18.412396 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pmwpn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb5ecb63-1238-44dc-9c40-b5e5dd7d4847\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://428d01799872a00b628c81145d5fbeac8707bad0afa7563e16bc44686fd7c18d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b54e7d904bec7e9817b662ea1c59ede6589fde79754520dfb1e9fb82864e623d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b54e7d904bec7e9817b662ea1c59ede6589fde79754520dfb1e9fb82864e623d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e58ff1e13a164f50c9664f463b50cbf651ce8e5edac8cbff9a9e42626d8d39ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e58ff1e13a164f50c9664f463b50cbf651ce8e5edac8cbff9a9e42626d8d39ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fca59d7df432a78a62142cd5e47d6a1a62b2c23ff7eec49d6297aacf30b530a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fca59d7df432a78a62142cd5e47d6a1a62b2c23ff7eec49d6297aacf30b530a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d5b95e27428d9ca0382054957b1cc19e96fe8bb5622af5f7ebebebec53d2add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d5b95e27428d9ca0382054957b1cc19e96fe8bb5622af5f7ebebebec53d2add\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-01-26T12:35:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45799bb6d037f31b82596651214f8bdf9c13346f1d9953b965a40b55d2588edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45799bb6d037f31b82596651214f8bdf9c13346f1d9953b965a40b55d2588edd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10bacfe34f01c5de9439ac6023539e6489ec63175e54099942c2fb2fe4250bd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10bacfe34f01c5de9439ac6023539e6489ec63175e54099942c2fb2fe4250bd6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pmwpn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T12:36:18Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:18 crc kubenswrapper[4881]: I0126 12:36:18.427478 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tvrtr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"62ba5262-b5d8-4c86-85db-0993c88afc38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d6070117a94d139d5fc78fee4b38e1c75537675f27a663dcb2d6c8be285e9b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj9qc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tvrtr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:18Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:18 crc kubenswrapper[4881]: I0126 12:36:18.431964 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:18 crc kubenswrapper[4881]: I0126 12:36:18.432009 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:18 crc kubenswrapper[4881]: I0126 12:36:18.432022 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:18 crc kubenswrapper[4881]: I0126 12:36:18.432044 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:18 crc kubenswrapper[4881]: I0126 
12:36:18.432058 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:18Z","lastTransitionTime":"2026-01-26T12:36:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:18 crc kubenswrapper[4881]: I0126 12:36:18.534556 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:18 crc kubenswrapper[4881]: I0126 12:36:18.535777 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:18 crc kubenswrapper[4881]: I0126 12:36:18.535843 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:18 crc kubenswrapper[4881]: I0126 12:36:18.535873 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:18 crc kubenswrapper[4881]: I0126 12:36:18.535894 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:18Z","lastTransitionTime":"2026-01-26T12:36:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:18 crc kubenswrapper[4881]: I0126 12:36:18.638461 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:18 crc kubenswrapper[4881]: I0126 12:36:18.638559 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:18 crc kubenswrapper[4881]: I0126 12:36:18.638574 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:18 crc kubenswrapper[4881]: I0126 12:36:18.638594 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:18 crc kubenswrapper[4881]: I0126 12:36:18.638609 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:18Z","lastTransitionTime":"2026-01-26T12:36:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:18 crc kubenswrapper[4881]: I0126 12:36:18.741755 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:18 crc kubenswrapper[4881]: I0126 12:36:18.741816 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:18 crc kubenswrapper[4881]: I0126 12:36:18.741833 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:18 crc kubenswrapper[4881]: I0126 12:36:18.741857 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:18 crc kubenswrapper[4881]: I0126 12:36:18.741874 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:18Z","lastTransitionTime":"2026-01-26T12:36:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:18 crc kubenswrapper[4881]: I0126 12:36:18.844777 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:18 crc kubenswrapper[4881]: I0126 12:36:18.844841 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:18 crc kubenswrapper[4881]: I0126 12:36:18.844859 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:18 crc kubenswrapper[4881]: I0126 12:36:18.844883 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:18 crc kubenswrapper[4881]: I0126 12:36:18.844900 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:18Z","lastTransitionTime":"2026-01-26T12:36:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:18 crc kubenswrapper[4881]: I0126 12:36:18.947590 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:18 crc kubenswrapper[4881]: I0126 12:36:18.947635 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:18 crc kubenswrapper[4881]: I0126 12:36:18.947647 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:18 crc kubenswrapper[4881]: I0126 12:36:18.947663 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:18 crc kubenswrapper[4881]: I0126 12:36:18.947685 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:18Z","lastTransitionTime":"2026-01-26T12:36:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:19 crc kubenswrapper[4881]: I0126 12:36:19.006973 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/640554c2-37e2-425f-b182-aa9b9d6fa4d8-metrics-certs\") pod \"network-metrics-daemon-5zct6\" (UID: \"640554c2-37e2-425f-b182-aa9b9d6fa4d8\") " pod="openshift-multus/network-metrics-daemon-5zct6" Jan 26 12:36:19 crc kubenswrapper[4881]: E0126 12:36:19.007211 4881 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 12:36:19 crc kubenswrapper[4881]: E0126 12:36:19.007291 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/640554c2-37e2-425f-b182-aa9b9d6fa4d8-metrics-certs podName:640554c2-37e2-425f-b182-aa9b9d6fa4d8 nodeName:}" failed. No retries permitted until 2026-01-26 12:36:35.007269369 +0000 UTC m=+67.486579425 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/640554c2-37e2-425f-b182-aa9b9d6fa4d8-metrics-certs") pod "network-metrics-daemon-5zct6" (UID: "640554c2-37e2-425f-b182-aa9b9d6fa4d8") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 12:36:19 crc kubenswrapper[4881]: I0126 12:36:19.048543 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 03:33:43.783175697 +0000 UTC Jan 26 12:36:19 crc kubenswrapper[4881]: I0126 12:36:19.050705 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:19 crc kubenswrapper[4881]: I0126 12:36:19.050760 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:19 crc kubenswrapper[4881]: I0126 12:36:19.050776 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:19 crc kubenswrapper[4881]: I0126 12:36:19.050799 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:19 crc kubenswrapper[4881]: I0126 12:36:19.050818 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:19Z","lastTransitionTime":"2026-01-26T12:36:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:19 crc kubenswrapper[4881]: I0126 12:36:19.081837 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5zct6" Jan 26 12:36:19 crc kubenswrapper[4881]: E0126 12:36:19.082082 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5zct6" podUID="640554c2-37e2-425f-b182-aa9b9d6fa4d8" Jan 26 12:36:19 crc kubenswrapper[4881]: I0126 12:36:19.083009 4881 scope.go:117] "RemoveContainer" containerID="4e23aeae9c9b1d09117c59f533d4c28521dd8f1685ee3b4547bb751fdb7c8631" Jan 26 12:36:19 crc kubenswrapper[4881]: I0126 12:36:19.097265 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f4b5v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"017c2a17-2267-4284-a0ca-d3c513aa9ff9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cea6c5f8bd80cf271df551539957d4c3e60378ff13dc806b3142e8e1453ab95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc482\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f4b5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:19Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:19 crc kubenswrapper[4881]: I0126 12:36:19.116600 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pmwpn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb5ecb63-1238-44dc-9c40-b5e5dd7d4847\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://428d01799872a00b628c81145d5fbeac8707bad0afa7563e16bc44686fd7c18d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b54e7d904bec7e9817b662ea1c59ede6589fde79754520dfb1e9fb82864e623d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b54e7d904bec7e9817b662ea1c59ede6589fde79754520dfb1e9fb82864e623d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e58ff1e13a164f50c9664f463b50cbf651ce8e5edac8cbff9a9e42626d8d39ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e58ff1e13a164f50c9664f463b50cbf651ce8e5edac8cbff9a9e42626d8d39ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fca59d7df432a78a62142cd5e47d6a1a62b2c23ff7eec49d6297aacf30b530a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fca59d7df432a78a62142cd5e47d6a1a62b2c23ff7eec49d6297aacf30b530a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d5b95e27428d9ca0382054957b1cc19e96fe8bb5622af5f7ebebebec53d2add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d5b95e27428d9ca0382054957b1cc19e96fe8bb5622af5f7ebebebec53d2add\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45799bb6d037f31b82596651214f8bdf9c13346f1d9953b965a40b55d2588edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45799bb6d037f31b82596651214f8bdf9c13346f1d9953b965a40b55d2588edd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10bacfe34f01c5de9439ac6023539e6489ec63175e54099942c2fb2fe4250bd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10bacfe34f01c5de9439ac6023539e6489ec63175e54099942c2fb2fe4250bd6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pmwpn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:19Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:19 crc kubenswrapper[4881]: I0126 12:36:19.131558 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tvrtr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"62ba5262-b5d8-4c86-85db-0993c88afc38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d6070117a94d139d5fc78fee4b38e1c75537675f27a663dcb2d6c8be285e9b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj9qc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tvrtr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:19Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:19 crc kubenswrapper[4881]: I0126 12:36:19.149060 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:19Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:19 crc kubenswrapper[4881]: I0126 12:36:19.154322 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:19 crc kubenswrapper[4881]: I0126 12:36:19.154426 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:19 crc kubenswrapper[4881]: I0126 12:36:19.154448 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:19 crc kubenswrapper[4881]: I0126 12:36:19.154508 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:19 crc kubenswrapper[4881]: I0126 12:36:19.154586 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:19Z","lastTransitionTime":"2026-01-26T12:36:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:19 crc kubenswrapper[4881]: I0126 12:36:19.173078 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:19Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:19 crc kubenswrapper[4881]: I0126 12:36:19.193242 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79282300691fcea19d9fe7ad9c661474c26ba5e445c1a4f009961641bf59fb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b2921f0cb7ad2096499d1c6c0e5f8f37da3620c8468c9fab08b57b4545e5dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:19Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:19 crc kubenswrapper[4881]: I0126 12:36:19.210943 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02fca4ac1029e236fa1f5c8b73ec4cf6f627aba503ba27d28dd2566e2f9d332a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c6n6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e51c1ddc00bfd73fde58f6b2e82757a924047ef05aa76b74d8fcb2de4156ad9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c6n6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fwlbz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:19Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:19 crc kubenswrapper[4881]: I0126 12:36:19.233889 4881 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7m2c5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"939b570a-38ce-49f8-8518-1ab500c4e449\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5367bd65eab3f1d86fce660f0126a0b2e6d4f1f7f803f3bfd38a3293bdff7267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:36:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwjlp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d4049c90948b57aad819839a207210919dce46514ca41bf1afdaed9b04a409a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:36:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwjlp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:36:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7m2c5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:19Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:19 crc kubenswrapper[4881]: I0126 12:36:19.257955 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:19 crc kubenswrapper[4881]: I0126 12:36:19.258011 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:19 crc kubenswrapper[4881]: I0126 12:36:19.258027 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:19 crc kubenswrapper[4881]: I0126 12:36:19.258055 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:19 crc kubenswrapper[4881]: I0126 12:36:19.258078 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:19Z","lastTransitionTime":"2026-01-26T12:36:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:19 crc kubenswrapper[4881]: I0126 12:36:19.259785 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8116d483190921916638bd630fb718840fdc36eb9522aecb7ef0e9bfebe9bb7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not 
yet valid: current time 2026-01-26T12:36:19Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:19 crc kubenswrapper[4881]: I0126 12:36:19.280317 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:19Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:19 crc kubenswrapper[4881]: I0126 12:36:19.303366 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee20042b-95d1-49f2-abea-e339efdb096f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce7427a811572a8f3810d59e072888d97a8f15e2c493297a25418b85e1f9f16d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86d50d955d4fa8b487f6900f78738321e21226e13b7b12c100f0dd029de2fde1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8138683a9408b1a7542f9f29a3fd0b83a06f10b000c685d9d83b72b23ba4c55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94d3f2c93382cde3d932117276753f004c58e6e52bce611119b79d9e76eb7654\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:19Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:19 crc kubenswrapper[4881]: I0126 12:36:19.331404 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b6f7953-daf2-4ba4-8c10-7de3f4ea07a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4f54285b7646c2fa5da34439d2812a5669f168356446852b9bff249107a7c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79ddc5e5c3c4b0c18219fe6a8297ff2b1646b9621823f529972781f915d81c5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07
b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://533f45172a004ad2bd598849b711b38e823485188ff4163dd0dbc5ae5a28cb24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f7c8799e63e6072f9dc0580f3efbfe330a9f705a689b2785c60f64391881b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c525c27678ba77145459086b6d3d2fee5d471c7393f34215e9ae46d9d72cf4e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://667695541b7d707a83aa0080723518aeae689b140791f0121a4ce6d485f39138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\
\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://667695541b7d707a83aa0080723518aeae689b140791f0121a4ce6d485f39138\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab83101bb56dc3eaf3a0514f60da45b0c5304a491184fe71584d7ed72bc6e59c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab83101bb56dc3eaf3a0514f60da45b0c5304a491184fe71584d7ed72bc6e59c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e52793db25051b2c8eec3dbeaf0547ec04ec438ce426f3dfcd2383792f846644\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e52793db25051b2c8eec3dbeaf0547ec04ec438ce426f3dfcd2383792f846644\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:19Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:19 crc kubenswrapper[4881]: I0126 12:36:19.347838 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9aa1877-c239-4157-938d-e5c85ff3e76a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e461100b4aeab2a6e88c5c3aeb0802664d0d647247bc73b107be7c6bc4cfc9cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cbb88ac26b9c279bef436acf1a1d1e431e752f2a1ee765a7541082fa8f6c99e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff94d70ea9ad07ebc50c17d2a14b318e9b553bf1ded1e15429e9b318903d4d66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47b841bd998f47d2aa05f556a43ac6f14f630207aff7305cd0440f0b0642cc49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1728a0431ccafdf8aa83b6956348a8a6a41dd94e80f65cb469ef09a86078154\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 12:35:40.710806 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 12:35:40.716794 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1303051152/tls.crt::/tmp/serving-cert-1303051152/tls.key\\\\\\\"\\\\nI0126 12:35:46.328409 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 12:35:46.330270 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 12:35:46.330287 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 12:35:46.330308 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 12:35:46.330314 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 12:35:46.335670 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 12:35:46.335689 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 12:35:46.335693 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 12:35:46.335697 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 12:35:46.335700 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 12:35:46.335702 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 12:35:46.335705 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 12:35:46.335779 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 12:35:46.339578 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3de2d8eff73741321cce91c7bfdcab04498cfdb4694e713750466bb2d8b95e5d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f09815b7bc1e9f3afdee67bb736d35ea1c23567f631bc5fd4fcab013ab191647\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f09815b7bc1e9f3afdee67bb736d35ea1c23567f631bc5fd4fcab013ab191647\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:19Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:19 crc kubenswrapper[4881]: I0126 12:36:19.360420 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:19 crc kubenswrapper[4881]: I0126 12:36:19.360459 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:19 crc kubenswrapper[4881]: I0126 12:36:19.360470 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:19 crc kubenswrapper[4881]: I0126 12:36:19.360486 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:19 crc kubenswrapper[4881]: I0126 12:36:19.360496 4881 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:19Z","lastTransitionTime":"2026-01-26T12:36:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:19 crc kubenswrapper[4881]: I0126 12:36:19.364590 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-csrkv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d24cc7d2-c2db-45ee-b405-fa56157f807c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e19b81750777bd4a9bb8f7a14d883f0ceb5aa0fc4f1a0798c98616a7fc0cef81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube
rnetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x7bhq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-csrkv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:19Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:19 crc kubenswrapper[4881]: I0126 12:36:19.385800 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d272c950-9665-4b60-98a2-20c18d02d5a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://525d6dcbf02a6ff1670c1cfff7ea9769a3e6e99d4cbc9f67b4ab9a4838f85355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54beaa25a2174c9f18bf5be7fba4c314e374e2ea05eaee1c5b62dfd7351138f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11ae8e0538c02d437ba2c7a5f53004fb5c5b7bf6c232f75b6bd737d2ec4f6a24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4526fe39791d5692af1a573249bcc2f73f2ede55766ea580867dbb009d247ddf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e87c69a0fa95e071cd3453794764480d9ed943e2d6dbe7d08e03e711be8de2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fd4c549d12c308cae9727fdb35eec136e95b94990bfca2e3c2baf3cf5724ae3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e23aeae9c9b1d09117c59f533d4c28521dd8f16
85ee3b4547bb751fdb7c8631\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e23aeae9c9b1d09117c59f533d4c28521dd8f1685ee3b4547bb751fdb7c8631\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T12:36:04Z\\\",\\\"message\\\":\\\"bz\\\\nI0126 12:36:03.243192 6326 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-fwlbz in node crc\\\\nI0126 12:36:03.243201 6326 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-fwlbz after 0 failed attempt(s)\\\\nI0126 12:36:03.243195 6326 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 12:36:03.243228 6326 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0126 12:36:03.243261 6326 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0126 12:36:03.243316 6326 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T12:36:02Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-kbjm9_openshift-ovn-kubernetes(d272c950-9665-4b60-98a2-20c18d02d5a2)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9aaca9d54edc83ad154b9cc876c0f2990f7949b4966e052314e947d0cbaf2f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0995c1855fbad9dc3799727d7456b17ef9cd900ed1f4241bde85718099ede875\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0995c1855fbad9dc3799727d7456b17ef9cd900ed1f4241bde85718099ede875\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kbjm9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:19Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:19 crc kubenswrapper[4881]: I0126 12:36:19.399158 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5zct6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"640554c2-37e2-425f-b182-aa9b9d6fa4d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl97t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl97t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:36:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5zct6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:19Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:19 crc kubenswrapper[4881]: I0126 12:36:19.413804 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f87073bcd7b2be8285c037c00f3112e1f1c6ed16cd4caf9a06c1a18d7e07c33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:19Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:19 crc kubenswrapper[4881]: I0126 12:36:19.429722 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-kbjm9_d272c950-9665-4b60-98a2-20c18d02d5a2/ovnkube-controller/1.log" Jan 26 12:36:19 crc kubenswrapper[4881]: I0126 12:36:19.435381 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" event={"ID":"d272c950-9665-4b60-98a2-20c18d02d5a2","Type":"ContainerStarted","Data":"e8de259073576fa26bb1c649ceca0c0716e69ee10b38051b1ffe0b7aa095bc13"} Jan 26 12:36:19 crc kubenswrapper[4881]: I0126 12:36:19.436798 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" Jan 26 12:36:19 crc kubenswrapper[4881]: I0126 12:36:19.462693 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:19 crc kubenswrapper[4881]: I0126 12:36:19.462740 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:19 crc kubenswrapper[4881]: I0126 12:36:19.462758 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:19 crc kubenswrapper[4881]: I0126 12:36:19.462785 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:19 crc kubenswrapper[4881]: I0126 
12:36:19.462801 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:19Z","lastTransitionTime":"2026-01-26T12:36:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:19 crc kubenswrapper[4881]: I0126 12:36:19.467936 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b6f7953-daf2-4ba4-8c10-7de3f4ea07a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4f54285b7646c2fa5da34439d2812a5669f168356446852b9bff249107a7c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79ddc5e5c3c4b0c18219fe6a8297ff2b1646b9621823f529972781f915d81c5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://533f45172a004ad2bd598849b711b38e823485188ff4163dd0dbc5ae5a28cb24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f4
2928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f7c8799e63e6072f9dc0580f3efbfe330a9f705a689b2785c60f64391881b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c525c27678ba77145459086b6d3d2fee5d471c7393f34215e9ae46d9d72cf4e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://667695541b7d707a83aa0080723518aeae689b140791f0121a4ce6d485f39138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://667695541b7d707a83aa0080723518aeae689b140791f0121a4ce6d485f39138\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab83101bb56dc3eaf3a0514f60da45b0c5304a491184fe71584d7ed72bc6e59c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab83101bb56dc3eaf3a0514f60da45b0c5304a491184fe71584d7ed72bc6e59c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e52793db25051b2c8eec3dbeaf0547ec04ec438ce426f3dfcd2383792f846644\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e52793db25051b2c8eec3dbeaf0547ec04ec438ce426f3dfcd2383792f846644\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:19Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:19 crc kubenswrapper[4881]: I0126 12:36:19.492419 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9aa1877-c239-4157-938d-e5c85ff3e76a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e461100b4aeab2a6e88c5c3aeb0802664d0d647247bc73b107be7c6bc4cfc9cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cbb88ac26b9c279bef436acf1a1d1e431e752f2a1ee765a7541082fa8f6c99e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff94d70ea9ad07ebc50c17d2a14b318e9b553bf1ded1e15429e9b318903d4d66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47b841bd998f47d2aa05f556a43ac6f14f630207aff7305cd0440f0b0642cc49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1728a0431ccafdf8aa83b6956348a8a6a41dd94e80f65cb469ef09a86078154\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 12:35:40.710806 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 12:35:40.716794 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1303051152/tls.crt::/tmp/serving-cert-1303051152/tls.key\\\\\\\"\\\\nI0126 12:35:46.328409 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 12:35:46.330270 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 12:35:46.330287 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 12:35:46.330308 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 12:35:46.330314 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 12:35:46.335670 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 12:35:46.335689 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 12:35:46.335693 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 12:35:46.335697 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 12:35:46.335700 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 12:35:46.335702 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 12:35:46.335705 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 12:35:46.335779 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 12:35:46.339578 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3de2d8eff73741321cce91c7bfdcab04498cfdb4694e713750466bb2d8b95e5d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f09815b7bc1e9f3afdee67bb736d35ea1c23567f631bc5fd4fcab013ab191647\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f09815b7bc1e9f3afdee67bb736d35ea1c23567f631bc5fd4fcab013ab191647\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:19Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:19 crc kubenswrapper[4881]: I0126 12:36:19.512569 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8116d483190921916638bd630fb718840fdc36eb9522aecb7ef0e9bfebe9bb7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:19Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:19 crc kubenswrapper[4881]: I0126 12:36:19.530555 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:19Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:19 crc kubenswrapper[4881]: I0126 12:36:19.555155 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee20042b-95d1-49f2-abea-e339efdb096f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce7427a811572a8f3810d59e072888d97a8f15e2c493297a25418b85e1f9f16d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86d50d955d4fa8b487f6900f78738321e21226e13b7b12c100f0dd029de2fde1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-re
sources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8138683a9408b1a7542f9f29a3fd0b83a06f10b000c685d9d83b72b23ba4c55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94d3f2c93382cde3d932117276753f004c58e6e52bce611119b79d9e76eb7654\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:19Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:19 crc kubenswrapper[4881]: I0126 12:36:19.577912 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f87073bcd7b2be8285c037c00f3112e1f1c6ed16cd4caf9a06c1a18d7e07c33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:19Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:19 crc kubenswrapper[4881]: I0126 12:36:19.593139 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:19 crc kubenswrapper[4881]: I0126 12:36:19.593177 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:19 crc kubenswrapper[4881]: I0126 12:36:19.593188 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:19 crc kubenswrapper[4881]: I0126 12:36:19.593205 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:19 crc kubenswrapper[4881]: I0126 12:36:19.593218 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:19Z","lastTransitionTime":"2026-01-26T12:36:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:19 crc kubenswrapper[4881]: I0126 12:36:19.600459 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-csrkv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d24cc7d2-c2db-45ee-b405-fa56157f807c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e19b81750777bd4a9bb8f7a14d883f0ceb5aa0fc4f1a0798c98616a7fc0cef81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x7bhq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-csrkv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:19Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:19 crc kubenswrapper[4881]: I0126 12:36:19.625198 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d272c950-9665-4b60-98a2-20c18d02d5a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://525d6dcbf02a6ff1670c1cfff7ea9769a3e6e99d4cbc9f67b4ab9a4838f85355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54beaa25a2174c9f18bf5be7fba4c314e374e2ea05eaee1c5b62dfd7351138f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mou
ntPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11ae8e0538c02d437ba2c7a5f53004fb5c5b7bf6c232f75b6bd737d2ec4f6a24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4526fe39791d5692af1a573249bcc2f73f2ede55766ea580867dbb009d247ddf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e87c69a0fa95e071cd3453794764480d9ed943e2d6dbe7d08e03e711be8de2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\
\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fd4c549d12c308cae9727fdb35eec136e95b94990bfca2e3c2baf3cf5724ae3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8de259073576fa26bb1c649ceca0c0716e69ee10b38051b1ffe0b7aa095bc13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e23aeae9c9b1d09117c59f533d4c28521dd8f1685ee3b4547bb751fdb7c8631\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T12:36:04Z\\\",\\\"message\\\":\\\"bz\\\\nI0126 12:36:03.243192 6326 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-fwlbz in node crc\\\\nI0126 12:36:03.243201 6326 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-fwlbz after 0 failed attempt(s)\\\\nI0126 12:36:03.243195 6326 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 12:36:03.243228 6326 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0126 12:36:03.243261 6326 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e 
Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0126 12:36:03.243316 6326 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T12:36:02Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:36:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9aaca9d54edc83ad154b9cc876c0f2990f7949b4966e052314e947d0cbaf2f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\
"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0995c1855fbad9dc3799727d7456b17ef9cd900ed1f4241bde85718099ede875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0995c1855fbad9dc3799727d7456b17ef9cd900ed1f4241bde85718099ede875\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kbjm9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:19Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:19 crc kubenswrapper[4881]: I0126 12:36:19.635738 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5zct6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"640554c2-37e2-425f-b182-aa9b9d6fa4d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl97t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl97t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:36:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5zct6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:19Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:19 crc kubenswrapper[4881]: I0126 12:36:19.655689 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:19Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:19 crc kubenswrapper[4881]: I0126 12:36:19.669290 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79282300691fcea19d9fe7ad9c661474c26ba5e445c1a4f009961641bf59fb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b2921f0cb7ad2096499d1c6c0e5f8f37da3620c8468c9fab08b57b4545e5dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:19Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:19 crc kubenswrapper[4881]: I0126 12:36:19.681428 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f4b5v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"017c2a17-2267-4284-a0ca-d3c513aa9ff9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cea6c5f8bd80cf271df551539957d4c3e60378ff13dc806b3142e8e1453ab95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc482\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f4b5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:19Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:19 crc kubenswrapper[4881]: I0126 12:36:19.695365 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:19 crc kubenswrapper[4881]: I0126 
12:36:19.695395 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:19 crc kubenswrapper[4881]: I0126 12:36:19.695404 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:19 crc kubenswrapper[4881]: I0126 12:36:19.695417 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:19 crc kubenswrapper[4881]: I0126 12:36:19.695426 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:19Z","lastTransitionTime":"2026-01-26T12:36:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:19 crc kubenswrapper[4881]: I0126 12:36:19.706780 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pmwpn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb5ecb63-1238-44dc-9c40-b5e5dd7d4847\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://428d01799872a00b628c81145d5fbeac8707bad0afa7563e16bc44686fd7c18d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b54e7d904bec7e9817b662ea1c59ede6589fde79754520dfb1e9fb82864e623d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b54e7d904bec7e9817b662ea1c59ede6589fde79754520dfb1e9fb82864e623d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e58ff1e13a164f50c9664f463b50cbf651ce8e5edac8cbff9a9e42626d8d39ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e58ff1e13a164f50c9664f463b50cbf651ce8e5edac8cbff9a9e42626d8d39ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fca59d7df432a78a62142cd5e47d6a1a62b2c23ff7eec49d6297aacf30b530a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fca59d7df432a78a62142cd5e47d6a1a62b2c23ff7eec49d6297aacf30b530a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d5b95e27428d9ca0382054957b1cc19e96fe8bb5622af5f7ebebebec53d2add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d5b95e27428d9ca0382054957b1cc19e96fe8bb5622af5f7ebebebec53d2add\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45799bb6d037f31b82596651214f8bdf9c13346f1d9953b965a40b55d2588edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45799bb6d037f31b82596651214f8bdf9c13346f1d9953b965a40b55d2588edd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10bacfe34f01c5de9439ac6023539e6489ec63175e54099942c2fb2fe4250bd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10bacfe34f01c5de9439ac6023539e6489ec63175e54099942c2fb2fe4250bd6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/
net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pmwpn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:19Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:19 crc kubenswrapper[4881]: I0126 12:36:19.721475 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tvrtr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"62ba5262-b5d8-4c86-85db-0993c88afc38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d6070117a94d139d5fc78fee4b38e1c75537675f27a663dcb2d6c8be285e9b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj9qc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tvrtr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:19Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:19 crc 
kubenswrapper[4881]: I0126 12:36:19.733214 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:19Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:19 crc kubenswrapper[4881]: I0126 12:36:19.744772 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7m2c5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"939b570a-38ce-49f8-8518-1ab500c4e449\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5367bd65eab3f1d86fce660f0126a0b2e6d4f1f7f803f3bfd38a3293bdff7267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:36:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwjlp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d4049c90948b57aad819839a207210919dce46514ca41bf1afdaed9b04a409a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:36:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwjlp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:36:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7m2c5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:19Z is after 2025-08-24T17:21:41Z" Jan 26 
12:36:19 crc kubenswrapper[4881]: I0126 12:36:19.754772 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02fca4ac1029e236fa1f5c8b73ec4cf6f627aba503ba27d28dd2566e2f9d332a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c6n6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e51c1ddc00bfd73fde58f6b2e82757a924047ef05aa76b74d8fcb2de4156ad9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c6n6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fwlbz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-26T12:36:19Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:19 crc kubenswrapper[4881]: I0126 12:36:19.764841 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:19 crc kubenswrapper[4881]: I0126 12:36:19.764875 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:19 crc kubenswrapper[4881]: I0126 12:36:19.764885 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:19 crc kubenswrapper[4881]: I0126 12:36:19.764899 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:19 crc kubenswrapper[4881]: I0126 12:36:19.764908 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:19Z","lastTransitionTime":"2026-01-26T12:36:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:19 crc kubenswrapper[4881]: E0126 12:36:19.776116 4881 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:36:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:36:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:36:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:36:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"69f11506-d189-4146-9efa-f9280470e789\\\",\\\"systemUUID\\\":\\\"d2cf9e5e-9cbb-41d3-9ac1-608b9dce0d9f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:19Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:19 crc kubenswrapper[4881]: I0126 12:36:19.779423 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:19 crc kubenswrapper[4881]: I0126 12:36:19.779451 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 12:36:19 crc kubenswrapper[4881]: I0126 12:36:19.779459 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:19 crc kubenswrapper[4881]: I0126 12:36:19.779472 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:19 crc kubenswrapper[4881]: I0126 12:36:19.779480 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:19Z","lastTransitionTime":"2026-01-26T12:36:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:19 crc kubenswrapper[4881]: E0126 12:36:19.796348 4881 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:36:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:36:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:36:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:36:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"69f11506-d189-4146-9efa-f9280470e789\\\",\\\"systemUUID\\\":\\\"d2cf9e5e-9cbb-41d3-9ac1-608b9dce0d9f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:19Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:19 crc kubenswrapper[4881]: I0126 12:36:19.800640 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:19 crc kubenswrapper[4881]: I0126 12:36:19.800669 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
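Every status-patch attempt in this stretch fails on the same TLS check: the kubelet posts to the node.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 and the webhook's serving certificate is past its NotAfter date (2025-08-24T17:21:41Z, while the node clock reads 2026-01-26). Below is a minimal Go sketch of that validity check, written for this log rather than taken from kubelet sources: the endpoint address comes from the error records, and InsecureSkipVerify is used only so the handshake survives long enough to read the expired leaf certificate.

```go
// cert_check.go - a minimal sketch (not kubelet code) reproducing the x509
// validity test that fails above: dial the webhook endpoint, read the leaf
// certificate, and compare its NotBefore/NotAfter window with the current time.
package main

import (
	"crypto/tls"
	"fmt"
	"time"
)

func main() {
	addr := "127.0.0.1:9743" // webhook endpoint from the log

	// InsecureSkipVerify so we can inspect an expired certificate instead of
	// failing the handshake the way the kubelet's client did.
	conn, err := tls.Dial("tcp", addr, &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		fmt.Println("dial failed:", err)
		return
	}
	defer conn.Close()

	state := conn.ConnectionState()
	if len(state.PeerCertificates) == 0 {
		fmt.Println("no peer certificate presented")
		return
	}
	leaf := state.PeerCertificates[0]
	now := time.Now()
	fmt.Printf("subject:   %s\n", leaf.Subject)
	fmt.Printf("notBefore: %s\n", leaf.NotBefore.Format(time.RFC3339))
	fmt.Printf("notAfter:  %s\n", leaf.NotAfter.Format(time.RFC3339))
	if now.After(leaf.NotAfter) {
		// Matches the log: "current time ... is after 2025-08-24T17:21:41Z".
		fmt.Printf("EXPIRED: current time %s is after %s\n",
			now.Format(time.RFC3339), leaf.NotAfter.Format(time.RFC3339))
	}
}
```

Run on the node itself, this should print an EXPIRED line matching the x509 error in the records above and below.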
event="NodeHasNoDiskPressure" Jan 26 12:36:19 crc kubenswrapper[4881]: I0126 12:36:19.800678 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:19 crc kubenswrapper[4881]: I0126 12:36:19.800692 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:19 crc kubenswrapper[4881]: I0126 12:36:19.800702 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:19Z","lastTransitionTime":"2026-01-26T12:36:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:19 crc kubenswrapper[4881]: E0126 12:36:19.811241 4881 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:36:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:36:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:36:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:36:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"69f11506-d189-4146-9efa-f9280470e789\\\",\\\"systemUUID\\\":\\\"d2cf9e5e-9cbb-41d3-9ac1-608b9dce0d9f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:19Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:19 crc kubenswrapper[4881]: I0126 12:36:19.814979 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:19 crc kubenswrapper[4881]: I0126 12:36:19.815005 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
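Independently of the webhook failure, every "Node became not ready" record carries the same Ready=False condition: the runtime reports NetworkReady=false because it finds no CNI configuration. A small sketch of the kind of presence check that gates network readiness, assuming only the directory named in the message (/etc/kubernetes/cni/net.d/); the extension filter below is an assumption for illustration, not CRI-O's exact matching logic.

```go
// cni_check.go - a sketch mirroring the readiness test behind
// "no CNI configuration file in /etc/kubernetes/cni/net.d/":
// list the conf directory and report whether any config file is present.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	confDir := "/etc/kubernetes/cni/net.d" // directory from the log message

	entries, err := os.ReadDir(confDir)
	if err != nil {
		fmt.Println("cannot read CNI conf dir:", err)
		return
	}
	var found []string
	for _, e := range entries {
		// Assumed filter: the usual CNI config extensions.
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			found = append(found, e.Name())
		}
	}
	if len(found) == 0 {
		// This is the state the kubelet keeps reporting: NetworkReady=false.
		fmt.Println("no CNI configuration file found; network plugin not ready")
		return
	}
	fmt.Println("CNI configs:", found)
}
```

On this node the directory is evidently empty because the network operator (which writes the OVN-Kubernetes config) has not come up, itself blocked by the same expired-certificate problem.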
event="NodeHasNoDiskPressure" Jan 26 12:36:19 crc kubenswrapper[4881]: I0126 12:36:19.815016 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:19 crc kubenswrapper[4881]: I0126 12:36:19.815031 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:19 crc kubenswrapper[4881]: I0126 12:36:19.815042 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:19Z","lastTransitionTime":"2026-01-26T12:36:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:19 crc kubenswrapper[4881]: E0126 12:36:19.828123 4881 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:36:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:36:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:36:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:36:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"69f11506-d189-4146-9efa-f9280470e789\\\",\\\"systemUUID\\\":\\\"d2cf9e5e-9cbb-41d3-9ac1-608b9dce0d9f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:19Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:19 crc kubenswrapper[4881]: I0126 12:36:19.831378 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:19 crc kubenswrapper[4881]: I0126 12:36:19.831432 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 12:36:19 crc kubenswrapper[4881]: I0126 12:36:19.831448 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:19 crc kubenswrapper[4881]: I0126 12:36:19.831472 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:19 crc kubenswrapper[4881]: I0126 12:36:19.831489 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:19Z","lastTransitionTime":"2026-01-26T12:36:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:19 crc kubenswrapper[4881]: E0126 12:36:19.845222 4881 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:36:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:36:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:36:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:36:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"69f11506-d189-4146-9efa-f9280470e789\\\",\\\"systemUUID\\\":\\\"d2cf9e5e-9cbb-41d3-9ac1-608b9dce0d9f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:19Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:19 crc kubenswrapper[4881]: E0126 12:36:19.845469 4881 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 12:36:19 crc kubenswrapper[4881]: I0126 12:36:19.846966 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
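The "Unable to update node status ... exceeds retry count" record closes the loop: the kubelet makes a bounded number of patch attempts per sync round (nodeStatusUpdateRetry, 5 in the upstream kubelet sources) and then gives up until the next interval, which is why exactly five "will retry" errors precede it here. A schematic of that control flow follows; tryUpdateNodeStatus is a stand-in for the real patch call, not the kubelet's implementation.

```go
// retry_sketch.go - a schematic of the bounded retry visible above: several
// "Error updating node status, will retry" records, then "update node status
// exceeds retry count" once the attempt budget is spent.
package main

import (
	"errors"
	"fmt"
)

// nodeStatusUpdateRetry matches the constant in the kubelet sources.
const nodeStatusUpdateRetry = 5

// Stand-in for the kubelet's per-attempt status patch, which in this log
// always fails because the admission webhook's serving certificate expired.
func tryUpdateNodeStatus() error {
	return errors.New("failed calling webhook: x509: certificate has expired or is not yet valid")
}

func updateNodeStatus() error {
	for i := 0; i < nodeStatusUpdateRetry; i++ {
		if err := tryUpdateNodeStatus(); err != nil {
			fmt.Printf("Error updating node status, will retry: %v\n", err)
			continue
		}
		return nil
	}
	return fmt.Errorf("update node status exceeds retry count")
}

func main() {
	if err := updateNodeStatus(); err != nil {
		// Mirrors the E-record at kubelet_node_status.go:572 above.
		fmt.Println("Unable to update node status:", err)
	}
}
```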
event="NodeHasSufficientMemory" Jan 26 12:36:19 crc kubenswrapper[4881]: I0126 12:36:19.847006 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:19 crc kubenswrapper[4881]: I0126 12:36:19.847023 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:19 crc kubenswrapper[4881]: I0126 12:36:19.847045 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:19 crc kubenswrapper[4881]: I0126 12:36:19.847062 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:19Z","lastTransitionTime":"2026-01-26T12:36:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:19 crc kubenswrapper[4881]: I0126 12:36:19.949865 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:19 crc kubenswrapper[4881]: I0126 12:36:19.949910 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:19 crc kubenswrapper[4881]: I0126 12:36:19.949930 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:19 crc kubenswrapper[4881]: I0126 12:36:19.949959 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:19 crc kubenswrapper[4881]: I0126 12:36:19.949982 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:19Z","lastTransitionTime":"2026-01-26T12:36:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:20 crc kubenswrapper[4881]: I0126 12:36:20.048974 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 06:03:48.135077979 +0000 UTC Jan 26 12:36:20 crc kubenswrapper[4881]: I0126 12:36:20.053113 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:20 crc kubenswrapper[4881]: I0126 12:36:20.053180 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:20 crc kubenswrapper[4881]: I0126 12:36:20.053225 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:20 crc kubenswrapper[4881]: I0126 12:36:20.053251 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:20 crc kubenswrapper[4881]: I0126 12:36:20.053269 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:20Z","lastTransitionTime":"2026-01-26T12:36:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:20 crc kubenswrapper[4881]: I0126 12:36:20.081850 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:36:20 crc kubenswrapper[4881]: I0126 12:36:20.081910 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 12:36:20 crc kubenswrapper[4881]: I0126 12:36:20.081991 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 12:36:20 crc kubenswrapper[4881]: E0126 12:36:20.082192 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 12:36:20 crc kubenswrapper[4881]: E0126 12:36:20.082310 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 12:36:20 crc kubenswrapper[4881]: E0126 12:36:20.082425 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 12:36:20 crc kubenswrapper[4881]: I0126 12:36:20.156227 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:20 crc kubenswrapper[4881]: I0126 12:36:20.156659 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:20 crc kubenswrapper[4881]: I0126 12:36:20.156888 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:20 crc kubenswrapper[4881]: I0126 12:36:20.157032 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:20 crc kubenswrapper[4881]: I0126 12:36:20.157214 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:20Z","lastTransitionTime":"2026-01-26T12:36:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:20 crc kubenswrapper[4881]: I0126 12:36:20.260422 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:20 crc kubenswrapper[4881]: I0126 12:36:20.260822 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:20 crc kubenswrapper[4881]: I0126 12:36:20.260981 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:20 crc kubenswrapper[4881]: I0126 12:36:20.261153 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:20 crc kubenswrapper[4881]: I0126 12:36:20.261328 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:20Z","lastTransitionTime":"2026-01-26T12:36:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:20 crc kubenswrapper[4881]: I0126 12:36:20.364343 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:20 crc kubenswrapper[4881]: I0126 12:36:20.364420 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:20 crc kubenswrapper[4881]: I0126 12:36:20.364442 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:20 crc kubenswrapper[4881]: I0126 12:36:20.364473 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:20 crc kubenswrapper[4881]: I0126 12:36:20.364496 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:20Z","lastTransitionTime":"2026-01-26T12:36:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:20 crc kubenswrapper[4881]: I0126 12:36:20.440449 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-kbjm9_d272c950-9665-4b60-98a2-20c18d02d5a2/ovnkube-controller/2.log" Jan 26 12:36:20 crc kubenswrapper[4881]: I0126 12:36:20.441165 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-kbjm9_d272c950-9665-4b60-98a2-20c18d02d5a2/ovnkube-controller/1.log" Jan 26 12:36:20 crc kubenswrapper[4881]: I0126 12:36:20.443708 4881 generic.go:334] "Generic (PLEG): container finished" podID="d272c950-9665-4b60-98a2-20c18d02d5a2" containerID="e8de259073576fa26bb1c649ceca0c0716e69ee10b38051b1ffe0b7aa095bc13" exitCode=1 Jan 26 12:36:20 crc kubenswrapper[4881]: I0126 12:36:20.443751 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" event={"ID":"d272c950-9665-4b60-98a2-20c18d02d5a2","Type":"ContainerDied","Data":"e8de259073576fa26bb1c649ceca0c0716e69ee10b38051b1ffe0b7aa095bc13"} Jan 26 12:36:20 crc kubenswrapper[4881]: I0126 12:36:20.443784 4881 scope.go:117] "RemoveContainer" containerID="4e23aeae9c9b1d09117c59f533d4c28521dd8f1685ee3b4547bb751fdb7c8631" Jan 26 12:36:20 crc kubenswrapper[4881]: I0126 12:36:20.444486 4881 scope.go:117] "RemoveContainer" containerID="e8de259073576fa26bb1c649ceca0c0716e69ee10b38051b1ffe0b7aa095bc13" Jan 26 12:36:20 crc kubenswrapper[4881]: E0126 12:36:20.444657 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-kbjm9_openshift-ovn-kubernetes(d272c950-9665-4b60-98a2-20c18d02d5a2)\"" pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" podUID="d272c950-9665-4b60-98a2-20c18d02d5a2" Jan 26 12:36:20 crc kubenswrapper[4881]: I0126 12:36:20.462828 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79282300691fcea19d9fe7ad9c661474c26ba5e445c1a4f009961641bf59fb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b2921f0cb7ad2096499d1c6c0e5f8f37da3620c8468c9fab08b57b4545e5dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:20Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:20 crc kubenswrapper[4881]: I0126 12:36:20.467315 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:20 crc kubenswrapper[4881]: I0126 12:36:20.467357 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:20 crc kubenswrapper[4881]: I0126 12:36:20.467371 4881 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 26 12:36:20 crc kubenswrapper[4881]: I0126 12:36:20.467388 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:20 crc kubenswrapper[4881]: I0126 12:36:20.467400 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:20Z","lastTransitionTime":"2026-01-26T12:36:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:20 crc kubenswrapper[4881]: I0126 12:36:20.479160 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f4b5v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"017c2a17-2267-4284-a0ca-d3c513aa9ff9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cea6c5f8bd80cf271df551539957d4c3e60378ff13dc806b3142e8e1453ab95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc482\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f4b5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:20Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:20 crc kubenswrapper[4881]: I0126 12:36:20.499593 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pmwpn" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb5ecb63-1238-44dc-9c40-b5e5dd7d4847\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://428d01799872a00b628c81145d5fbeac8707bad0afa7563e16bc44686fd7c18d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b54e7d904bec7e9817b662ea1c59ede6589fde79754520dfb1e9fb82864e623d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b54e7d904bec7e9817b662ea1c59ede6589fde79754520dfb1e9fb82864e623d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e58ff1e13a164f50c9664f463b50cbf651ce8e5edac8cbff9a9e42626d8d39ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:68
7fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e58ff1e13a164f50c9664f463b50cbf651ce8e5edac8cbff9a9e42626d8d39ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fca59d7df432a78a62142cd5e47d6a1a62b2c23ff7eec49d6297aacf30b530a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fca59d7df432a78a62142cd5e47d6a1a62b2c23ff7eec49d6297aacf30b530a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d5b95e27428d9ca0382054957b1cc19e96fe8bb5622af5f7ebebebec53d2add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d5b95e27428d9ca0382054957b1cc19e96fe8bb5622af5f7ebebebec53d2add\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"m
ountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45799bb6d037f31b82596651214f8bdf9c13346f1d9953b965a40b55d2588edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45799bb6d037f31b82596651214f8bdf9c13346f1d9953b965a40b55d2588edd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10bacfe34f01c5de9439ac6023539e6489ec63175e54099942c2fb2fe4250bd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10bacfe34f01c5de9439ac6023539e6489ec63175e54099942c2fb2fe4250bd6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pmwpn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:20Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:20 crc kubenswrapper[4881]: I0126 12:36:20.511767 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tvrtr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"62ba5262-b5d8-4c86-85db-0993c88afc38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d6070117a94d139d5fc78fee4b38e1c75537675f27a663dcb2d6c8be285e9b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj9qc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tvrtr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:20Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:20 crc kubenswrapper[4881]: I0126 12:36:20.529319 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:20Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:20 crc kubenswrapper[4881]: I0126 12:36:20.545820 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:20Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:20 crc kubenswrapper[4881]: I0126 12:36:20.570930 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:20 crc kubenswrapper[4881]: I0126 12:36:20.571017 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:20 crc kubenswrapper[4881]: I0126 12:36:20.571040 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:20 crc kubenswrapper[4881]: I0126 12:36:20.571073 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:20 crc kubenswrapper[4881]: I0126 12:36:20.571096 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:20Z","lastTransitionTime":"2026-01-26T12:36:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:20 crc kubenswrapper[4881]: I0126 12:36:20.575506 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02fca4ac1029e236fa1f5c8b73ec4cf6f627aba503ba27d28dd2566e2f9d332a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c6n6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e51c1ddc00bfd73fde58f6b2e82757a924047ef05aa76b74d8fcb2de4156ad9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c6n6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fwlbz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:20Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:20 crc kubenswrapper[4881]: I0126 12:36:20.589957 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7m2c5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"939b570a-38ce-49f8-8518-1ab500c4e449\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5367bd65eab3f1d86fce660f0126a0b2e6d4f1f7f803f3bfd38a3293bdff7267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:36:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwjlp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d4049c90948b57aad819839a207210919dce46514ca41bf1afdaed9b04a409a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:36:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwjlp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:36:
02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7m2c5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:20Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:20 crc kubenswrapper[4881]: I0126 12:36:20.605728 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9aa1877-c239-4157-938d-e5c85ff3e76a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e461100b4aeab2a6e88c5c3aeb0802664d0d647247bc73b107be7c6bc4cfc9cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cbb88ac26b9c279bef436acf1a1d1e431e752f2a1ee765a7541082fa8f6c99e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff94d70ea9ad07ebc50c17d2a14b318e9b553bf1ded1e15429e9b318903d4d66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7
462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47b841bd998f47d2aa05f556a43ac6f14f630207aff7305cd0440f0b0642cc49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1728a0431ccafdf8aa83b6956348a8a6a41dd94e80f65cb469ef09a86078154\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 12:35:40.710806 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 12:35:40.716794 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1303051152/tls.crt::/tmp/serving-cert-1303051152/tls.key\\\\\\\"\\\\nI0126 12:35:46.328409 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 12:35:46.330270 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 12:35:46.330287 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 12:35:46.330308 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 12:35:46.330314 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 12:35:46.335670 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 12:35:46.335689 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 12:35:46.335693 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 12:35:46.335697 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 12:35:46.335700 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 12:35:46.335702 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 12:35:46.335705 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 12:35:46.335779 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 12:35:46.339578 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3de2d8eff73741321cce91c7bfdcab04498cfdb4694e713750466bb2d8b95e5d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f09815b7bc1e9f3afdee67bb736d35ea1c23567f631bc5fd4fcab013ab191647\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f09815b7bc1e9f3afdee67bb736d35ea1c23567f631bc5fd4fcab013ab191647\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:20Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:20 crc kubenswrapper[4881]: I0126 12:36:20.622101 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8116d483190921916638bd630fb718840fdc36eb9522aecb7ef0e9bfebe9bb7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:20Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:20 crc kubenswrapper[4881]: I0126 12:36:20.637715 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:20Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:20 crc kubenswrapper[4881]: I0126 12:36:20.654430 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee20042b-95d1-49f2-abea-e339efdb096f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce7427a811572a8f3810d59e072888d97a8f15e2c493297a25418b85e1f9f16d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86d50d955d4fa8b487f6900f78738321e21226e13b7b12c100f0dd029de2fde1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-re
sources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8138683a9408b1a7542f9f29a3fd0b83a06f10b000c685d9d83b72b23ba4c55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94d3f2c93382cde3d932117276753f004c58e6e52bce611119b79d9e76eb7654\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:20Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:20 crc kubenswrapper[4881]: I0126 12:36:20.673928 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:20 crc kubenswrapper[4881]: I0126 12:36:20.673985 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:20 crc kubenswrapper[4881]: I0126 12:36:20.674004 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:20 crc kubenswrapper[4881]: I0126 12:36:20.674028 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:20 crc kubenswrapper[4881]: I0126 12:36:20.674046 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:20Z","lastTransitionTime":"2026-01-26T12:36:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:20 crc kubenswrapper[4881]: I0126 12:36:20.685697 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b6f7953-daf2-4ba4-8c10-7de3f4ea07a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4f54285b7646c2fa5da34439d2812a5669f168356446852b9bff249107a7c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79ddc5e5c3c4b0c18219fe6a8297ff2b1646b9621823f529972781f915d81c5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://533f45172a004ad2bd598849b711b38e823485188ff4163dd0dbc5ae5a28cb24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f7c8799e63e6072f9dc0580f3efbfe330a9f705a689b2785c60f64391881b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c525c27678ba77145459086b6d3d2fee5d471c7393f34215e9ae46d9d72cf4e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://667695541b7d707a83aa0080723518aeae689b140791f0121a4ce6d485f39138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://667695541b7d707a83aa0080723518aeae689b140791f0121a4ce6d485f39138\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab83101bb56dc3eaf3a0514f60da45b0c5304a491184fe71584d7ed72bc6e59c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termi
nated\\\":{\\\"containerID\\\":\\\"cri-o://ab83101bb56dc3eaf3a0514f60da45b0c5304a491184fe71584d7ed72bc6e59c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e52793db25051b2c8eec3dbeaf0547ec04ec438ce426f3dfcd2383792f846644\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e52793db25051b2c8eec3dbeaf0547ec04ec438ce426f3dfcd2383792f846644\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:20Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:20 crc kubenswrapper[4881]: I0126 12:36:20.700756 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f87073bcd7b2be8285c037c00f3112e1f1c6ed16cd4caf9a06c1a18d7e07c33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:20Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:20 crc kubenswrapper[4881]: I0126 12:36:20.716921 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-csrkv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d24cc7d2-c2db-45ee-b405-fa56157f807c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e19b81750777bd4a9bb8f7a14d883f0ceb5aa0fc4f1a0798c98616a7fc0cef81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x7bhq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-csrkv\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:20Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:20 crc kubenswrapper[4881]: I0126 12:36:20.746154 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d272c950-9665-4b60-98a2-20c18d02d5a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://525d6dcbf02a6ff1670c1cfff7ea9769a3e6e99d4cbc9f67b4ab9a4838f85355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54beaa25a2174c9f18bf5be7fba4c314e374e2ea05eaee1c5b62dfd7351138f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11ae8e0538c02d437ba2c7a5f53004fb5c5b7bf6c232f75b6bd737d2ec4f6a24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4526fe39791d5692af1a573249bcc2f73f2ede55766ea580867dbb009d247ddf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e87c69a0fa95e071cd3453794764480d9ed943e2d6dbe7d08e03e711be8de2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fd4c
549d12c308cae9727fdb35eec136e95b94990bfca2e3c2baf3cf5724ae3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8de259073576fa26bb1c649ceca0c0716e69ee10b38051b1ffe0b7aa095bc13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e23aeae9c9b1d09117c59f533d4c28521dd8f1685ee3b4547bb751fdb7c8631\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T12:36:04Z\\\",\\\"message\\\":\\\"bz\\\\nI0126 12:36:03.243192 6326 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-fwlbz in node crc\\\\nI0126 12:36:03.243201 6326 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-fwlbz after 0 failed attempt(s)\\\\nI0126 12:36:03.243195 6326 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 12:36:03.243228 6326 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0126 12:36:03.243261 6326 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0126 12:36:03.243316 6326 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: 
unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T12:36:02Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8de259073576fa26bb1c649ceca0c0716e69ee10b38051b1ffe0b7aa095bc13\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T12:36:20Z\\\",\\\"message\\\":\\\"lace/marketplace-operator-metrics]} name:Service_openshift-marketplace/marketplace-operator-metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.53:8081: 10.217.5.53:8383:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {89fe421e-04e8-4967-ac75-77a0e6f784ef}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 12:36:20.107557 6521 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-controller-manager/controller-manager]} name:Service_openshift-controller-manager/controller-manager_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.149:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {cab7c637-a021-4a4d-a4b9-06d63c44316f}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0126 12:36:20.104835 6521 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T12:36:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9aaca9d54edc83ad154b9cc876c0f2990f7949b4966e052314e947d0cbaf2f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0995c1855fbad9dc3799727d7456b17ef9cd900ed1f4241bde85718099ede875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d
1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0995c1855fbad9dc3799727d7456b17ef9cd900ed1f4241bde85718099ede875\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kbjm9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:20Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:20 crc kubenswrapper[4881]: I0126 12:36:20.766290 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5zct6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"640554c2-37e2-425f-b182-aa9b9d6fa4d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl97t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl97t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:36:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5zct6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:20Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:20 crc kubenswrapper[4881]: I0126 12:36:20.776752 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:20 crc kubenswrapper[4881]: I0126 12:36:20.776812 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:20 crc kubenswrapper[4881]: I0126 12:36:20.776829 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:20 crc kubenswrapper[4881]: I0126 12:36:20.776852 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:20 crc kubenswrapper[4881]: I0126 12:36:20.776868 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:20Z","lastTransitionTime":"2026-01-26T12:36:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:20 crc kubenswrapper[4881]: I0126 12:36:20.880226 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:20 crc kubenswrapper[4881]: I0126 12:36:20.880273 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:20 crc kubenswrapper[4881]: I0126 12:36:20.880286 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:20 crc kubenswrapper[4881]: I0126 12:36:20.880304 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:20 crc kubenswrapper[4881]: I0126 12:36:20.880316 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:20Z","lastTransitionTime":"2026-01-26T12:36:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:20 crc kubenswrapper[4881]: I0126 12:36:20.983988 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:20 crc kubenswrapper[4881]: I0126 12:36:20.984041 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:20 crc kubenswrapper[4881]: I0126 12:36:20.984058 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:20 crc kubenswrapper[4881]: I0126 12:36:20.984081 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:20 crc kubenswrapper[4881]: I0126 12:36:20.984098 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:20Z","lastTransitionTime":"2026-01-26T12:36:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:21 crc kubenswrapper[4881]: I0126 12:36:21.049824 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 20:59:28.8702177 +0000 UTC Jan 26 12:36:21 crc kubenswrapper[4881]: I0126 12:36:21.082356 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5zct6" Jan 26 12:36:21 crc kubenswrapper[4881]: E0126 12:36:21.082552 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5zct6" podUID="640554c2-37e2-425f-b182-aa9b9d6fa4d8" Jan 26 12:36:21 crc kubenswrapper[4881]: I0126 12:36:21.086192 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:21 crc kubenswrapper[4881]: I0126 12:36:21.086271 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:21 crc kubenswrapper[4881]: I0126 12:36:21.086288 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:21 crc kubenswrapper[4881]: I0126 12:36:21.086319 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:21 crc kubenswrapper[4881]: I0126 12:36:21.086341 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:21Z","lastTransitionTime":"2026-01-26T12:36:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:21 crc kubenswrapper[4881]: I0126 12:36:21.188643 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:21 crc kubenswrapper[4881]: I0126 12:36:21.188689 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:21 crc kubenswrapper[4881]: I0126 12:36:21.188706 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:21 crc kubenswrapper[4881]: I0126 12:36:21.188728 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:21 crc kubenswrapper[4881]: I0126 12:36:21.188745 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:21Z","lastTransitionTime":"2026-01-26T12:36:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:21 crc kubenswrapper[4881]: I0126 12:36:21.292759 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:21 crc kubenswrapper[4881]: I0126 12:36:21.292843 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:21 crc kubenswrapper[4881]: I0126 12:36:21.292861 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:21 crc kubenswrapper[4881]: I0126 12:36:21.292885 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:21 crc kubenswrapper[4881]: I0126 12:36:21.292903 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:21Z","lastTransitionTime":"2026-01-26T12:36:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:21 crc kubenswrapper[4881]: I0126 12:36:21.396283 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:21 crc kubenswrapper[4881]: I0126 12:36:21.396342 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:21 crc kubenswrapper[4881]: I0126 12:36:21.396366 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:21 crc kubenswrapper[4881]: I0126 12:36:21.396394 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:21 crc kubenswrapper[4881]: I0126 12:36:21.396412 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:21Z","lastTransitionTime":"2026-01-26T12:36:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:21 crc kubenswrapper[4881]: I0126 12:36:21.421193 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 12:36:21 crc kubenswrapper[4881]: I0126 12:36:21.435162 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 26 12:36:21 crc kubenswrapper[4881]: I0126 12:36:21.444041 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:21Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:21 crc kubenswrapper[4881]: I0126 12:36:21.448950 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-kbjm9_d272c950-9665-4b60-98a2-20c18d02d5a2/ovnkube-controller/2.log" Jan 26 12:36:21 crc kubenswrapper[4881]: I0126 12:36:21.453956 4881 scope.go:117] "RemoveContainer" containerID="e8de259073576fa26bb1c649ceca0c0716e69ee10b38051b1ffe0b7aa095bc13" Jan 26 12:36:21 crc kubenswrapper[4881]: E0126 12:36:21.454287 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-kbjm9_openshift-ovn-kubernetes(d272c950-9665-4b60-98a2-20c18d02d5a2)\"" pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" podUID="d272c950-9665-4b60-98a2-20c18d02d5a2" Jan 26 12:36:21 crc kubenswrapper[4881]: I0126 12:36:21.461111 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79282300691fcea19d9fe7ad9c661474c26ba5e445c1a4f009961641bf59fb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b2921f0cb7ad2096499d1c6c0e5f8f37da3620c8468c9fab08b57b4545e5dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:21Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:21 crc kubenswrapper[4881]: I0126 12:36:21.476058 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f4b5v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"017c2a17-2267-4284-a0ca-d3c513aa9ff9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cea6c5f8bd80cf271df551539957d4c3e60378ff13dc806b3142e8e1453ab95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc482\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f4b5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:21Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:21 crc kubenswrapper[4881]: I0126 12:36:21.495939 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pmwpn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb5ecb63-1238-44dc-9c40-b5e5dd7d4847\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://428d01799872a00b628c81145d5fbeac8707bad0afa7563e16bc44686fd7c18d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b54e7d904bec7e9817b662ea1c59ede6589fde79754520dfb1e9fb82864e623d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b54e7d904bec7e9817b662ea1c59ede6589fde79754520dfb1e9fb82864e623d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e58ff1e13a164f50c9664f463b50cbf651ce8e5edac8cbff9a9e42626d8d39ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e58ff1e13a164f50c9664f463b50cbf651ce8e5edac8cbff9a9e42626d8d39ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fca59d7df432a78a62142cd5e47d6a1a62b2c23ff7eec49d6297aacf30b530a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fca59d7df432a78a62142cd5e47d6a1a62b2c23ff7eec49d6297aacf30b530a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d5b95e27428d9ca0382054957b1cc19e96fe8bb5622af5f7ebebebec53d2add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d5b95e27428d9ca0382054957b1cc19e96fe8bb5622af5f7ebebebec53d2add\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45799bb6d037f31b82596651214f8bdf9c13346f1d9953b965a40b55d2588edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45799bb6d037f31b82596651214f8bdf9c13346f1d9953b965a40b55d2588edd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10bacfe34f01c5de9439ac6023539e6489ec63175e54099942c2fb2fe4250bd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10bacfe34f01c5de9439ac6023539e6489ec63175e54099942c2fb2fe4250bd6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pmwpn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:21Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:21 crc kubenswrapper[4881]: I0126 12:36:21.499200 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:21 crc kubenswrapper[4881]: I0126 12:36:21.499251 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:21 crc 
kubenswrapper[4881]: I0126 12:36:21.499269 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:21 crc kubenswrapper[4881]: I0126 12:36:21.499293 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:21 crc kubenswrapper[4881]: I0126 12:36:21.499310 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:21Z","lastTransitionTime":"2026-01-26T12:36:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:21 crc kubenswrapper[4881]: I0126 12:36:21.510677 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tvrtr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"62ba5262-b5d8-4c86-85db-0993c88afc38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d6070117a94d139d5fc78fee4b38e1c75537675f27a663dcb2d6c8be285e9b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj9qc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tvrtr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:21Z is after 2025-08-24T17:21:41Z" Jan 
26 12:36:21 crc kubenswrapper[4881]: I0126 12:36:21.526181 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:21Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:21 crc kubenswrapper[4881]: I0126 12:36:21.542232 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7m2c5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"939b570a-38ce-49f8-8518-1ab500c4e449\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5367bd65eab3f1d86fce660f0126a0b2e6d4f1f7f803f3bfd38a3293bdff7267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:36:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwjlp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d4049c90948b57aad819839a207210919dce46514ca41bf1afdaed9b04a409a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:36:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwjlp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:36:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7m2c5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:21Z is after 2025-08-24T17:21:41Z" Jan 26 
12:36:21 crc kubenswrapper[4881]: I0126 12:36:21.557824 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02fca4ac1029e236fa1f5c8b73ec4cf6f627aba503ba27d28dd2566e2f9d332a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c6n6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e51c1ddc00bfd73fde58f6b2e82757a924047ef05aa76b74d8fcb2de4156ad9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c6n6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fwlbz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-26T12:36:21Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:21 crc kubenswrapper[4881]: I0126 12:36:21.583674 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b6f7953-daf2-4ba4-8c10-7de3f4ea07a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4f54285b7646c2fa5da34439d2812a5669f168356446852b9bff249107a7c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79ddc5e5c3c4b0c18219fe6a8297ff2b1646b9621823f529972781f915d81c5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://533f45172a004ad2bd598849b711b38e823485188ff4163dd0dbc5ae5a28cb24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\
\":\\\"2026-01-26T12:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f7c8799e63e6072f9dc0580f3efbfe330a9f705a689b2785c60f64391881b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c525c27678ba77145459086b6d3d2fee5d471c7393f34215e9ae46d9d72cf4e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://667695541b7d707a83aa0080723518aeae689b140791f0121a4ce6d485f39138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://667695541b7d707a83aa0080723518aeae689b140791f0121a4ce6d485f39138\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab83101bb56dc3eaf3a0514f60da45b0c5304a491184fe71584d7ed72bc6e59c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab83101bb56dc3eaf3a0514f60d
a45b0c5304a491184fe71584d7ed72bc6e59c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e52793db25051b2c8eec3dbeaf0547ec04ec438ce426f3dfcd2383792f846644\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e52793db25051b2c8eec3dbeaf0547ec04ec438ce426f3dfcd2383792f846644\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:21Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:21 crc kubenswrapper[4881]: I0126 12:36:21.602146 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:21 crc kubenswrapper[4881]: I0126 12:36:21.602233 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:21 crc kubenswrapper[4881]: I0126 12:36:21.602253 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:21 crc kubenswrapper[4881]: I0126 12:36:21.602285 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:21 crc kubenswrapper[4881]: I0126 12:36:21.602307 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:21Z","lastTransitionTime":"2026-01-26T12:36:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:21 crc kubenswrapper[4881]: I0126 12:36:21.603360 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9aa1877-c239-4157-938d-e5c85ff3e76a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e461100b4aeab2a6e88c5c3aeb0802664d0d647247bc73b107be7c6bc4cfc9cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cbb88ac26b9c279bef436acf1a1d1e431e752f2a1ee765a7541082fa8f6c99e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff94d70ea9ad07ebc50c17d2a14b318e9b553bf1ded1e15429e9b318903d4d66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47b841bd998f47d2aa05f556a43ac6f14f630207aff7305cd0440f0b0642cc49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1728a0431ccafdf8aa83b6956348a8a6a41dd94e80f65cb469ef09a86078154\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 12:35:40.710806 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 12:35:40.716794 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1303051152/tls.crt::/tmp/serving-cert-1303051152/tls.key\\\\\\\"\\\\nI0126 12:35:46.328409 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 12:35:46.330270 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 12:35:46.330287 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 12:35:46.330308 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 12:35:46.330314 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 12:35:46.335670 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 12:35:46.335689 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 12:35:46.335693 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 12:35:46.335697 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 12:35:46.335700 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 12:35:46.335702 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 12:35:46.335705 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 12:35:46.335779 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 12:35:46.339578 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3de2d8eff73741321cce91c7bfdcab04498cfdb4694e713750466bb2d8b95e5d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f09815b7bc1e9f3afdee67bb736d35ea1c23567f631bc5fd4fcab013ab191647\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f09815b7bc1e9f3afdee67bb736d35ea1c23567f631bc5fd4fcab013ab191647\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:21Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:21 crc kubenswrapper[4881]: I0126 12:36:21.619437 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8116d483190921916638bd630fb718840fdc36eb9522aecb7ef0e9bfebe9bb7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:21Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:21 crc kubenswrapper[4881]: I0126 12:36:21.639397 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:21Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:21 crc kubenswrapper[4881]: I0126 12:36:21.657384 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee20042b-95d1-49f2-abea-e339efdb096f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce7427a811572a8f3810d59e072888d97a8f15e2c493297a25418b85e1f9f16d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86d50d955d4fa8b487f6900f78738321e21226e13b7b12c100f0dd029de2fde1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-re
sources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8138683a9408b1a7542f9f29a3fd0b83a06f10b000c685d9d83b72b23ba4c55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94d3f2c93382cde3d932117276753f004c58e6e52bce611119b79d9e76eb7654\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:21Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:21 crc kubenswrapper[4881]: I0126 12:36:21.673802 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f87073bcd7b2be8285c037c00f3112e1f1c6ed16cd4caf9a06c1a18d7e07c33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:21Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:21 crc kubenswrapper[4881]: I0126 12:36:21.695672 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-csrkv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d24cc7d2-c2db-45ee-b405-fa56157f807c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e19b81750777bd4a9bb8f7a14d883f0ceb5aa0fc4f1a0798c98616a7fc0cef81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x7bhq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-csrkv\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:21Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:21 crc kubenswrapper[4881]: I0126 12:36:21.704963 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:21 crc kubenswrapper[4881]: I0126 12:36:21.705121 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:21 crc kubenswrapper[4881]: I0126 12:36:21.705152 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:21 crc kubenswrapper[4881]: I0126 12:36:21.705183 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:21 crc kubenswrapper[4881]: I0126 12:36:21.705204 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:21Z","lastTransitionTime":"2026-01-26T12:36:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:21 crc kubenswrapper[4881]: I0126 12:36:21.729128 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d272c950-9665-4b60-98a2-20c18d02d5a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://525d6dcbf02a6ff1670c1cfff7ea9769a3e6e99d4cbc9f67b4ab9a4838f85355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54beaa25a2174c9f18bf5be7fba4c314e374e2ea05eaee1c5b62dfd7351138f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11ae8e0538c02d437ba2c7a5f53004fb5c5b7bf6c232f75b6bd737d2ec4f6a24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4526fe39791d5692af1a573249bcc2f73f2ede55766ea580867dbb009d247ddf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e87c69a0fa95e071cd3453794764480d9ed943e2d6dbe7d08e03e711be8de2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fd4c549d12c308cae9727fdb35eec136e95b94990bfca2e3c2baf3cf5724ae3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8de259073576fa26bb1c649ceca0c0716e69ee1
0b38051b1ffe0b7aa095bc13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e23aeae9c9b1d09117c59f533d4c28521dd8f1685ee3b4547bb751fdb7c8631\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T12:36:04Z\\\",\\\"message\\\":\\\"bz\\\\nI0126 12:36:03.243192 6326 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-fwlbz in node crc\\\\nI0126 12:36:03.243201 6326 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-fwlbz after 0 failed attempt(s)\\\\nI0126 12:36:03.243195 6326 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 12:36:03.243228 6326 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0126 12:36:03.243261 6326 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0126 12:36:03.243316 6326 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T12:36:02Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8de259073576fa26bb1c649ceca0c0716e69ee10b38051b1ffe0b7aa095bc13\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T12:36:20Z\\\",\\\"message\\\":\\\"lace/marketplace-operator-metrics]} name:Service_openshift-marketplace/marketplace-operator-metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.53:8081: 10.217.5.53:8383:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {89fe421e-04e8-4967-ac75-77a0e6f784ef}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 12:36:20.107557 6521 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-controller-manager/controller-manager]} name:Service_openshift-controller-manager/controller-manager_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.149:443:]}] 
Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {cab7c637-a021-4a4d-a4b9-06d63c44316f}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0126 12:36:20.104835 6521 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T12:36:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9aaca9d54edc83ad154b9cc876c0f2990f7949b4966e052314e947d0cbaf2f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kub
e-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0995c1855fbad9dc3799727d7456b17ef9cd900ed1f4241bde85718099ede875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0995c1855fbad9dc3799727d7456b17ef9cd900ed1f4241bde85718099ede875\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kbjm9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:21Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:21 crc kubenswrapper[4881]: I0126 12:36:21.745595 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5zct6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"640554c2-37e2-425f-b182-aa9b9d6fa4d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl97t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl97t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:36:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5zct6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:21Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:21 crc kubenswrapper[4881]: I0126 12:36:21.764975 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee20042b-95d1-49f2-abea-e339efdb096f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce7427a811572a8f3810d59e072888d97a8f15e2c493297a25418b85e1f9f16d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86d50d955d4fa8b487f6900f78738321e21226e13b7b12c100f0dd029de2fde1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8138683a9408b1a7542f9f29a3fd0b83a06f10b000c685d9d83b72b23ba4c55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94d3f2c93382cde3d932117276753f004c58e6e52bce611119b79d9e76eb7654\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:21Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:21 crc kubenswrapper[4881]: I0126 12:36:21.792236 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b6f7953-daf2-4ba4-8c10-7de3f4ea07a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4f54285b7646c2fa5da34439d2812a5669f168356446852b9bff249107a7c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79ddc5e5c3c4b0c18219fe6a8297ff2b1646b9621823f529972781f915d81c5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07
b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://533f45172a004ad2bd598849b711b38e823485188ff4163dd0dbc5ae5a28cb24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f7c8799e63e6072f9dc0580f3efbfe330a9f705a689b2785c60f64391881b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c525c27678ba77145459086b6d3d2fee5d471c7393f34215e9ae46d9d72cf4e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://667695541b7d707a83aa0080723518aeae689b140791f0121a4ce6d485f39138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\
\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://667695541b7d707a83aa0080723518aeae689b140791f0121a4ce6d485f39138\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab83101bb56dc3eaf3a0514f60da45b0c5304a491184fe71584d7ed72bc6e59c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab83101bb56dc3eaf3a0514f60da45b0c5304a491184fe71584d7ed72bc6e59c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e52793db25051b2c8eec3dbeaf0547ec04ec438ce426f3dfcd2383792f846644\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e52793db25051b2c8eec3dbeaf0547ec04ec438ce426f3dfcd2383792f846644\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:21Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:21 crc kubenswrapper[4881]: I0126 12:36:21.808886 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:21 crc kubenswrapper[4881]: I0126 12:36:21.808941 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:21 crc kubenswrapper[4881]: I0126 12:36:21.808954 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 26 12:36:21 crc kubenswrapper[4881]: I0126 12:36:21.808973 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:21 crc kubenswrapper[4881]: I0126 12:36:21.808986 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:21Z","lastTransitionTime":"2026-01-26T12:36:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:21 crc kubenswrapper[4881]: I0126 12:36:21.813729 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9aa1877-c239-4157-938d-e5c85ff3e76a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e461100b4aeab2a6e88c5c3aeb0802664d0d647247bc73b107be7c6bc4cfc9cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cbb88ac26b9c279bef436acf1a1d1e431e752f2a1ee765a7541082fa8f6c99e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff94d70ea9ad07ebc50c17d2a14b318e9b553bf1ded1e15429e9b318903d4d66\\\",\\\"image\\\":\\\
"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47b841bd998f47d2aa05f556a43ac6f14f630207aff7305cd0440f0b0642cc49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1728a0431ccafdf8aa83b6956348a8a6a41dd94e80f65cb469ef09a86078154\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 12:35:40.710806 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 12:35:40.716794 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1303051152/tls.crt::/tmp/serving-cert-1303051152/tls.key\\\\\\\"\\\\nI0126 12:35:46.328409 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 12:35:46.330270 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 12:35:46.330287 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 12:35:46.330308 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 12:35:46.330314 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 12:35:46.335670 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 12:35:46.335689 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 12:35:46.335693 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 12:35:46.335697 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 12:35:46.335700 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 12:35:46.335702 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 12:35:46.335705 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 12:35:46.335779 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 12:35:46.339578 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3de2d8eff73741321cce91c7bfdcab04498cfdb4694e713750466bb2d8b95e5d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f09815b7bc1e9f3afdee67bb736d35ea1c23567f631bc5fd4fcab013ab191647\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f09815b7bc1e9f3afdee67bb736d35ea1c23567f631bc5fd4fcab013ab191647\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:21Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:21 crc kubenswrapper[4881]: I0126 12:36:21.835859 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8116d483190921916638bd630fb718840fdc36eb9522aecb7ef0e9bfebe9bb7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:21Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:21 crc kubenswrapper[4881]: I0126 12:36:21.856220 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:21Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:21 crc kubenswrapper[4881]: I0126 12:36:21.873577 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5zct6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"640554c2-37e2-425f-b182-aa9b9d6fa4d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl97t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl97t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:36:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5zct6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:21Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:21 crc kubenswrapper[4881]: I0126 12:36:21.894291 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"caa549f9-799c-43a9-a2a5-315713594704\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d3b1592342c8b506c26109958ea6b4770bc41e6dde0be57603730427216aec7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46a599fa1a831e0d46ab1af51dc83b3c5ef566caa4d05629e344102fc4d852af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fa21f92dbc9916c1d8eeb219803f791ad323f1cf82e7cb6fa64fe467b585c5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b154f896826abcb669a2628d916a874072a707d60fde45aad3ae0ff16bb4e44\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b154f896826abcb669a2628d916a874072a707d60fde45aad3ae0ff16bb4e44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:28Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:21Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:21 crc kubenswrapper[4881]: I0126 12:36:21.912113 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:21 crc kubenswrapper[4881]: I0126 12:36:21.912168 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:21 crc kubenswrapper[4881]: I0126 12:36:21.912185 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:21 crc kubenswrapper[4881]: I0126 12:36:21.912210 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:21 crc kubenswrapper[4881]: I0126 12:36:21.912226 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:21Z","lastTransitionTime":"2026-01-26T12:36:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:21 crc kubenswrapper[4881]: I0126 12:36:21.912253 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f87073bcd7b2be8285c037c00f3112e1f1c6ed16cd4caf9a06c1a18d7e07c33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:21Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:21 crc kubenswrapper[4881]: I0126 12:36:21.929663 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-csrkv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d24cc7d2-c2db-45ee-b405-fa56157f807c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e19b81750777bd4a9bb8f7a14d883f0ceb5aa0fc4f1a0798c98616a7fc0cef81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x7bhq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-csrkv\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:21Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:21 crc kubenswrapper[4881]: I0126 12:36:21.958511 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d272c950-9665-4b60-98a2-20c18d02d5a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://525d6dcbf02a6ff1670c1cfff7ea9769a3e6e99d4cbc9f67b4ab9a4838f85355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54beaa25a2174c9f18bf5be7fba4c314e374e2ea05eaee1c5b62dfd7351138f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11ae8e0538c02d437ba2c7a5f53004fb5c5b7bf6c232f75b6bd737d2ec4f6a24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4526fe39791d5692af1a573249bcc2f73f2ede55766ea580867dbb009d247ddf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e87c69a0fa95e071cd3453794764480d9ed943e2d6dbe7d08e03e711be8de2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fd4c
549d12c308cae9727fdb35eec136e95b94990bfca2e3c2baf3cf5724ae3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8de259073576fa26bb1c649ceca0c0716e69ee10b38051b1ffe0b7aa095bc13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8de259073576fa26bb1c649ceca0c0716e69ee10b38051b1ffe0b7aa095bc13\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T12:36:20Z\\\",\\\"message\\\":\\\"lace/marketplace-operator-metrics]} name:Service_openshift-marketplace/marketplace-operator-metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.53:8081: 10.217.5.53:8383:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {89fe421e-04e8-4967-ac75-77a0e6f784ef}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 12:36:20.107557 6521 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-controller-manager/controller-manager]} name:Service_openshift-controller-manager/controller-manager_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.149:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {cab7c637-a021-4a4d-a4b9-06d63c44316f}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0126 12:36:20.104835 6521 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T12:36:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-kbjm9_openshift-ovn-kubernetes(d272c950-9665-4b60-98a2-20c18d02d5a2)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9aaca9d54edc83ad154b9cc876c0f2990f7949b4966e052314e947d0cbaf2f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recurs
iveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0995c1855fbad9dc3799727d7456b17ef9cd900ed1f4241bde85718099ede875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0995c1855fbad9dc3799727d7456b17ef9cd900ed1f4241bde85718099ede875\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kbjm9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:21Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:21 crc kubenswrapper[4881]: I0126 12:36:21.973911 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tvrtr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"62ba5262-b5d8-4c86-85db-0993c88afc38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d6070117a94d139d5fc78fee4b38e1c75537675f27a663dcb2d6c8be285e9b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj9qc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tvrtr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:21Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:21 crc kubenswrapper[4881]: I0126 12:36:21.989491 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:21Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:22 crc kubenswrapper[4881]: I0126 12:36:22.008962 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:22Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:22 crc kubenswrapper[4881]: I0126 12:36:22.015039 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:22 crc kubenswrapper[4881]: I0126 12:36:22.015103 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:22 crc kubenswrapper[4881]: I0126 12:36:22.015115 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:22 crc kubenswrapper[4881]: I0126 12:36:22.015134 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:22 crc kubenswrapper[4881]: I0126 12:36:22.015148 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:22Z","lastTransitionTime":"2026-01-26T12:36:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:22 crc kubenswrapper[4881]: I0126 12:36:22.026768 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79282300691fcea19d9fe7ad9c661474c26ba5e445c1a4f009961641bf59fb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b2921f0cb7ad2096499d1c6c0e5f8f37da3620c8468c9fab08b57b4545e5dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:22Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:22 crc kubenswrapper[4881]: I0126 12:36:22.044784 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f4b5v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"017c2a17-2267-4284-a0ca-d3c513aa9ff9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cea6c5f8bd80cf271df551539957d4c3e60378ff13dc806b3142e8e1453ab95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc482\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f4b5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:22Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:22 crc kubenswrapper[4881]: I0126 12:36:22.050534 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 05:51:21.960525586 +0000 UTC Jan 26 12:36:22 crc kubenswrapper[4881]: I0126 12:36:22.066544 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pmwpn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb5ecb63-1238-44dc-9c40-b5e5dd7d4847\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://428d01799872a00b628c81145d5fbeac8707bad0afa7563e16bc44686fd7c18d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b54e7d904bec7e9817b662ea1c59ede6589fde79754520dfb1e9fb82864e623d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b54e7d904bec7e9817b662ea1c59ede6589fde79754520dfb1e9fb82864e623d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e58ff1e13a164f50c9664f463b50cbf651ce8e5edac8cbff9a9e42626d8d39ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e58ff1e13a164f50c9664f463b50cbf651ce8e5edac8cbff9a9e42626d8d39ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fca59d7df432a78a62142cd5e47d6a1a62b2c23ff7eec49d6297aacf30b530a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fca59d7df432a78a62142cd5e47d6a1a62b2c23ff7eec49d6297aacf30b530a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d5b95e27428d9ca0382054957b1cc19e96fe8bb5622af5f7ebebebec53d2add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d5b95e27428d9ca0382054957b1cc19e96fe8bb5622af5f7ebebebec53d2add\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45799bb6d037f31b82596651214f8bdf9c13346f1d9953b965a40b55d2588edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45799bb6d037f31b82596651214f8bdf9c13346f1d9953b965a40b55d2588edd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10bacfe34f01c5de9439ac6023539e6489ec63175e54099942c2fb2fe4250bd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10bacfe34f01c5de9439ac6023539e6489ec63175e54099942c2fb2fe4250bd6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pmwpn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:22Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:22 crc kubenswrapper[4881]: I0126 12:36:22.082165 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 12:36:22 crc kubenswrapper[4881]: I0126 12:36:22.082191 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 12:36:22 crc kubenswrapper[4881]: E0126 12:36:22.082312 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 12:36:22 crc kubenswrapper[4881]: E0126 12:36:22.082380 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 12:36:22 crc kubenswrapper[4881]: I0126 12:36:22.082276 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:36:22 crc kubenswrapper[4881]: E0126 12:36:22.082463 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 12:36:22 crc kubenswrapper[4881]: I0126 12:36:22.087049 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02fca4ac1029e236fa1f5c8b73ec4cf6f627aba503ba27d28dd2566e2f9d332a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c6n6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e51c1ddc00bfd73fde58f6b2e82757a924047ef05aa76b74d8fcb2de4156ad9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c6n6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fwlbz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:22Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:22 crc kubenswrapper[4881]: I0126 12:36:22.105041 4881 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7m2c5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"939b570a-38ce-49f8-8518-1ab500c4e449\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5367bd65eab3f1d86fce660f0126a0b2e6d4f1f7f803f3bfd38a3293bdff7267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:36:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwjlp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d4049c90948b57aad819839a207210919dce46514ca41bf1afdaed9b04a409a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:36:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwjlp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:36:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7m2c5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:22Z is after 2025-08-24T17:21:41Z"
Jan 26 12:36:22 crc kubenswrapper[4881]: I0126 12:36:22.118386 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 12:36:22 crc kubenswrapper[4881]: I0126 12:36:22.118434 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 12:36:22 crc kubenswrapper[4881]: I0126 12:36:22.118448 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 12:36:22 crc kubenswrapper[4881]: I0126 12:36:22.118466 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 12:36:22 crc kubenswrapper[4881]: I0126 12:36:22.118478 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:22Z","lastTransitionTime":"2026-01-26T12:36:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
[The five-entry cycle above (NodeHasSufficientMemory, NodeHasNoDiskPressure, NodeHasSufficientPID, NodeNotReady, "Node became not ready") repeats roughly every 100 ms. 44 further occurrences, beginning at 12:36:22.220910 and ending with the cycle that starts at 12:36:26.654896, identical apart from timestamps, are omitted here. The following one-off entries were interleaved with that cycle:]
Jan 26 12:36:23 crc kubenswrapper[4881]: I0126 12:36:23.051135 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 13:45:46.60768492 +0000 UTC
Jan 26 12:36:23 crc kubenswrapper[4881]: I0126 12:36:23.081597 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5zct6"
Jan 26 12:36:23 crc kubenswrapper[4881]: E0126 12:36:23.081750 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5zct6" podUID="640554c2-37e2-425f-b182-aa9b9d6fa4d8"
Jan 26 12:36:24 crc kubenswrapper[4881]: I0126 12:36:24.051567 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 01:56:19.303686436 +0000 UTC
Jan 26 12:36:24 crc kubenswrapper[4881]: I0126 12:36:24.082184 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 26 12:36:24 crc kubenswrapper[4881]: I0126 12:36:24.082230 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 26 12:36:24 crc kubenswrapper[4881]: E0126 12:36:24.082337 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 26 12:36:24 crc kubenswrapper[4881]: E0126 12:36:24.082592 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 26 12:36:24 crc kubenswrapper[4881]: I0126 12:36:24.082825 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 26 12:36:24 crc kubenswrapper[4881]: E0126 12:36:24.082976 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 26 12:36:25 crc kubenswrapper[4881]: I0126 12:36:25.052267 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 22:20:24.743010506 +0000 UTC
Jan 26 12:36:25 crc kubenswrapper[4881]: I0126 12:36:25.082151 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5zct6"
Jan 26 12:36:25 crc kubenswrapper[4881]: E0126 12:36:25.082346 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5zct6" podUID="640554c2-37e2-425f-b182-aa9b9d6fa4d8"
Jan 26 12:36:26 crc kubenswrapper[4881]: I0126 12:36:26.053494 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 06:14:30.099675768 +0000 UTC
Jan 26 12:36:26 crc kubenswrapper[4881]: I0126 12:36:26.082732 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 26 12:36:26 crc kubenswrapper[4881]: I0126 12:36:26.082744 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 26 12:36:26 crc kubenswrapper[4881]: E0126 12:36:26.082934 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 26 12:36:26 crc kubenswrapper[4881]: I0126 12:36:26.082978 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 26 12:36:26 crc kubenswrapper[4881]: E0126 12:36:26.083091 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 26 12:36:26 crc kubenswrapper[4881]: E0126 12:36:26.083224 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 26 12:36:26 crc kubenswrapper[4881]: I0126 12:36:26.757144 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:26 crc kubenswrapper[4881]: I0126 12:36:26.757213 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:26 crc kubenswrapper[4881]: I0126 12:36:26.757238 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:26 crc kubenswrapper[4881]: I0126 12:36:26.757269 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:26 crc kubenswrapper[4881]: I0126 12:36:26.757290 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:26Z","lastTransitionTime":"2026-01-26T12:36:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:26 crc kubenswrapper[4881]: I0126 12:36:26.859399 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:26 crc kubenswrapper[4881]: I0126 12:36:26.859448 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:26 crc kubenswrapper[4881]: I0126 12:36:26.859461 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:26 crc kubenswrapper[4881]: I0126 12:36:26.859498 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:26 crc kubenswrapper[4881]: I0126 12:36:26.859578 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:26Z","lastTransitionTime":"2026-01-26T12:36:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:26 crc kubenswrapper[4881]: I0126 12:36:26.962399 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:26 crc kubenswrapper[4881]: I0126 12:36:26.962476 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:26 crc kubenswrapper[4881]: I0126 12:36:26.962500 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:26 crc kubenswrapper[4881]: I0126 12:36:26.962561 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:26 crc kubenswrapper[4881]: I0126 12:36:26.962584 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:26Z","lastTransitionTime":"2026-01-26T12:36:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:27 crc kubenswrapper[4881]: I0126 12:36:27.054184 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 06:17:45.521421234 +0000 UTC Jan 26 12:36:27 crc kubenswrapper[4881]: I0126 12:36:27.065797 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:27 crc kubenswrapper[4881]: I0126 12:36:27.065880 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:27 crc kubenswrapper[4881]: I0126 12:36:27.065897 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:27 crc kubenswrapper[4881]: I0126 12:36:27.065924 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:27 crc kubenswrapper[4881]: I0126 12:36:27.065938 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:27Z","lastTransitionTime":"2026-01-26T12:36:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:27 crc kubenswrapper[4881]: I0126 12:36:27.082281 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5zct6" Jan 26 12:36:27 crc kubenswrapper[4881]: E0126 12:36:27.082550 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5zct6" podUID="640554c2-37e2-425f-b182-aa9b9d6fa4d8" Jan 26 12:36:27 crc kubenswrapper[4881]: I0126 12:36:27.169090 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:27 crc kubenswrapper[4881]: I0126 12:36:27.169151 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:27 crc kubenswrapper[4881]: I0126 12:36:27.169175 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:27 crc kubenswrapper[4881]: I0126 12:36:27.169207 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:27 crc kubenswrapper[4881]: I0126 12:36:27.169230 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:27Z","lastTransitionTime":"2026-01-26T12:36:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:27 crc kubenswrapper[4881]: I0126 12:36:27.273334 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:27 crc kubenswrapper[4881]: I0126 12:36:27.273415 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:27 crc kubenswrapper[4881]: I0126 12:36:27.273434 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:27 crc kubenswrapper[4881]: I0126 12:36:27.273462 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:27 crc kubenswrapper[4881]: I0126 12:36:27.273609 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:27Z","lastTransitionTime":"2026-01-26T12:36:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:27 crc kubenswrapper[4881]: I0126 12:36:27.376879 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:27 crc kubenswrapper[4881]: I0126 12:36:27.376915 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:27 crc kubenswrapper[4881]: I0126 12:36:27.376927 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:27 crc kubenswrapper[4881]: I0126 12:36:27.376945 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:27 crc kubenswrapper[4881]: I0126 12:36:27.376957 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:27Z","lastTransitionTime":"2026-01-26T12:36:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:27 crc kubenswrapper[4881]: I0126 12:36:27.482620 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:27 crc kubenswrapper[4881]: I0126 12:36:27.482727 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:27 crc kubenswrapper[4881]: I0126 12:36:27.482747 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:27 crc kubenswrapper[4881]: I0126 12:36:27.482808 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:27 crc kubenswrapper[4881]: I0126 12:36:27.482860 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:27Z","lastTransitionTime":"2026-01-26T12:36:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:27 crc kubenswrapper[4881]: I0126 12:36:27.585863 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:27 crc kubenswrapper[4881]: I0126 12:36:27.585905 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:27 crc kubenswrapper[4881]: I0126 12:36:27.585914 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:27 crc kubenswrapper[4881]: I0126 12:36:27.585930 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:27 crc kubenswrapper[4881]: I0126 12:36:27.585941 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:27Z","lastTransitionTime":"2026-01-26T12:36:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:27 crc kubenswrapper[4881]: I0126 12:36:27.689740 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:27 crc kubenswrapper[4881]: I0126 12:36:27.689797 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:27 crc kubenswrapper[4881]: I0126 12:36:27.689813 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:27 crc kubenswrapper[4881]: I0126 12:36:27.689828 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:27 crc kubenswrapper[4881]: I0126 12:36:27.689837 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:27Z","lastTransitionTime":"2026-01-26T12:36:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:27 crc kubenswrapper[4881]: I0126 12:36:27.793391 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:27 crc kubenswrapper[4881]: I0126 12:36:27.793466 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:27 crc kubenswrapper[4881]: I0126 12:36:27.793497 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:27 crc kubenswrapper[4881]: I0126 12:36:27.793560 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:27 crc kubenswrapper[4881]: I0126 12:36:27.793583 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:27Z","lastTransitionTime":"2026-01-26T12:36:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:27 crc kubenswrapper[4881]: I0126 12:36:27.896320 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:27 crc kubenswrapper[4881]: I0126 12:36:27.896362 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:27 crc kubenswrapper[4881]: I0126 12:36:27.896374 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:27 crc kubenswrapper[4881]: I0126 12:36:27.896390 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:27 crc kubenswrapper[4881]: I0126 12:36:27.896401 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:27Z","lastTransitionTime":"2026-01-26T12:36:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:27 crc kubenswrapper[4881]: I0126 12:36:27.998876 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:27 crc kubenswrapper[4881]: I0126 12:36:27.998922 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:27 crc kubenswrapper[4881]: I0126 12:36:27.998934 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:27 crc kubenswrapper[4881]: I0126 12:36:27.998956 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:27 crc kubenswrapper[4881]: I0126 12:36:27.998970 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:27Z","lastTransitionTime":"2026-01-26T12:36:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:28 crc kubenswrapper[4881]: I0126 12:36:28.055360 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 18:07:51.437817952 +0000 UTC Jan 26 12:36:28 crc kubenswrapper[4881]: I0126 12:36:28.081950 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 12:36:28 crc kubenswrapper[4881]: I0126 12:36:28.081969 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 12:36:28 crc kubenswrapper[4881]: E0126 12:36:28.082067 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 12:36:28 crc kubenswrapper[4881]: I0126 12:36:28.082111 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:36:28 crc kubenswrapper[4881]: E0126 12:36:28.082250 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 12:36:28 crc kubenswrapper[4881]: E0126 12:36:28.082348 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 12:36:28 crc kubenswrapper[4881]: I0126 12:36:28.093931 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02fca4ac1029e236fa1f5c8b73ec4cf6f627aba503ba27d28dd2566e2f9d332a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c6n6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e51c1ddc00bfd73fde58f6b2e82757a924047ef05aa76b74d8fcb2de4156ad9\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c6n6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fwlbz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:28Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:28 crc kubenswrapper[4881]: I0126 12:36:28.103550 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:28 crc kubenswrapper[4881]: I0126 12:36:28.103607 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:28 crc kubenswrapper[4881]: I0126 12:36:28.103619 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:28 crc kubenswrapper[4881]: I0126 12:36:28.103637 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:28 crc kubenswrapper[4881]: I0126 12:36:28.103651 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:28Z","lastTransitionTime":"2026-01-26T12:36:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:28 crc kubenswrapper[4881]: I0126 12:36:28.108662 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7m2c5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"939b570a-38ce-49f8-8518-1ab500c4e449\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5367bd65eab3f1d86fce660f0126a0b2e6d4f1f7f803f3bfd38a3293bdff7267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:36:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwjlp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d4049c90948b57aad819839a207210919dce46514ca41bf1afdaed9b04a409a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:36:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwjlp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:36:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7m2c5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:28Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:28 crc kubenswrapper[4881]: I0126 12:36:28.159747 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9aa1877-c239-4157-938d-e5c85ff3e76a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e461100b4aeab2a6e88c5c3aeb0802664d0d647247bc73b107be7c6bc4cfc9cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cbb88ac26b9c279bef436acf1a1d1e431e752f2a1ee765a7541082fa8f6c99e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff94d70ea9ad07ebc50c17d2a14b318e9b553bf1ded1e15429e9b318903d4d66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\
\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47b841bd998f47d2aa05f556a43ac6f14f630207aff7305cd0440f0b0642cc49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1728a0431ccafdf8aa83b6956348a8a6a41dd94e80f65cb469ef09a86078154\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 12:35:40.710806 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 12:35:40.716794 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1303051152/tls.crt::/tmp/serving-cert-1303051152/tls.key\\\\\\\"\\\\nI0126 12:35:46.328409 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 12:35:46.330270 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 12:35:46.330287 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 12:35:46.330308 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 12:35:46.330314 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 12:35:46.335670 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 12:35:46.335689 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 12:35:46.335693 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 12:35:46.335697 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 12:35:46.335700 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 12:35:46.335702 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 12:35:46.335705 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 12:35:46.335779 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 12:35:46.339578 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3de2d8eff73741321cce91c7bfdcab04498cfdb4694e713750466bb2d8b95e5d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f09815b7bc1e9f3afdee67bb736d35ea1c23567f631bc5fd4fcab013ab191647\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f09815b7bc1e9f3afdee67bb736d35ea1c23567f631bc5fd4fcab013ab191647\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:28Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:28 crc kubenswrapper[4881]: I0126 12:36:28.177221 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8116d483190921916638bd630fb718840fdc36eb9522aecb7ef0e9bfebe9bb7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:28Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:28 crc kubenswrapper[4881]: I0126 12:36:28.192990 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:28Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:28 crc kubenswrapper[4881]: I0126 12:36:28.205923 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:28 crc kubenswrapper[4881]: I0126 12:36:28.205958 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:28 crc kubenswrapper[4881]: I0126 12:36:28.205967 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:28 crc kubenswrapper[4881]: I0126 12:36:28.205982 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:28 crc kubenswrapper[4881]: I0126 12:36:28.205992 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:28Z","lastTransitionTime":"2026-01-26T12:36:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
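The err strings in these entries embed the entire JSON status patch, escaped twice: once when the patch was quoted into the error message and once more when klog quoted that message into err="...". Two rounds of unquoting recover a patch that json can parse; a sketch (Python; journal.txt is a hypothetical capture of one complete "Failed to update status for pod" entry):

    import json, re

    entry = open("journal.txt").read()
    m = re.search(r'failed to patch status \\"(.*?)\\" for pod', entry, re.S)
    raw = m.group(1)
    # Undo Go %q-style quoting twice: protect doubled backslashes,
    # unescape quotes, then restore the backslashes.
    for _ in range(2):
        raw = raw.replace("\\\\", "\x00").replace('\\"', '"').replace("\x00", "\\")
    patch = json.loads(raw)
    print(json.dumps(patch, indent=2)[:600])    # peek at the decoded patch

Decoded, the patches themselves are routine (container statuses, conditions, pod IPs); the failure is entirely the webhook call in front of the API server, so the same x509 error is appended to every one of them.
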
Has your network provider started?"} Jan 26 12:36:28 crc kubenswrapper[4881]: I0126 12:36:28.207438 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee20042b-95d1-49f2-abea-e339efdb096f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce7427a811572a8f3810d59e072888d97a8f15e2c493297a25418b85e1f9f16d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86d50d955d4fa8b487f6900f78738321e21226e13b7b12c100f0dd029de2fde1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8138683a9408b1a7542f9f29a3fd0b83a06f10b000c685d9d83b72b23ba4c55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94d3f2c93382cde3d932117276753f004c58e6e52bce611119b79d9e76eb7654\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:28Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:28 crc kubenswrapper[4881]: I0126 12:36:28.230350 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b6f7953-daf2-4ba4-8c10-7de3f4ea07a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4f54285b7646c2fa5da34439d2812a5669f168356446852b9bff249107a7c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79ddc5e5c3c4b0c18219fe6a8297ff2b1646b9621823f529972781f915d81c5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://533f45172a004ad2bd598849b711b38e823485188ff4163dd0dbc5ae5a28cb24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f7c8799e63e6072f9dc0580f3efbfe330a9f70
5a689b2785c60f64391881b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c525c27678ba77145459086b6d3d2fee5d471c7393f34215e9ae46d9d72cf4e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://667695541b7d707a83aa0080723518aeae689b140791f0121a4ce6d485f39138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://667695541b7d707a83aa0080723518aeae689b140791f0121a4ce6d485f39138\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab83101bb56dc3eaf3a0514f60da45b0c5304a491184fe71584d7ed72bc6e59c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab83101bb56dc3eaf3a0514f60da45b0c5304a491184fe71584d7ed72bc6e59c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e52793db25051b2c8eec3dbeaf0547ec04ec438ce426f3dfcd2383792f846644\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e52793db25051b2c8eec3dbeaf0547ec04ec438ce426f3dfcd2383792f846644\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:28Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:28 crc kubenswrapper[4881]: I0126 12:36:28.265687 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f87073bcd7b2be8285c037c00f3112e1f1c6ed16cd4caf9a06c1a18d7e07c33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:28Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:28 crc kubenswrapper[4881]: I0126 12:36:28.292630 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-csrkv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d24cc7d2-c2db-45ee-b405-fa56157f807c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e19b81750777bd4a9bb8f7a14d883f0ceb5aa0fc4f1a0798c98616a7fc0cef81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access
-x7bhq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-csrkv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:28Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:28 crc kubenswrapper[4881]: I0126 12:36:28.309132 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:28 crc kubenswrapper[4881]: I0126 12:36:28.309173 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:28 crc kubenswrapper[4881]: I0126 12:36:28.309182 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:28 crc kubenswrapper[4881]: I0126 12:36:28.309200 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:28 crc kubenswrapper[4881]: I0126 12:36:28.309211 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:28Z","lastTransitionTime":"2026-01-26T12:36:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:28 crc kubenswrapper[4881]: I0126 12:36:28.323398 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d272c950-9665-4b60-98a2-20c18d02d5a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://525d6dcbf02a6ff1670c1cfff7ea9769a3e6e99d4cbc9f67b4ab9a4838f85355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54beaa25a2174c9f18bf5be7fba4c314e374e2ea05eaee1c5b62dfd7351138f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://11ae8e0538c02d437ba2c7a5f53004fb5c5b7bf6c232f75b6bd737d2ec4f6a24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4526fe39791d5692af1a573249bcc2f73f2ede55766ea580867dbb009d247ddf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e87c69a0fa95e071cd3453794764480d9ed943e2d6dbe7d08e03e711be8de2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fd4c549d12c308cae9727fdb35eec136e95b94990bfca2e3c2baf3cf5724ae3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8de259073576fa26bb1c649ceca0c0716e69ee10b38051b1ffe0b7aa095bc13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8de259073576fa26bb1c649ceca0c0716e69ee10b38051b1ffe0b7aa095bc13\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T12:36:20Z\\\",\\\"message\\\":\\\"lace/marketplace-operator-metrics]} name:Service_openshift-marketplace/marketplace-operator-metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.53:8081: 10.217.5.53:8383:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {89fe421e-04e8-4967-ac75-77a0e6f784ef}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 12:36:20.107557 6521 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-controller-manager/controller-manager]} name:Service_openshift-controller-manager/controller-manager_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.149:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {cab7c637-a021-4a4d-a4b9-06d63c44316f}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0126 12:36:20.104835 6521 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T12:36:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-kbjm9_openshift-ovn-kubernetes(d272c950-9665-4b60-98a2-20c18d02d5a2)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9aaca9d54edc83ad154b9cc876c0f2990f7949b4966e052314e947d0cbaf2f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recurs
iveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0995c1855fbad9dc3799727d7456b17ef9cd900ed1f4241bde85718099ede875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0995c1855fbad9dc3799727d7456b17ef9cd900ed1f4241bde85718099ede875\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kbjm9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:28Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:28 crc kubenswrapper[4881]: I0126 12:36:28.336948 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5zct6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"640554c2-37e2-425f-b182-aa9b9d6fa4d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl97t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl97t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:36:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5zct6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:28Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:28 crc kubenswrapper[4881]: I0126 12:36:28.351233 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"caa549f9-799c-43a9-a2a5-315713594704\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d3b1592342c8b506c26109958ea6b4770bc41e6dde0be57603730427216aec7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46a599fa1a831e0d46ab1af51dc83b3c5ef566caa4d05629e344102fc4d852af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fa21f92dbc9916c1d8eeb219803f791ad323f1cf82e7cb6fa64fe467b585c5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b154f896826abcb669a2628d916a874072a707d60fde45aad3ae0ff16bb4e44\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b154f896826abcb669a2628d916a874072a707d60fde45aad3ae0ff16bb4e44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:28Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:28Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:28 crc kubenswrapper[4881]: I0126 12:36:28.365174 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79282300691fcea19d9fe7ad9c661474c26ba5e445c1a4f009961641bf59fb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b2921f0cb7ad2096499d1c6c0e5f8f37da3620c8468c9fab08b57b4545e5dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:28Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:28 crc kubenswrapper[4881]: I0126 12:36:28.375606 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f4b5v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"017c2a17-2267-4284-a0ca-d3c513aa9ff9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cea6c5f8bd80cf271df551539957d4c3e60378ff13dc806b3142e8e1453ab95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc482\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f4b5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired 
or is not yet valid: current time 2026-01-26T12:36:28Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:28 crc kubenswrapper[4881]: I0126 12:36:28.391881 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pmwpn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb5ecb63-1238-44dc-9c40-b5e5dd7d4847\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://428d01799872a00b628c81145d5fbeac8707bad0afa7563e16bc44686fd7c18d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b54e7d904bec7e9817b662ea1c59ede6589fde79754520dfb1e9fb82864e623d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b54e7d904bec7e9817b662ea1c59ede6589fde79754520dfb1e9fb82864e623d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e58ff1e13a164f50
c9664f463b50cbf651ce8e5edac8cbff9a9e42626d8d39ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e58ff1e13a164f50c9664f463b50cbf651ce8e5edac8cbff9a9e42626d8d39ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fca59d7df432a78a62142cd5e47d6a1a62b2c23ff7eec49d6297aacf30b530a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fca59d7df432a78a62142cd5e47d6a1a62b2c23ff7eec49d6297aacf30b530a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d5b95e27428d9ca0382054957b1cc19e96fe8bb5622af5f7ebebebec53d2add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d5b95e27428d9ca0382054957b1cc19e96fe8bb5622af5f7ebebebec53d2add\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoin
t\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45799bb6d037f31b82596651214f8bdf9c13346f1d9953b965a40b55d2588edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45799bb6d037f31b82596651214f8bdf9c13346f1d9953b965a40b55d2588edd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10bacfe34f01c5de9439ac6023539e6489ec63175e54099942c2fb2fe4250bd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10bacfe34f01c5de9439ac6023539e6489ec63175e54099942c2fb2fe4250bd6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pmwpn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:28Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:28 crc kubenswrapper[4881]: I0126 12:36:28.403446 4881 
status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tvrtr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"62ba5262-b5d8-4c86-85db-0993c88afc38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d6070117a94d139d5fc78fee4b38e1c75537675f27a663dcb2d6c8be285e9b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj9qc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tvrtr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:28Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:28 crc kubenswrapper[4881]: I0126 12:36:28.412345 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:28 crc kubenswrapper[4881]: I0126 12:36:28.412396 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:28 crc kubenswrapper[4881]: I0126 12:36:28.412408 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:28 crc kubenswrapper[4881]: I0126 12:36:28.412436 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:28 crc kubenswrapper[4881]: I0126 12:36:28.412449 4881 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:28Z","lastTransitionTime":"2026-01-26T12:36:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:28 crc kubenswrapper[4881]: I0126 12:36:28.418494 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:28Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:28 crc kubenswrapper[4881]: I0126 12:36:28.439089 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:28Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:28 crc kubenswrapper[4881]: I0126 12:36:28.515051 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:28 crc kubenswrapper[4881]: I0126 12:36:28.515159 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:28 crc kubenswrapper[4881]: I0126 12:36:28.515177 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:28 crc kubenswrapper[4881]: I0126 12:36:28.515192 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:28 crc kubenswrapper[4881]: I0126 12:36:28.515209 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:28Z","lastTransitionTime":"2026-01-26T12:36:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:28 crc kubenswrapper[4881]: I0126 12:36:28.618356 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:28 crc kubenswrapper[4881]: I0126 12:36:28.618440 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:28 crc kubenswrapper[4881]: I0126 12:36:28.618561 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:28 crc kubenswrapper[4881]: I0126 12:36:28.618599 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:28 crc kubenswrapper[4881]: I0126 12:36:28.618620 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:28Z","lastTransitionTime":"2026-01-26T12:36:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:28 crc kubenswrapper[4881]: I0126 12:36:28.723505 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:28 crc kubenswrapper[4881]: I0126 12:36:28.723584 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:28 crc kubenswrapper[4881]: I0126 12:36:28.723599 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:28 crc kubenswrapper[4881]: I0126 12:36:28.723621 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:28 crc kubenswrapper[4881]: I0126 12:36:28.723640 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:28Z","lastTransitionTime":"2026-01-26T12:36:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:28 crc kubenswrapper[4881]: I0126 12:36:28.826614 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:28 crc kubenswrapper[4881]: I0126 12:36:28.826682 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:28 crc kubenswrapper[4881]: I0126 12:36:28.826696 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:28 crc kubenswrapper[4881]: I0126 12:36:28.826722 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:28 crc kubenswrapper[4881]: I0126 12:36:28.826741 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:28Z","lastTransitionTime":"2026-01-26T12:36:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:28 crc kubenswrapper[4881]: I0126 12:36:28.929565 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:28 crc kubenswrapper[4881]: I0126 12:36:28.929644 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:28 crc kubenswrapper[4881]: I0126 12:36:28.929666 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:28 crc kubenswrapper[4881]: I0126 12:36:28.929688 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:28 crc kubenswrapper[4881]: I0126 12:36:28.929705 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:28Z","lastTransitionTime":"2026-01-26T12:36:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:29 crc kubenswrapper[4881]: I0126 12:36:29.033037 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:29 crc kubenswrapper[4881]: I0126 12:36:29.033096 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:29 crc kubenswrapper[4881]: I0126 12:36:29.033105 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:29 crc kubenswrapper[4881]: I0126 12:36:29.033128 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:29 crc kubenswrapper[4881]: I0126 12:36:29.033142 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:29Z","lastTransitionTime":"2026-01-26T12:36:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:29 crc kubenswrapper[4881]: I0126 12:36:29.056380 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 20:21:40.4415182 +0000 UTC Jan 26 12:36:29 crc kubenswrapper[4881]: I0126 12:36:29.082132 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5zct6" Jan 26 12:36:29 crc kubenswrapper[4881]: E0126 12:36:29.082382 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5zct6" podUID="640554c2-37e2-425f-b182-aa9b9d6fa4d8" Jan 26 12:36:29 crc kubenswrapper[4881]: I0126 12:36:29.136708 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:29 crc kubenswrapper[4881]: I0126 12:36:29.136765 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:29 crc kubenswrapper[4881]: I0126 12:36:29.136778 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:29 crc kubenswrapper[4881]: I0126 12:36:29.136795 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:29 crc kubenswrapper[4881]: I0126 12:36:29.136806 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:29Z","lastTransitionTime":"2026-01-26T12:36:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:29 crc kubenswrapper[4881]: I0126 12:36:29.238961 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:29 crc kubenswrapper[4881]: I0126 12:36:29.239012 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:29 crc kubenswrapper[4881]: I0126 12:36:29.239031 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:29 crc kubenswrapper[4881]: I0126 12:36:29.239056 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:29 crc kubenswrapper[4881]: I0126 12:36:29.239075 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:29Z","lastTransitionTime":"2026-01-26T12:36:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:29 crc kubenswrapper[4881]: I0126 12:36:29.341667 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:29 crc kubenswrapper[4881]: I0126 12:36:29.341746 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:29 crc kubenswrapper[4881]: I0126 12:36:29.341770 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:29 crc kubenswrapper[4881]: I0126 12:36:29.341797 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:29 crc kubenswrapper[4881]: I0126 12:36:29.341816 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:29Z","lastTransitionTime":"2026-01-26T12:36:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:29 crc kubenswrapper[4881]: I0126 12:36:29.445303 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:29 crc kubenswrapper[4881]: I0126 12:36:29.445364 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:29 crc kubenswrapper[4881]: I0126 12:36:29.445386 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:29 crc kubenswrapper[4881]: I0126 12:36:29.445412 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:29 crc kubenswrapper[4881]: I0126 12:36:29.445430 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:29Z","lastTransitionTime":"2026-01-26T12:36:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:29 crc kubenswrapper[4881]: I0126 12:36:29.548442 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:29 crc kubenswrapper[4881]: I0126 12:36:29.548510 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:29 crc kubenswrapper[4881]: I0126 12:36:29.548558 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:29 crc kubenswrapper[4881]: I0126 12:36:29.548582 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:29 crc kubenswrapper[4881]: I0126 12:36:29.548600 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:29Z","lastTransitionTime":"2026-01-26T12:36:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:29 crc kubenswrapper[4881]: I0126 12:36:29.651465 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:29 crc kubenswrapper[4881]: I0126 12:36:29.651509 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:29 crc kubenswrapper[4881]: I0126 12:36:29.651546 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:29 crc kubenswrapper[4881]: I0126 12:36:29.651567 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:29 crc kubenswrapper[4881]: I0126 12:36:29.651578 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:29Z","lastTransitionTime":"2026-01-26T12:36:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:29 crc kubenswrapper[4881]: I0126 12:36:29.754003 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:29 crc kubenswrapper[4881]: I0126 12:36:29.754039 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:29 crc kubenswrapper[4881]: I0126 12:36:29.754048 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:29 crc kubenswrapper[4881]: I0126 12:36:29.754061 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:29 crc kubenswrapper[4881]: I0126 12:36:29.754072 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:29Z","lastTransitionTime":"2026-01-26T12:36:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:29 crc kubenswrapper[4881]: I0126 12:36:29.856970 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:29 crc kubenswrapper[4881]: I0126 12:36:29.857042 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:29 crc kubenswrapper[4881]: I0126 12:36:29.857054 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:29 crc kubenswrapper[4881]: I0126 12:36:29.857072 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:29 crc kubenswrapper[4881]: I0126 12:36:29.857083 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:29Z","lastTransitionTime":"2026-01-26T12:36:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:29 crc kubenswrapper[4881]: I0126 12:36:29.959180 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:29 crc kubenswrapper[4881]: I0126 12:36:29.959219 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:29 crc kubenswrapper[4881]: I0126 12:36:29.959228 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:29 crc kubenswrapper[4881]: I0126 12:36:29.959267 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:29 crc kubenswrapper[4881]: I0126 12:36:29.959282 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:29Z","lastTransitionTime":"2026-01-26T12:36:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:30 crc kubenswrapper[4881]: I0126 12:36:30.056904 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 23:08:48.431907865 +0000 UTC Jan 26 12:36:30 crc kubenswrapper[4881]: I0126 12:36:30.058074 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:30 crc kubenswrapper[4881]: I0126 12:36:30.058275 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:30 crc kubenswrapper[4881]: I0126 12:36:30.058420 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:30 crc kubenswrapper[4881]: I0126 12:36:30.058595 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:30 crc kubenswrapper[4881]: I0126 12:36:30.058744 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:30Z","lastTransitionTime":"2026-01-26T12:36:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:30 crc kubenswrapper[4881]: E0126 12:36:30.073807 4881 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:36:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:36:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:36:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:36:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"69f11506-d189-4146-9efa-f9280470e789\\\",\\\"systemUUID\\\":\\\"d2cf9e5e-9cbb-41d3-9ac1-608b9dce0d9f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:30Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:30 crc kubenswrapper[4881]: I0126 12:36:30.079448 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:30 crc kubenswrapper[4881]: I0126 12:36:30.079493 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 12:36:30 crc kubenswrapper[4881]: I0126 12:36:30.079505 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:30 crc kubenswrapper[4881]: I0126 12:36:30.079534 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:30 crc kubenswrapper[4881]: I0126 12:36:30.079544 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:30Z","lastTransitionTime":"2026-01-26T12:36:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:30 crc kubenswrapper[4881]: I0126 12:36:30.081804 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 12:36:30 crc kubenswrapper[4881]: E0126 12:36:30.082178 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 12:36:30 crc kubenswrapper[4881]: I0126 12:36:30.081874 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:36:30 crc kubenswrapper[4881]: I0126 12:36:30.081816 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 12:36:30 crc kubenswrapper[4881]: E0126 12:36:30.082900 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 12:36:30 crc kubenswrapper[4881]: E0126 12:36:30.083077 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 12:36:30 crc kubenswrapper[4881]: E0126 12:36:30.098268 4881 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:36:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:36:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:36:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:36:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"69f11506-d189-4146-9efa-f9280470e789\\\",\\\"systemUUID\\\":\\\"d2cf9e5e-9cbb-41d3-9ac1-608b9dce0d9f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:30Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:30 crc kubenswrapper[4881]: I0126 12:36:30.101838 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:30 crc kubenswrapper[4881]: I0126 12:36:30.101897 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 12:36:30 crc kubenswrapper[4881]: I0126 12:36:30.101920 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:30 crc kubenswrapper[4881]: I0126 12:36:30.101950 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:30 crc kubenswrapper[4881]: I0126 12:36:30.102014 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:30Z","lastTransitionTime":"2026-01-26T12:36:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:30 crc kubenswrapper[4881]: E0126 12:36:30.123057 4881 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:36:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:36:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:36:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:36:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"69f11506-d189-4146-9efa-f9280470e789\\\",\\\"systemUUID\\\":\\\"d2cf9e5e-9cbb-41d3-9ac1-608b9dce0d9f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:30Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:30 crc kubenswrapper[4881]: I0126 12:36:30.128223 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:30 crc kubenswrapper[4881]: I0126 12:36:30.128296 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 12:36:30 crc kubenswrapper[4881]: I0126 12:36:30.128324 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:30 crc kubenswrapper[4881]: I0126 12:36:30.128352 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:30 crc kubenswrapper[4881]: I0126 12:36:30.128374 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:30Z","lastTransitionTime":"2026-01-26T12:36:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:30 crc kubenswrapper[4881]: E0126 12:36:30.150780 4881 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:36:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:36:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:36:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:36:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"69f11506-d189-4146-9efa-f9280470e789\\\",\\\"systemUUID\\\":\\\"d2cf9e5e-9cbb-41d3-9ac1-608b9dce0d9f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:30Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:30 crc kubenswrapper[4881]: I0126 12:36:30.156558 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:30 crc kubenswrapper[4881]: I0126 12:36:30.156796 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 12:36:30 crc kubenswrapper[4881]: I0126 12:36:30.156955 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:30 crc kubenswrapper[4881]: I0126 12:36:30.157138 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:30 crc kubenswrapper[4881]: I0126 12:36:30.157341 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:30Z","lastTransitionTime":"2026-01-26T12:36:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:30 crc kubenswrapper[4881]: E0126 12:36:30.177173 4881 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:36:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:36:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:36:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:36:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"69f11506-d189-4146-9efa-f9280470e789\\\",\\\"systemUUID\\\":\\\"d2cf9e5e-9cbb-41d3-9ac1-608b9dce0d9f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:30Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:30 crc kubenswrapper[4881]: E0126 12:36:30.177380 4881 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 12:36:30 crc kubenswrapper[4881]: I0126 12:36:30.179593 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 26 12:36:30 crc kubenswrapper[4881]: I0126 12:36:30.179646 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:30 crc kubenswrapper[4881]: I0126 12:36:30.179669 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:30 crc kubenswrapper[4881]: I0126 12:36:30.179698 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:30 crc kubenswrapper[4881]: I0126 12:36:30.179723 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:30Z","lastTransitionTime":"2026-01-26T12:36:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:30 crc kubenswrapper[4881]: I0126 12:36:30.283421 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:30 crc kubenswrapper[4881]: I0126 12:36:30.283482 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:30 crc kubenswrapper[4881]: I0126 12:36:30.283505 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:30 crc kubenswrapper[4881]: I0126 12:36:30.283579 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:30 crc kubenswrapper[4881]: I0126 12:36:30.283602 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:30Z","lastTransitionTime":"2026-01-26T12:36:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:30 crc kubenswrapper[4881]: I0126 12:36:30.387000 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:30 crc kubenswrapper[4881]: I0126 12:36:30.387040 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:30 crc kubenswrapper[4881]: I0126 12:36:30.387052 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:30 crc kubenswrapper[4881]: I0126 12:36:30.387068 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:30 crc kubenswrapper[4881]: I0126 12:36:30.387079 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:30Z","lastTransitionTime":"2026-01-26T12:36:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:30 crc kubenswrapper[4881]: I0126 12:36:30.490031 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:30 crc kubenswrapper[4881]: I0126 12:36:30.490087 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:30 crc kubenswrapper[4881]: I0126 12:36:30.490098 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:30 crc kubenswrapper[4881]: I0126 12:36:30.490121 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:30 crc kubenswrapper[4881]: I0126 12:36:30.490136 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:30Z","lastTransitionTime":"2026-01-26T12:36:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:30 crc kubenswrapper[4881]: I0126 12:36:30.593324 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:30 crc kubenswrapper[4881]: I0126 12:36:30.593713 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:30 crc kubenswrapper[4881]: I0126 12:36:30.593901 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:30 crc kubenswrapper[4881]: I0126 12:36:30.594057 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:30 crc kubenswrapper[4881]: I0126 12:36:30.594207 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:30Z","lastTransitionTime":"2026-01-26T12:36:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:30 crc kubenswrapper[4881]: I0126 12:36:30.697313 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:30 crc kubenswrapper[4881]: I0126 12:36:30.697357 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:30 crc kubenswrapper[4881]: I0126 12:36:30.697366 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:30 crc kubenswrapper[4881]: I0126 12:36:30.697382 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:30 crc kubenswrapper[4881]: I0126 12:36:30.697392 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:30Z","lastTransitionTime":"2026-01-26T12:36:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:30 crc kubenswrapper[4881]: I0126 12:36:30.801031 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:30 crc kubenswrapper[4881]: I0126 12:36:30.801086 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:30 crc kubenswrapper[4881]: I0126 12:36:30.801104 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:30 crc kubenswrapper[4881]: I0126 12:36:30.801126 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:30 crc kubenswrapper[4881]: I0126 12:36:30.801144 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:30Z","lastTransitionTime":"2026-01-26T12:36:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:30 crc kubenswrapper[4881]: I0126 12:36:30.907837 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:30 crc kubenswrapper[4881]: I0126 12:36:30.908374 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:30 crc kubenswrapper[4881]: I0126 12:36:30.909072 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:30 crc kubenswrapper[4881]: I0126 12:36:30.909182 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:30 crc kubenswrapper[4881]: I0126 12:36:30.909258 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:30Z","lastTransitionTime":"2026-01-26T12:36:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:31 crc kubenswrapper[4881]: I0126 12:36:31.011818 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:31 crc kubenswrapper[4881]: I0126 12:36:31.011902 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:31 crc kubenswrapper[4881]: I0126 12:36:31.011925 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:31 crc kubenswrapper[4881]: I0126 12:36:31.011958 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:31 crc kubenswrapper[4881]: I0126 12:36:31.011980 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:31Z","lastTransitionTime":"2026-01-26T12:36:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:31 crc kubenswrapper[4881]: I0126 12:36:31.057575 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 15:03:09.314916 +0000 UTC Jan 26 12:36:31 crc kubenswrapper[4881]: I0126 12:36:31.081811 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5zct6" Jan 26 12:36:31 crc kubenswrapper[4881]: E0126 12:36:31.081937 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5zct6" podUID="640554c2-37e2-425f-b182-aa9b9d6fa4d8" Jan 26 12:36:31 crc kubenswrapper[4881]: I0126 12:36:31.114933 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:31 crc kubenswrapper[4881]: I0126 12:36:31.115021 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:31 crc kubenswrapper[4881]: I0126 12:36:31.115046 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:31 crc kubenswrapper[4881]: I0126 12:36:31.115075 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:31 crc kubenswrapper[4881]: I0126 12:36:31.115096 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:31Z","lastTransitionTime":"2026-01-26T12:36:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:31 crc kubenswrapper[4881]: I0126 12:36:31.218200 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:31 crc kubenswrapper[4881]: I0126 12:36:31.218241 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:31 crc kubenswrapper[4881]: I0126 12:36:31.218252 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:31 crc kubenswrapper[4881]: I0126 12:36:31.218267 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:31 crc kubenswrapper[4881]: I0126 12:36:31.218279 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:31Z","lastTransitionTime":"2026-01-26T12:36:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:31 crc kubenswrapper[4881]: I0126 12:36:31.320604 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:31 crc kubenswrapper[4881]: I0126 12:36:31.320644 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:31 crc kubenswrapper[4881]: I0126 12:36:31.320653 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:31 crc kubenswrapper[4881]: I0126 12:36:31.320668 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:31 crc kubenswrapper[4881]: I0126 12:36:31.320678 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:31Z","lastTransitionTime":"2026-01-26T12:36:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:31 crc kubenswrapper[4881]: I0126 12:36:31.427249 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:31 crc kubenswrapper[4881]: I0126 12:36:31.427309 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:31 crc kubenswrapper[4881]: I0126 12:36:31.427331 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:31 crc kubenswrapper[4881]: I0126 12:36:31.427359 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:31 crc kubenswrapper[4881]: I0126 12:36:31.427382 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:31Z","lastTransitionTime":"2026-01-26T12:36:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:31 crc kubenswrapper[4881]: I0126 12:36:31.530483 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:31 crc kubenswrapper[4881]: I0126 12:36:31.530570 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:31 crc kubenswrapper[4881]: I0126 12:36:31.530588 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:31 crc kubenswrapper[4881]: I0126 12:36:31.530614 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:31 crc kubenswrapper[4881]: I0126 12:36:31.530630 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:31Z","lastTransitionTime":"2026-01-26T12:36:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:31 crc kubenswrapper[4881]: I0126 12:36:31.633208 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:31 crc kubenswrapper[4881]: I0126 12:36:31.633280 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:31 crc kubenswrapper[4881]: I0126 12:36:31.633302 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:31 crc kubenswrapper[4881]: I0126 12:36:31.633332 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:31 crc kubenswrapper[4881]: I0126 12:36:31.633355 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:31Z","lastTransitionTime":"2026-01-26T12:36:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:31 crc kubenswrapper[4881]: I0126 12:36:31.736170 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:31 crc kubenswrapper[4881]: I0126 12:36:31.736231 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:31 crc kubenswrapper[4881]: I0126 12:36:31.736252 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:31 crc kubenswrapper[4881]: I0126 12:36:31.736277 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:31 crc kubenswrapper[4881]: I0126 12:36:31.736295 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:31Z","lastTransitionTime":"2026-01-26T12:36:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:31 crc kubenswrapper[4881]: I0126 12:36:31.838640 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:31 crc kubenswrapper[4881]: I0126 12:36:31.838704 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:31 crc kubenswrapper[4881]: I0126 12:36:31.838729 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:31 crc kubenswrapper[4881]: I0126 12:36:31.838748 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:31 crc kubenswrapper[4881]: I0126 12:36:31.838759 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:31Z","lastTransitionTime":"2026-01-26T12:36:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:31 crc kubenswrapper[4881]: I0126 12:36:31.941350 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:31 crc kubenswrapper[4881]: I0126 12:36:31.941427 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:31 crc kubenswrapper[4881]: I0126 12:36:31.941444 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:31 crc kubenswrapper[4881]: I0126 12:36:31.941469 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:31 crc kubenswrapper[4881]: I0126 12:36:31.941483 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:31Z","lastTransitionTime":"2026-01-26T12:36:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:32 crc kubenswrapper[4881]: I0126 12:36:32.043831 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:32 crc kubenswrapper[4881]: I0126 12:36:32.043871 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:32 crc kubenswrapper[4881]: I0126 12:36:32.043880 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:32 crc kubenswrapper[4881]: I0126 12:36:32.043894 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:32 crc kubenswrapper[4881]: I0126 12:36:32.043905 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:32Z","lastTransitionTime":"2026-01-26T12:36:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:32 crc kubenswrapper[4881]: I0126 12:36:32.058401 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 00:51:50.680499332 +0000 UTC Jan 26 12:36:32 crc kubenswrapper[4881]: I0126 12:36:32.081948 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 12:36:32 crc kubenswrapper[4881]: I0126 12:36:32.082029 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 12:36:32 crc kubenswrapper[4881]: I0126 12:36:32.082082 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:36:32 crc kubenswrapper[4881]: E0126 12:36:32.082154 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 12:36:32 crc kubenswrapper[4881]: E0126 12:36:32.082445 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 12:36:32 crc kubenswrapper[4881]: E0126 12:36:32.082669 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 12:36:32 crc kubenswrapper[4881]: I0126 12:36:32.146384 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:32 crc kubenswrapper[4881]: I0126 12:36:32.146427 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:32 crc kubenswrapper[4881]: I0126 12:36:32.146439 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:32 crc kubenswrapper[4881]: I0126 12:36:32.146456 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:32 crc kubenswrapper[4881]: I0126 12:36:32.146469 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:32Z","lastTransitionTime":"2026-01-26T12:36:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:32 crc kubenswrapper[4881]: I0126 12:36:32.249989 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:32 crc kubenswrapper[4881]: I0126 12:36:32.250054 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:32 crc kubenswrapper[4881]: I0126 12:36:32.250076 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:32 crc kubenswrapper[4881]: I0126 12:36:32.250106 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:32 crc kubenswrapper[4881]: I0126 12:36:32.250128 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:32Z","lastTransitionTime":"2026-01-26T12:36:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:32 crc kubenswrapper[4881]: I0126 12:36:32.353086 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:32 crc kubenswrapper[4881]: I0126 12:36:32.353120 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:32 crc kubenswrapper[4881]: I0126 12:36:32.353130 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:32 crc kubenswrapper[4881]: I0126 12:36:32.353143 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:32 crc kubenswrapper[4881]: I0126 12:36:32.353153 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:32Z","lastTransitionTime":"2026-01-26T12:36:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:32 crc kubenswrapper[4881]: I0126 12:36:32.455812 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:32 crc kubenswrapper[4881]: I0126 12:36:32.455859 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:32 crc kubenswrapper[4881]: I0126 12:36:32.455872 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:32 crc kubenswrapper[4881]: I0126 12:36:32.455891 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:32 crc kubenswrapper[4881]: I0126 12:36:32.455903 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:32Z","lastTransitionTime":"2026-01-26T12:36:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:32 crc kubenswrapper[4881]: I0126 12:36:32.558329 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:32 crc kubenswrapper[4881]: I0126 12:36:32.558414 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:32 crc kubenswrapper[4881]: I0126 12:36:32.558424 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:32 crc kubenswrapper[4881]: I0126 12:36:32.558440 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:32 crc kubenswrapper[4881]: I0126 12:36:32.558452 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:32Z","lastTransitionTime":"2026-01-26T12:36:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:32 crc kubenswrapper[4881]: I0126 12:36:32.660914 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:32 crc kubenswrapper[4881]: I0126 12:36:32.660949 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:32 crc kubenswrapper[4881]: I0126 12:36:32.660958 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:32 crc kubenswrapper[4881]: I0126 12:36:32.660972 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:32 crc kubenswrapper[4881]: I0126 12:36:32.660981 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:32Z","lastTransitionTime":"2026-01-26T12:36:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:32 crc kubenswrapper[4881]: I0126 12:36:32.763928 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:32 crc kubenswrapper[4881]: I0126 12:36:32.764194 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:32 crc kubenswrapper[4881]: I0126 12:36:32.764264 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:32 crc kubenswrapper[4881]: I0126 12:36:32.764410 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:32 crc kubenswrapper[4881]: I0126 12:36:32.764475 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:32Z","lastTransitionTime":"2026-01-26T12:36:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:32 crc kubenswrapper[4881]: I0126 12:36:32.866273 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:32 crc kubenswrapper[4881]: I0126 12:36:32.866312 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:32 crc kubenswrapper[4881]: I0126 12:36:32.866320 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:32 crc kubenswrapper[4881]: I0126 12:36:32.866334 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:32 crc kubenswrapper[4881]: I0126 12:36:32.866344 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:32Z","lastTransitionTime":"2026-01-26T12:36:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:32 crc kubenswrapper[4881]: I0126 12:36:32.969001 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:32 crc kubenswrapper[4881]: I0126 12:36:32.969555 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:32 crc kubenswrapper[4881]: I0126 12:36:32.969654 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:32 crc kubenswrapper[4881]: I0126 12:36:32.969722 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:32 crc kubenswrapper[4881]: I0126 12:36:32.969807 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:32Z","lastTransitionTime":"2026-01-26T12:36:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:33 crc kubenswrapper[4881]: I0126 12:36:33.059050 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 00:35:14.140729793 +0000 UTC Jan 26 12:36:33 crc kubenswrapper[4881]: I0126 12:36:33.072130 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:33 crc kubenswrapper[4881]: I0126 12:36:33.072256 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:33 crc kubenswrapper[4881]: I0126 12:36:33.072321 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:33 crc kubenswrapper[4881]: I0126 12:36:33.072395 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:33 crc kubenswrapper[4881]: I0126 12:36:33.072459 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:33Z","lastTransitionTime":"2026-01-26T12:36:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:33 crc kubenswrapper[4881]: I0126 12:36:33.081391 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5zct6" Jan 26 12:36:33 crc kubenswrapper[4881]: E0126 12:36:33.081584 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5zct6" podUID="640554c2-37e2-425f-b182-aa9b9d6fa4d8" Jan 26 12:36:33 crc kubenswrapper[4881]: I0126 12:36:33.174986 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:33 crc kubenswrapper[4881]: I0126 12:36:33.175010 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:33 crc kubenswrapper[4881]: I0126 12:36:33.175018 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:33 crc kubenswrapper[4881]: I0126 12:36:33.175032 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:33 crc kubenswrapper[4881]: I0126 12:36:33.175040 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:33Z","lastTransitionTime":"2026-01-26T12:36:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:33 crc kubenswrapper[4881]: I0126 12:36:33.277297 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:33 crc kubenswrapper[4881]: I0126 12:36:33.277359 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:33 crc kubenswrapper[4881]: I0126 12:36:33.277381 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:33 crc kubenswrapper[4881]: I0126 12:36:33.277412 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:33 crc kubenswrapper[4881]: I0126 12:36:33.277434 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:33Z","lastTransitionTime":"2026-01-26T12:36:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:33 crc kubenswrapper[4881]: I0126 12:36:33.380471 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:33 crc kubenswrapper[4881]: I0126 12:36:33.380564 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:33 crc kubenswrapper[4881]: I0126 12:36:33.380586 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:33 crc kubenswrapper[4881]: I0126 12:36:33.380613 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:33 crc kubenswrapper[4881]: I0126 12:36:33.380630 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:33Z","lastTransitionTime":"2026-01-26T12:36:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:33 crc kubenswrapper[4881]: I0126 12:36:33.483125 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:33 crc kubenswrapper[4881]: I0126 12:36:33.483438 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:33 crc kubenswrapper[4881]: I0126 12:36:33.483576 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:33 crc kubenswrapper[4881]: I0126 12:36:33.483687 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:33 crc kubenswrapper[4881]: I0126 12:36:33.483776 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:33Z","lastTransitionTime":"2026-01-26T12:36:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:33 crc kubenswrapper[4881]: I0126 12:36:33.586278 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:33 crc kubenswrapper[4881]: I0126 12:36:33.586328 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:33 crc kubenswrapper[4881]: I0126 12:36:33.586342 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:33 crc kubenswrapper[4881]: I0126 12:36:33.586360 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:33 crc kubenswrapper[4881]: I0126 12:36:33.586371 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:33Z","lastTransitionTime":"2026-01-26T12:36:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:33 crc kubenswrapper[4881]: I0126 12:36:33.689467 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:33 crc kubenswrapper[4881]: I0126 12:36:33.689601 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:33 crc kubenswrapper[4881]: I0126 12:36:33.689628 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:33 crc kubenswrapper[4881]: I0126 12:36:33.689661 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:33 crc kubenswrapper[4881]: I0126 12:36:33.689684 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:33Z","lastTransitionTime":"2026-01-26T12:36:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:33 crc kubenswrapper[4881]: I0126 12:36:33.792557 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:33 crc kubenswrapper[4881]: I0126 12:36:33.792605 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:33 crc kubenswrapper[4881]: I0126 12:36:33.792619 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:33 crc kubenswrapper[4881]: I0126 12:36:33.792639 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:33 crc kubenswrapper[4881]: I0126 12:36:33.792651 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:33Z","lastTransitionTime":"2026-01-26T12:36:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:33 crc kubenswrapper[4881]: I0126 12:36:33.894985 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:33 crc kubenswrapper[4881]: I0126 12:36:33.895027 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:33 crc kubenswrapper[4881]: I0126 12:36:33.895035 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:33 crc kubenswrapper[4881]: I0126 12:36:33.895051 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:33 crc kubenswrapper[4881]: I0126 12:36:33.895061 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:33Z","lastTransitionTime":"2026-01-26T12:36:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:33 crc kubenswrapper[4881]: I0126 12:36:33.997540 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:33 crc kubenswrapper[4881]: I0126 12:36:33.997578 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:33 crc kubenswrapper[4881]: I0126 12:36:33.997591 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:33 crc kubenswrapper[4881]: I0126 12:36:33.997606 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:33 crc kubenswrapper[4881]: I0126 12:36:33.997616 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:33Z","lastTransitionTime":"2026-01-26T12:36:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:34 crc kubenswrapper[4881]: I0126 12:36:34.059899 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 16:52:22.956941568 +0000 UTC Jan 26 12:36:34 crc kubenswrapper[4881]: I0126 12:36:34.081556 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 12:36:34 crc kubenswrapper[4881]: I0126 12:36:34.081632 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 12:36:34 crc kubenswrapper[4881]: I0126 12:36:34.081652 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:36:34 crc kubenswrapper[4881]: E0126 12:36:34.081719 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 12:36:34 crc kubenswrapper[4881]: E0126 12:36:34.081818 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 12:36:34 crc kubenswrapper[4881]: E0126 12:36:34.081926 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 12:36:34 crc kubenswrapper[4881]: I0126 12:36:34.100442 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:34 crc kubenswrapper[4881]: I0126 12:36:34.100491 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:34 crc kubenswrapper[4881]: I0126 12:36:34.100509 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:34 crc kubenswrapper[4881]: I0126 12:36:34.100557 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:34 crc kubenswrapper[4881]: I0126 12:36:34.100573 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:34Z","lastTransitionTime":"2026-01-26T12:36:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:34 crc kubenswrapper[4881]: I0126 12:36:34.202909 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:34 crc kubenswrapper[4881]: I0126 12:36:34.202973 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:34 crc kubenswrapper[4881]: I0126 12:36:34.202991 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:34 crc kubenswrapper[4881]: I0126 12:36:34.203015 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:34 crc kubenswrapper[4881]: I0126 12:36:34.203034 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:34Z","lastTransitionTime":"2026-01-26T12:36:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:34 crc kubenswrapper[4881]: I0126 12:36:34.306500 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:34 crc kubenswrapper[4881]: I0126 12:36:34.306625 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:34 crc kubenswrapper[4881]: I0126 12:36:34.306648 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:34 crc kubenswrapper[4881]: I0126 12:36:34.306680 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:34 crc kubenswrapper[4881]: I0126 12:36:34.306702 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:34Z","lastTransitionTime":"2026-01-26T12:36:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:34 crc kubenswrapper[4881]: I0126 12:36:34.410213 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:34 crc kubenswrapper[4881]: I0126 12:36:34.410272 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:34 crc kubenswrapper[4881]: I0126 12:36:34.410289 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:34 crc kubenswrapper[4881]: I0126 12:36:34.410313 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:34 crc kubenswrapper[4881]: I0126 12:36:34.410328 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:34Z","lastTransitionTime":"2026-01-26T12:36:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:34 crc kubenswrapper[4881]: I0126 12:36:34.513757 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:34 crc kubenswrapper[4881]: I0126 12:36:34.513842 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:34 crc kubenswrapper[4881]: I0126 12:36:34.513862 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:34 crc kubenswrapper[4881]: I0126 12:36:34.513889 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:34 crc kubenswrapper[4881]: I0126 12:36:34.513908 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:34Z","lastTransitionTime":"2026-01-26T12:36:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:34 crc kubenswrapper[4881]: I0126 12:36:34.616207 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:34 crc kubenswrapper[4881]: I0126 12:36:34.616257 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:34 crc kubenswrapper[4881]: I0126 12:36:34.616269 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:34 crc kubenswrapper[4881]: I0126 12:36:34.616288 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:34 crc kubenswrapper[4881]: I0126 12:36:34.616301 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:34Z","lastTransitionTime":"2026-01-26T12:36:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:34 crc kubenswrapper[4881]: I0126 12:36:34.718997 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:34 crc kubenswrapper[4881]: I0126 12:36:34.719050 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:34 crc kubenswrapper[4881]: I0126 12:36:34.719067 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:34 crc kubenswrapper[4881]: I0126 12:36:34.719091 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:34 crc kubenswrapper[4881]: I0126 12:36:34.719107 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:34Z","lastTransitionTime":"2026-01-26T12:36:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:34 crc kubenswrapper[4881]: I0126 12:36:34.821968 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:34 crc kubenswrapper[4881]: I0126 12:36:34.822034 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:34 crc kubenswrapper[4881]: I0126 12:36:34.822051 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:34 crc kubenswrapper[4881]: I0126 12:36:34.822077 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:34 crc kubenswrapper[4881]: I0126 12:36:34.822095 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:34Z","lastTransitionTime":"2026-01-26T12:36:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:34 crc kubenswrapper[4881]: I0126 12:36:34.924323 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:34 crc kubenswrapper[4881]: I0126 12:36:34.924371 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:34 crc kubenswrapper[4881]: I0126 12:36:34.924382 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:34 crc kubenswrapper[4881]: I0126 12:36:34.924398 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:34 crc kubenswrapper[4881]: I0126 12:36:34.924428 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:34Z","lastTransitionTime":"2026-01-26T12:36:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:35 crc kubenswrapper[4881]: I0126 12:36:35.026682 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:35 crc kubenswrapper[4881]: I0126 12:36:35.026717 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:35 crc kubenswrapper[4881]: I0126 12:36:35.026724 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:35 crc kubenswrapper[4881]: I0126 12:36:35.026737 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:35 crc kubenswrapper[4881]: I0126 12:36:35.026746 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:35Z","lastTransitionTime":"2026-01-26T12:36:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:35 crc kubenswrapper[4881]: E0126 12:36:35.027532 4881 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 12:36:35 crc kubenswrapper[4881]: E0126 12:36:35.027589 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/640554c2-37e2-425f-b182-aa9b9d6fa4d8-metrics-certs podName:640554c2-37e2-425f-b182-aa9b9d6fa4d8 nodeName:}" failed. No retries permitted until 2026-01-26 12:37:07.027572071 +0000 UTC m=+99.506882097 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/640554c2-37e2-425f-b182-aa9b9d6fa4d8-metrics-certs") pod "network-metrics-daemon-5zct6" (UID: "640554c2-37e2-425f-b182-aa9b9d6fa4d8") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 12:36:35 crc kubenswrapper[4881]: I0126 12:36:35.027495 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/640554c2-37e2-425f-b182-aa9b9d6fa4d8-metrics-certs\") pod \"network-metrics-daemon-5zct6\" (UID: \"640554c2-37e2-425f-b182-aa9b9d6fa4d8\") " pod="openshift-multus/network-metrics-daemon-5zct6" Jan 26 12:36:35 crc kubenswrapper[4881]: I0126 12:36:35.060531 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 12:15:30.405428134 +0000 UTC Jan 26 12:36:35 crc kubenswrapper[4881]: I0126 12:36:35.081887 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5zct6" Jan 26 12:36:35 crc kubenswrapper[4881]: E0126 12:36:35.082202 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5zct6" podUID="640554c2-37e2-425f-b182-aa9b9d6fa4d8" Jan 26 12:36:35 crc kubenswrapper[4881]: I0126 12:36:35.083328 4881 scope.go:117] "RemoveContainer" containerID="e8de259073576fa26bb1c649ceca0c0716e69ee10b38051b1ffe0b7aa095bc13" Jan 26 12:36:35 crc kubenswrapper[4881]: E0126 12:36:35.083793 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-kbjm9_openshift-ovn-kubernetes(d272c950-9665-4b60-98a2-20c18d02d5a2)\"" pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" podUID="d272c950-9665-4b60-98a2-20c18d02d5a2" Jan 26 12:36:35 crc kubenswrapper[4881]: I0126 12:36:35.129244 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:35 crc kubenswrapper[4881]: I0126 12:36:35.129294 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:35 crc kubenswrapper[4881]: I0126 12:36:35.129303 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:35 crc kubenswrapper[4881]: I0126 12:36:35.129318 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:35 crc kubenswrapper[4881]: I0126 12:36:35.129328 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:35Z","lastTransitionTime":"2026-01-26T12:36:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:35 crc kubenswrapper[4881]: I0126 12:36:35.231948 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:35 crc kubenswrapper[4881]: I0126 12:36:35.232027 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:35 crc kubenswrapper[4881]: I0126 12:36:35.232051 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:35 crc kubenswrapper[4881]: I0126 12:36:35.232075 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:35 crc kubenswrapper[4881]: I0126 12:36:35.232094 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:35Z","lastTransitionTime":"2026-01-26T12:36:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:35 crc kubenswrapper[4881]: I0126 12:36:35.334917 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:35 crc kubenswrapper[4881]: I0126 12:36:35.334966 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:35 crc kubenswrapper[4881]: I0126 12:36:35.334978 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:35 crc kubenswrapper[4881]: I0126 12:36:35.334995 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:35 crc kubenswrapper[4881]: I0126 12:36:35.335011 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:35Z","lastTransitionTime":"2026-01-26T12:36:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:35 crc kubenswrapper[4881]: I0126 12:36:35.437326 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:35 crc kubenswrapper[4881]: I0126 12:36:35.437386 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:35 crc kubenswrapper[4881]: I0126 12:36:35.437398 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:35 crc kubenswrapper[4881]: I0126 12:36:35.437423 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:35 crc kubenswrapper[4881]: I0126 12:36:35.437434 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:35Z","lastTransitionTime":"2026-01-26T12:36:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:35 crc kubenswrapper[4881]: I0126 12:36:35.503616 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-csrkv_d24cc7d2-c2db-45ee-b405-fa56157f807c/kube-multus/0.log" Jan 26 12:36:35 crc kubenswrapper[4881]: I0126 12:36:35.503685 4881 generic.go:334] "Generic (PLEG): container finished" podID="d24cc7d2-c2db-45ee-b405-fa56157f807c" containerID="e19b81750777bd4a9bb8f7a14d883f0ceb5aa0fc4f1a0798c98616a7fc0cef81" exitCode=1 Jan 26 12:36:35 crc kubenswrapper[4881]: I0126 12:36:35.503733 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-csrkv" event={"ID":"d24cc7d2-c2db-45ee-b405-fa56157f807c","Type":"ContainerDied","Data":"e19b81750777bd4a9bb8f7a14d883f0ceb5aa0fc4f1a0798c98616a7fc0cef81"} Jan 26 12:36:35 crc kubenswrapper[4881]: I0126 12:36:35.504322 4881 scope.go:117] "RemoveContainer" containerID="e19b81750777bd4a9bb8f7a14d883f0ceb5aa0fc4f1a0798c98616a7fc0cef81" Jan 26 12:36:35 crc kubenswrapper[4881]: I0126 12:36:35.526845 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8116d483190921916638bd630fb718840fdc36eb9522aecb7ef0e9bfebe9bb7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:35Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:35 crc kubenswrapper[4881]: I0126 12:36:35.542543 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:35 crc kubenswrapper[4881]: I0126 
12:36:35.542601 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:35 crc kubenswrapper[4881]: I0126 12:36:35.542618 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:35 crc kubenswrapper[4881]: I0126 12:36:35.542642 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:35 crc kubenswrapper[4881]: I0126 12:36:35.542654 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:35Z","lastTransitionTime":"2026-01-26T12:36:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:35 crc kubenswrapper[4881]: I0126 12:36:35.548351 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:35Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:35 crc kubenswrapper[4881]: I0126 12:36:35.567376 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee20042b-95d1-49f2-abea-e339efdb096f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce7427a811572a8f3810d59e072888d97a8f15e2c493297a25418b85e1f9f16d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86d50d955d4fa8b487f6900f78738321e21226e13b7b12c100f0dd029de2fde1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\
\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8138683a9408b1a7542f9f29a3fd0b83a06f10b000c685d9d83b72b23ba4c55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94d3f2c93382cde3d932117276753f004c58e6e52bce611119b79d9e76eb7654\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:35Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:35 crc kubenswrapper[4881]: I0126 12:36:35.590158 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b6f7953-daf2-4ba4-8c10-7de3f4ea07a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4f54285b7646c2fa5da34439d2812a5669f168356446852b9bff249107a7c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79ddc5e5c3c4b0c18219fe6a8297ff2b1646b9621823f529972781f915d81c5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://533f45172a004ad2bd598849b711b38e823485188ff4163dd0dbc5ae5a28cb24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f7c8799e63e6072f9dc0580f3efbfe330a9f70
5a689b2785c60f64391881b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c525c27678ba77145459086b6d3d2fee5d471c7393f34215e9ae46d9d72cf4e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://667695541b7d707a83aa0080723518aeae689b140791f0121a4ce6d485f39138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://667695541b7d707a83aa0080723518aeae689b140791f0121a4ce6d485f39138\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab83101bb56dc3eaf3a0514f60da45b0c5304a491184fe71584d7ed72bc6e59c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab83101bb56dc3eaf3a0514f60da45b0c5304a491184fe71584d7ed72bc6e59c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e52793db25051b2c8eec3dbeaf0547ec04ec438ce426f3dfcd2383792f846644\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e52793db25051b2c8eec3dbeaf0547ec04ec438ce426f3dfcd2383792f846644\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:35Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:35 crc kubenswrapper[4881]: I0126 12:36:35.620667 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9aa1877-c239-4157-938d-e5c85ff3e76a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e461100b4aeab2a6e88c5c3aeb0802664d0d647247bc73b107be7c6bc4cfc9cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cbb88ac26b9c279bef436acf1a1d1e4
31e752f2a1ee765a7541082fa8f6c99e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff94d70ea9ad07ebc50c17d2a14b318e9b553bf1ded1e15429e9b318903d4d66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47b841bd998f47d2aa05f556a43ac6f14f630207aff7305cd0440f0b0642cc49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1728a0431ccafdf8aa83b6956348a8a6a41dd94e80f65cb469ef09a86078154\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 12:35:40.710806 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 12:35:40.716794 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1303051152/tls.crt::/tmp/serving-cert-1303051152/tls.key\\\\\\\"\\\\nI0126 12:35:46.328409 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 12:35:46.330270 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 12:35:46.330287 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 12:35:46.330308 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 12:35:46.330314 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 12:35:46.335670 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 12:35:46.335689 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 12:35:46.335693 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 12:35:46.335697 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 12:35:46.335700 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 12:35:46.335702 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 12:35:46.335705 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 12:35:46.335779 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 12:35:46.339578 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3de2d8eff73741321cce91c7bfdcab04498cfdb4694e713750466bb2d8b95e5d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f09815b7bc1e9f3afdee67bb736d35ea1c23567f631bc5fd4fcab013ab191647\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f09815b7bc1e9f3afdee67bb736d35ea1c23567f631bc5fd4fcab013ab191647\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:35Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:35 crc kubenswrapper[4881]: I0126 12:36:35.633873 4881 status_manager.go:875] "Failed 
to update status for pod" pod="openshift-multus/multus-csrkv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d24cc7d2-c2db-45ee-b405-fa56157f807c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e19b81750777bd4a9bb8f7a14d883f0ceb5aa0fc4f1a0798c98616a7fc0cef81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e19b81750777bd4a9bb8f7a14d883f0ceb5aa0fc4f1a0798c98616a7fc0cef81\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T12:36:35Z\\\",\\\"message\\\":\\\"2026-01-26T12:35:49+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_9e8d50ce-7902-42ed-a950-c3be80066f4e\\\\n2026-01-26T12:35:49+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_9e8d50ce-7902-42ed-a950-c3be80066f4e to /host/opt/cni/bin/\\\\n2026-01-26T12:35:49Z [verbose] multus-daemon started\\\\n2026-01-26T12:35:49Z [verbose] Readiness Indicator file check\\\\n2026-01-26T12:36:34Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x7bhq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-csrkv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:35Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:35 crc kubenswrapper[4881]: I0126 12:36:35.645983 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:35 crc kubenswrapper[4881]: I0126 12:36:35.646025 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:35 crc kubenswrapper[4881]: I0126 12:36:35.646039 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:35 crc kubenswrapper[4881]: I0126 12:36:35.646061 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:35 crc kubenswrapper[4881]: I0126 12:36:35.646071 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:35Z","lastTransitionTime":"2026-01-26T12:36:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:35 crc kubenswrapper[4881]: I0126 12:36:35.665393 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d272c950-9665-4b60-98a2-20c18d02d5a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://525d6dcbf02a6ff1670c1cfff7ea9769a3e6e99d4cbc9f67b4ab9a4838f85355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54beaa25a2174c9f18bf5be7fba4c314e374e2ea05eaee1c5b62dfd7351138f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://11ae8e0538c02d437ba2c7a5f53004fb5c5b7bf6c232f75b6bd737d2ec4f6a24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4526fe39791d5692af1a573249bcc2f73f2ede55766ea580867dbb009d247ddf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e87c69a0fa95e071cd3453794764480d9ed943e2d6dbe7d08e03e711be8de2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fd4c549d12c308cae9727fdb35eec136e95b94990bfca2e3c2baf3cf5724ae3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8de259073576fa26bb1c649ceca0c0716e69ee10b38051b1ffe0b7aa095bc13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8de259073576fa26bb1c649ceca0c0716e69ee10b38051b1ffe0b7aa095bc13\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T12:36:20Z\\\",\\\"message\\\":\\\"lace/marketplace-operator-metrics]} name:Service_openshift-marketplace/marketplace-operator-metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.53:8081: 10.217.5.53:8383:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {89fe421e-04e8-4967-ac75-77a0e6f784ef}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 12:36:20.107557 6521 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-controller-manager/controller-manager]} name:Service_openshift-controller-manager/controller-manager_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.149:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {cab7c637-a021-4a4d-a4b9-06d63c44316f}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0126 12:36:20.104835 6521 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T12:36:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-kbjm9_openshift-ovn-kubernetes(d272c950-9665-4b60-98a2-20c18d02d5a2)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9aaca9d54edc83ad154b9cc876c0f2990f7949b4966e052314e947d0cbaf2f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recurs
iveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0995c1855fbad9dc3799727d7456b17ef9cd900ed1f4241bde85718099ede875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0995c1855fbad9dc3799727d7456b17ef9cd900ed1f4241bde85718099ede875\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kbjm9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:35Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:35 crc kubenswrapper[4881]: I0126 12:36:35.680277 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5zct6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"640554c2-37e2-425f-b182-aa9b9d6fa4d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl97t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl97t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:36:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5zct6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:35Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:35 crc kubenswrapper[4881]: I0126 12:36:35.695332 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"caa549f9-799c-43a9-a2a5-315713594704\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d3b1592342c8b506c26109958ea6b4770bc41e6dde0be57603730427216aec7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46a599fa1a831e0d46ab1af51dc83b3c5ef566caa4d05629e344102fc4d852af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fa21f92dbc9916c1d8eeb219803f791ad323f1cf82e7cb6fa64fe467b585c5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b154f896826abcb669a2628d916a874072a707d60fde45aad3ae0ff16bb4e44\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b154f896826abcb669a2628d916a874072a707d60fde45aad3ae0ff16bb4e44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:28Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:35Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:35 crc kubenswrapper[4881]: I0126 12:36:35.709489 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f87073bcd7b2be8285c037c00f3112e1f1c6ed16cd4caf9a06c1a18d7e07c33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:35Z is after 2025-08-24T17:21:41Z" Jan 26 
12:36:35 crc kubenswrapper[4881]: I0126 12:36:35.723826 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f4b5v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"017c2a17-2267-4284-a0ca-d3c513aa9ff9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cea6c5f8bd80cf271df551539957d4c3e60378ff13dc806b3142e8e1453ab95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc482\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f4b5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:35Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:35 crc kubenswrapper[4881]: I0126 12:36:35.739424 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pmwpn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb5ecb63-1238-44dc-9c40-b5e5dd7d4847\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://428d01799872a00b628c81145d5fbeac8707bad0afa7563e16bc44686fd7c18d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b54e7d904bec7e9817b662ea1c59ede6589fde79754520dfb1e9fb82864e623d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b54e7d904bec7e9817b662ea1c59ede6589fde79754520dfb1e9fb82864e623d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e58ff1e13a164f50c9664f463b50cbf651ce8e5edac8cbff9a9e42626d8d39ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e58ff1e13a164f50c9664f463b50cbf651ce8e5edac8cbff9a9e42626d8d39ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fca59d7df432a78a62142cd5e47d6a1a62b2c23ff7eec49d6297aacf30b530a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fca59d7df432a78a62142cd5e47d6a1a62b2c23ff7eec49d6297aacf30b530a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d5b95e27428d9ca0382054957b1cc19e96fe8bb5622af5f7ebebebec53d2add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d5b95e27428d9ca0382054957b1cc19e96fe8bb5622af5f7ebebebec53d2add\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45799bb6d037f31b82596651214f8bdf9c13346f1d9953b965a40b55d2588edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45799bb6d037f31b82596651214f8bdf9c13346f1d9953b965a40b55d2588edd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10bacfe34f01c5de9439ac6023539e6489ec63175e54099942c2fb2fe4250bd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10bacfe34f01c5de9439ac6023539e6489ec63175e54099942c2fb2fe4250bd6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pmwpn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:35Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:35 crc kubenswrapper[4881]: I0126 12:36:35.748646 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:35 crc kubenswrapper[4881]: I0126 12:36:35.748699 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:35 crc 
kubenswrapper[4881]: I0126 12:36:35.748712 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:35 crc kubenswrapper[4881]: I0126 12:36:35.748729 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:35 crc kubenswrapper[4881]: I0126 12:36:35.748742 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:35Z","lastTransitionTime":"2026-01-26T12:36:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:35 crc kubenswrapper[4881]: I0126 12:36:35.751069 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tvrtr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"62ba5262-b5d8-4c86-85db-0993c88afc38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d6070117a94d139d5fc78fee4b38e1c75537675f27a663dcb2d6c8be285e9b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj9qc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tvrtr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:35Z is after 2025-08-24T17:21:41Z" Jan 
26 12:36:35 crc kubenswrapper[4881]: I0126 12:36:35.765396 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:35Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:35 crc kubenswrapper[4881]: I0126 12:36:35.778360 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:35Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:35 crc kubenswrapper[4881]: I0126 12:36:35.791340 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79282300691fcea19d9fe7ad9c661474c26ba5e445c1a4f009961641bf59fb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b2921f0cb7ad2096499d1c6c0e5f8f37da3620c8468c9fab08b57b4545e5dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:35Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:35 crc kubenswrapper[4881]: I0126 12:36:35.803783 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02fca4ac1029e236fa1f5c8b73ec4cf6f627aba503ba27d28dd2566e2f9d332a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c6n6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e51c1ddc00bfd73fde58f6b2e82757a924047ef05aa76b74d8fcb2de4156ad9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"
,\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c6n6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fwlbz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:35Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:35 crc kubenswrapper[4881]: I0126 12:36:35.816693 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7m2c5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"939b570a-38ce-49f8-8518-1ab500c4e449\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5367bd65eab3f1d86fce660f0126a0b2e6d4f1f7f803f3bfd38a3293bdff7267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:36:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwjlp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d4049c90948b57aad819839a207210919dce46514ca41bf1afdaed9b04a409a\\\",\\\"image\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:36:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwjlp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:36:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7m2c5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:35Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:35 crc kubenswrapper[4881]: I0126 12:36:35.850969 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:35 crc kubenswrapper[4881]: I0126 12:36:35.851018 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:35 crc kubenswrapper[4881]: I0126 12:36:35.851032 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:35 crc kubenswrapper[4881]: I0126 12:36:35.851056 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:35 crc kubenswrapper[4881]: I0126 12:36:35.851069 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:35Z","lastTransitionTime":"2026-01-26T12:36:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:35 crc kubenswrapper[4881]: I0126 12:36:35.954113 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:35 crc kubenswrapper[4881]: I0126 12:36:35.954152 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:35 crc kubenswrapper[4881]: I0126 12:36:35.954161 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:35 crc kubenswrapper[4881]: I0126 12:36:35.954176 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:35 crc kubenswrapper[4881]: I0126 12:36:35.954187 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:35Z","lastTransitionTime":"2026-01-26T12:36:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:36 crc kubenswrapper[4881]: I0126 12:36:36.056829 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:36 crc kubenswrapper[4881]: I0126 12:36:36.056878 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:36 crc kubenswrapper[4881]: I0126 12:36:36.056891 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:36 crc kubenswrapper[4881]: I0126 12:36:36.056910 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:36 crc kubenswrapper[4881]: I0126 12:36:36.056923 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:36Z","lastTransitionTime":"2026-01-26T12:36:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:36 crc kubenswrapper[4881]: I0126 12:36:36.060976 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 04:11:37.730769446 +0000 UTC Jan 26 12:36:36 crc kubenswrapper[4881]: I0126 12:36:36.082411 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 12:36:36 crc kubenswrapper[4881]: I0126 12:36:36.082456 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 12:36:36 crc kubenswrapper[4881]: I0126 12:36:36.082558 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:36:36 crc kubenswrapper[4881]: E0126 12:36:36.082618 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 12:36:36 crc kubenswrapper[4881]: E0126 12:36:36.082723 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 12:36:36 crc kubenswrapper[4881]: E0126 12:36:36.082878 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 12:36:36 crc kubenswrapper[4881]: I0126 12:36:36.159896 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:36 crc kubenswrapper[4881]: I0126 12:36:36.159956 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:36 crc kubenswrapper[4881]: I0126 12:36:36.159967 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:36 crc kubenswrapper[4881]: I0126 12:36:36.159982 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:36 crc kubenswrapper[4881]: I0126 12:36:36.159992 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:36Z","lastTransitionTime":"2026-01-26T12:36:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:36 crc kubenswrapper[4881]: I0126 12:36:36.262775 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:36 crc kubenswrapper[4881]: I0126 12:36:36.262815 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:36 crc kubenswrapper[4881]: I0126 12:36:36.262824 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:36 crc kubenswrapper[4881]: I0126 12:36:36.262838 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:36 crc kubenswrapper[4881]: I0126 12:36:36.262848 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:36Z","lastTransitionTime":"2026-01-26T12:36:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:36 crc kubenswrapper[4881]: I0126 12:36:36.364670 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:36 crc kubenswrapper[4881]: I0126 12:36:36.364722 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:36 crc kubenswrapper[4881]: I0126 12:36:36.364733 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:36 crc kubenswrapper[4881]: I0126 12:36:36.364750 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:36 crc kubenswrapper[4881]: I0126 12:36:36.364763 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:36Z","lastTransitionTime":"2026-01-26T12:36:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:36 crc kubenswrapper[4881]: I0126 12:36:36.466912 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:36 crc kubenswrapper[4881]: I0126 12:36:36.466955 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:36 crc kubenswrapper[4881]: I0126 12:36:36.466967 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:36 crc kubenswrapper[4881]: I0126 12:36:36.466983 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:36 crc kubenswrapper[4881]: I0126 12:36:36.466996 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:36Z","lastTransitionTime":"2026-01-26T12:36:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:36 crc kubenswrapper[4881]: I0126 12:36:36.508694 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-csrkv_d24cc7d2-c2db-45ee-b405-fa56157f807c/kube-multus/0.log" Jan 26 12:36:36 crc kubenswrapper[4881]: I0126 12:36:36.508803 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-csrkv" event={"ID":"d24cc7d2-c2db-45ee-b405-fa56157f807c","Type":"ContainerStarted","Data":"1cbe64dce8c7a8b2880354aac794adb5954b255c66ea597355f9b9b1ee476252"} Jan 26 12:36:36 crc kubenswrapper[4881]: I0126 12:36:36.524610 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"caa549f9-799c-43a9-a2a5-315713594704\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d3b1592342c8b506c26109958ea6b4770bc41e6dde0be57603730427216aec7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46a599fa1a831e0d46ab1af51dc83b3c5ef566caa4d05629e344102fc4d852af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fa21f92dbc9916c1d8eeb219803f791ad323f1cf82e7cb6fa64fe467b585c5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b154f896826abcb669a2628d916a874072a707d60fde45aad3ae0ff16bb4e44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b154f896826abcb669a2628d916a874072a707d60fde45aad3ae0ff16bb4e44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:28Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:36Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:36 crc kubenswrapper[4881]: I0126 12:36:36.540252 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f87073bcd7b2be8285c037c00f3112e1f1c6ed16cd4caf9a06c1a18d7e07c33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:36Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:36 crc kubenswrapper[4881]: I0126 12:36:36.558458 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-csrkv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d24cc7d2-c2db-45ee-b405-fa56157f807c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1cbe64dce8c7a8b2880354aac794adb5954b255c66ea597355f9b9b1ee476252\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e19b81750777bd4a9bb8f7a14d883f0ceb5aa0fc4f1a0798c98616a7fc0cef81\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T12:36:35Z\\\",\\\"message\\\":\\\"2026-01-26T12:35:49+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_9e8d50ce-7902-42ed-a950-c3be80066f4e\\\\n2026-01-26T12:35:49+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_9e8d50ce-7902-42ed-a950-c3be80066f4e to /host/opt/cni/bin/\\\\n2026-01-26T12:35:49Z [verbose] multus-daemon started\\\\n2026-01-26T12:35:49Z [verbose] Readiness Indicator file check\\\\n2026-01-26T12:36:34Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:49Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:36:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x7bhq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-csrkv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:36Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:36 crc kubenswrapper[4881]: I0126 12:36:36.569713 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:36 crc kubenswrapper[4881]: I0126 12:36:36.569761 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:36 crc kubenswrapper[4881]: I0126 12:36:36.569772 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:36 crc kubenswrapper[4881]: I0126 12:36:36.569789 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:36 crc kubenswrapper[4881]: I0126 12:36:36.569800 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:36Z","lastTransitionTime":"2026-01-26T12:36:36Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:36 crc kubenswrapper[4881]: I0126 12:36:36.580051 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d272c950-9665-4b60-98a2-20c18d02d5a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://525d6dcbf02a6ff1670c1cfff7ea9769a3e6e99d4cbc9f67b4ab9a4838f85355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54beaa25a2174c9f18bf5be7fba4c314e374e2ea05eaee1c5b62dfd7351138f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/s
ecrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11ae8e0538c02d437ba2c7a5f53004fb5c5b7bf6c232f75b6bd737d2ec4f6a24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4526fe39791d5692af1a573249bcc2f73f2ede55766ea580867dbb009d247ddf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e87c69a0fa95e071cd3453794764480d9ed943e2d6dbe7d08e03e711be8de2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fd4c549d12c308cae9727fdb35eec136e95b94990bfca2e3c2baf3cf5724ae3\\
\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8de259073576fa26bb1c649ceca0c0716e69ee10b38051b1ffe0b7aa095bc13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8de259073576fa26bb1c649ceca0c0716e69ee10b38051b1ffe0b7aa095bc13\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T12:36:20Z\\\",\\\"message\\\":\\\"lace/marketplace-operator-metrics]} name:Service_openshift-marketplace/marketplace-operator-metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.53:8081: 10.217.5.53:8383:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {89fe421e-04e8-4967-ac75-77a0e6f784ef}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 12:36:20.107557 6521 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-controller-manager/controller-manager]} name:Service_openshift-controller-manager/controller-manager_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.149:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {cab7c637-a021-4a4d-a4b9-06d63c44316f}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0126 12:36:20.104835 6521 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T12:36:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-kbjm9_openshift-ovn-kubernetes(d272c950-9665-4b60-98a2-20c18d02d5a2)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9aaca9d54edc83ad154b9cc876c0f2990f7949b4966e052314e947d0cbaf2f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recurs
iveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0995c1855fbad9dc3799727d7456b17ef9cd900ed1f4241bde85718099ede875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0995c1855fbad9dc3799727d7456b17ef9cd900ed1f4241bde85718099ede875\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kbjm9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:36Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:36 crc kubenswrapper[4881]: I0126 12:36:36.593847 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5zct6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"640554c2-37e2-425f-b182-aa9b9d6fa4d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl97t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl97t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:36:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5zct6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:36Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:36 crc kubenswrapper[4881]: I0126 12:36:36.608187 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:36Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:36 crc kubenswrapper[4881]: I0126 12:36:36.627658 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:36Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:36 crc kubenswrapper[4881]: I0126 12:36:36.645136 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79282300691fcea19d9fe7ad9c661474c26ba5e445c1a4f009961641bf59fb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b2921f0cb7ad2096499d1c6c0e5f8f37da3620c8468c9fab08b57b4545e5dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:36Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:36 crc kubenswrapper[4881]: I0126 12:36:36.659167 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f4b5v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"017c2a17-2267-4284-a0ca-d3c513aa9ff9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cea6c5f8bd80cf271df551539957d4c3e60378ff13dc806b3142e8e1453ab95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc482\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f4b5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:36Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:36 crc kubenswrapper[4881]: I0126 12:36:36.672483 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:36 crc kubenswrapper[4881]: I0126 
12:36:36.672538 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:36 crc kubenswrapper[4881]: I0126 12:36:36.672548 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:36 crc kubenswrapper[4881]: I0126 12:36:36.672562 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:36 crc kubenswrapper[4881]: I0126 12:36:36.672571 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:36Z","lastTransitionTime":"2026-01-26T12:36:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:36 crc kubenswrapper[4881]: I0126 12:36:36.676476 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pmwpn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb5ecb63-1238-44dc-9c40-b5e5dd7d4847\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://428d01799872a00b628c81145d5fbeac8707bad0afa7563e16bc44686fd7c18d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b54e7d904bec7e9817b662ea1c59ede6589fde79754520dfb1e9fb82864e623d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b54e7d904bec7e9817b662ea1c59ede6589fde79754520dfb1e9fb82864e623d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e58ff1e13a164f50c9664f463b50cbf651ce8e5edac8cbff9a9e42626d8d39ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e58ff1e13a164f50c9664f463b50cbf651ce8e5edac8cbff9a9e42626d8d39ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fca59d7df432a78a62142cd5e47d6a1a62b2c23ff7eec49d6297aacf30b530a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fca59d7df432a78a62142cd5e47d6a1a62b2c23ff7eec49d6297aacf30b530a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d5b95e27428d9ca0382054957b1cc19e96fe8bb5622af5f7ebebebec53d2add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d5b95e27428d9ca0382054957b1cc19e96fe8bb5622af5f7ebebebec53d2add\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45799bb6d037f31b82596651214f8bdf9c13346f1d9953b965a40b55d2588edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45799bb6d037f31b82596651214f8bdf9c13346f1d9953b965a40b55d2588edd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10bacfe34f01c5de9439ac6023539e6489ec63175e54099942c2fb2fe4250bd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10bacfe34f01c5de9439ac6023539e6489ec63175e54099942c2fb2fe4250bd6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/
net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pmwpn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:36Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:36 crc kubenswrapper[4881]: I0126 12:36:36.686848 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tvrtr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"62ba5262-b5d8-4c86-85db-0993c88afc38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d6070117a94d139d5fc78fee4b38e1c75537675f27a663dcb2d6c8be285e9b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj9qc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tvrtr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:36Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:36 crc 
kubenswrapper[4881]: I0126 12:36:36.698075 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02fca4ac1029e236fa1f5c8b73ec4cf6f627aba503ba27d28dd2566e2f9d332a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c6n6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e51c1ddc00bfd73fde58f6b2e82757a924047ef05aa76b74d8fcb2de4156ad9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c6n6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fwlbz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-01-26T12:36:36Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:36 crc kubenswrapper[4881]: I0126 12:36:36.710476 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7m2c5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"939b570a-38ce-49f8-8518-1ab500c4e449\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5367bd65eab3f1d86fce660f0126a0b2e6d4f1f7f803f3bfd38a3293bdff7267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:36:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwjlp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d4049c90948b57aad819839a207210919dce46514ca41bf1afdaed9b04a409a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:36:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwjlp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:36:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7m2c5\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:36Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:36 crc kubenswrapper[4881]: I0126 12:36:36.721719 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee20042b-95d1-49f2-abea-e339efdb096f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce7427a811572a8f3810d59e072888d97a8f15e2c493297a25418b85e1f9f16d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86d50d955d4fa8b487f6900f78738321e21226e13b7b12c100f0dd029de2fde1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8138683a9408b1a7542f9f29a3fd0b83a06f10b000c685d9d83b72b23ba4c55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",
\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94d3f2c93382cde3d932117276753f004c58e6e52bce611119b79d9e76eb7654\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:36Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:36 crc kubenswrapper[4881]: I0126 12:36:36.742099 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b6f7953-daf2-4ba4-8c10-7de3f4ea07a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4f54285b7646c2fa5da34439d2812a5669f168356446852b9bff249107a7c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79ddc5e5c3c4b0c18219fe6a8297ff2b1646b9621823f529972781f915d81c5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://533f45172a004ad2bd598849b711b38e823485188ff4163dd0dbc5ae5a28cb24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f7c8799e63e6072f9dc0580f3efbfe330a9f70
5a689b2785c60f64391881b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c525c27678ba77145459086b6d3d2fee5d471c7393f34215e9ae46d9d72cf4e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://667695541b7d707a83aa0080723518aeae689b140791f0121a4ce6d485f39138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://667695541b7d707a83aa0080723518aeae689b140791f0121a4ce6d485f39138\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab83101bb56dc3eaf3a0514f60da45b0c5304a491184fe71584d7ed72bc6e59c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab83101bb56dc3eaf3a0514f60da45b0c5304a491184fe71584d7ed72bc6e59c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e52793db25051b2c8eec3dbeaf0547ec04ec438ce426f3dfcd2383792f846644\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e52793db25051b2c8eec3dbeaf0547ec04ec438ce426f3dfcd2383792f846644\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:36Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:36 crc kubenswrapper[4881]: I0126 12:36:36.763813 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9aa1877-c239-4157-938d-e5c85ff3e76a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e461100b4aeab2a6e88c5c3aeb0802664d0d647247bc73b107be7c6bc4cfc9cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cbb88ac26b9c279bef436acf1a1d1e4
31e752f2a1ee765a7541082fa8f6c99e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff94d70ea9ad07ebc50c17d2a14b318e9b553bf1ded1e15429e9b318903d4d66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47b841bd998f47d2aa05f556a43ac6f14f630207aff7305cd0440f0b0642cc49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1728a0431ccafdf8aa83b6956348a8a6a41dd94e80f65cb469ef09a86078154\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 12:35:40.710806 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 12:35:40.716794 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1303051152/tls.crt::/tmp/serving-cert-1303051152/tls.key\\\\\\\"\\\\nI0126 12:35:46.328409 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 12:35:46.330270 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 12:35:46.330287 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 12:35:46.330308 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 12:35:46.330314 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 12:35:46.335670 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 12:35:46.335689 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 12:35:46.335693 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 12:35:46.335697 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 12:35:46.335700 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 12:35:46.335702 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 12:35:46.335705 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 12:35:46.335779 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 12:35:46.339578 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3de2d8eff73741321cce91c7bfdcab04498cfdb4694e713750466bb2d8b95e5d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f09815b7bc1e9f3afdee67bb736d35ea1c23567f631bc5fd4fcab013ab191647\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f09815b7bc1e9f3afdee67bb736d35ea1c23567f631bc5fd4fcab013ab191647\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:36Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:36 crc kubenswrapper[4881]: I0126 12:36:36.775738 4881 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:36 crc kubenswrapper[4881]: I0126 12:36:36.776067 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:36 crc kubenswrapper[4881]: I0126 12:36:36.776140 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:36 crc kubenswrapper[4881]: I0126 12:36:36.776228 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:36 crc kubenswrapper[4881]: I0126 12:36:36.776301 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:36Z","lastTransitionTime":"2026-01-26T12:36:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:36 crc kubenswrapper[4881]: I0126 12:36:36.782226 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8116d483190921916638bd630fb718840fdc36eb9522aecb7ef0e9bfebe9bb7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:36Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:36 crc kubenswrapper[4881]: I0126 12:36:36.798253 4881 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:36Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:36 crc kubenswrapper[4881]: I0126 12:36:36.880178 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:36 crc kubenswrapper[4881]: I0126 12:36:36.880218 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:36 crc kubenswrapper[4881]: I0126 12:36:36.880230 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:36 crc kubenswrapper[4881]: I0126 12:36:36.880251 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:36 crc kubenswrapper[4881]: I0126 12:36:36.880263 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:36Z","lastTransitionTime":"2026-01-26T12:36:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:36 crc kubenswrapper[4881]: I0126 12:36:36.983269 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:36 crc kubenswrapper[4881]: I0126 12:36:36.983317 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:36 crc kubenswrapper[4881]: I0126 12:36:36.983331 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:36 crc kubenswrapper[4881]: I0126 12:36:36.983350 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:36 crc kubenswrapper[4881]: I0126 12:36:36.983364 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:36Z","lastTransitionTime":"2026-01-26T12:36:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:37 crc kubenswrapper[4881]: I0126 12:36:37.062010 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 23:03:34.425473548 +0000 UTC Jan 26 12:36:37 crc kubenswrapper[4881]: I0126 12:36:37.082425 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5zct6" Jan 26 12:36:37 crc kubenswrapper[4881]: E0126 12:36:37.082899 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5zct6" podUID="640554c2-37e2-425f-b182-aa9b9d6fa4d8" Jan 26 12:36:37 crc kubenswrapper[4881]: I0126 12:36:37.085897 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:37 crc kubenswrapper[4881]: I0126 12:36:37.085956 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:37 crc kubenswrapper[4881]: I0126 12:36:37.085970 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:37 crc kubenswrapper[4881]: I0126 12:36:37.085994 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:37 crc kubenswrapper[4881]: I0126 12:36:37.086014 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:37Z","lastTransitionTime":"2026-01-26T12:36:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:37 crc kubenswrapper[4881]: I0126 12:36:37.189874 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:37 crc kubenswrapper[4881]: I0126 12:36:37.189945 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:37 crc kubenswrapper[4881]: I0126 12:36:37.189956 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:37 crc kubenswrapper[4881]: I0126 12:36:37.189982 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:37 crc kubenswrapper[4881]: I0126 12:36:37.189999 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:37Z","lastTransitionTime":"2026-01-26T12:36:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:37 crc kubenswrapper[4881]: I0126 12:36:37.292761 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:37 crc kubenswrapper[4881]: I0126 12:36:37.293081 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:37 crc kubenswrapper[4881]: I0126 12:36:37.293162 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:37 crc kubenswrapper[4881]: I0126 12:36:37.293247 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:37 crc kubenswrapper[4881]: I0126 12:36:37.293399 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:37Z","lastTransitionTime":"2026-01-26T12:36:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:37 crc kubenswrapper[4881]: I0126 12:36:37.396136 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:37 crc kubenswrapper[4881]: I0126 12:36:37.396183 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:37 crc kubenswrapper[4881]: I0126 12:36:37.396194 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:37 crc kubenswrapper[4881]: I0126 12:36:37.396210 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:37 crc kubenswrapper[4881]: I0126 12:36:37.396224 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:37Z","lastTransitionTime":"2026-01-26T12:36:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:37 crc kubenswrapper[4881]: I0126 12:36:37.498934 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:37 crc kubenswrapper[4881]: I0126 12:36:37.498985 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:37 crc kubenswrapper[4881]: I0126 12:36:37.499000 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:37 crc kubenswrapper[4881]: I0126 12:36:37.499020 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:37 crc kubenswrapper[4881]: I0126 12:36:37.499037 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:37Z","lastTransitionTime":"2026-01-26T12:36:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:37 crc kubenswrapper[4881]: I0126 12:36:37.601789 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:37 crc kubenswrapper[4881]: I0126 12:36:37.601827 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:37 crc kubenswrapper[4881]: I0126 12:36:37.601837 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:37 crc kubenswrapper[4881]: I0126 12:36:37.601853 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:37 crc kubenswrapper[4881]: I0126 12:36:37.601864 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:37Z","lastTransitionTime":"2026-01-26T12:36:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:37 crc kubenswrapper[4881]: I0126 12:36:37.704127 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:37 crc kubenswrapper[4881]: I0126 12:36:37.704167 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:37 crc kubenswrapper[4881]: I0126 12:36:37.704178 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:37 crc kubenswrapper[4881]: I0126 12:36:37.704194 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:37 crc kubenswrapper[4881]: I0126 12:36:37.704208 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:37Z","lastTransitionTime":"2026-01-26T12:36:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:37 crc kubenswrapper[4881]: I0126 12:36:37.806282 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:37 crc kubenswrapper[4881]: I0126 12:36:37.806318 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:37 crc kubenswrapper[4881]: I0126 12:36:37.806329 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:37 crc kubenswrapper[4881]: I0126 12:36:37.806345 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:37 crc kubenswrapper[4881]: I0126 12:36:37.806356 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:37Z","lastTransitionTime":"2026-01-26T12:36:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:37 crc kubenswrapper[4881]: I0126 12:36:37.908989 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:37 crc kubenswrapper[4881]: I0126 12:36:37.909029 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:37 crc kubenswrapper[4881]: I0126 12:36:37.909040 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:37 crc kubenswrapper[4881]: I0126 12:36:37.909054 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:37 crc kubenswrapper[4881]: I0126 12:36:37.909065 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:37Z","lastTransitionTime":"2026-01-26T12:36:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:38 crc kubenswrapper[4881]: I0126 12:36:38.012016 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:38 crc kubenswrapper[4881]: I0126 12:36:38.012059 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:38 crc kubenswrapper[4881]: I0126 12:36:38.012071 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:38 crc kubenswrapper[4881]: I0126 12:36:38.012086 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:38 crc kubenswrapper[4881]: I0126 12:36:38.012099 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:38Z","lastTransitionTime":"2026-01-26T12:36:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:38 crc kubenswrapper[4881]: I0126 12:36:38.063151 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 22:32:31.094018489 +0000 UTC Jan 26 12:36:38 crc kubenswrapper[4881]: I0126 12:36:38.081502 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 12:36:38 crc kubenswrapper[4881]: I0126 12:36:38.081573 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 12:36:38 crc kubenswrapper[4881]: E0126 12:36:38.081651 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 12:36:38 crc kubenswrapper[4881]: I0126 12:36:38.081662 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:36:38 crc kubenswrapper[4881]: E0126 12:36:38.081765 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 12:36:38 crc kubenswrapper[4881]: E0126 12:36:38.081820 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
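
The certificate_manager lines above (12:36:37 and 12:36:38) print a different "rotation deadline" each time, and both deadlines lie in the past. As I understand client-go's certificate manager, the deadline is re-sampled on each evaluation at a jittered point roughly 70 to 90 percent of the way through the certificate's validity window, so a long-lived certificate near or past that point yields past-due deadlines and an immediate rotation attempt. A sketch under that assumption; the NotAfter value is the expiration printed in the log, while the one-year NotBefore is a guess, not something the log states:

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// nextRotationDeadline approximates the jitter client-go appears to use:
// a random point roughly 70-90% of the way through the validity window.
func nextRotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := float64(notAfter.Sub(notBefore))
	frac := 0.7 + 0.2*rand.Float64()
	return notBefore.Add(time.Duration(total * frac))
}

func main() {
	// NotAfter is the expiration printed in the log; NotBefore is assumed
	// to be one year earlier (an assumption, not in the log).
	notAfter := time.Date(2026, 2, 24, 5, 53, 3, 0, time.UTC)
	notBefore := notAfter.AddDate(-1, 0, 0)

	// Re-sampling a few times shows deadlines scattered between roughly
	// 2025-11 and 2026-01, matching the two values logged above.
	for i := 0; i < 3; i++ {
		d := nextRotationDeadline(notBefore, notAfter)
		fmt.Printf("rotation deadline: %s (already due: %v)\n",
			d.Format(time.RFC3339), time.Now().After(d))
	}
}
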
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 12:36:38 crc kubenswrapper[4881]: I0126 12:36:38.095794 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:38Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:38 crc kubenswrapper[4881]: I0126 12:36:38.110768 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79282300691fcea19d9fe7ad9c661474c26ba5e445c1a4f009961641bf59fb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b2921f0cb7ad2096499d1c6c0e5f8f37da3620c8468c9fab08b57b4545e5dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:38Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:38 crc kubenswrapper[4881]: I0126 12:36:38.114696 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:38 crc kubenswrapper[4881]: I0126 12:36:38.114733 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:38 crc kubenswrapper[4881]: I0126 12:36:38.114743 4881 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 26 12:36:38 crc kubenswrapper[4881]: I0126 12:36:38.114759 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:38 crc kubenswrapper[4881]: I0126 12:36:38.114770 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:38Z","lastTransitionTime":"2026-01-26T12:36:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:38 crc kubenswrapper[4881]: I0126 12:36:38.121541 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f4b5v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"017c2a17-2267-4284-a0ca-d3c513aa9ff9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cea6c5f8bd80cf271df551539957d4c3e60378ff13dc806b3142e8e1453ab95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc482\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f4b5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:38Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:38 crc kubenswrapper[4881]: I0126 12:36:38.137036 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pmwpn" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb5ecb63-1238-44dc-9c40-b5e5dd7d4847\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://428d01799872a00b628c81145d5fbeac8707bad0afa7563e16bc44686fd7c18d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b54e7d904bec7e9817b662ea1c59ede6589fde79754520dfb1e9fb82864e623d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b54e7d904bec7e9817b662ea1c59ede6589fde79754520dfb1e9fb82864e623d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e58ff1e13a164f50c9664f463b50cbf651ce8e5edac8cbff9a9e42626d8d39ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:68
7fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e58ff1e13a164f50c9664f463b50cbf651ce8e5edac8cbff9a9e42626d8d39ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fca59d7df432a78a62142cd5e47d6a1a62b2c23ff7eec49d6297aacf30b530a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fca59d7df432a78a62142cd5e47d6a1a62b2c23ff7eec49d6297aacf30b530a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d5b95e27428d9ca0382054957b1cc19e96fe8bb5622af5f7ebebebec53d2add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d5b95e27428d9ca0382054957b1cc19e96fe8bb5622af5f7ebebebec53d2add\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"m
ountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45799bb6d037f31b82596651214f8bdf9c13346f1d9953b965a40b55d2588edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45799bb6d037f31b82596651214f8bdf9c13346f1d9953b965a40b55d2588edd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10bacfe34f01c5de9439ac6023539e6489ec63175e54099942c2fb2fe4250bd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10bacfe34f01c5de9439ac6023539e6489ec63175e54099942c2fb2fe4250bd6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pmwpn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:38Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:38 crc kubenswrapper[4881]: I0126 12:36:38.148670 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tvrtr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"62ba5262-b5d8-4c86-85db-0993c88afc38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d6070117a94d139d5fc78fee4b38e1c75537675f27a663dcb2d6c8be285e9b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj9qc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tvrtr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:38Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:38 crc kubenswrapper[4881]: I0126 12:36:38.162455 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:38Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:38 crc kubenswrapper[4881]: I0126 12:36:38.173108 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7m2c5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"939b570a-38ce-49f8-8518-1ab500c4e449\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5367bd65eab3f1d86fce660f0126a0b2e6d4f1f7f803f3bfd38a3293bdff7267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:36:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwjlp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d4049c90948b57aad819839a207210919dce46514ca41bf1afdaed9b04a409a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:36:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwjlp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:36:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7m2c5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:38Z is after 2025-08-24T17:21:41Z" Jan 26 
12:36:38 crc kubenswrapper[4881]: I0126 12:36:38.182571 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02fca4ac1029e236fa1f5c8b73ec4cf6f627aba503ba27d28dd2566e2f9d332a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c6n6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e51c1ddc00bfd73fde58f6b2e82757a924047ef05aa76b74d8fcb2de4156ad9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c6n6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fwlbz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-26T12:36:38Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:38 crc kubenswrapper[4881]: I0126 12:36:38.200269 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b6f7953-daf2-4ba4-8c10-7de3f4ea07a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4f54285b7646c2fa5da34439d2812a5669f168356446852b9bff249107a7c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79ddc5e5c3c4b0c18219fe6a8297ff2b1646b9621823f529972781f915d81c5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://533f45172a004ad2bd598849b711b38e823485188ff4163dd0dbc5ae5a28cb24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\
\":\\\"2026-01-26T12:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f7c8799e63e6072f9dc0580f3efbfe330a9f705a689b2785c60f64391881b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c525c27678ba77145459086b6d3d2fee5d471c7393f34215e9ae46d9d72cf4e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://667695541b7d707a83aa0080723518aeae689b140791f0121a4ce6d485f39138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://667695541b7d707a83aa0080723518aeae689b140791f0121a4ce6d485f39138\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab83101bb56dc3eaf3a0514f60da45b0c5304a491184fe71584d7ed72bc6e59c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab83101bb56dc3eaf3a0514f60d
a45b0c5304a491184fe71584d7ed72bc6e59c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e52793db25051b2c8eec3dbeaf0547ec04ec438ce426f3dfcd2383792f846644\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e52793db25051b2c8eec3dbeaf0547ec04ec438ce426f3dfcd2383792f846644\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:38Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:38 crc kubenswrapper[4881]: I0126 12:36:38.215549 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9aa1877-c239-4157-938d-e5c85ff3e76a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e461100b4aeab2a6e88c5c3aeb0802664d0d647247bc73b107be7c6bc4cfc9cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cbb88ac26b9c279bef436acf1a1d1e431e752f2a1ee765a7541082fa8f6c99e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff94d70ea9ad07ebc50c17d2a14b318e9b553bf1ded1e15429e9b318903d4d66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47b841bd998f47d2aa05f556a43ac6f14f630207aff7305cd0440f0b0642cc49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1728a0431ccafdf8aa83b6956348a8a6a41dd94e80f65cb469ef09a86078154\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 12:35:40.710806 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 12:35:40.716794 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1303051152/tls.crt::/tmp/serving-cert-1303051152/tls.key\\\\\\\"\\\\nI0126 12:35:46.328409 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 12:35:46.330270 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 12:35:46.330287 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 12:35:46.330308 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 12:35:46.330314 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 12:35:46.335670 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 12:35:46.335689 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 12:35:46.335693 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 12:35:46.335697 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 12:35:46.335700 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 12:35:46.335702 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 12:35:46.335705 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 12:35:46.335779 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 12:35:46.339578 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3de2d8eff73741321cce91c7bfdcab04498cfdb4694e713750466bb2d8b95e5d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f09815b7bc1e9f3afdee67bb736d35ea1c23567f631bc5fd4fcab013ab191647\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f09815b7bc1e9f3afdee67bb736d35ea1c23567f631bc5fd4fcab013ab191647\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:38Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:38 crc kubenswrapper[4881]: I0126 12:36:38.216336 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:38 crc kubenswrapper[4881]: I0126 12:36:38.216433 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:38 crc kubenswrapper[4881]: I0126 12:36:38.216494 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:38 crc kubenswrapper[4881]: I0126 12:36:38.216606 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:38 crc kubenswrapper[4881]: I0126 12:36:38.216670 4881 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:38Z","lastTransitionTime":"2026-01-26T12:36:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:38 crc kubenswrapper[4881]: I0126 12:36:38.227485 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8116d483190921916638bd630fb718840fdc36eb9522aecb7ef0e9bfebe9bb7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:38Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:38 crc kubenswrapper[4881]: I0126 12:36:38.238677 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:38Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:38 crc kubenswrapper[4881]: I0126 12:36:38.249471 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee20042b-95d1-49f2-abea-e339efdb096f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce7427a811572a8f3810d59e072888d97a8f15e2c493297a25418b85e1f9f16d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86d50d955d4fa8b487f6900f78738321e21226e13b7b12c100f0dd029de2fde1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8138683a9408b1a7542f9f29a3fd0b83a06f10b000c685d9d83b72b23ba4c55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94d3f2c93382cde3d932117276753f004c58e6e52bce611119b79d9e76eb7654\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:38Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:38 crc kubenswrapper[4881]: I0126 12:36:38.260270 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f87073bcd7b2be8285c037c00f3112e1f1c6ed16cd4caf9a06c1a18d7e07c33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-01-26T12:36:38Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:38 crc kubenswrapper[4881]: I0126 12:36:38.275203 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-csrkv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d24cc7d2-c2db-45ee-b405-fa56157f807c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1cbe64dce8c7a8b2880354aac794adb5954b255c66ea597355f9b9b1ee476252\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e19b81750777bd4a9bb8f7a14d883f0ceb5aa0fc4f1a0798c98616a7fc0cef81\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T12:36:35Z\\\",\\\"message\\\":\\\"2026-01-26T12:35:49+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_9e8d50ce-7902-42ed-a950-c3be80066f4e\\\\n2026-01-26T12:35:49+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_9e8d50ce-7902-42ed-a950-c3be80066f4e to /host/opt/cni/bin/\\\\n2026-01-26T12:35:49Z [verbose] multus-daemon started\\\\n2026-01-26T12:35:49Z [verbose] Readiness Indicator file check\\\\n2026-01-26T12:36:34Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:49Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:36:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x7bhq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-csrkv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:38Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:38 crc kubenswrapper[4881]: I0126 12:36:38.294131 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d272c950-9665-4b60-98a2-20c18d02d5a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://525d6dcbf02a6ff1670c1cfff7ea9769a3e6e99d4cbc9f67b4ab9a4838f85355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54beaa25a2174c9f18bf5be7fba4c314e374e2ea05eaee1c5b62dfd7351138f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11ae8e0538c02d437ba2c7a5f53004fb5c5b7bf6c232f75b6bd737d2ec4f6a24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4526fe39791d5692af1a573249bcc2f73f2ede55766ea580867dbb009d247ddf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e87c69a0fa95e071cd3453794764480d9ed943e2d6dbe7d08e03e711be8de2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fd4c549d12c308cae9727fdb35eec136e95b94990bfca2e3c2baf3cf5724ae3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-s
ocket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8de259073576fa26bb1c649ceca0c0716e69ee10b38051b1ffe0b7aa095bc13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8de259073576fa26bb1c649ceca0c0716e69ee10b38051b1ffe0b7aa095bc13\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T12:36:20Z\\\",\\\"message\\\":\\\"lace/marketplace-operator-metrics]} name:Service_openshift-marketplace/marketplace-operator-metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.53:8081: 10.217.5.53:8383:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {89fe421e-04e8-4967-ac75-77a0e6f784ef}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 12:36:20.107557 6521 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-controller-manager/controller-manager]} name:Service_openshift-controller-manager/controller-manager_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.149:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {cab7c637-a021-4a4d-a4b9-06d63c44316f}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0126 12:36:20.104835 6521 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T12:36:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-kbjm9_openshift-ovn-kubernetes(d272c950-9665-4b60-98a2-20c18d02d5a2)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9aaca9d54edc83ad154b9cc876c0f2990f7949b4966e052314e947d0cbaf2f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0995c1855fbad9dc3799727d7456b17ef9cd900ed1f4241bde85718099ede875\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0995c1855fbad9dc3799727d7456b17ef9cd900ed1f4241bde85718099ede875\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kbjm9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:38Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:38 crc kubenswrapper[4881]: I0126 12:36:38.305423 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5zct6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"640554c2-37e2-425f-b182-aa9b9d6fa4d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl97t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl97t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:36:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5zct6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:38Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:38 crc kubenswrapper[4881]: I0126 12:36:38.317566 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"caa549f9-799c-43a9-a2a5-315713594704\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d3b1592342c8b506c26109958ea6b4770bc41e6dde0be57603730427216aec7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46a599fa1a831e0d46ab1af51dc83b3c5ef566caa4d05629e344102fc4d852af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fa21f92dbc9916c1d8eeb219803f791ad323f1cf82e7cb6fa64fe467b585c5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b154f896826abcb669a2628d916a874072a707d60fde45aad3ae0ff16bb4e44\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b154f896826abcb669a2628d916a874072a707d60fde45aad3ae0ff16bb4e44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:28Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:38Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:38 crc kubenswrapper[4881]: I0126 12:36:38.319242 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:38 crc kubenswrapper[4881]: I0126 12:36:38.319280 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:38 crc kubenswrapper[4881]: I0126 12:36:38.319292 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:38 crc kubenswrapper[4881]: I0126 12:36:38.319309 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:38 crc kubenswrapper[4881]: I0126 12:36:38.319320 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:38Z","lastTransitionTime":"2026-01-26T12:36:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:38 crc kubenswrapper[4881]: I0126 12:36:38.421972 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:38 crc kubenswrapper[4881]: I0126 12:36:38.422009 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:38 crc kubenswrapper[4881]: I0126 12:36:38.422017 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:38 crc kubenswrapper[4881]: I0126 12:36:38.422032 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:38 crc kubenswrapper[4881]: I0126 12:36:38.422041 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:38Z","lastTransitionTime":"2026-01-26T12:36:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:38 crc kubenswrapper[4881]: I0126 12:36:38.523718 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:38 crc kubenswrapper[4881]: I0126 12:36:38.523757 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:38 crc kubenswrapper[4881]: I0126 12:36:38.523766 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:38 crc kubenswrapper[4881]: I0126 12:36:38.523780 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:38 crc kubenswrapper[4881]: I0126 12:36:38.523790 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:38Z","lastTransitionTime":"2026-01-26T12:36:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:38 crc kubenswrapper[4881]: I0126 12:36:38.626454 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:38 crc kubenswrapper[4881]: I0126 12:36:38.626488 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:38 crc kubenswrapper[4881]: I0126 12:36:38.626499 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:38 crc kubenswrapper[4881]: I0126 12:36:38.626529 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:38 crc kubenswrapper[4881]: I0126 12:36:38.626540 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:38Z","lastTransitionTime":"2026-01-26T12:36:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:38 crc kubenswrapper[4881]: I0126 12:36:38.729246 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:38 crc kubenswrapper[4881]: I0126 12:36:38.729283 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:38 crc kubenswrapper[4881]: I0126 12:36:38.729295 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:38 crc kubenswrapper[4881]: I0126 12:36:38.729311 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:38 crc kubenswrapper[4881]: I0126 12:36:38.729322 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:38Z","lastTransitionTime":"2026-01-26T12:36:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:38 crc kubenswrapper[4881]: I0126 12:36:38.832271 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:38 crc kubenswrapper[4881]: I0126 12:36:38.832323 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:38 crc kubenswrapper[4881]: I0126 12:36:38.832339 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:38 crc kubenswrapper[4881]: I0126 12:36:38.832356 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:38 crc kubenswrapper[4881]: I0126 12:36:38.832377 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:38Z","lastTransitionTime":"2026-01-26T12:36:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:38 crc kubenswrapper[4881]: I0126 12:36:38.934901 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:38 crc kubenswrapper[4881]: I0126 12:36:38.934936 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:38 crc kubenswrapper[4881]: I0126 12:36:38.934946 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:38 crc kubenswrapper[4881]: I0126 12:36:38.934961 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:38 crc kubenswrapper[4881]: I0126 12:36:38.934972 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:38Z","lastTransitionTime":"2026-01-26T12:36:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:39 crc kubenswrapper[4881]: I0126 12:36:39.037376 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:39 crc kubenswrapper[4881]: I0126 12:36:39.037411 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:39 crc kubenswrapper[4881]: I0126 12:36:39.037420 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:39 crc kubenswrapper[4881]: I0126 12:36:39.037433 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:39 crc kubenswrapper[4881]: I0126 12:36:39.037444 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:39Z","lastTransitionTime":"2026-01-26T12:36:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:39 crc kubenswrapper[4881]: I0126 12:36:39.063848 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 00:52:59.512825979 +0000 UTC Jan 26 12:36:39 crc kubenswrapper[4881]: I0126 12:36:39.082403 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5zct6" Jan 26 12:36:39 crc kubenswrapper[4881]: E0126 12:36:39.082755 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5zct6" podUID="640554c2-37e2-425f-b182-aa9b9d6fa4d8" Jan 26 12:36:39 crc kubenswrapper[4881]: I0126 12:36:39.140362 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:39 crc kubenswrapper[4881]: I0126 12:36:39.140397 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:39 crc kubenswrapper[4881]: I0126 12:36:39.140408 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:39 crc kubenswrapper[4881]: I0126 12:36:39.140421 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:39 crc kubenswrapper[4881]: I0126 12:36:39.140431 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:39Z","lastTransitionTime":"2026-01-26T12:36:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:39 crc kubenswrapper[4881]: I0126 12:36:39.242717 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:39 crc kubenswrapper[4881]: I0126 12:36:39.242782 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:39 crc kubenswrapper[4881]: I0126 12:36:39.242800 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:39 crc kubenswrapper[4881]: I0126 12:36:39.242823 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:39 crc kubenswrapper[4881]: I0126 12:36:39.242840 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:39Z","lastTransitionTime":"2026-01-26T12:36:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:39 crc kubenswrapper[4881]: I0126 12:36:39.345351 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:39 crc kubenswrapper[4881]: I0126 12:36:39.345383 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:39 crc kubenswrapper[4881]: I0126 12:36:39.345394 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:39 crc kubenswrapper[4881]: I0126 12:36:39.345410 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:39 crc kubenswrapper[4881]: I0126 12:36:39.345420 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:39Z","lastTransitionTime":"2026-01-26T12:36:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:39 crc kubenswrapper[4881]: I0126 12:36:39.447629 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:39 crc kubenswrapper[4881]: I0126 12:36:39.447891 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:39 crc kubenswrapper[4881]: I0126 12:36:39.448020 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:39 crc kubenswrapper[4881]: I0126 12:36:39.448133 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:39 crc kubenswrapper[4881]: I0126 12:36:39.448223 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:39Z","lastTransitionTime":"2026-01-26T12:36:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:39 crc kubenswrapper[4881]: I0126 12:36:39.552686 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:39 crc kubenswrapper[4881]: I0126 12:36:39.552725 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:39 crc kubenswrapper[4881]: I0126 12:36:39.552735 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:39 crc kubenswrapper[4881]: I0126 12:36:39.552753 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:39 crc kubenswrapper[4881]: I0126 12:36:39.552763 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:39Z","lastTransitionTime":"2026-01-26T12:36:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:39 crc kubenswrapper[4881]: I0126 12:36:39.655246 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:39 crc kubenswrapper[4881]: I0126 12:36:39.655511 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:39 crc kubenswrapper[4881]: I0126 12:36:39.655593 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:39 crc kubenswrapper[4881]: I0126 12:36:39.655654 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:39 crc kubenswrapper[4881]: I0126 12:36:39.655720 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:39Z","lastTransitionTime":"2026-01-26T12:36:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:39 crc kubenswrapper[4881]: I0126 12:36:39.758455 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:39 crc kubenswrapper[4881]: I0126 12:36:39.758491 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:39 crc kubenswrapper[4881]: I0126 12:36:39.758503 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:39 crc kubenswrapper[4881]: I0126 12:36:39.758543 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:39 crc kubenswrapper[4881]: I0126 12:36:39.758556 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:39Z","lastTransitionTime":"2026-01-26T12:36:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:39 crc kubenswrapper[4881]: I0126 12:36:39.860448 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:39 crc kubenswrapper[4881]: I0126 12:36:39.860509 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:39 crc kubenswrapper[4881]: I0126 12:36:39.860552 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:39 crc kubenswrapper[4881]: I0126 12:36:39.860575 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:39 crc kubenswrapper[4881]: I0126 12:36:39.860591 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:39Z","lastTransitionTime":"2026-01-26T12:36:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:39 crc kubenswrapper[4881]: I0126 12:36:39.965889 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:39 crc kubenswrapper[4881]: I0126 12:36:39.966009 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:39 crc kubenswrapper[4881]: I0126 12:36:39.966029 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:39 crc kubenswrapper[4881]: I0126 12:36:39.966055 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:39 crc kubenswrapper[4881]: I0126 12:36:39.966077 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:39Z","lastTransitionTime":"2026-01-26T12:36:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:40 crc kubenswrapper[4881]: I0126 12:36:40.065073 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 18:50:36.965487283 +0000 UTC Jan 26 12:36:40 crc kubenswrapper[4881]: I0126 12:36:40.068879 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:40 crc kubenswrapper[4881]: I0126 12:36:40.068928 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:40 crc kubenswrapper[4881]: I0126 12:36:40.068940 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:40 crc kubenswrapper[4881]: I0126 12:36:40.068956 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:40 crc kubenswrapper[4881]: I0126 12:36:40.068969 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:40Z","lastTransitionTime":"2026-01-26T12:36:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:40 crc kubenswrapper[4881]: I0126 12:36:40.081898 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 12:36:40 crc kubenswrapper[4881]: I0126 12:36:40.081940 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 12:36:40 crc kubenswrapper[4881]: I0126 12:36:40.081909 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:36:40 crc kubenswrapper[4881]: E0126 12:36:40.082047 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 12:36:40 crc kubenswrapper[4881]: E0126 12:36:40.082167 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 12:36:40 crc kubenswrapper[4881]: E0126 12:36:40.082253 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 12:36:40 crc kubenswrapper[4881]: I0126 12:36:40.171471 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:40 crc kubenswrapper[4881]: I0126 12:36:40.171577 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:40 crc kubenswrapper[4881]: I0126 12:36:40.171602 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:40 crc kubenswrapper[4881]: I0126 12:36:40.171642 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:40 crc kubenswrapper[4881]: I0126 12:36:40.171667 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:40Z","lastTransitionTime":"2026-01-26T12:36:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:40 crc kubenswrapper[4881]: I0126 12:36:40.274022 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:40 crc kubenswrapper[4881]: I0126 12:36:40.274067 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:40 crc kubenswrapper[4881]: I0126 12:36:40.274080 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:40 crc kubenswrapper[4881]: I0126 12:36:40.274099 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:40 crc kubenswrapper[4881]: I0126 12:36:40.274111 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:40Z","lastTransitionTime":"2026-01-26T12:36:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:40 crc kubenswrapper[4881]: I0126 12:36:40.376163 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:40 crc kubenswrapper[4881]: I0126 12:36:40.376202 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:40 crc kubenswrapper[4881]: I0126 12:36:40.376213 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:40 crc kubenswrapper[4881]: I0126 12:36:40.376227 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:40 crc kubenswrapper[4881]: I0126 12:36:40.376240 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:40Z","lastTransitionTime":"2026-01-26T12:36:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:40 crc kubenswrapper[4881]: I0126 12:36:40.479633 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:40 crc kubenswrapper[4881]: I0126 12:36:40.479672 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:40 crc kubenswrapper[4881]: I0126 12:36:40.479684 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:40 crc kubenswrapper[4881]: I0126 12:36:40.479699 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:40 crc kubenswrapper[4881]: I0126 12:36:40.479709 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:40Z","lastTransitionTime":"2026-01-26T12:36:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:40 crc kubenswrapper[4881]: I0126 12:36:40.561371 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:40 crc kubenswrapper[4881]: I0126 12:36:40.561654 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:40 crc kubenswrapper[4881]: I0126 12:36:40.561758 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:40 crc kubenswrapper[4881]: I0126 12:36:40.561869 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:40 crc kubenswrapper[4881]: I0126 12:36:40.561966 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:40Z","lastTransitionTime":"2026-01-26T12:36:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:40 crc kubenswrapper[4881]: E0126 12:36:40.574973 4881 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:36:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:36:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:36:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:36:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"69f11506-d189-4146-9efa-f9280470e789\\\",\\\"systemUUID\\\":\\\"d2cf9e5e-9cbb-41d3-9ac1-608b9dce0d9f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:40Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:40 crc kubenswrapper[4881]: I0126 12:36:40.579030 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:40 crc kubenswrapper[4881]: I0126 12:36:40.579065 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
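[Editor's note: the E-level kubelet_node_status.go:585 entry above shows the kubelet's own node-status patch being rejected by the node.network-node-identity.openshift.io webhook for the same expired-certificate reason, so none of the reported conditions, images, capacity, or allocatable figures reach the API server. For orientation, the gap between the capacity and allocatable quantities in the patch body is the node's reserved headroom; a rough conversion follows (illustrative parser only, not the Kubernetes resource.Quantity implementation):

    #!/usr/bin/env python3
    # Sketch: convert the capacity/allocatable strings from the rejected patch.
    UNITS = {"m": 1e-3, "Ki": 1024}

    def qty(s: str) -> float:
        for suffix, factor in UNITS.items():
            if s.endswith(suffix):
                return float(s[:-len(suffix)]) * factor
        return float(s)  # bare numbers are cores or bytes

    capacity    = {"cpu": "12",     "memory": "32865360Ki", "ephemeral-storage": "83293888Ki"}
    allocatable = {"cpu": "11800m", "memory": "32404560Ki", "ephemeral-storage": "76396645454"}

    for name in capacity:
        print(name, qty(capacity[name]) - qty(allocatable[name]))
    # cpu 0.2 (cores), memory 471859200 (bytes, i.e. 450 MiB),
    # ephemeral-storage ~8.9e9 (bytes)
]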
event="NodeHasNoDiskPressure" Jan 26 12:36:40 crc kubenswrapper[4881]: I0126 12:36:40.579074 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:40 crc kubenswrapper[4881]: I0126 12:36:40.579088 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:40 crc kubenswrapper[4881]: I0126 12:36:40.579098 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:40Z","lastTransitionTime":"2026-01-26T12:36:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:40 crc kubenswrapper[4881]: E0126 12:36:40.592940 4881 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:36:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:36:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:36:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:36:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"69f11506-d189-4146-9efa-f9280470e789\\\",\\\"systemUUID\\\":\\\"d2cf9e5e-9cbb-41d3-9ac1-608b9dce0d9f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:40Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:40 crc kubenswrapper[4881]: I0126 12:36:40.596335 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:40 crc kubenswrapper[4881]: I0126 12:36:40.596365 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
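[Editor's note: one way to confirm the certificate behind https://127.0.0.1:9743 from the node itself. This is a diagnostic sketch only, assuming the pyca/cryptography package is installed; it is not part of the logged system:

    #!/usr/bin/env python3
    # Sketch: fetch and inspect the webhook's serving certificate.
    import ssl
    from datetime import datetime
    from cryptography import x509

    pem = ssl.get_server_certificate(("127.0.0.1", 9743))  # handshake without chain validation
    cert = x509.load_pem_x509_certificate(pem.encode())
    print("subject: ", cert.subject.rfc4514_string())
    print("notAfter:", cert.not_valid_after)                # expect 2025-08-24 17:21:41 per the log
    print("expired: ", cert.not_valid_after < datetime.utcnow())

If the printed notAfter matches the 2025-08-24T17:21:41Z value in the x509 errors above, the expired network-node-identity serving certificate, not the kubelet, is the first thing to fix.]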
event="NodeHasNoDiskPressure" Jan 26 12:36:40 crc kubenswrapper[4881]: I0126 12:36:40.596373 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:40 crc kubenswrapper[4881]: I0126 12:36:40.596386 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:40 crc kubenswrapper[4881]: I0126 12:36:40.596395 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:40Z","lastTransitionTime":"2026-01-26T12:36:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:40 crc kubenswrapper[4881]: E0126 12:36:40.607602 4881 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:36:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:36:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:36:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:36:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"69f11506-d189-4146-9efa-f9280470e789\\\",\\\"systemUUID\\\":\\\"d2cf9e5e-9cbb-41d3-9ac1-608b9dce0d9f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:40Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:40 crc kubenswrapper[4881]: I0126 12:36:40.611426 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:40 crc kubenswrapper[4881]: I0126 12:36:40.611458 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 12:36:40 crc kubenswrapper[4881]: I0126 12:36:40.611469 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:40 crc kubenswrapper[4881]: I0126 12:36:40.611484 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:40 crc kubenswrapper[4881]: I0126 12:36:40.611495 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:40Z","lastTransitionTime":"2026-01-26T12:36:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:40 crc kubenswrapper[4881]: E0126 12:36:40.623249 4881 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:36:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:36:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:36:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:36:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"69f11506-d189-4146-9efa-f9280470e789\\\",\\\"systemUUID\\\":\\\"d2cf9e5e-9cbb-41d3-9ac1-608b9dce0d9f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:40Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:40 crc kubenswrapper[4881]: I0126 12:36:40.627361 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:40 crc kubenswrapper[4881]: I0126 12:36:40.627395 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 12:36:40 crc kubenswrapper[4881]: I0126 12:36:40.627407 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:40 crc kubenswrapper[4881]: I0126 12:36:40.627421 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:40 crc kubenswrapper[4881]: I0126 12:36:40.627431 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:40Z","lastTransitionTime":"2026-01-26T12:36:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:40 crc kubenswrapper[4881]: E0126 12:36:40.640922 4881 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:36:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:36:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:36:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:36:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"69f11506-d189-4146-9efa-f9280470e789\\\",\\\"systemUUID\\\":\\\"d2cf9e5e-9cbb-41d3-9ac1-608b9dce0d9f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:40Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:40 crc kubenswrapper[4881]: E0126 12:36:40.641035 4881 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 12:36:40 crc kubenswrapper[4881]: I0126 12:36:40.642325 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
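event="NodeHasSufficientMemory"

Every one of the failed status patches above dies the same way: the POST to the node.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 is rejected because the serving certificate expired on 2025-08-24, long before the log's clock of 2026-01-26. A minimal Go sketch (a diagnostic aid, not kubelet code; the endpoint is taken from the error text) that dials the listener and reports the certificate's validity window:

```go
// certprobe.go - minimal sketch: report the validity window of the
// certificate served at the webhook endpoint named in the log above.
// InsecureSkipVerify is deliberate: the point is to inspect an expired
// certificate, not to authenticate the peer.
package main

import (
	"crypto/tls"
	"fmt"
	"time"
)

func main() {
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{
		InsecureSkipVerify: true, // inspect only; do not trust
	})
	if err != nil {
		fmt.Println("dial:", err)
		return
	}
	defer conn.Close()

	cert := conn.ConnectionState().PeerCertificates[0]
	fmt.Printf("subject:   %s\n", cert.Subject)
	fmt.Printf("notBefore: %s\n", cert.NotBefore.Format(time.RFC3339))
	fmt.Printf("notAfter:  %s\n", cert.NotAfter.Format(time.RFC3339))
	fmt.Printf("expired:   %v\n", time.Now().After(cert.NotAfter))
}
```

Against this node it would print a notAfter of 2025-08-24T17:21:41Z and expired: true, matching the x509 error repeated in every patch attempt.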
event="NodeHasSufficientMemory" Jan 26 12:36:40 crc kubenswrapper[4881]: I0126 12:36:40.642350 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:40 crc kubenswrapper[4881]: I0126 12:36:40.642361 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:40 crc kubenswrapper[4881]: I0126 12:36:40.642375 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:40 crc kubenswrapper[4881]: I0126 12:36:40.642385 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:40Z","lastTransitionTime":"2026-01-26T12:36:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:40 crc kubenswrapper[4881]: I0126 12:36:40.744436 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:40 crc kubenswrapper[4881]: I0126 12:36:40.744751 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:40 crc kubenswrapper[4881]: I0126 12:36:40.744835 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:40 crc kubenswrapper[4881]: I0126 12:36:40.744919 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:40 crc kubenswrapper[4881]: I0126 12:36:40.745009 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:40Z","lastTransitionTime":"2026-01-26T12:36:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:40 crc kubenswrapper[4881]: I0126 12:36:40.847621 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:40 crc kubenswrapper[4881]: I0126 12:36:40.847662 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:40 crc kubenswrapper[4881]: I0126 12:36:40.847672 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:40 crc kubenswrapper[4881]: I0126 12:36:40.847686 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:40 crc kubenswrapper[4881]: I0126 12:36:40.847696 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:40Z","lastTransitionTime":"2026-01-26T12:36:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:40 crc kubenswrapper[4881]: I0126 12:36:40.950647 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:40 crc kubenswrapper[4881]: I0126 12:36:40.950696 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:40 crc kubenswrapper[4881]: I0126 12:36:40.950707 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:40 crc kubenswrapper[4881]: I0126 12:36:40.950722 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:40 crc kubenswrapper[4881]: I0126 12:36:40.950732 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:40Z","lastTransitionTime":"2026-01-26T12:36:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:41 crc kubenswrapper[4881]: I0126 12:36:41.053116 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:41 crc kubenswrapper[4881]: I0126 12:36:41.053482 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:41 crc kubenswrapper[4881]: I0126 12:36:41.053624 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:41 crc kubenswrapper[4881]: I0126 12:36:41.053749 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:41 crc kubenswrapper[4881]: I0126 12:36:41.053878 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:41Z","lastTransitionTime":"2026-01-26T12:36:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:41 crc kubenswrapper[4881]: I0126 12:36:41.065280 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 08:40:41.382617842 +0000 UTC Jan 26 12:36:41 crc kubenswrapper[4881]: I0126 12:36:41.081744 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5zct6" Jan 26 12:36:41 crc kubenswrapper[4881]: E0126 12:36:41.081877 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
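pod="openshift-multus/network-metrics-daemon-5zct6" podUID="640554c2-37e2-425f-b182-aa9b9d6fa4d8"

The setters.go:603 entries embed the Ready condition as a JSON object. A small sketch that decodes that payload; the struct mirrors the fields visible in the log (the shape of Kubernetes' v1.NodeCondition) and is illustrative rather than the kubelet's own type:

```go
// condition.go - sketch: decode the condition={...} payload that
// setters.go logs each time the node flips to NotReady. Field names
// match the JSON keys in the log lines above.
package main

import (
	"encoding/json"
	"fmt"
	"time"
)

type NodeCondition struct {
	Type               string    `json:"type"`
	Status             string    `json:"status"`
	LastHeartbeatTime  time.Time `json:"lastHeartbeatTime"`
	LastTransitionTime time.Time `json:"lastTransitionTime"`
	Reason             string    `json:"reason"`
	Message            string    `json:"message"`
}

func main() {
	// Payload abbreviated from the log; the timestamps parse as RFC 3339.
	payload := `{"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:41Z","lastTransitionTime":"2026-01-26T12:36:41Z","reason":"KubeletNotReady","message":"container runtime network not ready"}`

	var c NodeCondition
	if err := json.Unmarshal([]byte(payload), &c); err != nil {
		fmt.Println("unmarshal:", err)
		return
	}
	fmt.Printf("%s=%s since %s (%s)\n", c.Type, c.Status,
		c.LastTransitionTime.Format(time.RFC3339), c.Reason)
}
```

Note that lastTransitionTime advances with every heartbeat here because each attempt to persist the condition fails at the webhook, so the transition is re-recorded rather than settled.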
pod="openshift-multus/network-metrics-daemon-5zct6" podUID="640554c2-37e2-425f-b182-aa9b9d6fa4d8" Jan 26 12:36:41 crc kubenswrapper[4881]: I0126 12:36:41.156480 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:41 crc kubenswrapper[4881]: I0126 12:36:41.156847 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:41 crc kubenswrapper[4881]: I0126 12:36:41.156917 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:41 crc kubenswrapper[4881]: I0126 12:36:41.156992 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:41 crc kubenswrapper[4881]: I0126 12:36:41.157053 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:41Z","lastTransitionTime":"2026-01-26T12:36:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:41 crc kubenswrapper[4881]: I0126 12:36:41.262232 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:41 crc kubenswrapper[4881]: I0126 12:36:41.262269 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:41 crc kubenswrapper[4881]: I0126 12:36:41.262278 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:41 crc kubenswrapper[4881]: I0126 12:36:41.262296 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:41 crc kubenswrapper[4881]: I0126 12:36:41.262305 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:41Z","lastTransitionTime":"2026-01-26T12:36:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:41 crc kubenswrapper[4881]: I0126 12:36:41.365186 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:41 crc kubenswrapper[4881]: I0126 12:36:41.365234 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:41 crc kubenswrapper[4881]: I0126 12:36:41.365246 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:41 crc kubenswrapper[4881]: I0126 12:36:41.365264 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:41 crc kubenswrapper[4881]: I0126 12:36:41.365276 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:41Z","lastTransitionTime":"2026-01-26T12:36:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:41 crc kubenswrapper[4881]: I0126 12:36:41.468032 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:41 crc kubenswrapper[4881]: I0126 12:36:41.468106 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:41 crc kubenswrapper[4881]: I0126 12:36:41.468129 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:41 crc kubenswrapper[4881]: I0126 12:36:41.468163 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:41 crc kubenswrapper[4881]: I0126 12:36:41.468189 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:41Z","lastTransitionTime":"2026-01-26T12:36:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:41 crc kubenswrapper[4881]: I0126 12:36:41.570205 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:41 crc kubenswrapper[4881]: I0126 12:36:41.570273 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:41 crc kubenswrapper[4881]: I0126 12:36:41.570295 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:41 crc kubenswrapper[4881]: I0126 12:36:41.570360 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:41 crc kubenswrapper[4881]: I0126 12:36:41.570385 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:41Z","lastTransitionTime":"2026-01-26T12:36:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:41 crc kubenswrapper[4881]: I0126 12:36:41.672575 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:41 crc kubenswrapper[4881]: I0126 12:36:41.672622 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:41 crc kubenswrapper[4881]: I0126 12:36:41.672633 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:41 crc kubenswrapper[4881]: I0126 12:36:41.672647 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:41 crc kubenswrapper[4881]: I0126 12:36:41.672656 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:41Z","lastTransitionTime":"2026-01-26T12:36:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:41 crc kubenswrapper[4881]: I0126 12:36:41.775053 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:41 crc kubenswrapper[4881]: I0126 12:36:41.775125 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:41 crc kubenswrapper[4881]: I0126 12:36:41.775145 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:41 crc kubenswrapper[4881]: I0126 12:36:41.775174 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:41 crc kubenswrapper[4881]: I0126 12:36:41.775198 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:41Z","lastTransitionTime":"2026-01-26T12:36:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:41 crc kubenswrapper[4881]: I0126 12:36:41.877732 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:41 crc kubenswrapper[4881]: I0126 12:36:41.877774 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:41 crc kubenswrapper[4881]: I0126 12:36:41.877783 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:41 crc kubenswrapper[4881]: I0126 12:36:41.877797 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:41 crc kubenswrapper[4881]: I0126 12:36:41.877807 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:41Z","lastTransitionTime":"2026-01-26T12:36:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
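Has your network provider started?"}

The NotReady reason never changes: the runtime finds no CNI configuration under /etc/kubernetes/cni/net.d/, so NetworkReady stays false until the network plugin writes one. A sketch of the kind of check involved, using the conventional CNI config extensions; it approximates, rather than reproduces, the runtime's own logic:

```go
// cnicheck.go - sketch: report whether a CNI config directory contains
// any usable network configuration. The extension list follows CNI
// convention (.conf, .conflist, .json).
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	dir := "/etc/kubernetes/cni/net.d" // path from the log message
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Println("readdir:", err)
		return
	}
	found := 0
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			fmt.Println("config:", filepath.Join(dir, e.Name()))
			found++
		}
	}
	if found == 0 {
		fmt.Println("no CNI configuration files; network plugin not ready")
	}
}
```

On this node the directory is empty because the OVN components cannot come up while the node-identity webhook certificate is expired, so the NotReady condition and the webhook failure are the same fault seen from two sides.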
Jan 26 12:36:41 crc kubenswrapper[4881]: I0126 12:36:41.980662 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 12:36:41 crc kubenswrapper[4881]: I0126 12:36:41.980706 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 12:36:41 crc kubenswrapper[4881]: I0126 12:36:41.980717 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 12:36:41 crc kubenswrapper[4881]: I0126 12:36:41.980733 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 12:36:41 crc kubenswrapper[4881]: I0126 12:36:41.980744 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:41Z","lastTransitionTime":"2026-01-26T12:36:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 12:36:42 crc kubenswrapper[4881]: I0126 12:36:42.065983 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 00:39:09.66482417 +0000 UTC
Jan 26 12:36:42 crc kubenswrapper[4881]: I0126 12:36:42.081456 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 26 12:36:42 crc kubenswrapper[4881]: I0126 12:36:42.081489 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 26 12:36:42 crc kubenswrapper[4881]: E0126 12:36:42.081616 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 26 12:36:42 crc kubenswrapper[4881]: I0126 12:36:42.081665 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 26 12:36:42 crc kubenswrapper[4881]: E0126 12:36:42.081718 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 26 12:36:42 crc kubenswrapper[4881]: E0126 12:36:42.081789 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
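pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"

The certificate_manager.go:356 lines (12:36:41.065280 and 12:36:42.065983 above) show the kubelet-serving certificate is valid until 2026-02-24 but a fresh, jittered rotation deadline is drawn on every pass, and both drawn deadlines already lie in the past, which keeps rotation perpetually due. A sketch of that scheme; the 70-90% band is an assumption for illustration, since the log only shows that each pass draws a new deadline inside the validity window:

```go
// rotation.go - sketch: compute a jittered rotation deadline inside a
// certificate's validity window, in the style of client-go's
// certificate manager. The notBefore value is an assumption; the log
// only reports the expiration time.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	// Pick a uniform point between 70% and 90% of the lifetime.
	jittered := time.Duration(float64(total) * (0.7 + 0.2*rand.Float64()))
	return notBefore.Add(jittered)
}

func main() {
	notBefore := time.Date(2025, 11, 26, 5, 53, 3, 0, time.UTC) // assumed
	notAfter := time.Date(2026, 2, 24, 5, 53, 3, 0, time.UTC)   // from the log
	fmt.Println("rotation deadline:", rotationDeadline(notBefore, notAfter))
}
```

Because the deadline is re-randomized each time it is checked, the two log lines print different past deadlines one second apart.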
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 12:36:42 crc kubenswrapper[4881]: I0126 12:36:42.083541 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:42 crc kubenswrapper[4881]: I0126 12:36:42.083712 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:42 crc kubenswrapper[4881]: I0126 12:36:42.083857 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:42 crc kubenswrapper[4881]: I0126 12:36:42.083991 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:42 crc kubenswrapper[4881]: I0126 12:36:42.084110 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:42Z","lastTransitionTime":"2026-01-26T12:36:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:42 crc kubenswrapper[4881]: I0126 12:36:42.187338 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:42 crc kubenswrapper[4881]: I0126 12:36:42.187622 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:42 crc kubenswrapper[4881]: I0126 12:36:42.187727 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:42 crc kubenswrapper[4881]: I0126 12:36:42.187820 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:42 crc kubenswrapper[4881]: I0126 12:36:42.187922 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:42Z","lastTransitionTime":"2026-01-26T12:36:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:42 crc kubenswrapper[4881]: I0126 12:36:42.289819 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:42 crc kubenswrapper[4881]: I0126 12:36:42.289862 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:42 crc kubenswrapper[4881]: I0126 12:36:42.289873 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:42 crc kubenswrapper[4881]: I0126 12:36:42.289889 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:42 crc kubenswrapper[4881]: I0126 12:36:42.289901 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:42Z","lastTransitionTime":"2026-01-26T12:36:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:42 crc kubenswrapper[4881]: I0126 12:36:42.392052 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:42 crc kubenswrapper[4881]: I0126 12:36:42.392095 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:42 crc kubenswrapper[4881]: I0126 12:36:42.392104 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:42 crc kubenswrapper[4881]: I0126 12:36:42.392118 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:42 crc kubenswrapper[4881]: I0126 12:36:42.392129 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:42Z","lastTransitionTime":"2026-01-26T12:36:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:42 crc kubenswrapper[4881]: I0126 12:36:42.493948 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:42 crc kubenswrapper[4881]: I0126 12:36:42.493995 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:42 crc kubenswrapper[4881]: I0126 12:36:42.494005 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:42 crc kubenswrapper[4881]: I0126 12:36:42.494046 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:42 crc kubenswrapper[4881]: I0126 12:36:42.494057 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:42Z","lastTransitionTime":"2026-01-26T12:36:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:42 crc kubenswrapper[4881]: I0126 12:36:42.596036 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:42 crc kubenswrapper[4881]: I0126 12:36:42.596069 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:42 crc kubenswrapper[4881]: I0126 12:36:42.596077 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:42 crc kubenswrapper[4881]: I0126 12:36:42.596090 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:42 crc kubenswrapper[4881]: I0126 12:36:42.596098 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:42Z","lastTransitionTime":"2026-01-26T12:36:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:42 crc kubenswrapper[4881]: I0126 12:36:42.698979 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:42 crc kubenswrapper[4881]: I0126 12:36:42.699052 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:42 crc kubenswrapper[4881]: I0126 12:36:42.699075 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:42 crc kubenswrapper[4881]: I0126 12:36:42.699104 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:42 crc kubenswrapper[4881]: I0126 12:36:42.699186 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:42Z","lastTransitionTime":"2026-01-26T12:36:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:42 crc kubenswrapper[4881]: I0126 12:36:42.802085 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:42 crc kubenswrapper[4881]: I0126 12:36:42.802171 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:42 crc kubenswrapper[4881]: I0126 12:36:42.802189 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:42 crc kubenswrapper[4881]: I0126 12:36:42.802214 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:42 crc kubenswrapper[4881]: I0126 12:36:42.802234 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:42Z","lastTransitionTime":"2026-01-26T12:36:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:42 crc kubenswrapper[4881]: I0126 12:36:42.905836 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:42 crc kubenswrapper[4881]: I0126 12:36:42.905924 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:42 crc kubenswrapper[4881]: I0126 12:36:42.905944 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:42 crc kubenswrapper[4881]: I0126 12:36:42.905967 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:42 crc kubenswrapper[4881]: I0126 12:36:42.905982 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:42Z","lastTransitionTime":"2026-01-26T12:36:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:43 crc kubenswrapper[4881]: I0126 12:36:43.011446 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:43 crc kubenswrapper[4881]: I0126 12:36:43.011849 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:43 crc kubenswrapper[4881]: I0126 12:36:43.012004 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:43 crc kubenswrapper[4881]: I0126 12:36:43.012158 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:43 crc kubenswrapper[4881]: I0126 12:36:43.012303 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:43Z","lastTransitionTime":"2026-01-26T12:36:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:43 crc kubenswrapper[4881]: I0126 12:36:43.066855 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 08:40:48.631922176 +0000 UTC Jan 26 12:36:43 crc kubenswrapper[4881]: I0126 12:36:43.082339 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5zct6" Jan 26 12:36:43 crc kubenswrapper[4881]: E0126 12:36:43.082835 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5zct6" podUID="640554c2-37e2-425f-b182-aa9b9d6fa4d8" Jan 26 12:36:43 crc kubenswrapper[4881]: I0126 12:36:43.114906 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:43 crc kubenswrapper[4881]: I0126 12:36:43.114950 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:43 crc kubenswrapper[4881]: I0126 12:36:43.114966 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:43 crc kubenswrapper[4881]: I0126 12:36:43.114992 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:43 crc kubenswrapper[4881]: I0126 12:36:43.115010 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:43Z","lastTransitionTime":"2026-01-26T12:36:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:43 crc kubenswrapper[4881]: I0126 12:36:43.252260 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:43 crc kubenswrapper[4881]: I0126 12:36:43.252321 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:43 crc kubenswrapper[4881]: I0126 12:36:43.252349 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:43 crc kubenswrapper[4881]: I0126 12:36:43.252374 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:43 crc kubenswrapper[4881]: I0126 12:36:43.252395 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:43Z","lastTransitionTime":"2026-01-26T12:36:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:43 crc kubenswrapper[4881]: I0126 12:36:43.356144 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:43 crc kubenswrapper[4881]: I0126 12:36:43.356198 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:43 crc kubenswrapper[4881]: I0126 12:36:43.356216 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:43 crc kubenswrapper[4881]: I0126 12:36:43.356240 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:43 crc kubenswrapper[4881]: I0126 12:36:43.356258 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:43Z","lastTransitionTime":"2026-01-26T12:36:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:43 crc kubenswrapper[4881]: I0126 12:36:43.459305 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:43 crc kubenswrapper[4881]: I0126 12:36:43.459360 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:43 crc kubenswrapper[4881]: I0126 12:36:43.459376 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:43 crc kubenswrapper[4881]: I0126 12:36:43.459398 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:43 crc kubenswrapper[4881]: I0126 12:36:43.459415 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:43Z","lastTransitionTime":"2026-01-26T12:36:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:43 crc kubenswrapper[4881]: I0126 12:36:43.562573 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:43 crc kubenswrapper[4881]: I0126 12:36:43.562667 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:43 crc kubenswrapper[4881]: I0126 12:36:43.562689 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:43 crc kubenswrapper[4881]: I0126 12:36:43.562716 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:43 crc kubenswrapper[4881]: I0126 12:36:43.562733 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:43Z","lastTransitionTime":"2026-01-26T12:36:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:43 crc kubenswrapper[4881]: I0126 12:36:43.666097 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:43 crc kubenswrapper[4881]: I0126 12:36:43.666178 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:43 crc kubenswrapper[4881]: I0126 12:36:43.666203 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:43 crc kubenswrapper[4881]: I0126 12:36:43.666229 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:43 crc kubenswrapper[4881]: I0126 12:36:43.666246 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:43Z","lastTransitionTime":"2026-01-26T12:36:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:43 crc kubenswrapper[4881]: I0126 12:36:43.770021 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:43 crc kubenswrapper[4881]: I0126 12:36:43.770123 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:43 crc kubenswrapper[4881]: I0126 12:36:43.770158 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:43 crc kubenswrapper[4881]: I0126 12:36:43.770191 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:43 crc kubenswrapper[4881]: I0126 12:36:43.770215 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:43Z","lastTransitionTime":"2026-01-26T12:36:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:43 crc kubenswrapper[4881]: I0126 12:36:43.873290 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:43 crc kubenswrapper[4881]: I0126 12:36:43.873346 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:43 crc kubenswrapper[4881]: I0126 12:36:43.873739 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:43 crc kubenswrapper[4881]: I0126 12:36:43.873780 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:43 crc kubenswrapper[4881]: I0126 12:36:43.873843 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:43Z","lastTransitionTime":"2026-01-26T12:36:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:43 crc kubenswrapper[4881]: I0126 12:36:43.977693 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:43 crc kubenswrapper[4881]: I0126 12:36:43.977765 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:43 crc kubenswrapper[4881]: I0126 12:36:43.977790 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:43 crc kubenswrapper[4881]: I0126 12:36:43.977820 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:43 crc kubenswrapper[4881]: I0126 12:36:43.977842 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:43Z","lastTransitionTime":"2026-01-26T12:36:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:44 crc kubenswrapper[4881]: I0126 12:36:44.067609 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 23:48:55.845581614 +0000 UTC Jan 26 12:36:44 crc kubenswrapper[4881]: I0126 12:36:44.081473 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:44 crc kubenswrapper[4881]: I0126 12:36:44.081558 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:44 crc kubenswrapper[4881]: I0126 12:36:44.081572 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:44 crc kubenswrapper[4881]: I0126 12:36:44.081591 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:44 crc kubenswrapper[4881]: I0126 12:36:44.081604 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:44Z","lastTransitionTime":"2026-01-26T12:36:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:44 crc kubenswrapper[4881]: I0126 12:36:44.081804 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 12:36:44 crc kubenswrapper[4881]: I0126 12:36:44.081837 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 12:36:44 crc kubenswrapper[4881]: E0126 12:36:44.081924 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 12:36:44 crc kubenswrapper[4881]: I0126 12:36:44.082004 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:36:44 crc kubenswrapper[4881]: E0126 12:36:44.082139 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 12:36:44 crc kubenswrapper[4881]: E0126 12:36:44.082216 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 12:36:44 crc kubenswrapper[4881]: I0126 12:36:44.185170 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:44 crc kubenswrapper[4881]: I0126 12:36:44.185221 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:44 crc kubenswrapper[4881]: I0126 12:36:44.185232 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:44 crc kubenswrapper[4881]: I0126 12:36:44.185247 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:44 crc kubenswrapper[4881]: I0126 12:36:44.185259 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:44Z","lastTransitionTime":"2026-01-26T12:36:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:44 crc kubenswrapper[4881]: I0126 12:36:44.287502 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:44 crc kubenswrapper[4881]: I0126 12:36:44.287571 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:44 crc kubenswrapper[4881]: I0126 12:36:44.287584 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:44 crc kubenswrapper[4881]: I0126 12:36:44.287603 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:44 crc kubenswrapper[4881]: I0126 12:36:44.287616 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:44Z","lastTransitionTime":"2026-01-26T12:36:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:44 crc kubenswrapper[4881]: I0126 12:36:44.391492 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:44 crc kubenswrapper[4881]: I0126 12:36:44.391558 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:44 crc kubenswrapper[4881]: I0126 12:36:44.391568 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:44 crc kubenswrapper[4881]: I0126 12:36:44.391582 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:44 crc kubenswrapper[4881]: I0126 12:36:44.391591 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:44Z","lastTransitionTime":"2026-01-26T12:36:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:44 crc kubenswrapper[4881]: I0126 12:36:44.494134 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:44 crc kubenswrapper[4881]: I0126 12:36:44.494175 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:44 crc kubenswrapper[4881]: I0126 12:36:44.494186 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:44 crc kubenswrapper[4881]: I0126 12:36:44.494201 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:44 crc kubenswrapper[4881]: I0126 12:36:44.494212 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:44Z","lastTransitionTime":"2026-01-26T12:36:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:44 crc kubenswrapper[4881]: I0126 12:36:44.596408 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:44 crc kubenswrapper[4881]: I0126 12:36:44.596461 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:44 crc kubenswrapper[4881]: I0126 12:36:44.596486 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:44 crc kubenswrapper[4881]: I0126 12:36:44.596506 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:44 crc kubenswrapper[4881]: I0126 12:36:44.596543 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:44Z","lastTransitionTime":"2026-01-26T12:36:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:44 crc kubenswrapper[4881]: I0126 12:36:44.699039 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:44 crc kubenswrapper[4881]: I0126 12:36:44.699083 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:44 crc kubenswrapper[4881]: I0126 12:36:44.699092 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:44 crc kubenswrapper[4881]: I0126 12:36:44.699108 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:44 crc kubenswrapper[4881]: I0126 12:36:44.699116 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:44Z","lastTransitionTime":"2026-01-26T12:36:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:44 crc kubenswrapper[4881]: I0126 12:36:44.801833 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:44 crc kubenswrapper[4881]: I0126 12:36:44.801876 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:44 crc kubenswrapper[4881]: I0126 12:36:44.801888 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:44 crc kubenswrapper[4881]: I0126 12:36:44.801905 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:44 crc kubenswrapper[4881]: I0126 12:36:44.801917 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:44Z","lastTransitionTime":"2026-01-26T12:36:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:44 crc kubenswrapper[4881]: I0126 12:36:44.905076 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:44 crc kubenswrapper[4881]: I0126 12:36:44.905145 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:44 crc kubenswrapper[4881]: I0126 12:36:44.905164 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:44 crc kubenswrapper[4881]: I0126 12:36:44.905186 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:44 crc kubenswrapper[4881]: I0126 12:36:44.905204 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:44Z","lastTransitionTime":"2026-01-26T12:36:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:45 crc kubenswrapper[4881]: I0126 12:36:45.008000 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:45 crc kubenswrapper[4881]: I0126 12:36:45.008028 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:45 crc kubenswrapper[4881]: I0126 12:36:45.008036 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:45 crc kubenswrapper[4881]: I0126 12:36:45.008049 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:45 crc kubenswrapper[4881]: I0126 12:36:45.008057 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:45Z","lastTransitionTime":"2026-01-26T12:36:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:45 crc kubenswrapper[4881]: I0126 12:36:45.068335 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 13:54:38.050031975 +0000 UTC Jan 26 12:36:45 crc kubenswrapper[4881]: I0126 12:36:45.081628 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5zct6" Jan 26 12:36:45 crc kubenswrapper[4881]: E0126 12:36:45.081902 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5zct6" podUID="640554c2-37e2-425f-b182-aa9b9d6fa4d8" Jan 26 12:36:45 crc kubenswrapper[4881]: I0126 12:36:45.110753 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:45 crc kubenswrapper[4881]: I0126 12:36:45.110826 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:45 crc kubenswrapper[4881]: I0126 12:36:45.110845 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:45 crc kubenswrapper[4881]: I0126 12:36:45.110872 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:45 crc kubenswrapper[4881]: I0126 12:36:45.110890 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:45Z","lastTransitionTime":"2026-01-26T12:36:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:45 crc kubenswrapper[4881]: I0126 12:36:45.213080 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:45 crc kubenswrapper[4881]: I0126 12:36:45.213117 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:45 crc kubenswrapper[4881]: I0126 12:36:45.213127 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:45 crc kubenswrapper[4881]: I0126 12:36:45.213165 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:45 crc kubenswrapper[4881]: I0126 12:36:45.213177 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:45Z","lastTransitionTime":"2026-01-26T12:36:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:45 crc kubenswrapper[4881]: I0126 12:36:45.316092 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:45 crc kubenswrapper[4881]: I0126 12:36:45.316258 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:45 crc kubenswrapper[4881]: I0126 12:36:45.316274 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:45 crc kubenswrapper[4881]: I0126 12:36:45.316293 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:45 crc kubenswrapper[4881]: I0126 12:36:45.316306 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:45Z","lastTransitionTime":"2026-01-26T12:36:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:45 crc kubenswrapper[4881]: I0126 12:36:45.419181 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:45 crc kubenswrapper[4881]: I0126 12:36:45.419265 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:45 crc kubenswrapper[4881]: I0126 12:36:45.419290 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:45 crc kubenswrapper[4881]: I0126 12:36:45.419324 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:45 crc kubenswrapper[4881]: I0126 12:36:45.419347 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:45Z","lastTransitionTime":"2026-01-26T12:36:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:45 crc kubenswrapper[4881]: I0126 12:36:45.523003 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:45 crc kubenswrapper[4881]: I0126 12:36:45.523056 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:45 crc kubenswrapper[4881]: I0126 12:36:45.523087 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:45 crc kubenswrapper[4881]: I0126 12:36:45.523111 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:45 crc kubenswrapper[4881]: I0126 12:36:45.523131 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:45Z","lastTransitionTime":"2026-01-26T12:36:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:45 crc kubenswrapper[4881]: I0126 12:36:45.625356 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:45 crc kubenswrapper[4881]: I0126 12:36:45.625396 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:45 crc kubenswrapper[4881]: I0126 12:36:45.625404 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:45 crc kubenswrapper[4881]: I0126 12:36:45.625417 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:45 crc kubenswrapper[4881]: I0126 12:36:45.625426 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:45Z","lastTransitionTime":"2026-01-26T12:36:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:45 crc kubenswrapper[4881]: I0126 12:36:45.728339 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:45 crc kubenswrapper[4881]: I0126 12:36:45.728382 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:45 crc kubenswrapper[4881]: I0126 12:36:45.728394 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:45 crc kubenswrapper[4881]: I0126 12:36:45.728412 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:45 crc kubenswrapper[4881]: I0126 12:36:45.728425 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:45Z","lastTransitionTime":"2026-01-26T12:36:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:45 crc kubenswrapper[4881]: I0126 12:36:45.831288 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:45 crc kubenswrapper[4881]: I0126 12:36:45.831329 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:45 crc kubenswrapper[4881]: I0126 12:36:45.831340 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:45 crc kubenswrapper[4881]: I0126 12:36:45.831355 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:45 crc kubenswrapper[4881]: I0126 12:36:45.831367 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:45Z","lastTransitionTime":"2026-01-26T12:36:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:45 crc kubenswrapper[4881]: I0126 12:36:45.934594 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:45 crc kubenswrapper[4881]: I0126 12:36:45.934635 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:45 crc kubenswrapper[4881]: I0126 12:36:45.934646 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:45 crc kubenswrapper[4881]: I0126 12:36:45.934662 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:45 crc kubenswrapper[4881]: I0126 12:36:45.934672 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:45Z","lastTransitionTime":"2026-01-26T12:36:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:46 crc kubenswrapper[4881]: I0126 12:36:46.037690 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:46 crc kubenswrapper[4881]: I0126 12:36:46.037736 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:46 crc kubenswrapper[4881]: I0126 12:36:46.037748 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:46 crc kubenswrapper[4881]: I0126 12:36:46.037765 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:46 crc kubenswrapper[4881]: I0126 12:36:46.037776 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:46Z","lastTransitionTime":"2026-01-26T12:36:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:46 crc kubenswrapper[4881]: I0126 12:36:46.069112 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 12:30:50.608522476 +0000 UTC Jan 26 12:36:46 crc kubenswrapper[4881]: I0126 12:36:46.083482 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 12:36:46 crc kubenswrapper[4881]: E0126 12:36:46.083616 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 12:36:46 crc kubenswrapper[4881]: I0126 12:36:46.083774 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 12:36:46 crc kubenswrapper[4881]: E0126 12:36:46.083828 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 12:36:46 crc kubenswrapper[4881]: I0126 12:36:46.083930 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:36:46 crc kubenswrapper[4881]: E0126 12:36:46.083969 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 12:36:46 crc kubenswrapper[4881]: I0126 12:36:46.140736 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:46 crc kubenswrapper[4881]: I0126 12:36:46.140812 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:46 crc kubenswrapper[4881]: I0126 12:36:46.140826 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:46 crc kubenswrapper[4881]: I0126 12:36:46.140871 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:46 crc kubenswrapper[4881]: I0126 12:36:46.140884 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:46Z","lastTransitionTime":"2026-01-26T12:36:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:46 crc kubenswrapper[4881]: I0126 12:36:46.243155 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:46 crc kubenswrapper[4881]: I0126 12:36:46.243214 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:46 crc kubenswrapper[4881]: I0126 12:36:46.243226 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:46 crc kubenswrapper[4881]: I0126 12:36:46.243242 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:46 crc kubenswrapper[4881]: I0126 12:36:46.243254 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:46Z","lastTransitionTime":"2026-01-26T12:36:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:46 crc kubenswrapper[4881]: I0126 12:36:46.346339 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:46 crc kubenswrapper[4881]: I0126 12:36:46.346448 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:46 crc kubenswrapper[4881]: I0126 12:36:46.346485 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:46 crc kubenswrapper[4881]: I0126 12:36:46.346574 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:46 crc kubenswrapper[4881]: I0126 12:36:46.346617 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:46Z","lastTransitionTime":"2026-01-26T12:36:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:46 crc kubenswrapper[4881]: I0126 12:36:46.449239 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:46 crc kubenswrapper[4881]: I0126 12:36:46.449298 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:46 crc kubenswrapper[4881]: I0126 12:36:46.449315 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:46 crc kubenswrapper[4881]: I0126 12:36:46.449338 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:46 crc kubenswrapper[4881]: I0126 12:36:46.449356 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:46Z","lastTransitionTime":"2026-01-26T12:36:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:46 crc kubenswrapper[4881]: I0126 12:36:46.552327 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:46 crc kubenswrapper[4881]: I0126 12:36:46.552372 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:46 crc kubenswrapper[4881]: I0126 12:36:46.552383 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:46 crc kubenswrapper[4881]: I0126 12:36:46.552399 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:46 crc kubenswrapper[4881]: I0126 12:36:46.552410 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:46Z","lastTransitionTime":"2026-01-26T12:36:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:46 crc kubenswrapper[4881]: I0126 12:36:46.654936 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:46 crc kubenswrapper[4881]: I0126 12:36:46.655012 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:46 crc kubenswrapper[4881]: I0126 12:36:46.655028 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:46 crc kubenswrapper[4881]: I0126 12:36:46.655096 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:46 crc kubenswrapper[4881]: I0126 12:36:46.655112 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:46Z","lastTransitionTime":"2026-01-26T12:36:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:46 crc kubenswrapper[4881]: I0126 12:36:46.756791 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:46 crc kubenswrapper[4881]: I0126 12:36:46.756825 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:46 crc kubenswrapper[4881]: I0126 12:36:46.756834 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:46 crc kubenswrapper[4881]: I0126 12:36:46.756848 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:46 crc kubenswrapper[4881]: I0126 12:36:46.756858 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:46Z","lastTransitionTime":"2026-01-26T12:36:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:46 crc kubenswrapper[4881]: I0126 12:36:46.860949 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:46 crc kubenswrapper[4881]: I0126 12:36:46.861004 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:46 crc kubenswrapper[4881]: I0126 12:36:46.861018 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:46 crc kubenswrapper[4881]: I0126 12:36:46.861041 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:46 crc kubenswrapper[4881]: I0126 12:36:46.861056 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:46Z","lastTransitionTime":"2026-01-26T12:36:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:46 crc kubenswrapper[4881]: I0126 12:36:46.963399 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:46 crc kubenswrapper[4881]: I0126 12:36:46.963442 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:46 crc kubenswrapper[4881]: I0126 12:36:46.963454 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:46 crc kubenswrapper[4881]: I0126 12:36:46.963473 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:46 crc kubenswrapper[4881]: I0126 12:36:46.963486 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:46Z","lastTransitionTime":"2026-01-26T12:36:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:47 crc kubenswrapper[4881]: I0126 12:36:47.066094 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:47 crc kubenswrapper[4881]: I0126 12:36:47.066143 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:47 crc kubenswrapper[4881]: I0126 12:36:47.066154 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:47 crc kubenswrapper[4881]: I0126 12:36:47.066173 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:47 crc kubenswrapper[4881]: I0126 12:36:47.066186 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:47Z","lastTransitionTime":"2026-01-26T12:36:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:47 crc kubenswrapper[4881]: I0126 12:36:47.070270 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 08:01:10.461864182 +0000 UTC Jan 26 12:36:47 crc kubenswrapper[4881]: I0126 12:36:47.082285 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5zct6" Jan 26 12:36:47 crc kubenswrapper[4881]: E0126 12:36:47.082443 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5zct6" podUID="640554c2-37e2-425f-b182-aa9b9d6fa4d8" Jan 26 12:36:47 crc kubenswrapper[4881]: I0126 12:36:47.169146 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:47 crc kubenswrapper[4881]: I0126 12:36:47.169195 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:47 crc kubenswrapper[4881]: I0126 12:36:47.169211 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:47 crc kubenswrapper[4881]: I0126 12:36:47.169236 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:47 crc kubenswrapper[4881]: I0126 12:36:47.169252 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:47Z","lastTransitionTime":"2026-01-26T12:36:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:47 crc kubenswrapper[4881]: I0126 12:36:47.272451 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:47 crc kubenswrapper[4881]: I0126 12:36:47.272565 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:47 crc kubenswrapper[4881]: I0126 12:36:47.272579 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:47 crc kubenswrapper[4881]: I0126 12:36:47.272598 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:47 crc kubenswrapper[4881]: I0126 12:36:47.272634 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:47Z","lastTransitionTime":"2026-01-26T12:36:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:47 crc kubenswrapper[4881]: I0126 12:36:47.376492 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:47 crc kubenswrapper[4881]: I0126 12:36:47.376546 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:47 crc kubenswrapper[4881]: I0126 12:36:47.376555 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:47 crc kubenswrapper[4881]: I0126 12:36:47.376571 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:47 crc kubenswrapper[4881]: I0126 12:36:47.376583 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:47Z","lastTransitionTime":"2026-01-26T12:36:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:47 crc kubenswrapper[4881]: I0126 12:36:47.480406 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:47 crc kubenswrapper[4881]: I0126 12:36:47.480540 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:47 crc kubenswrapper[4881]: I0126 12:36:47.480557 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:47 crc kubenswrapper[4881]: I0126 12:36:47.480580 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:47 crc kubenswrapper[4881]: I0126 12:36:47.480594 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:47Z","lastTransitionTime":"2026-01-26T12:36:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:47 crc kubenswrapper[4881]: I0126 12:36:47.583764 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:47 crc kubenswrapper[4881]: I0126 12:36:47.583832 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:47 crc kubenswrapper[4881]: I0126 12:36:47.583855 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:47 crc kubenswrapper[4881]: I0126 12:36:47.583885 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:47 crc kubenswrapper[4881]: I0126 12:36:47.583907 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:47Z","lastTransitionTime":"2026-01-26T12:36:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:47 crc kubenswrapper[4881]: I0126 12:36:47.686978 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:47 crc kubenswrapper[4881]: I0126 12:36:47.687299 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:47 crc kubenswrapper[4881]: I0126 12:36:47.687436 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:47 crc kubenswrapper[4881]: I0126 12:36:47.687612 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:47 crc kubenswrapper[4881]: I0126 12:36:47.687761 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:47Z","lastTransitionTime":"2026-01-26T12:36:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:47 crc kubenswrapper[4881]: I0126 12:36:47.790646 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:47 crc kubenswrapper[4881]: I0126 12:36:47.790715 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:47 crc kubenswrapper[4881]: I0126 12:36:47.790738 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:47 crc kubenswrapper[4881]: I0126 12:36:47.790767 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:47 crc kubenswrapper[4881]: I0126 12:36:47.790791 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:47Z","lastTransitionTime":"2026-01-26T12:36:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:47 crc kubenswrapper[4881]: I0126 12:36:47.894046 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:47 crc kubenswrapper[4881]: I0126 12:36:47.894135 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:47 crc kubenswrapper[4881]: I0126 12:36:47.894167 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:47 crc kubenswrapper[4881]: I0126 12:36:47.894195 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:47 crc kubenswrapper[4881]: I0126 12:36:47.894213 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:47Z","lastTransitionTime":"2026-01-26T12:36:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:47 crc kubenswrapper[4881]: I0126 12:36:47.997034 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:47 crc kubenswrapper[4881]: I0126 12:36:47.997098 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:47 crc kubenswrapper[4881]: I0126 12:36:47.997116 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:47 crc kubenswrapper[4881]: I0126 12:36:47.997140 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:47 crc kubenswrapper[4881]: I0126 12:36:47.997159 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:47Z","lastTransitionTime":"2026-01-26T12:36:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:48 crc kubenswrapper[4881]: I0126 12:36:48.070770 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 09:29:40.562240052 +0000 UTC Jan 26 12:36:48 crc kubenswrapper[4881]: I0126 12:36:48.082094 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 12:36:48 crc kubenswrapper[4881]: E0126 12:36:48.082251 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 12:36:48 crc kubenswrapper[4881]: I0126 12:36:48.082399 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 12:36:48 crc kubenswrapper[4881]: I0126 12:36:48.082482 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:36:48 crc kubenswrapper[4881]: E0126 12:36:48.082564 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 12:36:48 crc kubenswrapper[4881]: E0126 12:36:48.082777 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 12:36:48 crc kubenswrapper[4881]: I0126 12:36:48.100173 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee20042b-95d1-49f2-abea-e339efdb096f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce7427a811572a8f3810d59e072888d97a8f15e2c493297a25418b85e1f9f16d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86d50d955d4fa8b487f6900f78738321e21226e13b7b12c100f0dd029de2fde1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8138683a9408b1a7542f9f29a3fd0b83a06f10b000c685d9d83b72b23ba4c55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/k
ubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94d3f2c93382cde3d932117276753f004c58e6e52bce611119b79d9e76eb7654\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:48Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:48 crc kubenswrapper[4881]: I0126 12:36:48.100589 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:48 crc kubenswrapper[4881]: I0126 12:36:48.100629 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:48 crc kubenswrapper[4881]: I0126 12:36:48.100645 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:48 crc kubenswrapper[4881]: I0126 12:36:48.100669 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:48 crc kubenswrapper[4881]: I0126 12:36:48.100810 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:48Z","lastTransitionTime":"2026-01-26T12:36:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:48 crc kubenswrapper[4881]: I0126 12:36:48.126658 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b6f7953-daf2-4ba4-8c10-7de3f4ea07a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4f54285b7646c2fa5da34439d2812a5669f168356446852b9bff249107a7c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79ddc5e5c3c4b0c18219fe6a8297ff2b1646b9621823f529972781f915d81c5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://533f45172a004ad2bd598849b711b38e823485188ff4163dd0dbc5ae5a28cb24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f7c8799e63e6072f9dc0580f3efbfe330a9f705a689b2785c60f64391881b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c525c27678ba77145459086b6d3d2fee5d471c7393f34215e9ae46d9d72cf4e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://667695541b7d707a83aa0080723518aeae689b140791f0121a4ce6d485f39138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://667695541b7d707a83aa0080723518aeae689b140791f0121a4ce6d485f39138\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab83101bb56dc3eaf3a0514f60da45b0c5304a491184fe71584d7ed72bc6e59c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab83101bb56dc3eaf3a0514f60da45b0c5304a491184fe71584d7ed72bc6e59c\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e52793db25051b2c8eec3dbeaf0547ec04ec438ce426f3dfcd2383792f846644\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e52793db25051b2c8eec3dbeaf0547ec04ec438ce426f3dfcd2383792f846644\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:48Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:48 crc kubenswrapper[4881]: I0126 12:36:48.145805 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9aa1877-c239-4157-938d-e5c85ff3e76a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e461100b4aeab2a6e88c5c3aeb0802664d0d647247bc73b107be7c6bc4cfc9cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cbb88ac26b9c279bef436acf1a1d1e431e752f2a1ee765a7541082fa8f6c99e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff94d70ea9ad07ebc50c17d2a14b318e9b553bf1ded1e15429e9b318903d4d66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47b841bd998f47d2aa05f556a43ac6f14f630207aff7305cd0440f0b0642cc49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1728a0431ccafdf8aa83b6956348a8a6a41dd94e80f65cb469ef09a86078154\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 12:35:40.710806 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 12:35:40.716794 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1303051152/tls.crt::/tmp/serving-cert-1303051152/tls.key\\\\\\\"\\\\nI0126 12:35:46.328409 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 12:35:46.330270 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 12:35:46.330287 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 12:35:46.330308 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 12:35:46.330314 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 12:35:46.335670 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 12:35:46.335689 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 12:35:46.335693 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 12:35:46.335697 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 12:35:46.335700 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 12:35:46.335702 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 12:35:46.335705 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 12:35:46.335779 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 12:35:46.339578 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3de2d8eff73741321cce91c7bfdcab04498cfdb4694e713750466bb2d8b95e5d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f09815b7bc1e9f3afdee67bb736d35ea1c23567f631bc5fd4fcab013ab191647\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f09815b7bc1e9f3afdee67bb736d35ea1c23567f631bc5fd4fcab013ab191647\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:48Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:48 crc kubenswrapper[4881]: I0126 12:36:48.162395 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8116d483190921916638bd630fb718840fdc36eb9522aecb7ef0e9bfebe9bb7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:48Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:48 crc kubenswrapper[4881]: I0126 12:36:48.179635 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:48Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:48 crc kubenswrapper[4881]: I0126 12:36:48.193614 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"caa549f9-799c-43a9-a2a5-315713594704\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d3b1592342c8b506c26109958ea6b4770bc41e6dde0be57603730427216aec7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46a599fa1a831e0d46ab1af51dc83b3c5ef566caa4d05629e344102fc4d852af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"
name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fa21f92dbc9916c1d8eeb219803f791ad323f1cf82e7cb6fa64fe467b585c5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b154f896826abcb669a2628d916a874072a707d60fde45aad3ae0ff16bb4e44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b154f896826abcb669a2628d916a874072a707d60fde45aad3ae0ff16bb4e44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:28Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:48Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:48 crc kubenswrapper[4881]: I0126 12:36:48.204930 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:48 crc kubenswrapper[4881]: I0126 12:36:48.204992 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:48 crc kubenswrapper[4881]: I0126 12:36:48.205009 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:48 crc kubenswrapper[4881]: I0126 12:36:48.205033 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:48 crc kubenswrapper[4881]: I0126 12:36:48.205049 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:48Z","lastTransitionTime":"2026-01-26T12:36:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:48 crc kubenswrapper[4881]: I0126 12:36:48.211132 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f87073bcd7b2be8285c037c00f3112e1f1c6ed16cd4caf9a06c1a18d7e07c33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:48Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:48 crc kubenswrapper[4881]: I0126 12:36:48.231646 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-csrkv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d24cc7d2-c2db-45ee-b405-fa56157f807c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1cbe64dce8c7a8b2880354aac794adb5954b255c66ea597355f9b9b1ee476252\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e19b81750777bd4a9bb8f7a14d883f0ceb5aa0fc4f1a0798c98616a7fc0cef81\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T12:36:35Z\\\",\\\"message\\\":\\\"2026-01-26T12:35:49+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_9e8d50ce-7902-42ed-a950-c3be80066f4e\\\\n2026-01-26T12:35:49+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_9e8d50ce-7902-42ed-a950-c3be80066f4e to /host/opt/cni/bin/\\\\n2026-01-26T12:35:49Z [verbose] multus-daemon started\\\\n2026-01-26T12:35:49Z [verbose] Readiness Indicator file check\\\\n2026-01-26T12:36:34Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:49Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:36:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x7bhq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-csrkv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:48Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:48 crc kubenswrapper[4881]: I0126 12:36:48.260762 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d272c950-9665-4b60-98a2-20c18d02d5a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://525d6dcbf02a6ff1670c1cfff7ea9769a3e6e99d4cbc9f67b4ab9a4838f85355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54beaa25a2174c9f18bf5be7fba4c314e374e2ea05eaee1c5b62dfd7351138f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11ae8e0538c02d437ba2c7a5f53004fb5c5b7bf6c232f75b6bd737d2ec4f6a24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4526fe39791d5692af1a573249bcc2f73f2ede55766ea580867dbb009d247ddf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e87c69a0fa95e071cd3453794764480d9ed943e2d6dbe7d08e03e711be8de2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fd4c549d12c308cae9727fdb35eec136e95b94990bfca2e3c2baf3cf5724ae3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-s
ocket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8de259073576fa26bb1c649ceca0c0716e69ee10b38051b1ffe0b7aa095bc13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8de259073576fa26bb1c649ceca0c0716e69ee10b38051b1ffe0b7aa095bc13\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T12:36:20Z\\\",\\\"message\\\":\\\"lace/marketplace-operator-metrics]} name:Service_openshift-marketplace/marketplace-operator-metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.53:8081: 10.217.5.53:8383:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {89fe421e-04e8-4967-ac75-77a0e6f784ef}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 12:36:20.107557 6521 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-controller-manager/controller-manager]} name:Service_openshift-controller-manager/controller-manager_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.149:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {cab7c637-a021-4a4d-a4b9-06d63c44316f}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0126 12:36:20.104835 6521 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T12:36:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-kbjm9_openshift-ovn-kubernetes(d272c950-9665-4b60-98a2-20c18d02d5a2)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9aaca9d54edc83ad154b9cc876c0f2990f7949b4966e052314e947d0cbaf2f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0995c1855fbad9dc3799727d7456b17ef9cd900ed1f4241bde85718099ede875\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0995c1855fbad9dc3799727d7456b17ef9cd900ed1f4241bde85718099ede875\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kbjm9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:48Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:48 crc kubenswrapper[4881]: I0126 12:36:48.276824 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5zct6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"640554c2-37e2-425f-b182-aa9b9d6fa4d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl97t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl97t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:36:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5zct6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:48Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:48 crc kubenswrapper[4881]: I0126 12:36:48.292692 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:48Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:48 crc kubenswrapper[4881]: I0126 12:36:48.307871 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:48 crc kubenswrapper[4881]: I0126 12:36:48.307972 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:48 crc kubenswrapper[4881]: I0126 12:36:48.307992 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:48 crc kubenswrapper[4881]: I0126 12:36:48.308058 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:48 crc kubenswrapper[4881]: I0126 12:36:48.308079 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:48Z","lastTransitionTime":"2026-01-26T12:36:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:48 crc kubenswrapper[4881]: I0126 12:36:48.309504 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:48Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:48 crc kubenswrapper[4881]: I0126 12:36:48.324744 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79282300691fcea19d9fe7ad9c661474c26ba5e445c1a4f009961641bf59fb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b2921f0cb7ad2096499d1c6c0e5f8f37da3620c8468c9fab08b57b4545e5dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:48Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:48 crc kubenswrapper[4881]: I0126 12:36:48.337994 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f4b5v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"017c2a17-2267-4284-a0ca-d3c513aa9ff9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cea6c5f8bd80cf271df551539957d4c3e60378ff13dc806b3142e8e1453ab95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc482\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f4b5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:48Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:48 crc kubenswrapper[4881]: I0126 12:36:48.358192 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pmwpn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb5ecb63-1238-44dc-9c40-b5e5dd7d4847\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://428d01799872a00b628c81145d5fbeac8707bad0afa7563e16bc44686fd7c18d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b54e7d904bec7e9817b662ea1c59ede6589fde79754520dfb1e9fb82864e623d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b54e7d904bec7e9817b662ea1c59ede6589fde79754520dfb1e9fb82864e623d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e58ff1e13a164f50c9664f463b50cbf651ce8e5edac8cbff9a9e42626d8d39ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e58ff1e13a164f50c9664f463b50cbf651ce8e5edac8cbff9a9e42626d8d39ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fca59d7df432a78a62142cd5e47d6a1a62b2c23ff7eec49d6297aacf30b530a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fca59d7df432a78a62142cd5e47d6a1a62b2c23ff7eec49d6297aacf30b530a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d5b95e27428d9ca0382054957b1cc19e96fe8bb5622af5f7ebebebec53d2add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d5b95e27428d9ca0382054957b1cc19e96fe8bb5622af5f7ebebebec53d2add\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45799bb6d037f31b82596651214f8bdf9c13346f1d9953b965a40b55d2588edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45799bb6d037f31b82596651214f8bdf9c13346f1d9953b965a40b55d2588edd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10bacfe34f01c5de9439ac6023539e6489ec63175e54099942c2fb2fe4250bd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10bacfe34f01c5de9439ac6023539e6489ec63175e54099942c2fb2fe4250bd6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pmwpn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:48Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:48 crc kubenswrapper[4881]: I0126 12:36:48.371583 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tvrtr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"62ba5262-b5d8-4c86-85db-0993c88afc38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d6070117a94d139d5fc78fee4b38e1c75537675f27a663dcb2d6c8be285e9b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj9qc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tvrtr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:48Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:48 crc kubenswrapper[4881]: I0126 12:36:48.386116 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02fca4ac1029e236fa1f5c8b73ec4cf6f627aba503ba27d28dd2566e2f9d332a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c6n6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e51c1ddc00bfd73fde58f6b2e82757a924047ef05aa76b74d8fcb2de4156ad9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c6n6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fwlbz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:48Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:48 crc kubenswrapper[4881]: I0126 12:36:48.400295 4881 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7m2c5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"939b570a-38ce-49f8-8518-1ab500c4e449\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5367bd65eab3f1d86fce660f0126a0b2e6d4f1f7f803f3bfd38a3293bdff7267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:36:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwjlp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d4049c90948b57aad819839a207210919dce46514ca41bf1afdaed9b04a409a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:36:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwjlp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:36:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7m2c5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:48Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:48 crc kubenswrapper[4881]: I0126 12:36:48.411932 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:48 crc kubenswrapper[4881]: I0126 12:36:48.412148 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:48 crc kubenswrapper[4881]: I0126 12:36:48.412219 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:48 crc kubenswrapper[4881]: I0126 12:36:48.412304 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:48 crc kubenswrapper[4881]: I0126 12:36:48.412395 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:48Z","lastTransitionTime":"2026-01-26T12:36:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:48 crc kubenswrapper[4881]: I0126 12:36:48.514750 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:48 crc kubenswrapper[4881]: I0126 12:36:48.514800 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:48 crc kubenswrapper[4881]: I0126 12:36:48.514815 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:48 crc kubenswrapper[4881]: I0126 12:36:48.514833 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:48 crc kubenswrapper[4881]: I0126 12:36:48.514846 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:48Z","lastTransitionTime":"2026-01-26T12:36:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:48 crc kubenswrapper[4881]: I0126 12:36:48.618094 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:48 crc kubenswrapper[4881]: I0126 12:36:48.618153 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:48 crc kubenswrapper[4881]: I0126 12:36:48.618171 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:48 crc kubenswrapper[4881]: I0126 12:36:48.618195 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:48 crc kubenswrapper[4881]: I0126 12:36:48.618213 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:48Z","lastTransitionTime":"2026-01-26T12:36:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:48 crc kubenswrapper[4881]: I0126 12:36:48.721482 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:48 crc kubenswrapper[4881]: I0126 12:36:48.721560 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:48 crc kubenswrapper[4881]: I0126 12:36:48.721575 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:48 crc kubenswrapper[4881]: I0126 12:36:48.721595 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:48 crc kubenswrapper[4881]: I0126 12:36:48.721610 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:48Z","lastTransitionTime":"2026-01-26T12:36:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:48 crc kubenswrapper[4881]: I0126 12:36:48.824146 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:48 crc kubenswrapper[4881]: I0126 12:36:48.824193 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:48 crc kubenswrapper[4881]: I0126 12:36:48.824206 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:48 crc kubenswrapper[4881]: I0126 12:36:48.824226 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:48 crc kubenswrapper[4881]: I0126 12:36:48.824240 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:48Z","lastTransitionTime":"2026-01-26T12:36:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 26 12:36:49 crc kubenswrapper[4881]: I0126 12:36:49.071505 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 11:13:20.103631032 +0000 UTC
Jan 26 12:36:49 crc kubenswrapper[4881]: I0126 12:36:49.082077 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5zct6"
Jan 26 12:36:49 crc kubenswrapper[4881]: E0126 12:36:49.082237 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5zct6" podUID="640554c2-37e2-425f-b182-aa9b9d6fa4d8"
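The certificate_manager.go entry records the kubelet-serving certificate's expiry together with a randomized rotation deadline. Note that the deadline (2025-12-21) is already in the past relative to the log's clock, so rotation is overdue and the manager recomputes it on each pass, which is why a later entry shows a different deadline (2025-12-11). A sketch of that computation, modeled on client-go's certificate manager; the 70–90% jitter window and the one-year lifetime below are assumptions, not values taken from this log:

```go
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// rotationDeadline picks a random point at roughly 70-90% of the
// certificate's lifetime, in the style of client-go's certificate
// manager (the exact fractions here are an assumption).
func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	jittered := time.Duration(float64(total) * (0.7 + 0.2*rand.Float64()))
	return notBefore.Add(jittered)
}

func main() {
	notAfter, _ := time.Parse(time.RFC3339, "2026-02-24T05:53:03Z")
	// Lifetime is not visible in the log; assume one year for illustration.
	notBefore := notAfter.Add(-365 * 24 * time.Hour)
	fmt.Println("rotate at:", rotationDeadline(notBefore, notAfter))
	// A deadline already in the past (as logged here) means rotation is
	// due immediately; the manager retries and re-jitters each loop,
	// hence the differing deadlines in successive log entries.
}
```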
pod="openshift-multus/network-metrics-daemon-5zct6" podUID="640554c2-37e2-425f-b182-aa9b9d6fa4d8" Jan 26 12:36:49 crc kubenswrapper[4881]: I0126 12:36:49.083663 4881 scope.go:117] "RemoveContainer" containerID="e8de259073576fa26bb1c649ceca0c0716e69ee10b38051b1ffe0b7aa095bc13" Jan 26 12:36:49 crc kubenswrapper[4881]: I0126 12:36:49.132982 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:49 crc kubenswrapper[4881]: I0126 12:36:49.133415 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:49 crc kubenswrapper[4881]: I0126 12:36:49.133429 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:49 crc kubenswrapper[4881]: I0126 12:36:49.133448 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:49 crc kubenswrapper[4881]: I0126 12:36:49.133460 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:49Z","lastTransitionTime":"2026-01-26T12:36:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:49 crc kubenswrapper[4881]: I0126 12:36:49.236054 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:49 crc kubenswrapper[4881]: I0126 12:36:49.236330 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:49 crc kubenswrapper[4881]: I0126 12:36:49.236411 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:49 crc kubenswrapper[4881]: I0126 12:36:49.236476 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:49 crc kubenswrapper[4881]: I0126 12:36:49.236572 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:49Z","lastTransitionTime":"2026-01-26T12:36:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:49 crc kubenswrapper[4881]: I0126 12:36:49.338687 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:49 crc kubenswrapper[4881]: I0126 12:36:49.338997 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:49 crc kubenswrapper[4881]: I0126 12:36:49.339139 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:49 crc kubenswrapper[4881]: I0126 12:36:49.339286 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:49 crc kubenswrapper[4881]: I0126 12:36:49.339409 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:49Z","lastTransitionTime":"2026-01-26T12:36:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:49 crc kubenswrapper[4881]: I0126 12:36:49.442191 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:49 crc kubenswrapper[4881]: I0126 12:36:49.442607 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:49 crc kubenswrapper[4881]: I0126 12:36:49.442820 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:49 crc kubenswrapper[4881]: I0126 12:36:49.442966 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:49 crc kubenswrapper[4881]: I0126 12:36:49.443092 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:49Z","lastTransitionTime":"2026-01-26T12:36:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:49 crc kubenswrapper[4881]: I0126 12:36:49.546407 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:49 crc kubenswrapper[4881]: I0126 12:36:49.546812 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:49 crc kubenswrapper[4881]: I0126 12:36:49.547039 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:49 crc kubenswrapper[4881]: I0126 12:36:49.547200 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:49 crc kubenswrapper[4881]: I0126 12:36:49.547438 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:49Z","lastTransitionTime":"2026-01-26T12:36:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:49 crc kubenswrapper[4881]: I0126 12:36:49.650688 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:49 crc kubenswrapper[4881]: I0126 12:36:49.651080 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:49 crc kubenswrapper[4881]: I0126 12:36:49.651344 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:49 crc kubenswrapper[4881]: I0126 12:36:49.651638 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:49 crc kubenswrapper[4881]: I0126 12:36:49.651818 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:49Z","lastTransitionTime":"2026-01-26T12:36:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:49 crc kubenswrapper[4881]: I0126 12:36:49.756106 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:49 crc kubenswrapper[4881]: I0126 12:36:49.756151 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:49 crc kubenswrapper[4881]: I0126 12:36:49.756166 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:49 crc kubenswrapper[4881]: I0126 12:36:49.756182 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:49 crc kubenswrapper[4881]: I0126 12:36:49.756195 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:49Z","lastTransitionTime":"2026-01-26T12:36:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:49 crc kubenswrapper[4881]: I0126 12:36:49.862090 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:49 crc kubenswrapper[4881]: I0126 12:36:49.862124 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:49 crc kubenswrapper[4881]: I0126 12:36:49.862133 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:49 crc kubenswrapper[4881]: I0126 12:36:49.862147 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:49 crc kubenswrapper[4881]: I0126 12:36:49.862157 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:49Z","lastTransitionTime":"2026-01-26T12:36:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:49 crc kubenswrapper[4881]: I0126 12:36:49.964938 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:49 crc kubenswrapper[4881]: I0126 12:36:49.965003 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:49 crc kubenswrapper[4881]: I0126 12:36:49.965023 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:49 crc kubenswrapper[4881]: I0126 12:36:49.965049 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:49 crc kubenswrapper[4881]: I0126 12:36:49.965065 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:49Z","lastTransitionTime":"2026-01-26T12:36:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:49 crc kubenswrapper[4881]: I0126 12:36:49.984875 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:36:49 crc kubenswrapper[4881]: I0126 12:36:49.985103 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:36:49 crc kubenswrapper[4881]: I0126 12:36:49.985150 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:36:49 crc kubenswrapper[4881]: I0126 12:36:49.985207 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 12:36:49 crc kubenswrapper[4881]: E0126 12:36:49.985250 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:37:53.9852093 +0000 UTC m=+146.464519346 (durationBeforeRetry 1m4s). 
Jan 26 12:36:49 crc kubenswrapper[4881]: E0126 12:36:49.985350 4881 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 26 12:36:49 crc kubenswrapper[4881]: I0126 12:36:49.985374 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 26 12:36:49 crc kubenswrapper[4881]: E0126 12:36:49.985417 4881 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 26 12:36:49 crc kubenswrapper[4881]: E0126 12:36:49.985436 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 12:37:53.985411175 +0000 UTC m=+146.464721201 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 26 12:36:49 crc kubenswrapper[4881]: E0126 12:36:49.985439 4881 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 26 12:36:49 crc kubenswrapper[4881]: E0126 12:36:49.985465 4881 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 26 12:36:49 crc kubenswrapper[4881]: E0126 12:36:49.985494 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 12:37:53.985488216 +0000 UTC m=+146.464798242 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 26 12:36:49 crc kubenswrapper[4881]: E0126 12:36:49.985563 4881 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 26 12:36:49 crc kubenswrapper[4881]: E0126 12:36:49.985576 4881 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Jan 26 12:36:49 crc kubenswrapper[4881]: E0126 12:36:49.985594 4881 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 26 12:36:49 crc kubenswrapper[4881]: E0126 12:36:49.985624 4881 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 26 12:36:49 crc kubenswrapper[4881]: E0126 12:36:49.985694 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 12:37:53.985669651 +0000 UTC m=+146.464979677 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 26 12:36:49 crc kubenswrapper[4881]: E0126 12:36:49.985720 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 12:37:53.985709191 +0000 UTC m=+146.465019207 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
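The "No retries permitted until ... (durationBeforeRetry 1m4s)" entries are the volume manager's per-operation exponential backoff: 1m4s is 500ms doubled seven times, consistent with an initial half-second delay that doubles on each failure up to a cap (around 2m2s, going by my recollection of the kubelet defaults; treat both constants as assumptions). A minimal sketch:

```go
package main

import (
	"fmt"
	"time"
)

// Assumed kubelet volume-manager backoff parameters (not read from the log).
const (
	initialDelay = 500 * time.Millisecond
	maxDelay     = 2*time.Minute + 2*time.Second
)

// durationBeforeRetry doubles the delay per consecutive failure, capped.
func durationBeforeRetry(failures int) time.Duration {
	d := initialDelay
	for i := 1; i < failures; i++ {
		d *= 2
		if d >= maxDelay {
			return maxDelay
		}
	}
	return d
}

func main() {
	for n := 1; n <= 9; n++ {
		fmt.Printf("failure %d -> wait %v\n", n, durationBeforeRetry(n))
	}
	// failure 8 -> wait 1m4s, matching the log; the next step hits the cap.
}
```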
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 12:36:50 crc kubenswrapper[4881]: I0126 12:36:50.069168 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:50 crc kubenswrapper[4881]: I0126 12:36:50.069264 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:50 crc kubenswrapper[4881]: I0126 12:36:50.069287 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:50 crc kubenswrapper[4881]: I0126 12:36:50.069356 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:50 crc kubenswrapper[4881]: I0126 12:36:50.069421 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:50Z","lastTransitionTime":"2026-01-26T12:36:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:50 crc kubenswrapper[4881]: I0126 12:36:50.072212 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 09:10:28.942071877 +0000 UTC Jan 26 12:36:50 crc kubenswrapper[4881]: I0126 12:36:50.081813 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 12:36:50 crc kubenswrapper[4881]: I0126 12:36:50.081878 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 12:36:50 crc kubenswrapper[4881]: I0126 12:36:50.081896 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:36:50 crc kubenswrapper[4881]: E0126 12:36:50.082025 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 12:36:50 crc kubenswrapper[4881]: E0126 12:36:50.082124 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 12:36:50 crc kubenswrapper[4881]: E0126 12:36:50.082209 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 12:36:50 crc kubenswrapper[4881]: I0126 12:36:50.171799 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:50 crc kubenswrapper[4881]: I0126 12:36:50.171841 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:50 crc kubenswrapper[4881]: I0126 12:36:50.171853 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:50 crc kubenswrapper[4881]: I0126 12:36:50.171872 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:50 crc kubenswrapper[4881]: I0126 12:36:50.171884 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:50Z","lastTransitionTime":"2026-01-26T12:36:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:50 crc kubenswrapper[4881]: I0126 12:36:50.275464 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:50 crc kubenswrapper[4881]: I0126 12:36:50.275503 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:50 crc kubenswrapper[4881]: I0126 12:36:50.275564 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:50 crc kubenswrapper[4881]: I0126 12:36:50.275584 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:50 crc kubenswrapper[4881]: I0126 12:36:50.275597 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:50Z","lastTransitionTime":"2026-01-26T12:36:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:50 crc kubenswrapper[4881]: I0126 12:36:50.381411 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:50 crc kubenswrapper[4881]: I0126 12:36:50.381445 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:50 crc kubenswrapper[4881]: I0126 12:36:50.381456 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:50 crc kubenswrapper[4881]: I0126 12:36:50.381473 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:50 crc kubenswrapper[4881]: I0126 12:36:50.381484 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:50Z","lastTransitionTime":"2026-01-26T12:36:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:50 crc kubenswrapper[4881]: I0126 12:36:50.483567 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:50 crc kubenswrapper[4881]: I0126 12:36:50.483604 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:50 crc kubenswrapper[4881]: I0126 12:36:50.483615 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:50 crc kubenswrapper[4881]: I0126 12:36:50.483633 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:50 crc kubenswrapper[4881]: I0126 12:36:50.483645 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:50Z","lastTransitionTime":"2026-01-26T12:36:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:50 crc kubenswrapper[4881]: I0126 12:36:50.557166 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-kbjm9_d272c950-9665-4b60-98a2-20c18d02d5a2/ovnkube-controller/2.log" Jan 26 12:36:50 crc kubenswrapper[4881]: I0126 12:36:50.563682 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" event={"ID":"d272c950-9665-4b60-98a2-20c18d02d5a2","Type":"ContainerStarted","Data":"302110f5451ad1f0e6c86a66d215ed879600e372bb9afcadbe2f2c4d5d091087"} Jan 26 12:36:50 crc kubenswrapper[4881]: I0126 12:36:50.564315 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" Jan 26 12:36:50 crc kubenswrapper[4881]: I0126 12:36:50.576941 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02fca4ac1029e236fa1f5c8b73ec4cf6f627aba503ba27d28dd2566e2f9d332a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c6n6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e51c1ddc00bfd73fde58f6b2e82757a924047ef05aa76b74d8fcb2de4156ad9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\"
:\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c6n6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fwlbz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:50Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:50 crc kubenswrapper[4881]: I0126 12:36:50.587339 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:50 crc kubenswrapper[4881]: I0126 12:36:50.587383 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:50 crc kubenswrapper[4881]: I0126 12:36:50.587399 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:50 crc kubenswrapper[4881]: I0126 12:36:50.587434 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:50 crc kubenswrapper[4881]: I0126 12:36:50.587448 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:50Z","lastTransitionTime":"2026-01-26T12:36:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:50 crc kubenswrapper[4881]: I0126 12:36:50.589551 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7m2c5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"939b570a-38ce-49f8-8518-1ab500c4e449\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5367bd65eab3f1d86fce660f0126a0b2e6d4f1f7f803f3bfd38a3293bdff7267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:36:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwjlp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d4049c90948b57aad819839a207210919dce46514ca41bf1afdaed9b04a409a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:36:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwjlp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:36:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7m2c5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:50Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:50 crc kubenswrapper[4881]: I0126 12:36:50.602616 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee20042b-95d1-49f2-abea-e339efdb096f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce7427a811572a8f3810d59e072888d97a8f15e2c493297a25418b85e1f9f16d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86d50d955d4fa8b487f6900f78738321e21226e13b7b12c100f0dd029de2fde1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8138683a9408b1a7542f9f29a3fd0b83a06f10b000c685d9d83b72b23ba4c55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94d3f2c93382cde3d932117276753f004c58e6e52bce611119b79d9e76eb7654\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:50Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:50 crc kubenswrapper[4881]: I0126 12:36:50.622506 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b6f7953-daf2-4ba4-8c10-7de3f4ea07a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4f54285b7646c2fa5da34439d2812a5669f168356446852b9bff249107a7c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79ddc5e5c3c4b0c18219fe6a8297ff2b1646b9621823f529972781f915d81c5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://533f45172a004ad2bd598849b711b38e823485188ff4163dd0dbc5ae5a28cb24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f7c8799e63e6072f9dc0580f3efbfe330a9f70
5a689b2785c60f64391881b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c525c27678ba77145459086b6d3d2fee5d471c7393f34215e9ae46d9d72cf4e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://667695541b7d707a83aa0080723518aeae689b140791f0121a4ce6d485f39138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://667695541b7d707a83aa0080723518aeae689b140791f0121a4ce6d485f39138\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab83101bb56dc3eaf3a0514f60da45b0c5304a491184fe71584d7ed72bc6e59c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab83101bb56dc3eaf3a0514f60da45b0c5304a491184fe71584d7ed72bc6e59c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e52793db25051b2c8eec3dbeaf0547ec04ec438ce426f3dfcd2383792f846644\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e52793db25051b2c8eec3dbeaf0547ec04ec438ce426f3dfcd2383792f846644\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:50Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:50 crc kubenswrapper[4881]: I0126 12:36:50.634832 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9aa1877-c239-4157-938d-e5c85ff3e76a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e461100b4aeab2a6e88c5c3aeb0802664d0d647247bc73b107be7c6bc4cfc9cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cbb88ac26b9c279bef436acf1a1d1e4
31e752f2a1ee765a7541082fa8f6c99e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff94d70ea9ad07ebc50c17d2a14b318e9b553bf1ded1e15429e9b318903d4d66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47b841bd998f47d2aa05f556a43ac6f14f630207aff7305cd0440f0b0642cc49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1728a0431ccafdf8aa83b6956348a8a6a41dd94e80f65cb469ef09a86078154\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 12:35:40.710806 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 12:35:40.716794 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1303051152/tls.crt::/tmp/serving-cert-1303051152/tls.key\\\\\\\"\\\\nI0126 12:35:46.328409 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 12:35:46.330270 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 12:35:46.330287 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 12:35:46.330308 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 12:35:46.330314 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 12:35:46.335670 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 12:35:46.335689 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 12:35:46.335693 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 12:35:46.335697 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 12:35:46.335700 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 12:35:46.335702 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 12:35:46.335705 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 12:35:46.335779 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 12:35:46.339578 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3de2d8eff73741321cce91c7bfdcab04498cfdb4694e713750466bb2d8b95e5d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f09815b7bc1e9f3afdee67bb736d35ea1c23567f631bc5fd4fcab013ab191647\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f09815b7bc1e9f3afdee67bb736d35ea1c23567f631bc5fd4fcab013ab191647\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:50Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:50 crc kubenswrapper[4881]: I0126 12:36:50.650704 4881 status_manager.go:875] "Failed 
to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8116d483190921916638bd630fb718840fdc36eb9522aecb7ef0e9bfebe9bb7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:50Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:50 crc kubenswrapper[4881]: I0126 12:36:50.667787 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:50Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:50 crc kubenswrapper[4881]: I0126 12:36:50.685811 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"caa549f9-799c-43a9-a2a5-315713594704\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d3b1592342c8b506c26109958ea6b4770bc41e6dde0be57603730427216aec7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46a599fa1a831e0d46ab1af51dc83b3c5ef566caa4d05629e344102fc4d852af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a938
0066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fa21f92dbc9916c1d8eeb219803f791ad323f1cf82e7cb6fa64fe467b585c5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b154f896826abcb669a2628d916a874072a707d60fde45aad3ae0ff16bb4e44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b154f896826abcb669a2628d916a874072a707d60fde45aad3ae0ff16bb4e44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:28Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:50Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:50 crc kubenswrapper[4881]: I0126 12:36:50.689610 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:50 crc kubenswrapper[4881]: I0126 12:36:50.689661 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:50 crc kubenswrapper[4881]: I0126 12:36:50.689673 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:50 crc kubenswrapper[4881]: I0126 12:36:50.689693 4881 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Jan 26 12:36:50 crc kubenswrapper[4881]: I0126 12:36:50.689705 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:50Z","lastTransitionTime":"2026-01-26T12:36:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:50 crc kubenswrapper[4881]: I0126 12:36:50.703005 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f87073bcd7b2be8285c037c00f3112e1f1c6ed16cd4caf9a06c1a18d7e07c33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:50Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:50 crc kubenswrapper[4881]: I0126 12:36:50.717108 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-csrkv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d24cc7d2-c2db-45ee-b405-fa56157f807c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1cbe64dce8c7a8b2880354aac794adb5954b255c66ea597355f9b9b1ee476252\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e19b81750777bd4a9bb8f7a14d883f0ceb5aa0fc4f1a0798c98616a7fc0cef81\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T12:36:35Z\\\",\\\"message\\\":\\\"2026-01-26T12:35:49+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_9e8d50ce-7902-42ed-a950-c3be80066f4e\\\\n2026-01-26T12:35:49+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_9e8d50ce-7902-42ed-a950-c3be80066f4e to /host/opt/cni/bin/\\\\n2026-01-26T12:35:49Z [verbose] multus-daemon started\\\\n2026-01-26T12:35:49Z [verbose] Readiness Indicator file check\\\\n2026-01-26T12:36:34Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:49Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:36:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x7bhq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-csrkv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:50Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:50 crc kubenswrapper[4881]: I0126 12:36:50.741479 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d272c950-9665-4b60-98a2-20c18d02d5a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://525d6dcbf02a6ff1670c1cfff7ea9769a3e6e99d4cbc9f67b4ab9a4838f85355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54beaa25a2174c9f18bf5be7fba4c314e374e2ea05eaee1c5b62dfd7351138f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11ae8e0538c02d437ba2c7a5f53004fb5c5b7bf6c232f75b6bd737d2ec4f6a24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4526fe39791d5692af1a573249bcc2f73f2ede55766ea580867dbb009d247ddf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e87c69a0fa95e071cd3453794764480d9ed943e2d6dbe7d08e03e711be8de2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fd4c549d12c308cae9727fdb35eec136e95b94990bfca2e3c2baf3cf5724ae3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-s
ocket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://302110f5451ad1f0e6c86a66d215ed879600e372bb9afcadbe2f2c4d5d091087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8de259073576fa26bb1c649ceca0c0716e69ee10b38051b1ffe0b7aa095bc13\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T12:36:20Z\\\",\\\"message\\\":\\\"lace/marketplace-operator-metrics]} name:Service_openshift-marketplace/marketplace-operator-metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.53:8081: 10.217.5.53:8383:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {89fe421e-04e8-4967-ac75-77a0e6f784ef}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 12:36:20.107557 6521 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-controller-manager/controller-manager]} name:Service_openshift-controller-manager/controller-manager_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.149:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {cab7c637-a021-4a4d-a4b9-06d63c44316f}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0126 12:36:20.104835 6521 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T12:36:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:36:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9aaca9d54edc83ad154b9cc876c0f2990f7949b4966e052314e947d0cbaf2f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\
"containerID\\\":\\\"cri-o://0995c1855fbad9dc3799727d7456b17ef9cd900ed1f4241bde85718099ede875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0995c1855fbad9dc3799727d7456b17ef9cd900ed1f4241bde85718099ede875\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kbjm9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:50Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:50 crc kubenswrapper[4881]: I0126 12:36:50.754975 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5zct6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"640554c2-37e2-425f-b182-aa9b9d6fa4d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl97t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl97t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:36:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5zct6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:50Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:50 crc kubenswrapper[4881]: I0126 12:36:50.770487 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:50Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:50 crc kubenswrapper[4881]: I0126 12:36:50.783804 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:50 crc kubenswrapper[4881]: I0126 12:36:50.783853 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:50 crc kubenswrapper[4881]: I0126 12:36:50.783866 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:50 crc kubenswrapper[4881]: I0126 12:36:50.783883 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:50 crc kubenswrapper[4881]: I0126 12:36:50.783898 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:50Z","lastTransitionTime":"2026-01-26T12:36:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:50 crc kubenswrapper[4881]: I0126 12:36:50.784354 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:50Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:50 crc kubenswrapper[4881]: E0126 12:36:50.801404 4881 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:36:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:36:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:50Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:36:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:36:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeByt
es\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"69f11506-d189-4146-9efa-f9280470e789\\\",\\\"systemUUID\\\":\\\"d
2cf9e5e-9cbb-41d3-9ac1-608b9dce0d9f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:50Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:50 crc kubenswrapper[4881]: I0126 12:36:50.804494 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79282300691fcea19d9fe7ad9c661474c26ba5e445c1a4f009961641bf59fb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b2921f0cb7ad2096499d1c6c0e5f8f37da3620c8468c9fab08b57b4545e5dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:50Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:50 crc kubenswrapper[4881]: I0126 12:36:50.805447 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:50 crc kubenswrapper[4881]: I0126 12:36:50.805475 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:50 crc kubenswrapper[4881]: I0126 12:36:50.805486 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:50 crc kubenswrapper[4881]: I0126 12:36:50.805502 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:50 crc kubenswrapper[4881]: I0126 12:36:50.805531 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:50Z","lastTransitionTime":"2026-01-26T12:36:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:50 crc kubenswrapper[4881]: I0126 12:36:50.819273 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f4b5v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"017c2a17-2267-4284-a0ca-d3c513aa9ff9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cea6c5f8bd80cf271df551539957d4c3e60378ff13dc806b3142e8e1453ab95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc482\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f4b5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:50Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:50 crc kubenswrapper[4881]: I0126 12:36:50.821401 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:50 crc kubenswrapper[4881]: I0126 12:36:50.821439 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:50 crc kubenswrapper[4881]: I0126 12:36:50.821450 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:50 crc kubenswrapper[4881]: I0126 12:36:50.821466 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:50 crc kubenswrapper[4881]: I0126 12:36:50.821477 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:50Z","lastTransitionTime":"2026-01-26T12:36:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:50 crc kubenswrapper[4881]: I0126 12:36:50.838725 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pmwpn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb5ecb63-1238-44dc-9c40-b5e5dd7d4847\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://428d01799872a00b628c81145d5fbeac8707bad0afa7563e16bc44686fd7c18d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b54e7d904bec7e9817b662ea1c59ede6589fde79754520dfb1e9fb82864e623d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b54e7d904bec7e9817b662ea1c59ede6589fde79754520dfb1e9fb82864e623d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e58ff1e13a164f50c9664f463b50cbf651ce8e5edac8cbff9a9e42626d8d39ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e58ff1e13a164f50c9664f463b50cbf651ce8e5edac8cbff9a9e42626d8d39ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fca59d7df432a78a62142cd5e47d6a1a62b2c23ff7eec49d6297aacf30b530a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fca59d7df432a78a62142cd5e47d6a1a62b2c23ff7eec49d6297aacf30b530a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d5b95e27428d9ca0382054957b1cc19e96fe8bb5622af5f7ebebebec53d2add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d5b95e27428d9ca0382054957b1cc19e96fe8bb5622af5f7ebebebec53d2add\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45799bb6d037f31b82596651214f8bdf9c13346f1d9953b965a40b55d2588edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45799bb6d037f31b82596651214f8bdf9c13346f1d9953b965a40b55d2588edd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10bacfe34f01c5de9439ac6023539e6489ec63175e54099942c2fb2fe4250bd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10bacfe34f01c5de9439ac6023539e6489ec63175e54099942c2fb2fe4250bd6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pmwpn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:50Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:50 crc kubenswrapper[4881]: I0126 12:36:50.844701 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:50 crc kubenswrapper[4881]: I0126 12:36:50.844729 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:50 crc 
kubenswrapper[4881]: I0126 12:36:50.844737 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:50 crc kubenswrapper[4881]: I0126 12:36:50.844751 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:50 crc kubenswrapper[4881]: I0126 12:36:50.844760 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:50Z","lastTransitionTime":"2026-01-26T12:36:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:50 crc kubenswrapper[4881]: I0126 12:36:50.860464 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tvrtr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"62ba5262-b5d8-4c86-85db-0993c88afc38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d6070117a94d139d5fc78fee4b38e1c75537675f27a663dcb2d6c8be285e9b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj9qc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tvrtr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:50Z is after 2025-08-24T17:21:41Z" Jan 
26 12:36:50 crc kubenswrapper[4881]: I0126 12:36:50.873097 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:50 crc kubenswrapper[4881]: I0126 12:36:50.873142 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 12:36:50 crc kubenswrapper[4881]: I0126 12:36:50.873155 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:50 crc kubenswrapper[4881]: I0126 12:36:50.873175 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:50 crc kubenswrapper[4881]: I0126 12:36:50.873187 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:50Z","lastTransitionTime":"2026-01-26T12:36:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:50 crc kubenswrapper[4881]: E0126 12:36:50.892024 4881 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:36:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:36:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:36:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:36:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"69f11506-d189-4146-9efa-f9280470e789\\\",\\\"systemUUID\\\":\\\"d2cf9e5e-9cbb-41d3-9ac1-608b9dce0d9f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:50Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:50 crc kubenswrapper[4881]: E0126 12:36:50.892545 4881 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 12:36:50 crc kubenswrapper[4881]: I0126 12:36:50.894268 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 26 12:36:50 crc kubenswrapper[4881]: I0126 12:36:50.894297 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:50 crc kubenswrapper[4881]: I0126 12:36:50.894306 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:50 crc kubenswrapper[4881]: I0126 12:36:50.894319 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:50 crc kubenswrapper[4881]: I0126 12:36:50.894329 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:50Z","lastTransitionTime":"2026-01-26T12:36:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:50 crc kubenswrapper[4881]: I0126 12:36:50.997731 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:50 crc kubenswrapper[4881]: I0126 12:36:50.997778 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:50 crc kubenswrapper[4881]: I0126 12:36:50.997790 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:50 crc kubenswrapper[4881]: I0126 12:36:50.997809 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:50 crc kubenswrapper[4881]: I0126 12:36:50.997823 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:50Z","lastTransitionTime":"2026-01-26T12:36:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:51 crc kubenswrapper[4881]: I0126 12:36:51.072806 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 08:26:54.156311285 +0000 UTC Jan 26 12:36:51 crc kubenswrapper[4881]: I0126 12:36:51.082131 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5zct6" Jan 26 12:36:51 crc kubenswrapper[4881]: E0126 12:36:51.082283 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5zct6" podUID="640554c2-37e2-425f-b182-aa9b9d6fa4d8" Jan 26 12:36:51 crc kubenswrapper[4881]: I0126 12:36:51.100476 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:51 crc kubenswrapper[4881]: I0126 12:36:51.100565 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:51 crc kubenswrapper[4881]: I0126 12:36:51.100590 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:51 crc kubenswrapper[4881]: I0126 12:36:51.100618 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:51 crc kubenswrapper[4881]: I0126 12:36:51.100642 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:51Z","lastTransitionTime":"2026-01-26T12:36:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:51 crc kubenswrapper[4881]: I0126 12:36:51.203597 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:51 crc kubenswrapper[4881]: I0126 12:36:51.203643 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:51 crc kubenswrapper[4881]: I0126 12:36:51.203656 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:51 crc kubenswrapper[4881]: I0126 12:36:51.203670 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:51 crc kubenswrapper[4881]: I0126 12:36:51.203680 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:51Z","lastTransitionTime":"2026-01-26T12:36:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:51 crc kubenswrapper[4881]: I0126 12:36:51.305651 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:51 crc kubenswrapper[4881]: I0126 12:36:51.305695 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:51 crc kubenswrapper[4881]: I0126 12:36:51.305706 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:51 crc kubenswrapper[4881]: I0126 12:36:51.305723 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:51 crc kubenswrapper[4881]: I0126 12:36:51.305734 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:51Z","lastTransitionTime":"2026-01-26T12:36:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:51 crc kubenswrapper[4881]: I0126 12:36:51.408970 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:51 crc kubenswrapper[4881]: I0126 12:36:51.409028 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:51 crc kubenswrapper[4881]: I0126 12:36:51.409045 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:51 crc kubenswrapper[4881]: I0126 12:36:51.409069 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:51 crc kubenswrapper[4881]: I0126 12:36:51.409087 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:51Z","lastTransitionTime":"2026-01-26T12:36:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:51 crc kubenswrapper[4881]: I0126 12:36:51.512267 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:51 crc kubenswrapper[4881]: I0126 12:36:51.512335 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:51 crc kubenswrapper[4881]: I0126 12:36:51.512358 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:51 crc kubenswrapper[4881]: I0126 12:36:51.512391 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:51 crc kubenswrapper[4881]: I0126 12:36:51.512410 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:51Z","lastTransitionTime":"2026-01-26T12:36:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:51 crc kubenswrapper[4881]: I0126 12:36:51.570852 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-kbjm9_d272c950-9665-4b60-98a2-20c18d02d5a2/ovnkube-controller/3.log" Jan 26 12:36:51 crc kubenswrapper[4881]: I0126 12:36:51.572145 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-kbjm9_d272c950-9665-4b60-98a2-20c18d02d5a2/ovnkube-controller/2.log" Jan 26 12:36:51 crc kubenswrapper[4881]: I0126 12:36:51.576099 4881 generic.go:334] "Generic (PLEG): container finished" podID="d272c950-9665-4b60-98a2-20c18d02d5a2" containerID="302110f5451ad1f0e6c86a66d215ed879600e372bb9afcadbe2f2c4d5d091087" exitCode=1 Jan 26 12:36:51 crc kubenswrapper[4881]: I0126 12:36:51.576163 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" event={"ID":"d272c950-9665-4b60-98a2-20c18d02d5a2","Type":"ContainerDied","Data":"302110f5451ad1f0e6c86a66d215ed879600e372bb9afcadbe2f2c4d5d091087"} Jan 26 12:36:51 crc kubenswrapper[4881]: I0126 12:36:51.576220 4881 scope.go:117] "RemoveContainer" containerID="e8de259073576fa26bb1c649ceca0c0716e69ee10b38051b1ffe0b7aa095bc13" Jan 26 12:36:51 crc kubenswrapper[4881]: I0126 12:36:51.577408 4881 scope.go:117] "RemoveContainer" containerID="302110f5451ad1f0e6c86a66d215ed879600e372bb9afcadbe2f2c4d5d091087" Jan 26 12:36:51 crc kubenswrapper[4881]: E0126 12:36:51.577796 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-kbjm9_openshift-ovn-kubernetes(d272c950-9665-4b60-98a2-20c18d02d5a2)\"" pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" podUID="d272c950-9665-4b60-98a2-20c18d02d5a2" Jan 26 12:36:51 crc kubenswrapper[4881]: I0126 12:36:51.604259 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee20042b-95d1-49f2-abea-e339efdb096f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce7427a811572a8f3810d59e072888d97a8f15e2c493297a25418b85e1f9f16d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86d50d955d4fa8b487f6900f78738321e21226e13b7b12c100f0dd029de2fde1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8138683a9408b1a7542f9f29a3fd0b83a06f10b000c685d9d83b72b23ba4c55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94d3f2c93382cde3d932117276753f004c58e6e52bce611119b79d9e76eb7654\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:51Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:51 crc kubenswrapper[4881]: I0126 12:36:51.615188 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:51 crc kubenswrapper[4881]: I0126 12:36:51.615232 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:51 crc kubenswrapper[4881]: I0126 12:36:51.615246 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:51 crc kubenswrapper[4881]: I0126 12:36:51.615263 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:51 crc kubenswrapper[4881]: I0126 12:36:51.615279 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:51Z","lastTransitionTime":"2026-01-26T12:36:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:51 crc kubenswrapper[4881]: I0126 12:36:51.634643 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b6f7953-daf2-4ba4-8c10-7de3f4ea07a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4f54285b7646c2fa5da34439d2812a5669f168356446852b9bff249107a7c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79ddc5e5c3c4b0c18219fe6a8297ff2b1646b9621823f529972781f915d81c5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://533f45172a004ad2bd598849b711b38e823485188ff4163dd0dbc5ae5a28cb24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f7c8799e63e6072f9dc0580f3efbfe330a9f705a689b2785c60f64391881b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c525c27678ba77145459086b6d3d2fee5d471c7393f34215e9ae46d9d72cf4e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://667695541b7d707a83aa0080723518aeae689b140791f0121a4ce6d485f39138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://667695541b7d707a83aa0080723518aeae689b140791f0121a4ce6d485f39138\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab83101bb56dc3eaf3a0514f60da45b0c5304a491184fe71584d7ed72bc6e59c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab83101bb56dc3eaf3a0514f60da45b0c5304a491184fe71584d7ed72bc6e59c\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e52793db25051b2c8eec3dbeaf0547ec04ec438ce426f3dfcd2383792f846644\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e52793db25051b2c8eec3dbeaf0547ec04ec438ce426f3dfcd2383792f846644\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:51Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:51 crc kubenswrapper[4881]: I0126 12:36:51.650552 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9aa1877-c239-4157-938d-e5c85ff3e76a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e461100b4aeab2a6e88c5c3aeb0802664d0d647247bc73b107be7c6bc4cfc9cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cbb88ac26b9c279bef436acf1a1d1e431e752f2a1ee765a7541082fa8f6c99e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff94d70ea9ad07ebc50c17d2a14b318e9b553bf1ded1e15429e9b318903d4d66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47b841bd998f47d2aa05f556a43ac6f14f630207aff7305cd0440f0b0642cc49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1728a0431ccafdf8aa83b6956348a8a6a41dd94e80f65cb469ef09a86078154\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 12:35:40.710806 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 12:35:40.716794 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1303051152/tls.crt::/tmp/serving-cert-1303051152/tls.key\\\\\\\"\\\\nI0126 12:35:46.328409 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 12:35:46.330270 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 12:35:46.330287 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 12:35:46.330308 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 12:35:46.330314 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 12:35:46.335670 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 12:35:46.335689 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 12:35:46.335693 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 12:35:46.335697 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 12:35:46.335700 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 12:35:46.335702 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 12:35:46.335705 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 12:35:46.335779 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 12:35:46.339578 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3de2d8eff73741321cce91c7bfdcab04498cfdb4694e713750466bb2d8b95e5d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f09815b7bc1e9f3afdee67bb736d35ea1c23567f631bc5fd4fcab013ab191647\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f09815b7bc1e9f3afdee67bb736d35ea1c23567f631bc5fd4fcab013ab191647\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:51Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:51 crc kubenswrapper[4881]: I0126 12:36:51.665761 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8116d483190921916638bd630fb718840fdc36eb9522aecb7ef0e9bfebe9bb7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:51Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:51 crc kubenswrapper[4881]: I0126 12:36:51.679009 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:51Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:51 crc kubenswrapper[4881]: I0126 12:36:51.693906 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"caa549f9-799c-43a9-a2a5-315713594704\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d3b1592342c8b506c26109958ea6b4770bc41e6dde0be57603730427216aec7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46a599fa1a831e0d46ab1af51dc83b3c5ef566caa4d05629e344102fc4d852af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"
name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fa21f92dbc9916c1d8eeb219803f791ad323f1cf82e7cb6fa64fe467b585c5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b154f896826abcb669a2628d916a874072a707d60fde45aad3ae0ff16bb4e44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b154f896826abcb669a2628d916a874072a707d60fde45aad3ae0ff16bb4e44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:28Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:51Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:51 crc kubenswrapper[4881]: I0126 12:36:51.710701 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f87073bcd7b2be8285c037c00f3112e1f1c6ed16cd4caf9a06c1a18d7e07c33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:51Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:51 crc kubenswrapper[4881]: I0126 12:36:51.717948 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:51 crc kubenswrapper[4881]: I0126 12:36:51.717993 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:51 crc kubenswrapper[4881]: I0126 12:36:51.718007 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:51 crc kubenswrapper[4881]: I0126 12:36:51.718027 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:51 crc kubenswrapper[4881]: I0126 12:36:51.718042 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:51Z","lastTransitionTime":"2026-01-26T12:36:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:51 crc kubenswrapper[4881]: I0126 12:36:51.726829 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-csrkv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d24cc7d2-c2db-45ee-b405-fa56157f807c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1cbe64dce8c7a8b2880354aac794adb5954b255c66ea597355f9b9b1ee476252\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e19b81750777bd4a9bb8f7a14d883f0ceb5aa0fc4f1a0798c98616a7fc0cef81\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T12:36:35Z\\\",\\\"message\\\":\\\"2026-01-26T12:35:49+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_9e8d50ce-7902-42ed-a950-c3be80066f4e\\\\n2026-01-26T12:35:49+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_9e8d50ce-7902-42ed-a950-c3be80066f4e to /host/opt/cni/bin/\\\\n2026-01-26T12:35:49Z [verbose] multus-daemon started\\\\n2026-01-26T12:35:49Z [verbose] Readiness Indicator file check\\\\n2026-01-26T12:36:34Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:49Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:36:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x7bhq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-csrkv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:51Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:51 crc kubenswrapper[4881]: I0126 12:36:51.749395 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d272c950-9665-4b60-98a2-20c18d02d5a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://525d6dcbf02a6ff1670c1cfff7ea9769a3e6e99d4cbc9f67b4ab9a4838f85355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54beaa25a2174c9f18bf5be7fba4c314e374e2ea05eaee1c5b62dfd7351138f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11ae8e0538c02d437ba2c7a5f53004fb5c5b7bf6c232f75b6bd737d2ec4f6a24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4526fe39791d5692af1a573249bcc2f73f2ede55766ea580867dbb009d247ddf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e87c69a0fa95e071cd3453794764480d9ed943e2d6dbe7d08e03e711be8de2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fd4c549d12c308cae9727fdb35eec136e95b94990bfca2e3c2baf3cf5724ae3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-s
ocket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://302110f5451ad1f0e6c86a66d215ed879600e372bb9afcadbe2f2c4d5d091087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8de259073576fa26bb1c649ceca0c0716e69ee10b38051b1ffe0b7aa095bc13\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T12:36:20Z\\\",\\\"message\\\":\\\"lace/marketplace-operator-metrics]} name:Service_openshift-marketplace/marketplace-operator-metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.53:8081: 10.217.5.53:8383:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {89fe421e-04e8-4967-ac75-77a0e6f784ef}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 12:36:20.107557 6521 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-controller-manager/controller-manager]} name:Service_openshift-controller-manager/controller-manager_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.149:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {cab7c637-a021-4a4d-a4b9-06d63c44316f}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0126 12:36:20.104835 6521 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T12:36:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://302110f5451ad1f0e6c86a66d215ed879600e372bb9afcadbe2f2c4d5d091087\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T12:36:51Z\\\",\\\"message\\\":\\\" 6958 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-csrkv\\\\nI0126 12:36:50.910408 6958 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-network-console/networking-console-plugin\\\\\\\"}\\\\nI0126 12:36:50.911038 6958 ovn.go:134] Ensuring zone local for Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nI0126 12:36:50.911057 6958 services_controller.go:360] Finished syncing service networking-console-plugin on namespace openshift-network-console for network=default : 2.880467ms\\\\nI0126 12:36:50.911065 6958 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nF0126 
12:36:50.911063 6958 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.netw\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T12:36:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9aaca9d54edc83ad154b9cc876c0f2990f7949b4966e052314e947d0cbaf2f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\
":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0995c1855fbad9dc3799727d7456b17ef9cd900ed1f4241bde85718099ede875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0995c1855fbad9dc3799727d7456b17ef9cd900ed1f4241bde85718099ede875\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kbjm9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:51Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:51 crc kubenswrapper[4881]: I0126 12:36:51.765311 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5zct6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"640554c2-37e2-425f-b182-aa9b9d6fa4d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl97t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl97t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:36:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5zct6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:51Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:51 crc kubenswrapper[4881]: I0126 12:36:51.778641 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:51Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:51 crc kubenswrapper[4881]: I0126 12:36:51.791573 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:51Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:51 crc kubenswrapper[4881]: I0126 12:36:51.809727 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79282300691fcea19d9fe7ad9c661474c26ba5e445c1a4f009961641bf59fb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b2921f0cb7ad2096499d1c6c0e5f8f37da3620c8468c9fab08b57b4545e5dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:51Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:51 crc kubenswrapper[4881]: I0126 12:36:51.820713 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:51 crc kubenswrapper[4881]: I0126 12:36:51.820767 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:51 crc kubenswrapper[4881]: I0126 12:36:51.820780 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:51 crc kubenswrapper[4881]: I0126 12:36:51.820802 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:51 crc kubenswrapper[4881]: I0126 12:36:51.820814 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:51Z","lastTransitionTime":"2026-01-26T12:36:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:51 crc kubenswrapper[4881]: I0126 12:36:51.828178 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f4b5v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"017c2a17-2267-4284-a0ca-d3c513aa9ff9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cea6c5f8bd80cf271df551539957d4c3e60378ff13dc806b3142e8e1453ab95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc482\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f4b5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:51Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:51 crc kubenswrapper[4881]: I0126 12:36:51.848003 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pmwpn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb5ecb63-1238-44dc-9c40-b5e5dd7d4847\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://428d01799872a00b628c81145d5fbeac8707bad0afa7563e16bc44686fd7c18d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b54e7d904bec7e9817b662ea1c59ede6589fde79754520dfb1e9fb82864e623d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b54e7d904bec7e9817b662ea1c59ede6589fde79754520dfb1e9fb82864e623d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e58ff1e13a164f50c9664f463b50cbf651ce8e5edac8cbff9a9e42626d8d39ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e58ff1e13a164f50c9664f463b50cbf651ce8e5edac8cbff9a9e42626d8d39ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fca59d7df432a78a62142cd5e47d6a1a62b2c23ff7eec49d6297aacf30b530a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fca59d7df432a78a62142cd5e47d6a1a62b2c23ff7eec49d6297aacf30b530a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d5b95e27428d9ca0382054957b1cc19e96fe8bb5622af5f7ebebebec53d2add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d5b95e27428d9ca0382054957b1cc19e96fe8bb5622af5f7ebebebec53d2add\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45799bb6d037f31b82596651214f8bdf9c13346f1d9953b965a40b55d2588edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45799bb6d037f31b82596651214f8bdf9c13346f1d9953b965a40b55d2588edd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10bacfe34f01c5de9439ac6023539e6489ec63175e54099942c2fb2fe4250bd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10bacfe34f01c5de9439ac6023539e6489ec63175e54099942c2fb2fe4250bd6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pmwpn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:51Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:51 crc kubenswrapper[4881]: I0126 12:36:51.865747 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tvrtr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"62ba5262-b5d8-4c86-85db-0993c88afc38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d6070117a94d139d5fc78fee4b38e1c75537675f27a663dcb2d6c8be285e9b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj9qc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tvrtr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:51Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:51 crc kubenswrapper[4881]: I0126 12:36:51.883041 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02fca4ac1029e236fa1f5c8b73ec4cf6f627aba503ba27d28dd2566e2f9d332a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c6n6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e51c1ddc00bfd73fde58f6b2e82757a924047ef05aa76b74d8fcb2de4156ad9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c6n6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fwlbz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:51Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:51 crc kubenswrapper[4881]: I0126 12:36:51.900550 4881 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7m2c5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"939b570a-38ce-49f8-8518-1ab500c4e449\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5367bd65eab3f1d86fce660f0126a0b2e6d4f1f7f803f3bfd38a3293bdff7267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:36:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwjlp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d4049c90948b57aad819839a207210919dce46514ca41bf1afdaed9b04a409a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:36:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwjlp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:36:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7m2c5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:51Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:51 crc kubenswrapper[4881]: I0126 12:36:51.924710 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:51 crc kubenswrapper[4881]: I0126 12:36:51.924756 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:51 crc kubenswrapper[4881]: I0126 12:36:51.924766 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:51 crc kubenswrapper[4881]: I0126 12:36:51.924782 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:51 crc kubenswrapper[4881]: I0126 12:36:51.924794 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:51Z","lastTransitionTime":"2026-01-26T12:36:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:52 crc kubenswrapper[4881]: I0126 12:36:52.027057 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:52 crc kubenswrapper[4881]: I0126 12:36:52.027298 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:52 crc kubenswrapper[4881]: I0126 12:36:52.027306 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:52 crc kubenswrapper[4881]: I0126 12:36:52.027318 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:52 crc kubenswrapper[4881]: I0126 12:36:52.027328 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:52Z","lastTransitionTime":"2026-01-26T12:36:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:52 crc kubenswrapper[4881]: I0126 12:36:52.074250 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 11:51:18.712608699 +0000 UTC Jan 26 12:36:52 crc kubenswrapper[4881]: I0126 12:36:52.081622 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 12:36:52 crc kubenswrapper[4881]: I0126 12:36:52.081701 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 12:36:52 crc kubenswrapper[4881]: I0126 12:36:52.081758 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:36:52 crc kubenswrapper[4881]: E0126 12:36:52.081773 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 12:36:52 crc kubenswrapper[4881]: E0126 12:36:52.081844 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 12:36:52 crc kubenswrapper[4881]: E0126 12:36:52.081930 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 12:36:52 crc kubenswrapper[4881]: I0126 12:36:52.129421 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:52 crc kubenswrapper[4881]: I0126 12:36:52.129472 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:52 crc kubenswrapper[4881]: I0126 12:36:52.129482 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:52 crc kubenswrapper[4881]: I0126 12:36:52.129499 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:52 crc kubenswrapper[4881]: I0126 12:36:52.129548 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:52Z","lastTransitionTime":"2026-01-26T12:36:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:52 crc kubenswrapper[4881]: I0126 12:36:52.232139 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:52 crc kubenswrapper[4881]: I0126 12:36:52.232189 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:52 crc kubenswrapper[4881]: I0126 12:36:52.232203 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:52 crc kubenswrapper[4881]: I0126 12:36:52.232220 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:52 crc kubenswrapper[4881]: I0126 12:36:52.232232 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:52Z","lastTransitionTime":"2026-01-26T12:36:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:52 crc kubenswrapper[4881]: I0126 12:36:52.334807 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:52 crc kubenswrapper[4881]: I0126 12:36:52.334901 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:52 crc kubenswrapper[4881]: I0126 12:36:52.334914 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:52 crc kubenswrapper[4881]: I0126 12:36:52.334933 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:52 crc kubenswrapper[4881]: I0126 12:36:52.334947 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:52Z","lastTransitionTime":"2026-01-26T12:36:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:52 crc kubenswrapper[4881]: I0126 12:36:52.437111 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:52 crc kubenswrapper[4881]: I0126 12:36:52.437160 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:52 crc kubenswrapper[4881]: I0126 12:36:52.437171 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:52 crc kubenswrapper[4881]: I0126 12:36:52.437187 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:52 crc kubenswrapper[4881]: I0126 12:36:52.437200 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:52Z","lastTransitionTime":"2026-01-26T12:36:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:52 crc kubenswrapper[4881]: I0126 12:36:52.540013 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:52 crc kubenswrapper[4881]: I0126 12:36:52.540109 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:52 crc kubenswrapper[4881]: I0126 12:36:52.540199 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:52 crc kubenswrapper[4881]: I0126 12:36:52.540234 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:52 crc kubenswrapper[4881]: I0126 12:36:52.540247 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:52Z","lastTransitionTime":"2026-01-26T12:36:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:52 crc kubenswrapper[4881]: I0126 12:36:52.580977 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-kbjm9_d272c950-9665-4b60-98a2-20c18d02d5a2/ovnkube-controller/3.log" Jan 26 12:36:52 crc kubenswrapper[4881]: I0126 12:36:52.584389 4881 scope.go:117] "RemoveContainer" containerID="302110f5451ad1f0e6c86a66d215ed879600e372bb9afcadbe2f2c4d5d091087" Jan 26 12:36:52 crc kubenswrapper[4881]: E0126 12:36:52.584758 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-kbjm9_openshift-ovn-kubernetes(d272c950-9665-4b60-98a2-20c18d02d5a2)\"" pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" podUID="d272c950-9665-4b60-98a2-20c18d02d5a2" Jan 26 12:36:52 crc kubenswrapper[4881]: I0126 12:36:52.597808 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02fca4ac1029e236fa1f5c8b73ec4cf6f627aba503ba27d28dd2566e2f9d332a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c6n6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e51c1ddc00bfd73fde58f6b2e82757a924047ef05aa76b74d8fcb2de4156ad9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c6n6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fwlbz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:52Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:52 crc kubenswrapper[4881]: I0126 12:36:52.609917 4881 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7m2c5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"939b570a-38ce-49f8-8518-1ab500c4e449\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5367bd65eab3f1d86fce660f0126a0b2e6d4f1f7f803f3bfd38a3293bdff7267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:36:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwjlp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d4049c90948b57aad819839a207210919dce46514ca41bf1afdaed9b04a409a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:36:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwjlp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:36:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7m2c5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:52Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:52 crc kubenswrapper[4881]: I0126 12:36:52.622464 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:52Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:52 crc kubenswrapper[4881]: I0126 12:36:52.639614 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee20042b-95d1-49f2-abea-e339efdb096f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce7427a811572a8f3810d59e072888d97a8f15e2c493297a25418b85e1f9f16d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86d50d955d4fa8b487f6900f78738321e21226e13b7b12c100f0dd029de2fde1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8138683a9408b1a7542f9f29a3fd0b83a06f10b000c685d9d83b72b23ba4c55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94d3f2c93382cde3d932117276753f004c58e6e52bce611119b79d9e76eb7654\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:52Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:52 crc kubenswrapper[4881]: I0126 12:36:52.642134 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:52 crc kubenswrapper[4881]: I0126 12:36:52.642181 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:52 crc kubenswrapper[4881]: I0126 12:36:52.642194 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:52 crc kubenswrapper[4881]: I0126 12:36:52.642210 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:52 crc kubenswrapper[4881]: I0126 12:36:52.642222 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:52Z","lastTransitionTime":"2026-01-26T12:36:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:52 crc kubenswrapper[4881]: I0126 12:36:52.657495 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b6f7953-daf2-4ba4-8c10-7de3f4ea07a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4f54285b7646c2fa5da34439d2812a5669f168356446852b9bff249107a7c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79ddc5e5c3c4b0c18219fe6a8297ff2b1646b9621823f529972781f915d81c5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://533f45172a004ad2bd598849b711b38e823485188ff4163dd0dbc5ae5a28cb24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f7c8799e63e6072f9dc0580f3efbfe330a9f705a689b2785c60f64391881b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c525c27678ba77145459086b6d3d2fee5d471c7393f34215e9ae46d9d72cf4e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://667695541b7d707a83aa0080723518aeae689b140791f0121a4ce6d485f39138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://667695541b7d707a83aa0080723518aeae689b140791f0121a4ce6d485f39138\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab83101bb56dc3eaf3a0514f60da45b0c5304a491184fe71584d7ed72bc6e59c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab83101bb56dc3eaf3a0514f60da45b0c5304a491184fe71584d7ed72bc6e59c\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e52793db25051b2c8eec3dbeaf0547ec04ec438ce426f3dfcd2383792f846644\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e52793db25051b2c8eec3dbeaf0547ec04ec438ce426f3dfcd2383792f846644\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:52Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:52 crc kubenswrapper[4881]: I0126 12:36:52.673642 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9aa1877-c239-4157-938d-e5c85ff3e76a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e461100b4aeab2a6e88c5c3aeb0802664d0d647247bc73b107be7c6bc4cfc9cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cbb88ac26b9c279bef436acf1a1d1e431e752f2a1ee765a7541082fa8f6c99e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff94d70ea9ad07ebc50c17d2a14b318e9b553bf1ded1e15429e9b318903d4d66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47b841bd998f47d2aa05f556a43ac6f14f630207aff7305cd0440f0b0642cc49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1728a0431ccafdf8aa83b6956348a8a6a41dd94e80f65cb469ef09a86078154\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 12:35:40.710806 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 12:35:40.716794 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1303051152/tls.crt::/tmp/serving-cert-1303051152/tls.key\\\\\\\"\\\\nI0126 12:35:46.328409 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 12:35:46.330270 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 12:35:46.330287 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 12:35:46.330308 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 12:35:46.330314 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 12:35:46.335670 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 12:35:46.335689 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 12:35:46.335693 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 12:35:46.335697 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 12:35:46.335700 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 12:35:46.335702 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 12:35:46.335705 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 12:35:46.335779 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 12:35:46.339578 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3de2d8eff73741321cce91c7bfdcab04498cfdb4694e713750466bb2d8b95e5d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f09815b7bc1e9f3afdee67bb736d35ea1c23567f631bc5fd4fcab013ab191647\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f09815b7bc1e9f3afdee67bb736d35ea1c23567f631bc5fd4fcab013ab191647\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:52Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:52 crc kubenswrapper[4881]: I0126 12:36:52.686541 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8116d483190921916638bd630fb718840fdc36eb9522aecb7ef0e9bfebe9bb7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:52Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:52 crc kubenswrapper[4881]: I0126 12:36:52.707867 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d272c950-9665-4b60-98a2-20c18d02d5a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://525d6dcbf02a6ff1670c1cfff7ea9769a3e6e99d4cbc9f67b4ab9a4838f85355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54beaa25a2174c9f18bf5be7fba4c314e374e2ea05eaee1c5b62dfd7351138f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11ae8e0538c02d437ba2c7a5f53004fb5c5b7bf6c232f75b6bd737d2ec4f6a24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4526fe39791d5692af1a573249bcc2f73f2ede55766ea580867dbb009d247ddf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e87c69a0fa95e071cd3453794764480d9ed943e2d6dbe7d08e03e711be8de2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fd4c549d12c308cae9727fdb35eec136e95b94990bfca2e3c2baf3cf5724ae3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://302110f5451ad1f0e6c86a66d215ed879600e372
bb9afcadbe2f2c4d5d091087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://302110f5451ad1f0e6c86a66d215ed879600e372bb9afcadbe2f2c4d5d091087\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T12:36:51Z\\\",\\\"message\\\":\\\" 6958 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-csrkv\\\\nI0126 12:36:50.910408 6958 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-network-console/networking-console-plugin\\\\\\\"}\\\\nI0126 12:36:50.911038 6958 ovn.go:134] Ensuring zone local for Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nI0126 12:36:50.911057 6958 services_controller.go:360] Finished syncing service networking-console-plugin on namespace openshift-network-console for network=default : 2.880467ms\\\\nI0126 12:36:50.911065 6958 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nF0126 12:36:50.911063 6958 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.netw\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T12:36:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-kbjm9_openshift-ovn-kubernetes(d272c950-9665-4b60-98a2-20c18d02d5a2)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9aaca9d54edc83ad154b9cc876c0f2990f7949b4966e052314e947d0cbaf2f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0995c1855fbad9dc3799727d7456b17ef9cd900ed1f4241bde85718099ede875\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0995c1855fbad9dc3799727d7456b17ef9cd900ed1f4241bde85718099ede875\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crn6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kbjm9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:52Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:52 crc kubenswrapper[4881]: I0126 12:36:52.719785 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5zct6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"640554c2-37e2-425f-b182-aa9b9d6fa4d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl97t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl97t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:36:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5zct6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:52Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:52 crc kubenswrapper[4881]: I0126 12:36:52.733291 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"caa549f9-799c-43a9-a2a5-315713594704\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d3b1592342c8b506c26109958ea6b4770bc41e6dde0be57603730427216aec7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46a599fa1a831e0d46ab1af51dc83b3c5ef566caa4d05629e344102fc4d852af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fa21f92dbc9916c1d8eeb219803f791ad323f1cf82e7cb6fa64fe467b585c5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b154f896826abcb669a2628d916a874072a707d60fde45aad3ae0ff16bb4e44\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b154f896826abcb669a2628d916a874072a707d60fde45aad3ae0ff16bb4e44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:28Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:52Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:52 crc kubenswrapper[4881]: I0126 12:36:52.744939 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:52 crc kubenswrapper[4881]: I0126 12:36:52.744984 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:52 crc kubenswrapper[4881]: I0126 12:36:52.744996 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:52 crc kubenswrapper[4881]: I0126 12:36:52.745017 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:52 crc kubenswrapper[4881]: I0126 12:36:52.745031 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:52Z","lastTransitionTime":"2026-01-26T12:36:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:52 crc kubenswrapper[4881]: I0126 12:36:52.745686 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f87073bcd7b2be8285c037c00f3112e1f1c6ed16cd4caf9a06c1a18d7e07c33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:52Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:52 crc kubenswrapper[4881]: I0126 12:36:52.758078 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-csrkv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d24cc7d2-c2db-45ee-b405-fa56157f807c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:36:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1cbe64dce8c7a8b2880354aac794adb5954b255c66ea597355f9b9b1ee476252\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e19b81750777bd4a9bb8f7a14d883f0ceb5aa0fc4f1a0798c98616a7fc0cef81\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T12:36:35Z\\\",\\\"message\\\":\\\"2026-01-26T12:35:49+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_9e8d50ce-7902-42ed-a950-c3be80066f4e\\\\n2026-01-26T12:35:49+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_9e8d50ce-7902-42ed-a950-c3be80066f4e to /host/opt/cni/bin/\\\\n2026-01-26T12:35:49Z [verbose] multus-daemon started\\\\n2026-01-26T12:35:49Z [verbose] Readiness Indicator file check\\\\n2026-01-26T12:36:34Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:49Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:36:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x7bhq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-csrkv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:52Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:52 crc kubenswrapper[4881]: I0126 12:36:52.774443 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pmwpn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb5ecb63-1238-44dc-9c40-b5e5dd7d4847\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://428d01799872a00b628c81145d5fbeac8707bad0afa7563e16bc44686fd7c18d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b54e7d904bec7e9817b662ea1c59ede6589fde79754520dfb1e9fb82864e623d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b54e7d904bec7e9817b662ea1c59ede6589fde79754520dfb1e9fb82864e623d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e58ff1e13a164f50c9664f463b50cbf651ce8e5edac8cbff9a9e42626d8d39ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e58ff1e13a164f50c9664f463b50cbf651ce8e5edac8cbff9a9e42626d8d39ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fca59d7df432a78a62142cd5e47d6a1a62b2c23ff7eec49d6297aacf30b530a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fca59d7df432a78a62142cd5e47d6a1a62b2c23ff7eec49d6297aacf30b530a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d5b95e27428d9ca0382054957b1cc19e96fe8bb5622af5f7ebebebec53d2add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d5b95e27428d9ca0382054957b1cc19e96fe8bb5622af5f7ebebebec53d2add\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45799bb6d037f31b82596651214f8bdf9c13346f1d9953b965a40b55d2588edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45799bb6d037f31b82596651214f8bdf9c13346f1d9953b965a40b55d2588edd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10bacfe34f01c5de9439ac6023539e6489ec63175e54099942c2fb2fe4250bd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10bacfe34f01c5de9439ac6023539e6489ec63175e54099942c2fb2fe4250bd6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:35:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:35:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwf7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pmwpn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:52Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:52 crc kubenswrapper[4881]: I0126 12:36:52.788201 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tvrtr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"62ba5262-b5d8-4c86-85db-0993c88afc38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d6070117a94d139d5fc78fee4b38e1c75537675f27a663dcb2d6c8be285e9b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj9qc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tvrtr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:52Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:52 crc kubenswrapper[4881]: I0126 12:36:52.800393 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:52Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:52 crc kubenswrapper[4881]: I0126 12:36:52.812044 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:52Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:52 crc kubenswrapper[4881]: I0126 12:36:52.824913 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79282300691fcea19d9fe7ad9c661474c26ba5e445c1a4f009961641bf59fb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b2921f0cb7ad2096499d1c6c0e5f8f37da3620c8468c9fab08b57b4545e5dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:52Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:52 crc kubenswrapper[4881]: I0126 12:36:52.833193 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f4b5v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"017c2a17-2267-4284-a0ca-d3c513aa9ff9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cea6c5f8bd80cf271df551539957d4c3e60378ff13dc806b3142e8e1453ab95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc482\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:35:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f4b5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:36:52Z is after 2025-08-24T17:21:41Z" Jan 26 12:36:52 crc kubenswrapper[4881]: I0126 12:36:52.853658 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:52 crc kubenswrapper[4881]: I0126 
12:36:52.853701 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:52 crc kubenswrapper[4881]: I0126 12:36:52.853711 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:52 crc kubenswrapper[4881]: I0126 12:36:52.853727 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:52 crc kubenswrapper[4881]: I0126 12:36:52.853739 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:52Z","lastTransitionTime":"2026-01-26T12:36:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:52 crc kubenswrapper[4881]: I0126 12:36:52.956059 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:52 crc kubenswrapper[4881]: I0126 12:36:52.956102 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:52 crc kubenswrapper[4881]: I0126 12:36:52.956144 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:52 crc kubenswrapper[4881]: I0126 12:36:52.956160 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:52 crc kubenswrapper[4881]: I0126 12:36:52.956170 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:52Z","lastTransitionTime":"2026-01-26T12:36:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:53 crc kubenswrapper[4881]: I0126 12:36:53.059141 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:53 crc kubenswrapper[4881]: I0126 12:36:53.059209 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:53 crc kubenswrapper[4881]: I0126 12:36:53.059228 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:53 crc kubenswrapper[4881]: I0126 12:36:53.059254 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:53 crc kubenswrapper[4881]: I0126 12:36:53.059271 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:53Z","lastTransitionTime":"2026-01-26T12:36:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:53 crc kubenswrapper[4881]: I0126 12:36:53.075436 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 07:46:31.508281254 +0000 UTC Jan 26 12:36:53 crc kubenswrapper[4881]: I0126 12:36:53.081832 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5zct6" Jan 26 12:36:53 crc kubenswrapper[4881]: E0126 12:36:53.082036 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5zct6" podUID="640554c2-37e2-425f-b182-aa9b9d6fa4d8" Jan 26 12:36:53 crc kubenswrapper[4881]: I0126 12:36:53.162337 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:53 crc kubenswrapper[4881]: I0126 12:36:53.162384 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:53 crc kubenswrapper[4881]: I0126 12:36:53.162396 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:53 crc kubenswrapper[4881]: I0126 12:36:53.162411 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:53 crc kubenswrapper[4881]: I0126 12:36:53.162422 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:53Z","lastTransitionTime":"2026-01-26T12:36:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:53 crc kubenswrapper[4881]: I0126 12:36:53.264907 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:53 crc kubenswrapper[4881]: I0126 12:36:53.264967 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:53 crc kubenswrapper[4881]: I0126 12:36:53.265005 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:53 crc kubenswrapper[4881]: I0126 12:36:53.265045 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:53 crc kubenswrapper[4881]: I0126 12:36:53.265068 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:53Z","lastTransitionTime":"2026-01-26T12:36:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:53 crc kubenswrapper[4881]: I0126 12:36:53.367338 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:53 crc kubenswrapper[4881]: I0126 12:36:53.367389 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:53 crc kubenswrapper[4881]: I0126 12:36:53.367399 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:53 crc kubenswrapper[4881]: I0126 12:36:53.367416 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:53 crc kubenswrapper[4881]: I0126 12:36:53.367426 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:53Z","lastTransitionTime":"2026-01-26T12:36:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:53 crc kubenswrapper[4881]: I0126 12:36:53.470208 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:53 crc kubenswrapper[4881]: I0126 12:36:53.470266 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:53 crc kubenswrapper[4881]: I0126 12:36:53.470278 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:53 crc kubenswrapper[4881]: I0126 12:36:53.470294 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:53 crc kubenswrapper[4881]: I0126 12:36:53.470305 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:53Z","lastTransitionTime":"2026-01-26T12:36:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:53 crc kubenswrapper[4881]: I0126 12:36:53.572497 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:53 crc kubenswrapper[4881]: I0126 12:36:53.572586 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:53 crc kubenswrapper[4881]: I0126 12:36:53.572601 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:53 crc kubenswrapper[4881]: I0126 12:36:53.572626 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:53 crc kubenswrapper[4881]: I0126 12:36:53.572639 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:53Z","lastTransitionTime":"2026-01-26T12:36:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:53 crc kubenswrapper[4881]: I0126 12:36:53.675591 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:53 crc kubenswrapper[4881]: I0126 12:36:53.675654 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:53 crc kubenswrapper[4881]: I0126 12:36:53.675668 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:53 crc kubenswrapper[4881]: I0126 12:36:53.675689 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:53 crc kubenswrapper[4881]: I0126 12:36:53.675703 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:53Z","lastTransitionTime":"2026-01-26T12:36:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:53 crc kubenswrapper[4881]: I0126 12:36:53.777453 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:53 crc kubenswrapper[4881]: I0126 12:36:53.777502 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:53 crc kubenswrapper[4881]: I0126 12:36:53.777538 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:53 crc kubenswrapper[4881]: I0126 12:36:53.777553 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:53 crc kubenswrapper[4881]: I0126 12:36:53.777563 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:53Z","lastTransitionTime":"2026-01-26T12:36:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:53 crc kubenswrapper[4881]: I0126 12:36:53.879993 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:53 crc kubenswrapper[4881]: I0126 12:36:53.880059 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:53 crc kubenswrapper[4881]: I0126 12:36:53.880126 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:53 crc kubenswrapper[4881]: I0126 12:36:53.880150 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:53 crc kubenswrapper[4881]: I0126 12:36:53.880166 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:53Z","lastTransitionTime":"2026-01-26T12:36:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:53 crc kubenswrapper[4881]: I0126 12:36:53.982681 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:53 crc kubenswrapper[4881]: I0126 12:36:53.982727 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:53 crc kubenswrapper[4881]: I0126 12:36:53.982738 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:53 crc kubenswrapper[4881]: I0126 12:36:53.982766 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:53 crc kubenswrapper[4881]: I0126 12:36:53.982782 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:53Z","lastTransitionTime":"2026-01-26T12:36:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:54 crc kubenswrapper[4881]: I0126 12:36:54.075929 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 08:12:34.291971612 +0000 UTC Jan 26 12:36:54 crc kubenswrapper[4881]: I0126 12:36:54.082480 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 12:36:54 crc kubenswrapper[4881]: I0126 12:36:54.082556 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 12:36:54 crc kubenswrapper[4881]: E0126 12:36:54.082688 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 12:36:54 crc kubenswrapper[4881]: I0126 12:36:54.082713 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:36:54 crc kubenswrapper[4881]: E0126 12:36:54.082928 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 12:36:54 crc kubenswrapper[4881]: E0126 12:36:54.082995 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 12:36:54 crc kubenswrapper[4881]: I0126 12:36:54.085222 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:54 crc kubenswrapper[4881]: I0126 12:36:54.085284 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:54 crc kubenswrapper[4881]: I0126 12:36:54.085298 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:54 crc kubenswrapper[4881]: I0126 12:36:54.085316 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:54 crc kubenswrapper[4881]: I0126 12:36:54.085328 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:54Z","lastTransitionTime":"2026-01-26T12:36:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:54 crc kubenswrapper[4881]: I0126 12:36:54.188040 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:54 crc kubenswrapper[4881]: I0126 12:36:54.188091 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:54 crc kubenswrapper[4881]: I0126 12:36:54.188103 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:54 crc kubenswrapper[4881]: I0126 12:36:54.188120 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:54 crc kubenswrapper[4881]: I0126 12:36:54.188134 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:54Z","lastTransitionTime":"2026-01-26T12:36:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:54 crc kubenswrapper[4881]: I0126 12:36:54.291010 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:54 crc kubenswrapper[4881]: I0126 12:36:54.291062 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:54 crc kubenswrapper[4881]: I0126 12:36:54.291077 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:54 crc kubenswrapper[4881]: I0126 12:36:54.291098 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:54 crc kubenswrapper[4881]: I0126 12:36:54.291111 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:54Z","lastTransitionTime":"2026-01-26T12:36:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:54 crc kubenswrapper[4881]: I0126 12:36:54.394292 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:54 crc kubenswrapper[4881]: I0126 12:36:54.394356 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:54 crc kubenswrapper[4881]: I0126 12:36:54.394379 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:54 crc kubenswrapper[4881]: I0126 12:36:54.394408 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:54 crc kubenswrapper[4881]: I0126 12:36:54.394430 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:54Z","lastTransitionTime":"2026-01-26T12:36:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:54 crc kubenswrapper[4881]: I0126 12:36:54.496649 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:54 crc kubenswrapper[4881]: I0126 12:36:54.496701 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:54 crc kubenswrapper[4881]: I0126 12:36:54.496711 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:54 crc kubenswrapper[4881]: I0126 12:36:54.496732 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:54 crc kubenswrapper[4881]: I0126 12:36:54.496744 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:54Z","lastTransitionTime":"2026-01-26T12:36:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:54 crc kubenswrapper[4881]: I0126 12:36:54.598799 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:54 crc kubenswrapper[4881]: I0126 12:36:54.598841 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:54 crc kubenswrapper[4881]: I0126 12:36:54.598855 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:54 crc kubenswrapper[4881]: I0126 12:36:54.598872 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:54 crc kubenswrapper[4881]: I0126 12:36:54.598885 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:54Z","lastTransitionTime":"2026-01-26T12:36:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:54 crc kubenswrapper[4881]: I0126 12:36:54.701842 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:54 crc kubenswrapper[4881]: I0126 12:36:54.701905 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:54 crc kubenswrapper[4881]: I0126 12:36:54.701922 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:54 crc kubenswrapper[4881]: I0126 12:36:54.701948 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:54 crc kubenswrapper[4881]: I0126 12:36:54.701965 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:54Z","lastTransitionTime":"2026-01-26T12:36:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:54 crc kubenswrapper[4881]: I0126 12:36:54.804056 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:54 crc kubenswrapper[4881]: I0126 12:36:54.804129 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:54 crc kubenswrapper[4881]: I0126 12:36:54.804147 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:54 crc kubenswrapper[4881]: I0126 12:36:54.804173 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:54 crc kubenswrapper[4881]: I0126 12:36:54.804191 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:54Z","lastTransitionTime":"2026-01-26T12:36:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:54 crc kubenswrapper[4881]: I0126 12:36:54.906594 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:54 crc kubenswrapper[4881]: I0126 12:36:54.906663 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:54 crc kubenswrapper[4881]: I0126 12:36:54.906679 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:54 crc kubenswrapper[4881]: I0126 12:36:54.906705 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:54 crc kubenswrapper[4881]: I0126 12:36:54.906726 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:54Z","lastTransitionTime":"2026-01-26T12:36:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:55 crc kubenswrapper[4881]: I0126 12:36:55.010141 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:55 crc kubenswrapper[4881]: I0126 12:36:55.010222 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:55 crc kubenswrapper[4881]: I0126 12:36:55.010240 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:55 crc kubenswrapper[4881]: I0126 12:36:55.010263 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:55 crc kubenswrapper[4881]: I0126 12:36:55.010281 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:55Z","lastTransitionTime":"2026-01-26T12:36:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:55 crc kubenswrapper[4881]: I0126 12:36:55.076855 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 08:05:18.740656931 +0000 UTC Jan 26 12:36:55 crc kubenswrapper[4881]: I0126 12:36:55.082335 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5zct6" Jan 26 12:36:55 crc kubenswrapper[4881]: E0126 12:36:55.082669 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5zct6" podUID="640554c2-37e2-425f-b182-aa9b9d6fa4d8" Jan 26 12:36:55 crc kubenswrapper[4881]: I0126 12:36:55.099340 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 26 12:36:55 crc kubenswrapper[4881]: I0126 12:36:55.113068 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:55 crc kubenswrapper[4881]: I0126 12:36:55.113106 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:55 crc kubenswrapper[4881]: I0126 12:36:55.113117 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:55 crc kubenswrapper[4881]: I0126 12:36:55.113133 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:55 crc kubenswrapper[4881]: I0126 12:36:55.113144 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:55Z","lastTransitionTime":"2026-01-26T12:36:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:55 crc kubenswrapper[4881]: I0126 12:36:55.216419 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:55 crc kubenswrapper[4881]: I0126 12:36:55.216481 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:55 crc kubenswrapper[4881]: I0126 12:36:55.216503 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:55 crc kubenswrapper[4881]: I0126 12:36:55.216575 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:55 crc kubenswrapper[4881]: I0126 12:36:55.216599 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:55Z","lastTransitionTime":"2026-01-26T12:36:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:55 crc kubenswrapper[4881]: I0126 12:36:55.319776 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:55 crc kubenswrapper[4881]: I0126 12:36:55.319826 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:55 crc kubenswrapper[4881]: I0126 12:36:55.319838 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:55 crc kubenswrapper[4881]: I0126 12:36:55.319857 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:55 crc kubenswrapper[4881]: I0126 12:36:55.319869 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:55Z","lastTransitionTime":"2026-01-26T12:36:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:55 crc kubenswrapper[4881]: I0126 12:36:55.423195 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:55 crc kubenswrapper[4881]: I0126 12:36:55.423253 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:55 crc kubenswrapper[4881]: I0126 12:36:55.423264 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:55 crc kubenswrapper[4881]: I0126 12:36:55.423278 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:55 crc kubenswrapper[4881]: I0126 12:36:55.423291 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:55Z","lastTransitionTime":"2026-01-26T12:36:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:55 crc kubenswrapper[4881]: I0126 12:36:55.526143 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:55 crc kubenswrapper[4881]: I0126 12:36:55.526202 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:55 crc kubenswrapper[4881]: I0126 12:36:55.526214 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:55 crc kubenswrapper[4881]: I0126 12:36:55.526232 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:55 crc kubenswrapper[4881]: I0126 12:36:55.526245 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:55Z","lastTransitionTime":"2026-01-26T12:36:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:55 crc kubenswrapper[4881]: I0126 12:36:55.629655 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:55 crc kubenswrapper[4881]: I0126 12:36:55.629720 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:55 crc kubenswrapper[4881]: I0126 12:36:55.629733 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:55 crc kubenswrapper[4881]: I0126 12:36:55.629755 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:55 crc kubenswrapper[4881]: I0126 12:36:55.629773 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:55Z","lastTransitionTime":"2026-01-26T12:36:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:55 crc kubenswrapper[4881]: I0126 12:36:55.732610 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:55 crc kubenswrapper[4881]: I0126 12:36:55.732670 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:55 crc kubenswrapper[4881]: I0126 12:36:55.732683 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:55 crc kubenswrapper[4881]: I0126 12:36:55.732709 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:55 crc kubenswrapper[4881]: I0126 12:36:55.732722 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:55Z","lastTransitionTime":"2026-01-26T12:36:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:55 crc kubenswrapper[4881]: I0126 12:36:55.836003 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:55 crc kubenswrapper[4881]: I0126 12:36:55.836072 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:55 crc kubenswrapper[4881]: I0126 12:36:55.836089 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:55 crc kubenswrapper[4881]: I0126 12:36:55.836114 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:55 crc kubenswrapper[4881]: I0126 12:36:55.836131 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:55Z","lastTransitionTime":"2026-01-26T12:36:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:55 crc kubenswrapper[4881]: I0126 12:36:55.939534 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:55 crc kubenswrapper[4881]: I0126 12:36:55.939601 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:55 crc kubenswrapper[4881]: I0126 12:36:55.939613 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:55 crc kubenswrapper[4881]: I0126 12:36:55.939638 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:55 crc kubenswrapper[4881]: I0126 12:36:55.939652 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:55Z","lastTransitionTime":"2026-01-26T12:36:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:56 crc kubenswrapper[4881]: I0126 12:36:56.043818 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:56 crc kubenswrapper[4881]: I0126 12:36:56.043901 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:56 crc kubenswrapper[4881]: I0126 12:36:56.043920 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:56 crc kubenswrapper[4881]: I0126 12:36:56.043943 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:56 crc kubenswrapper[4881]: I0126 12:36:56.043964 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:56Z","lastTransitionTime":"2026-01-26T12:36:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:56 crc kubenswrapper[4881]: I0126 12:36:56.077137 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 06:12:14.516193713 +0000 UTC Jan 26 12:36:56 crc kubenswrapper[4881]: I0126 12:36:56.082230 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:36:56 crc kubenswrapper[4881]: I0126 12:36:56.082257 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 12:36:56 crc kubenswrapper[4881]: E0126 12:36:56.082358 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 12:36:56 crc kubenswrapper[4881]: I0126 12:36:56.082565 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 12:36:56 crc kubenswrapper[4881]: E0126 12:36:56.082618 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 12:36:56 crc kubenswrapper[4881]: E0126 12:36:56.082799 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 12:36:56 crc kubenswrapper[4881]: I0126 12:36:56.147867 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:56 crc kubenswrapper[4881]: I0126 12:36:56.147943 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:56 crc kubenswrapper[4881]: I0126 12:36:56.147964 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:56 crc kubenswrapper[4881]: I0126 12:36:56.147995 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:56 crc kubenswrapper[4881]: I0126 12:36:56.148019 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:56Z","lastTransitionTime":"2026-01-26T12:36:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:56 crc kubenswrapper[4881]: I0126 12:36:56.251343 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:56 crc kubenswrapper[4881]: I0126 12:36:56.251392 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:56 crc kubenswrapper[4881]: I0126 12:36:56.251401 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:56 crc kubenswrapper[4881]: I0126 12:36:56.251418 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:56 crc kubenswrapper[4881]: I0126 12:36:56.251431 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:56Z","lastTransitionTime":"2026-01-26T12:36:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:56 crc kubenswrapper[4881]: I0126 12:36:56.354700 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:56 crc kubenswrapper[4881]: I0126 12:36:56.354762 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:56 crc kubenswrapper[4881]: I0126 12:36:56.354784 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:56 crc kubenswrapper[4881]: I0126 12:36:56.354813 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:56 crc kubenswrapper[4881]: I0126 12:36:56.354835 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:56Z","lastTransitionTime":"2026-01-26T12:36:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:56 crc kubenswrapper[4881]: I0126 12:36:56.458555 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:56 crc kubenswrapper[4881]: I0126 12:36:56.458605 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:56 crc kubenswrapper[4881]: I0126 12:36:56.458621 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:56 crc kubenswrapper[4881]: I0126 12:36:56.458643 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:56 crc kubenswrapper[4881]: I0126 12:36:56.458655 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:56Z","lastTransitionTime":"2026-01-26T12:36:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:56 crc kubenswrapper[4881]: I0126 12:36:56.561126 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:56 crc kubenswrapper[4881]: I0126 12:36:56.561183 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:56 crc kubenswrapper[4881]: I0126 12:36:56.561196 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:56 crc kubenswrapper[4881]: I0126 12:36:56.561219 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:56 crc kubenswrapper[4881]: I0126 12:36:56.561232 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:56Z","lastTransitionTime":"2026-01-26T12:36:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:56 crc kubenswrapper[4881]: I0126 12:36:56.663710 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:56 crc kubenswrapper[4881]: I0126 12:36:56.663765 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:56 crc kubenswrapper[4881]: I0126 12:36:56.663782 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:56 crc kubenswrapper[4881]: I0126 12:36:56.663810 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:56 crc kubenswrapper[4881]: I0126 12:36:56.663827 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:56Z","lastTransitionTime":"2026-01-26T12:36:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:56 crc kubenswrapper[4881]: I0126 12:36:56.766036 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:56 crc kubenswrapper[4881]: I0126 12:36:56.766088 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:56 crc kubenswrapper[4881]: I0126 12:36:56.766101 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:56 crc kubenswrapper[4881]: I0126 12:36:56.766120 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:56 crc kubenswrapper[4881]: I0126 12:36:56.766134 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:56Z","lastTransitionTime":"2026-01-26T12:36:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:56 crc kubenswrapper[4881]: I0126 12:36:56.869899 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:56 crc kubenswrapper[4881]: I0126 12:36:56.869964 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:56 crc kubenswrapper[4881]: I0126 12:36:56.869983 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:56 crc kubenswrapper[4881]: I0126 12:36:56.870014 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:56 crc kubenswrapper[4881]: I0126 12:36:56.870037 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:56Z","lastTransitionTime":"2026-01-26T12:36:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:56 crc kubenswrapper[4881]: I0126 12:36:56.973766 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:56 crc kubenswrapper[4881]: I0126 12:36:56.973847 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:56 crc kubenswrapper[4881]: I0126 12:36:56.973869 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:56 crc kubenswrapper[4881]: I0126 12:36:56.973902 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:56 crc kubenswrapper[4881]: I0126 12:36:56.973924 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:56Z","lastTransitionTime":"2026-01-26T12:36:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:57 crc kubenswrapper[4881]: I0126 12:36:57.076766 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:57 crc kubenswrapper[4881]: I0126 12:36:57.076818 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:57 crc kubenswrapper[4881]: I0126 12:36:57.076833 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:57 crc kubenswrapper[4881]: I0126 12:36:57.076853 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:57 crc kubenswrapper[4881]: I0126 12:36:57.076867 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:57Z","lastTransitionTime":"2026-01-26T12:36:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:57 crc kubenswrapper[4881]: I0126 12:36:57.077462 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 16:29:12.458040998 +0000 UTC Jan 26 12:36:57 crc kubenswrapper[4881]: I0126 12:36:57.082449 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5zct6" Jan 26 12:36:57 crc kubenswrapper[4881]: E0126 12:36:57.082701 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5zct6" podUID="640554c2-37e2-425f-b182-aa9b9d6fa4d8" Jan 26 12:36:57 crc kubenswrapper[4881]: I0126 12:36:57.179289 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:57 crc kubenswrapper[4881]: I0126 12:36:57.179373 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:57 crc kubenswrapper[4881]: I0126 12:36:57.179397 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:57 crc kubenswrapper[4881]: I0126 12:36:57.179431 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:57 crc kubenswrapper[4881]: I0126 12:36:57.179454 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:57Z","lastTransitionTime":"2026-01-26T12:36:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:57 crc kubenswrapper[4881]: I0126 12:36:57.282029 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:57 crc kubenswrapper[4881]: I0126 12:36:57.282102 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:57 crc kubenswrapper[4881]: I0126 12:36:57.282117 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:57 crc kubenswrapper[4881]: I0126 12:36:57.282136 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:57 crc kubenswrapper[4881]: I0126 12:36:57.282152 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:57Z","lastTransitionTime":"2026-01-26T12:36:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:57 crc kubenswrapper[4881]: I0126 12:36:57.385771 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:57 crc kubenswrapper[4881]: I0126 12:36:57.385838 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:57 crc kubenswrapper[4881]: I0126 12:36:57.385854 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:57 crc kubenswrapper[4881]: I0126 12:36:57.385880 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:57 crc kubenswrapper[4881]: I0126 12:36:57.385897 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:57Z","lastTransitionTime":"2026-01-26T12:36:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:57 crc kubenswrapper[4881]: I0126 12:36:57.488089 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:57 crc kubenswrapper[4881]: I0126 12:36:57.488166 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:57 crc kubenswrapper[4881]: I0126 12:36:57.488181 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:57 crc kubenswrapper[4881]: I0126 12:36:57.488230 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:57 crc kubenswrapper[4881]: I0126 12:36:57.488244 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:57Z","lastTransitionTime":"2026-01-26T12:36:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:57 crc kubenswrapper[4881]: I0126 12:36:57.592130 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:57 crc kubenswrapper[4881]: I0126 12:36:57.592191 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:57 crc kubenswrapper[4881]: I0126 12:36:57.592226 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:57 crc kubenswrapper[4881]: I0126 12:36:57.592257 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:57 crc kubenswrapper[4881]: I0126 12:36:57.592278 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:57Z","lastTransitionTime":"2026-01-26T12:36:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:57 crc kubenswrapper[4881]: I0126 12:36:57.695267 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:57 crc kubenswrapper[4881]: I0126 12:36:57.695344 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:57 crc kubenswrapper[4881]: I0126 12:36:57.695366 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:57 crc kubenswrapper[4881]: I0126 12:36:57.695394 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:57 crc kubenswrapper[4881]: I0126 12:36:57.695415 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:57Z","lastTransitionTime":"2026-01-26T12:36:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:57 crc kubenswrapper[4881]: I0126 12:36:57.798779 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:57 crc kubenswrapper[4881]: I0126 12:36:57.798829 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:57 crc kubenswrapper[4881]: I0126 12:36:57.798838 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:57 crc kubenswrapper[4881]: I0126 12:36:57.798859 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:57 crc kubenswrapper[4881]: I0126 12:36:57.798870 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:57Z","lastTransitionTime":"2026-01-26T12:36:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:57 crc kubenswrapper[4881]: I0126 12:36:57.902040 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:57 crc kubenswrapper[4881]: I0126 12:36:57.902097 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:57 crc kubenswrapper[4881]: I0126 12:36:57.902114 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:57 crc kubenswrapper[4881]: I0126 12:36:57.902137 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:57 crc kubenswrapper[4881]: I0126 12:36:57.902156 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:57Z","lastTransitionTime":"2026-01-26T12:36:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:58 crc kubenswrapper[4881]: I0126 12:36:58.004766 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:58 crc kubenswrapper[4881]: I0126 12:36:58.004821 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:58 crc kubenswrapper[4881]: I0126 12:36:58.004835 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:58 crc kubenswrapper[4881]: I0126 12:36:58.004910 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:58 crc kubenswrapper[4881]: I0126 12:36:58.004926 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:58Z","lastTransitionTime":"2026-01-26T12:36:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:58 crc kubenswrapper[4881]: I0126 12:36:58.077890 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 21:41:14.718980058 +0000 UTC Jan 26 12:36:58 crc kubenswrapper[4881]: I0126 12:36:58.082622 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 12:36:58 crc kubenswrapper[4881]: I0126 12:36:58.082632 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:36:58 crc kubenswrapper[4881]: I0126 12:36:58.082751 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 12:36:58 crc kubenswrapper[4881]: E0126 12:36:58.083891 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 12:36:58 crc kubenswrapper[4881]: E0126 12:36:58.082911 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 12:36:58 crc kubenswrapper[4881]: E0126 12:36:58.083960 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 12:36:58 crc kubenswrapper[4881]: I0126 12:36:58.109358 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:58 crc kubenswrapper[4881]: I0126 12:36:58.109402 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:58 crc kubenswrapper[4881]: I0126 12:36:58.109414 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:58 crc kubenswrapper[4881]: I0126 12:36:58.109434 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:58 crc kubenswrapper[4881]: I0126 12:36:58.109446 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:58Z","lastTransitionTime":"2026-01-26T12:36:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:58 crc kubenswrapper[4881]: I0126 12:36:58.158506 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=37.158450187 podStartE2EDuration="37.158450187s" podCreationTimestamp="2026-01-26 12:36:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:36:58.158246552 +0000 UTC m=+90.637556578" watchObservedRunningTime="2026-01-26 12:36:58.158450187 +0000 UTC m=+90.637760203" Jan 26 12:36:58 crc kubenswrapper[4881]: I0126 12:36:58.188093 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=3.188074639 podStartE2EDuration="3.188074639s" podCreationTimestamp="2026-01-26 12:36:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:36:58.172040485 +0000 UTC m=+90.651350521" watchObservedRunningTime="2026-01-26 12:36:58.188074639 +0000 UTC m=+90.667384665" Jan 26 12:36:58 crc kubenswrapper[4881]: I0126 12:36:58.211224 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:58 crc kubenswrapper[4881]: I0126 12:36:58.211558 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:58 crc kubenswrapper[4881]: I0126 12:36:58.211596 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:58 crc kubenswrapper[4881]: I0126 12:36:58.211612 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:58 crc kubenswrapper[4881]: I0126 12:36:58.211624 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:58Z","lastTransitionTime":"2026-01-26T12:36:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin 
returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:58 crc kubenswrapper[4881]: I0126 12:36:58.228540 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-csrkv" podStartSLOduration=71.228494285 podStartE2EDuration="1m11.228494285s" podCreationTimestamp="2026-01-26 12:35:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:36:58.205424166 +0000 UTC m=+90.684734212" watchObservedRunningTime="2026-01-26 12:36:58.228494285 +0000 UTC m=+90.707804321" Jan 26 12:36:58 crc kubenswrapper[4881]: I0126 12:36:58.244839 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-pmwpn" podStartSLOduration=71.244821947 podStartE2EDuration="1m11.244821947s" podCreationTimestamp="2026-01-26 12:35:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:36:58.228623108 +0000 UTC m=+90.707933134" watchObservedRunningTime="2026-01-26 12:36:58.244821947 +0000 UTC m=+90.724131973" Jan 26 12:36:58 crc kubenswrapper[4881]: I0126 12:36:58.245146 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-tvrtr" podStartSLOduration=71.245142304 podStartE2EDuration="1m11.245142304s" podCreationTimestamp="2026-01-26 12:35:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:36:58.24450653 +0000 UTC m=+90.723816576" watchObservedRunningTime="2026-01-26 12:36:58.245142304 +0000 UTC m=+90.724452330" Jan 26 12:36:58 crc kubenswrapper[4881]: I0126 12:36:58.298776 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-f4b5v" podStartSLOduration=71.298756279 podStartE2EDuration="1m11.298756279s" podCreationTimestamp="2026-01-26 12:35:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:36:58.298634335 +0000 UTC m=+90.777944371" watchObservedRunningTime="2026-01-26 12:36:58.298756279 +0000 UTC m=+90.778066305" Jan 26 12:36:58 crc kubenswrapper[4881]: I0126 12:36:58.310405 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podStartSLOduration=71.310387 podStartE2EDuration="1m11.310387s" podCreationTimestamp="2026-01-26 12:35:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:36:58.310001682 +0000 UTC m=+90.789311708" watchObservedRunningTime="2026-01-26 12:36:58.310387 +0000 UTC m=+90.789697026" Jan 26 12:36:58 crc kubenswrapper[4881]: I0126 12:36:58.313688 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:58 crc kubenswrapper[4881]: I0126 12:36:58.313723 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:58 crc kubenswrapper[4881]: I0126 12:36:58.313733 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:58 crc kubenswrapper[4881]: I0126 12:36:58.313750 
4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:58 crc kubenswrapper[4881]: I0126 12:36:58.313763 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:58Z","lastTransitionTime":"2026-01-26T12:36:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:58 crc kubenswrapper[4881]: I0126 12:36:58.340131 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7m2c5" podStartSLOduration=70.340109105 podStartE2EDuration="1m10.340109105s" podCreationTimestamp="2026-01-26 12:35:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:36:58.323379634 +0000 UTC m=+90.802689660" watchObservedRunningTime="2026-01-26 12:36:58.340109105 +0000 UTC m=+90.819419131" Jan 26 12:36:58 crc kubenswrapper[4881]: I0126 12:36:58.356238 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=72.356214442 podStartE2EDuration="1m12.356214442s" podCreationTimestamp="2026-01-26 12:35:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:36:58.355831103 +0000 UTC m=+90.835141139" watchObservedRunningTime="2026-01-26 12:36:58.356214442 +0000 UTC m=+90.835524468" Jan 26 12:36:58 crc kubenswrapper[4881]: I0126 12:36:58.380328 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=69.380311286 podStartE2EDuration="1m9.380311286s" podCreationTimestamp="2026-01-26 12:35:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:36:58.379782483 +0000 UTC m=+90.859092519" watchObservedRunningTime="2026-01-26 12:36:58.380311286 +0000 UTC m=+90.859621302" Jan 26 12:36:58 crc kubenswrapper[4881]: I0126 12:36:58.411945 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=71.411900184 podStartE2EDuration="1m11.411900184s" podCreationTimestamp="2026-01-26 12:35:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:36:58.397845245 +0000 UTC m=+90.877155271" watchObservedRunningTime="2026-01-26 12:36:58.411900184 +0000 UTC m=+90.891210210" Jan 26 12:36:58 crc kubenswrapper[4881]: I0126 12:36:58.416645 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:58 crc kubenswrapper[4881]: I0126 12:36:58.416702 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:58 crc kubenswrapper[4881]: I0126 12:36:58.416713 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:58 crc kubenswrapper[4881]: I0126 12:36:58.416737 4881 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeNotReady" Jan 26 12:36:58 crc kubenswrapper[4881]: I0126 12:36:58.416750 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:58Z","lastTransitionTime":"2026-01-26T12:36:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:58 crc kubenswrapper[4881]: I0126 12:36:58.519423 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:58 crc kubenswrapper[4881]: I0126 12:36:58.519464 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:58 crc kubenswrapper[4881]: I0126 12:36:58.519472 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:58 crc kubenswrapper[4881]: I0126 12:36:58.519487 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:58 crc kubenswrapper[4881]: I0126 12:36:58.519496 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:58Z","lastTransitionTime":"2026-01-26T12:36:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:58 crc kubenswrapper[4881]: I0126 12:36:58.621607 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:58 crc kubenswrapper[4881]: I0126 12:36:58.621655 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:58 crc kubenswrapper[4881]: I0126 12:36:58.621671 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:58 crc kubenswrapper[4881]: I0126 12:36:58.621693 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:58 crc kubenswrapper[4881]: I0126 12:36:58.621710 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:58Z","lastTransitionTime":"2026-01-26T12:36:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:58 crc kubenswrapper[4881]: I0126 12:36:58.724249 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:58 crc kubenswrapper[4881]: I0126 12:36:58.724302 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:58 crc kubenswrapper[4881]: I0126 12:36:58.724319 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:58 crc kubenswrapper[4881]: I0126 12:36:58.724342 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:58 crc kubenswrapper[4881]: I0126 12:36:58.724360 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:58Z","lastTransitionTime":"2026-01-26T12:36:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:58 crc kubenswrapper[4881]: I0126 12:36:58.826425 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:58 crc kubenswrapper[4881]: I0126 12:36:58.826462 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:58 crc kubenswrapper[4881]: I0126 12:36:58.826470 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:58 crc kubenswrapper[4881]: I0126 12:36:58.826486 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:58 crc kubenswrapper[4881]: I0126 12:36:58.826495 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:58Z","lastTransitionTime":"2026-01-26T12:36:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:58 crc kubenswrapper[4881]: I0126 12:36:58.929993 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:58 crc kubenswrapper[4881]: I0126 12:36:58.930049 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:58 crc kubenswrapper[4881]: I0126 12:36:58.930062 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:58 crc kubenswrapper[4881]: I0126 12:36:58.930093 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:58 crc kubenswrapper[4881]: I0126 12:36:58.930113 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:58Z","lastTransitionTime":"2026-01-26T12:36:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:59 crc kubenswrapper[4881]: I0126 12:36:59.032629 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:59 crc kubenswrapper[4881]: I0126 12:36:59.032667 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:59 crc kubenswrapper[4881]: I0126 12:36:59.032676 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:59 crc kubenswrapper[4881]: I0126 12:36:59.032689 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:59 crc kubenswrapper[4881]: I0126 12:36:59.032697 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:59Z","lastTransitionTime":"2026-01-26T12:36:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:59 crc kubenswrapper[4881]: I0126 12:36:59.078208 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 04:41:10.094246933 +0000 UTC Jan 26 12:36:59 crc kubenswrapper[4881]: I0126 12:36:59.081623 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5zct6" Jan 26 12:36:59 crc kubenswrapper[4881]: E0126 12:36:59.081813 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5zct6" podUID="640554c2-37e2-425f-b182-aa9b9d6fa4d8" Jan 26 12:36:59 crc kubenswrapper[4881]: I0126 12:36:59.136171 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:59 crc kubenswrapper[4881]: I0126 12:36:59.136200 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:59 crc kubenswrapper[4881]: I0126 12:36:59.136209 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:59 crc kubenswrapper[4881]: I0126 12:36:59.136225 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:59 crc kubenswrapper[4881]: I0126 12:36:59.136235 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:59Z","lastTransitionTime":"2026-01-26T12:36:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:59 crc kubenswrapper[4881]: I0126 12:36:59.239185 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:59 crc kubenswrapper[4881]: I0126 12:36:59.239247 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:59 crc kubenswrapper[4881]: I0126 12:36:59.239267 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:59 crc kubenswrapper[4881]: I0126 12:36:59.239290 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:59 crc kubenswrapper[4881]: I0126 12:36:59.239307 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:59Z","lastTransitionTime":"2026-01-26T12:36:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:59 crc kubenswrapper[4881]: I0126 12:36:59.341606 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:59 crc kubenswrapper[4881]: I0126 12:36:59.341651 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:59 crc kubenswrapper[4881]: I0126 12:36:59.341662 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:59 crc kubenswrapper[4881]: I0126 12:36:59.341681 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:59 crc kubenswrapper[4881]: I0126 12:36:59.341695 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:59Z","lastTransitionTime":"2026-01-26T12:36:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:59 crc kubenswrapper[4881]: I0126 12:36:59.444962 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:59 crc kubenswrapper[4881]: I0126 12:36:59.445010 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:59 crc kubenswrapper[4881]: I0126 12:36:59.445019 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:59 crc kubenswrapper[4881]: I0126 12:36:59.445034 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:59 crc kubenswrapper[4881]: I0126 12:36:59.445044 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:59Z","lastTransitionTime":"2026-01-26T12:36:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:59 crc kubenswrapper[4881]: I0126 12:36:59.547212 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:59 crc kubenswrapper[4881]: I0126 12:36:59.547246 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:59 crc kubenswrapper[4881]: I0126 12:36:59.547254 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:59 crc kubenswrapper[4881]: I0126 12:36:59.547270 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:59 crc kubenswrapper[4881]: I0126 12:36:59.547279 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:59Z","lastTransitionTime":"2026-01-26T12:36:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:59 crc kubenswrapper[4881]: I0126 12:36:59.648943 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:59 crc kubenswrapper[4881]: I0126 12:36:59.648992 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:59 crc kubenswrapper[4881]: I0126 12:36:59.649002 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:59 crc kubenswrapper[4881]: I0126 12:36:59.649016 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:59 crc kubenswrapper[4881]: I0126 12:36:59.649024 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:59Z","lastTransitionTime":"2026-01-26T12:36:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:59 crc kubenswrapper[4881]: I0126 12:36:59.760815 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:59 crc kubenswrapper[4881]: I0126 12:36:59.760880 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:59 crc kubenswrapper[4881]: I0126 12:36:59.760896 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:59 crc kubenswrapper[4881]: I0126 12:36:59.760919 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:59 crc kubenswrapper[4881]: I0126 12:36:59.761014 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:59Z","lastTransitionTime":"2026-01-26T12:36:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:36:59 crc kubenswrapper[4881]: I0126 12:36:59.864174 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:59 crc kubenswrapper[4881]: I0126 12:36:59.864237 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:59 crc kubenswrapper[4881]: I0126 12:36:59.864256 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:59 crc kubenswrapper[4881]: I0126 12:36:59.864281 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:59 crc kubenswrapper[4881]: I0126 12:36:59.864298 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:59Z","lastTransitionTime":"2026-01-26T12:36:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:36:59 crc kubenswrapper[4881]: I0126 12:36:59.967471 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:36:59 crc kubenswrapper[4881]: I0126 12:36:59.967563 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:36:59 crc kubenswrapper[4881]: I0126 12:36:59.967580 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:36:59 crc kubenswrapper[4881]: I0126 12:36:59.967606 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:36:59 crc kubenswrapper[4881]: I0126 12:36:59.967625 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:36:59Z","lastTransitionTime":"2026-01-26T12:36:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:37:00 crc kubenswrapper[4881]: I0126 12:37:00.070628 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:37:00 crc kubenswrapper[4881]: I0126 12:37:00.070733 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:37:00 crc kubenswrapper[4881]: I0126 12:37:00.070760 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:37:00 crc kubenswrapper[4881]: I0126 12:37:00.070793 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:37:00 crc kubenswrapper[4881]: I0126 12:37:00.070830 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:37:00Z","lastTransitionTime":"2026-01-26T12:37:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:37:00 crc kubenswrapper[4881]: I0126 12:37:00.078694 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 08:20:11.850466753 +0000 UTC Jan 26 12:37:00 crc kubenswrapper[4881]: I0126 12:37:00.082135 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 12:37:00 crc kubenswrapper[4881]: I0126 12:37:00.082162 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 12:37:00 crc kubenswrapper[4881]: I0126 12:37:00.082200 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:37:00 crc kubenswrapper[4881]: E0126 12:37:00.082281 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 12:37:00 crc kubenswrapper[4881]: E0126 12:37:00.082401 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 12:37:00 crc kubenswrapper[4881]: E0126 12:37:00.082485 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 12:37:00 crc kubenswrapper[4881]: I0126 12:37:00.173865 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:37:00 crc kubenswrapper[4881]: I0126 12:37:00.173916 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:37:00 crc kubenswrapper[4881]: I0126 12:37:00.173929 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:37:00 crc kubenswrapper[4881]: I0126 12:37:00.173945 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:37:00 crc kubenswrapper[4881]: I0126 12:37:00.173959 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:37:00Z","lastTransitionTime":"2026-01-26T12:37:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:37:00 crc kubenswrapper[4881]: I0126 12:37:00.277281 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:37:00 crc kubenswrapper[4881]: I0126 12:37:00.277342 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:37:00 crc kubenswrapper[4881]: I0126 12:37:00.277364 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:37:00 crc kubenswrapper[4881]: I0126 12:37:00.277401 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:37:00 crc kubenswrapper[4881]: I0126 12:37:00.277425 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:37:00Z","lastTransitionTime":"2026-01-26T12:37:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:37:00 crc kubenswrapper[4881]: I0126 12:37:00.380207 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:37:00 crc kubenswrapper[4881]: I0126 12:37:00.380261 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:37:00 crc kubenswrapper[4881]: I0126 12:37:00.380272 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:37:00 crc kubenswrapper[4881]: I0126 12:37:00.380327 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:37:00 crc kubenswrapper[4881]: I0126 12:37:00.380342 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:37:00Z","lastTransitionTime":"2026-01-26T12:37:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:37:00 crc kubenswrapper[4881]: I0126 12:37:00.484039 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:37:00 crc kubenswrapper[4881]: I0126 12:37:00.484113 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:37:00 crc kubenswrapper[4881]: I0126 12:37:00.484129 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:37:00 crc kubenswrapper[4881]: I0126 12:37:00.484154 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:37:00 crc kubenswrapper[4881]: I0126 12:37:00.484170 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:37:00Z","lastTransitionTime":"2026-01-26T12:37:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:37:00 crc kubenswrapper[4881]: I0126 12:37:00.587251 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:37:00 crc kubenswrapper[4881]: I0126 12:37:00.587299 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:37:00 crc kubenswrapper[4881]: I0126 12:37:00.587314 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:37:00 crc kubenswrapper[4881]: I0126 12:37:00.587333 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:37:00 crc kubenswrapper[4881]: I0126 12:37:00.587347 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:37:00Z","lastTransitionTime":"2026-01-26T12:37:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:37:00 crc kubenswrapper[4881]: I0126 12:37:00.690209 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:37:00 crc kubenswrapper[4881]: I0126 12:37:00.690265 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:37:00 crc kubenswrapper[4881]: I0126 12:37:00.690282 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:37:00 crc kubenswrapper[4881]: I0126 12:37:00.690306 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:37:00 crc kubenswrapper[4881]: I0126 12:37:00.690322 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:37:00Z","lastTransitionTime":"2026-01-26T12:37:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:37:00 crc kubenswrapper[4881]: I0126 12:37:00.793547 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:37:00 crc kubenswrapper[4881]: I0126 12:37:00.793637 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:37:00 crc kubenswrapper[4881]: I0126 12:37:00.793660 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:37:00 crc kubenswrapper[4881]: I0126 12:37:00.793686 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:37:00 crc kubenswrapper[4881]: I0126 12:37:00.793705 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:37:00Z","lastTransitionTime":"2026-01-26T12:37:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:37:00 crc kubenswrapper[4881]: I0126 12:37:00.896729 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:37:00 crc kubenswrapper[4881]: I0126 12:37:00.896787 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:37:00 crc kubenswrapper[4881]: I0126 12:37:00.896798 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:37:00 crc kubenswrapper[4881]: I0126 12:37:00.896815 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:37:00 crc kubenswrapper[4881]: I0126 12:37:00.896829 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:37:00Z","lastTransitionTime":"2026-01-26T12:37:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:37:00 crc kubenswrapper[4881]: I0126 12:37:00.903109 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:37:00 crc kubenswrapper[4881]: I0126 12:37:00.903163 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:37:00 crc kubenswrapper[4881]: I0126 12:37:00.903178 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:37:00 crc kubenswrapper[4881]: I0126 12:37:00.903197 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:37:00 crc kubenswrapper[4881]: I0126 12:37:00.903212 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:37:00Z","lastTransitionTime":"2026-01-26T12:37:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:37:00 crc kubenswrapper[4881]: I0126 12:37:00.949445 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-hcr8h"] Jan 26 12:37:00 crc kubenswrapper[4881]: I0126 12:37:00.949847 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hcr8h" Jan 26 12:37:00 crc kubenswrapper[4881]: I0126 12:37:00.952024 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 26 12:37:00 crc kubenswrapper[4881]: I0126 12:37:00.952140 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 26 12:37:00 crc kubenswrapper[4881]: I0126 12:37:00.952625 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 26 12:37:00 crc kubenswrapper[4881]: I0126 12:37:00.953326 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 26 12:37:01 crc kubenswrapper[4881]: I0126 12:37:01.011797 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/86b081d5-633d-4ad1-a619-43b4bae3cb0f-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-hcr8h\" (UID: \"86b081d5-633d-4ad1-a619-43b4bae3cb0f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hcr8h" Jan 26 12:37:01 crc kubenswrapper[4881]: I0126 12:37:01.011870 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/86b081d5-633d-4ad1-a619-43b4bae3cb0f-service-ca\") pod \"cluster-version-operator-5c965bbfc6-hcr8h\" (UID: \"86b081d5-633d-4ad1-a619-43b4bae3cb0f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hcr8h" Jan 26 12:37:01 crc kubenswrapper[4881]: I0126 12:37:01.011906 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/86b081d5-633d-4ad1-a619-43b4bae3cb0f-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-hcr8h\" (UID: \"86b081d5-633d-4ad1-a619-43b4bae3cb0f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hcr8h" Jan 26 12:37:01 crc kubenswrapper[4881]: I0126 12:37:01.011957 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/86b081d5-633d-4ad1-a619-43b4bae3cb0f-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-hcr8h\" (UID: \"86b081d5-633d-4ad1-a619-43b4bae3cb0f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hcr8h" Jan 26 12:37:01 crc kubenswrapper[4881]: I0126 12:37:01.011991 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/86b081d5-633d-4ad1-a619-43b4bae3cb0f-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-hcr8h\" (UID: \"86b081d5-633d-4ad1-a619-43b4bae3cb0f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hcr8h" Jan 26 12:37:01 crc kubenswrapper[4881]: I0126 12:37:01.079321 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 11:33:34.240890851 +0000 UTC Jan 26 12:37:01 crc kubenswrapper[4881]: I0126 12:37:01.079410 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Jan 26 12:37:01 crc kubenswrapper[4881]: I0126 
12:37:01.081637 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5zct6" Jan 26 12:37:01 crc kubenswrapper[4881]: E0126 12:37:01.081796 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5zct6" podUID="640554c2-37e2-425f-b182-aa9b9d6fa4d8" Jan 26 12:37:01 crc kubenswrapper[4881]: I0126 12:37:01.092195 4881 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 26 12:37:01 crc kubenswrapper[4881]: I0126 12:37:01.113040 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/86b081d5-633d-4ad1-a619-43b4bae3cb0f-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-hcr8h\" (UID: \"86b081d5-633d-4ad1-a619-43b4bae3cb0f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hcr8h" Jan 26 12:37:01 crc kubenswrapper[4881]: I0126 12:37:01.113160 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/86b081d5-633d-4ad1-a619-43b4bae3cb0f-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-hcr8h\" (UID: \"86b081d5-633d-4ad1-a619-43b4bae3cb0f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hcr8h" Jan 26 12:37:01 crc kubenswrapper[4881]: I0126 12:37:01.113209 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/86b081d5-633d-4ad1-a619-43b4bae3cb0f-service-ca\") pod \"cluster-version-operator-5c965bbfc6-hcr8h\" (UID: \"86b081d5-633d-4ad1-a619-43b4bae3cb0f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hcr8h" Jan 26 12:37:01 crc kubenswrapper[4881]: I0126 12:37:01.113242 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/86b081d5-633d-4ad1-a619-43b4bae3cb0f-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-hcr8h\" (UID: \"86b081d5-633d-4ad1-a619-43b4bae3cb0f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hcr8h" Jan 26 12:37:01 crc kubenswrapper[4881]: I0126 12:37:01.113276 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/86b081d5-633d-4ad1-a619-43b4bae3cb0f-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-hcr8h\" (UID: \"86b081d5-633d-4ad1-a619-43b4bae3cb0f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hcr8h" Jan 26 12:37:01 crc kubenswrapper[4881]: I0126 12:37:01.113359 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/86b081d5-633d-4ad1-a619-43b4bae3cb0f-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-hcr8h\" (UID: \"86b081d5-633d-4ad1-a619-43b4bae3cb0f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hcr8h" Jan 26 12:37:01 crc kubenswrapper[4881]: I0126 12:37:01.113377 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: 
\"kubernetes.io/host-path/86b081d5-633d-4ad1-a619-43b4bae3cb0f-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-hcr8h\" (UID: \"86b081d5-633d-4ad1-a619-43b4bae3cb0f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hcr8h" Jan 26 12:37:01 crc kubenswrapper[4881]: I0126 12:37:01.114920 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/86b081d5-633d-4ad1-a619-43b4bae3cb0f-service-ca\") pod \"cluster-version-operator-5c965bbfc6-hcr8h\" (UID: \"86b081d5-633d-4ad1-a619-43b4bae3cb0f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hcr8h" Jan 26 12:37:01 crc kubenswrapper[4881]: I0126 12:37:01.120828 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/86b081d5-633d-4ad1-a619-43b4bae3cb0f-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-hcr8h\" (UID: \"86b081d5-633d-4ad1-a619-43b4bae3cb0f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hcr8h" Jan 26 12:37:01 crc kubenswrapper[4881]: I0126 12:37:01.135796 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/86b081d5-633d-4ad1-a619-43b4bae3cb0f-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-hcr8h\" (UID: \"86b081d5-633d-4ad1-a619-43b4bae3cb0f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hcr8h" Jan 26 12:37:01 crc kubenswrapper[4881]: I0126 12:37:01.271804 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hcr8h" Jan 26 12:37:01 crc kubenswrapper[4881]: I0126 12:37:01.620690 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hcr8h" event={"ID":"86b081d5-633d-4ad1-a619-43b4bae3cb0f","Type":"ContainerStarted","Data":"2620c94ba5fd47f2d85fd042637e363c828d65f1c666c9069c73f87396f4f48f"} Jan 26 12:37:01 crc kubenswrapper[4881]: I0126 12:37:01.621054 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hcr8h" event={"ID":"86b081d5-633d-4ad1-a619-43b4bae3cb0f","Type":"ContainerStarted","Data":"9e327747875ef3b46dfeac92ff8d38e9a1fc0dc5de964d62d434b6d57cca8d92"} Jan 26 12:37:01 crc kubenswrapper[4881]: I0126 12:37:01.641027 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hcr8h" podStartSLOduration=74.641006959 podStartE2EDuration="1m14.641006959s" podCreationTimestamp="2026-01-26 12:35:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:37:01.637164019 +0000 UTC m=+94.116474065" watchObservedRunningTime="2026-01-26 12:37:01.641006959 +0000 UTC m=+94.120317005" Jan 26 12:37:02 crc kubenswrapper[4881]: I0126 12:37:02.082151 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 12:37:02 crc kubenswrapper[4881]: I0126 12:37:02.082118 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 12:37:02 crc kubenswrapper[4881]: E0126 12:37:02.082483 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 12:37:02 crc kubenswrapper[4881]: I0126 12:37:02.082585 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:37:02 crc kubenswrapper[4881]: E0126 12:37:02.082683 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 12:37:02 crc kubenswrapper[4881]: E0126 12:37:02.082814 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 12:37:03 crc kubenswrapper[4881]: I0126 12:37:03.082168 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5zct6" Jan 26 12:37:03 crc kubenswrapper[4881]: E0126 12:37:03.082457 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5zct6" podUID="640554c2-37e2-425f-b182-aa9b9d6fa4d8" Jan 26 12:37:04 crc kubenswrapper[4881]: I0126 12:37:04.081879 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 12:37:04 crc kubenswrapper[4881]: E0126 12:37:04.082348 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 12:37:04 crc kubenswrapper[4881]: I0126 12:37:04.082419 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 12:37:04 crc kubenswrapper[4881]: I0126 12:37:04.082553 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:37:04 crc kubenswrapper[4881]: E0126 12:37:04.082706 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 12:37:04 crc kubenswrapper[4881]: E0126 12:37:04.082832 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 12:37:05 crc kubenswrapper[4881]: I0126 12:37:05.082546 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5zct6" Jan 26 12:37:05 crc kubenswrapper[4881]: E0126 12:37:05.082750 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5zct6" podUID="640554c2-37e2-425f-b182-aa9b9d6fa4d8" Jan 26 12:37:06 crc kubenswrapper[4881]: I0126 12:37:06.081656 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 12:37:06 crc kubenswrapper[4881]: E0126 12:37:06.081812 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 12:37:06 crc kubenswrapper[4881]: I0126 12:37:06.082193 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:37:06 crc kubenswrapper[4881]: E0126 12:37:06.082552 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 12:37:06 crc kubenswrapper[4881]: I0126 12:37:06.082686 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 12:37:06 crc kubenswrapper[4881]: E0126 12:37:06.083502 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 12:37:07 crc kubenswrapper[4881]: I0126 12:37:07.077772 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/640554c2-37e2-425f-b182-aa9b9d6fa4d8-metrics-certs\") pod \"network-metrics-daemon-5zct6\" (UID: \"640554c2-37e2-425f-b182-aa9b9d6fa4d8\") " pod="openshift-multus/network-metrics-daemon-5zct6" Jan 26 12:37:07 crc kubenswrapper[4881]: E0126 12:37:07.078135 4881 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 12:37:07 crc kubenswrapper[4881]: E0126 12:37:07.078325 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/640554c2-37e2-425f-b182-aa9b9d6fa4d8-metrics-certs podName:640554c2-37e2-425f-b182-aa9b9d6fa4d8 nodeName:}" failed. No retries permitted until 2026-01-26 12:38:11.078291173 +0000 UTC m=+163.557601239 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/640554c2-37e2-425f-b182-aa9b9d6fa4d8-metrics-certs") pod "network-metrics-daemon-5zct6" (UID: "640554c2-37e2-425f-b182-aa9b9d6fa4d8") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 12:37:07 crc kubenswrapper[4881]: I0126 12:37:07.082394 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5zct6" Jan 26 12:37:07 crc kubenswrapper[4881]: E0126 12:37:07.083244 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5zct6" podUID="640554c2-37e2-425f-b182-aa9b9d6fa4d8" Jan 26 12:37:07 crc kubenswrapper[4881]: I0126 12:37:07.083777 4881 scope.go:117] "RemoveContainer" containerID="302110f5451ad1f0e6c86a66d215ed879600e372bb9afcadbe2f2c4d5d091087" Jan 26 12:37:07 crc kubenswrapper[4881]: E0126 12:37:07.084573 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-kbjm9_openshift-ovn-kubernetes(d272c950-9665-4b60-98a2-20c18d02d5a2)\"" pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" podUID="d272c950-9665-4b60-98a2-20c18d02d5a2" Jan 26 12:37:08 crc kubenswrapper[4881]: I0126 12:37:08.082415 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:37:08 crc kubenswrapper[4881]: I0126 12:37:08.082504 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 12:37:08 crc kubenswrapper[4881]: E0126 12:37:08.082662 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 12:37:08 crc kubenswrapper[4881]: I0126 12:37:08.082702 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 12:37:08 crc kubenswrapper[4881]: E0126 12:37:08.084696 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 12:37:08 crc kubenswrapper[4881]: E0126 12:37:08.084791 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 12:37:09 crc kubenswrapper[4881]: I0126 12:37:09.082110 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5zct6" Jan 26 12:37:09 crc kubenswrapper[4881]: E0126 12:37:09.082213 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5zct6" podUID="640554c2-37e2-425f-b182-aa9b9d6fa4d8" Jan 26 12:37:10 crc kubenswrapper[4881]: I0126 12:37:10.082117 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 12:37:10 crc kubenswrapper[4881]: I0126 12:37:10.082167 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 12:37:10 crc kubenswrapper[4881]: E0126 12:37:10.082272 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 12:37:10 crc kubenswrapper[4881]: I0126 12:37:10.082404 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:37:10 crc kubenswrapper[4881]: E0126 12:37:10.082448 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 12:37:10 crc kubenswrapper[4881]: E0126 12:37:10.082667 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 12:37:11 crc kubenswrapper[4881]: I0126 12:37:11.082286 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5zct6" Jan 26 12:37:11 crc kubenswrapper[4881]: E0126 12:37:11.082467 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5zct6" podUID="640554c2-37e2-425f-b182-aa9b9d6fa4d8" Jan 26 12:37:12 crc kubenswrapper[4881]: I0126 12:37:12.081502 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 12:37:12 crc kubenswrapper[4881]: I0126 12:37:12.081624 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:37:12 crc kubenswrapper[4881]: E0126 12:37:12.081730 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 12:37:12 crc kubenswrapper[4881]: I0126 12:37:12.081503 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 12:37:12 crc kubenswrapper[4881]: E0126 12:37:12.082042 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 12:37:12 crc kubenswrapper[4881]: E0126 12:37:12.082255 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 12:37:13 crc kubenswrapper[4881]: I0126 12:37:13.081563 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5zct6" Jan 26 12:37:13 crc kubenswrapper[4881]: E0126 12:37:13.081721 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5zct6" podUID="640554c2-37e2-425f-b182-aa9b9d6fa4d8" Jan 26 12:37:14 crc kubenswrapper[4881]: I0126 12:37:14.081736 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 12:37:14 crc kubenswrapper[4881]: I0126 12:37:14.081764 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:37:14 crc kubenswrapper[4881]: E0126 12:37:14.082133 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 12:37:14 crc kubenswrapper[4881]: I0126 12:37:14.082484 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 12:37:14 crc kubenswrapper[4881]: E0126 12:37:14.082691 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 12:37:14 crc kubenswrapper[4881]: E0126 12:37:14.083337 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 12:37:15 crc kubenswrapper[4881]: I0126 12:37:15.082468 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-5zct6" Jan 26 12:37:15 crc kubenswrapper[4881]: E0126 12:37:15.082658 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5zct6" podUID="640554c2-37e2-425f-b182-aa9b9d6fa4d8" Jan 26 12:37:16 crc kubenswrapper[4881]: I0126 12:37:16.081761 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:37:16 crc kubenswrapper[4881]: I0126 12:37:16.081853 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 12:37:16 crc kubenswrapper[4881]: E0126 12:37:16.081931 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 12:37:16 crc kubenswrapper[4881]: I0126 12:37:16.081949 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 12:37:16 crc kubenswrapper[4881]: E0126 12:37:16.082051 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 12:37:16 crc kubenswrapper[4881]: E0126 12:37:16.082117 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 12:37:17 crc kubenswrapper[4881]: I0126 12:37:17.081474 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5zct6" Jan 26 12:37:17 crc kubenswrapper[4881]: E0126 12:37:17.081727 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5zct6" podUID="640554c2-37e2-425f-b182-aa9b9d6fa4d8" Jan 26 12:37:18 crc kubenswrapper[4881]: I0126 12:37:18.082685 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 12:37:18 crc kubenswrapper[4881]: I0126 12:37:18.082700 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 12:37:18 crc kubenswrapper[4881]: I0126 12:37:18.082816 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:37:18 crc kubenswrapper[4881]: E0126 12:37:18.083991 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 12:37:18 crc kubenswrapper[4881]: E0126 12:37:18.084093 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 12:37:18 crc kubenswrapper[4881]: E0126 12:37:18.084541 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 12:37:18 crc kubenswrapper[4881]: I0126 12:37:18.084922 4881 scope.go:117] "RemoveContainer" containerID="302110f5451ad1f0e6c86a66d215ed879600e372bb9afcadbe2f2c4d5d091087" Jan 26 12:37:18 crc kubenswrapper[4881]: E0126 12:37:18.085120 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-kbjm9_openshift-ovn-kubernetes(d272c950-9665-4b60-98a2-20c18d02d5a2)\"" pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" podUID="d272c950-9665-4b60-98a2-20c18d02d5a2" Jan 26 12:37:19 crc kubenswrapper[4881]: I0126 12:37:19.081767 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5zct6" Jan 26 12:37:19 crc kubenswrapper[4881]: E0126 12:37:19.081933 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5zct6" podUID="640554c2-37e2-425f-b182-aa9b9d6fa4d8" Jan 26 12:37:20 crc kubenswrapper[4881]: I0126 12:37:20.082368 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 12:37:20 crc kubenswrapper[4881]: I0126 12:37:20.082427 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:37:20 crc kubenswrapper[4881]: I0126 12:37:20.082388 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 12:37:20 crc kubenswrapper[4881]: E0126 12:37:20.082605 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 12:37:20 crc kubenswrapper[4881]: E0126 12:37:20.082510 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 12:37:20 crc kubenswrapper[4881]: E0126 12:37:20.082809 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 12:37:21 crc kubenswrapper[4881]: I0126 12:37:21.081733 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5zct6" Jan 26 12:37:21 crc kubenswrapper[4881]: E0126 12:37:21.082268 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5zct6" podUID="640554c2-37e2-425f-b182-aa9b9d6fa4d8" Jan 26 12:37:21 crc kubenswrapper[4881]: I0126 12:37:21.690325 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-csrkv_d24cc7d2-c2db-45ee-b405-fa56157f807c/kube-multus/1.log" Jan 26 12:37:21 crc kubenswrapper[4881]: I0126 12:37:21.691283 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-csrkv_d24cc7d2-c2db-45ee-b405-fa56157f807c/kube-multus/0.log" Jan 26 12:37:21 crc kubenswrapper[4881]: I0126 12:37:21.691344 4881 generic.go:334] "Generic (PLEG): container finished" podID="d24cc7d2-c2db-45ee-b405-fa56157f807c" containerID="1cbe64dce8c7a8b2880354aac794adb5954b255c66ea597355f9b9b1ee476252" exitCode=1 Jan 26 12:37:21 crc kubenswrapper[4881]: I0126 12:37:21.691377 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-csrkv" event={"ID":"d24cc7d2-c2db-45ee-b405-fa56157f807c","Type":"ContainerDied","Data":"1cbe64dce8c7a8b2880354aac794adb5954b255c66ea597355f9b9b1ee476252"} Jan 26 12:37:21 crc kubenswrapper[4881]: I0126 12:37:21.691417 4881 scope.go:117] "RemoveContainer" containerID="e19b81750777bd4a9bb8f7a14d883f0ceb5aa0fc4f1a0798c98616a7fc0cef81" Jan 26 12:37:21 crc kubenswrapper[4881]: I0126 12:37:21.692376 4881 scope.go:117] "RemoveContainer" containerID="1cbe64dce8c7a8b2880354aac794adb5954b255c66ea597355f9b9b1ee476252" Jan 26 12:37:21 crc kubenswrapper[4881]: E0126 12:37:21.692594 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-csrkv_openshift-multus(d24cc7d2-c2db-45ee-b405-fa56157f807c)\"" pod="openshift-multus/multus-csrkv" podUID="d24cc7d2-c2db-45ee-b405-fa56157f807c" Jan 26 12:37:22 crc kubenswrapper[4881]: I0126 12:37:22.082475 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 12:37:22 crc kubenswrapper[4881]: I0126 12:37:22.082601 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 12:37:22 crc kubenswrapper[4881]: E0126 12:37:22.082828 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 12:37:22 crc kubenswrapper[4881]: E0126 12:37:22.082991 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 12:37:22 crc kubenswrapper[4881]: I0126 12:37:22.083049 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:37:22 crc kubenswrapper[4881]: E0126 12:37:22.083680 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 12:37:22 crc kubenswrapper[4881]: I0126 12:37:22.695800 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-csrkv_d24cc7d2-c2db-45ee-b405-fa56157f807c/kube-multus/1.log" Jan 26 12:37:23 crc kubenswrapper[4881]: I0126 12:37:23.082579 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5zct6" Jan 26 12:37:23 crc kubenswrapper[4881]: E0126 12:37:23.082796 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5zct6" podUID="640554c2-37e2-425f-b182-aa9b9d6fa4d8" Jan 26 12:37:24 crc kubenswrapper[4881]: I0126 12:37:24.082476 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 12:37:24 crc kubenswrapper[4881]: I0126 12:37:24.082556 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 12:37:24 crc kubenswrapper[4881]: E0126 12:37:24.082679 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 12:37:24 crc kubenswrapper[4881]: I0126 12:37:24.082747 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:37:24 crc kubenswrapper[4881]: E0126 12:37:24.082903 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 12:37:24 crc kubenswrapper[4881]: E0126 12:37:24.083137 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 12:37:25 crc kubenswrapper[4881]: I0126 12:37:25.082431 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5zct6" Jan 26 12:37:25 crc kubenswrapper[4881]: E0126 12:37:25.082615 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5zct6" podUID="640554c2-37e2-425f-b182-aa9b9d6fa4d8" Jan 26 12:37:26 crc kubenswrapper[4881]: I0126 12:37:26.081504 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 12:37:26 crc kubenswrapper[4881]: I0126 12:37:26.081612 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 12:37:26 crc kubenswrapper[4881]: E0126 12:37:26.081665 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 12:37:26 crc kubenswrapper[4881]: I0126 12:37:26.081689 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:37:26 crc kubenswrapper[4881]: E0126 12:37:26.081828 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 12:37:26 crc kubenswrapper[4881]: E0126 12:37:26.081952 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 12:37:27 crc kubenswrapper[4881]: I0126 12:37:27.081494 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5zct6" Jan 26 12:37:27 crc kubenswrapper[4881]: E0126 12:37:27.081692 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5zct6" podUID="640554c2-37e2-425f-b182-aa9b9d6fa4d8" Jan 26 12:37:28 crc kubenswrapper[4881]: E0126 12:37:28.031919 4881 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 26 12:37:28 crc kubenswrapper[4881]: I0126 12:37:28.081610 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 12:37:28 crc kubenswrapper[4881]: I0126 12:37:28.081610 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 12:37:28 crc kubenswrapper[4881]: E0126 12:37:28.082948 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 12:37:28 crc kubenswrapper[4881]: I0126 12:37:28.083173 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:37:28 crc kubenswrapper[4881]: E0126 12:37:28.083170 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 12:37:28 crc kubenswrapper[4881]: E0126 12:37:28.083766 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 12:37:28 crc kubenswrapper[4881]: E0126 12:37:28.223041 4881 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 26 12:37:29 crc kubenswrapper[4881]: I0126 12:37:29.082163 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5zct6" Jan 26 12:37:29 crc kubenswrapper[4881]: E0126 12:37:29.082342 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5zct6" podUID="640554c2-37e2-425f-b182-aa9b9d6fa4d8" Jan 26 12:37:30 crc kubenswrapper[4881]: I0126 12:37:30.081486 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 12:37:30 crc kubenswrapper[4881]: I0126 12:37:30.081495 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 12:37:30 crc kubenswrapper[4881]: E0126 12:37:30.081635 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 12:37:30 crc kubenswrapper[4881]: I0126 12:37:30.081667 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:37:30 crc kubenswrapper[4881]: E0126 12:37:30.081765 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 12:37:30 crc kubenswrapper[4881]: E0126 12:37:30.081858 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 12:37:31 crc kubenswrapper[4881]: I0126 12:37:31.082061 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5zct6" Jan 26 12:37:31 crc kubenswrapper[4881]: E0126 12:37:31.082252 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5zct6" podUID="640554c2-37e2-425f-b182-aa9b9d6fa4d8" Jan 26 12:37:32 crc kubenswrapper[4881]: I0126 12:37:32.082227 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 12:37:32 crc kubenswrapper[4881]: I0126 12:37:32.082314 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 12:37:32 crc kubenswrapper[4881]: E0126 12:37:32.082382 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 12:37:32 crc kubenswrapper[4881]: E0126 12:37:32.082467 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 12:37:32 crc kubenswrapper[4881]: I0126 12:37:32.082684 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:37:32 crc kubenswrapper[4881]: E0126 12:37:32.082760 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 12:37:33 crc kubenswrapper[4881]: I0126 12:37:33.082090 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5zct6" Jan 26 12:37:33 crc kubenswrapper[4881]: E0126 12:37:33.082422 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5zct6" podUID="640554c2-37e2-425f-b182-aa9b9d6fa4d8" Jan 26 12:37:33 crc kubenswrapper[4881]: I0126 12:37:33.083499 4881 scope.go:117] "RemoveContainer" containerID="302110f5451ad1f0e6c86a66d215ed879600e372bb9afcadbe2f2c4d5d091087" Jan 26 12:37:33 crc kubenswrapper[4881]: E0126 12:37:33.224710 4881 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Jan 26 12:37:33 crc kubenswrapper[4881]: I0126 12:37:33.736022 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-kbjm9_d272c950-9665-4b60-98a2-20c18d02d5a2/ovnkube-controller/3.log" Jan 26 12:37:33 crc kubenswrapper[4881]: I0126 12:37:33.738914 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" event={"ID":"d272c950-9665-4b60-98a2-20c18d02d5a2","Type":"ContainerStarted","Data":"9289f146ecd788b045ebeedd944eb6dd144ed67c799dfb987793f605b45aeb88"} Jan 26 12:37:33 crc kubenswrapper[4881]: I0126 12:37:33.739541 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" Jan 26 12:37:33 crc kubenswrapper[4881]: I0126 12:37:33.793072 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" podStartSLOduration=105.793046732 podStartE2EDuration="1m45.793046732s" podCreationTimestamp="2026-01-26 12:35:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:37:33.792255703 +0000 UTC m=+126.271565729" watchObservedRunningTime="2026-01-26 12:37:33.793046732 +0000 UTC m=+126.272356758" Jan 26 12:37:34 crc kubenswrapper[4881]: I0126 12:37:34.023928 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-5zct6"] Jan 26 12:37:34 crc kubenswrapper[4881]: I0126 12:37:34.024095 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5zct6" Jan 26 12:37:34 crc kubenswrapper[4881]: E0126 12:37:34.024231 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5zct6" podUID="640554c2-37e2-425f-b182-aa9b9d6fa4d8" Jan 26 12:37:34 crc kubenswrapper[4881]: I0126 12:37:34.082546 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 12:37:34 crc kubenswrapper[4881]: I0126 12:37:34.082634 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:37:34 crc kubenswrapper[4881]: E0126 12:37:34.082694 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 12:37:34 crc kubenswrapper[4881]: E0126 12:37:34.082835 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 12:37:34 crc kubenswrapper[4881]: I0126 12:37:34.082928 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 12:37:34 crc kubenswrapper[4881]: E0126 12:37:34.083030 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 12:37:35 crc kubenswrapper[4881]: I0126 12:37:35.081756 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5zct6" Jan 26 12:37:35 crc kubenswrapper[4881]: E0126 12:37:35.081964 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5zct6" podUID="640554c2-37e2-425f-b182-aa9b9d6fa4d8" Jan 26 12:37:36 crc kubenswrapper[4881]: I0126 12:37:36.081725 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:37:36 crc kubenswrapper[4881]: I0126 12:37:36.081855 4881 scope.go:117] "RemoveContainer" containerID="1cbe64dce8c7a8b2880354aac794adb5954b255c66ea597355f9b9b1ee476252" Jan 26 12:37:36 crc kubenswrapper[4881]: I0126 12:37:36.081762 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 12:37:36 crc kubenswrapper[4881]: I0126 12:37:36.081732 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 12:37:36 crc kubenswrapper[4881]: E0126 12:37:36.082203 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 12:37:36 crc kubenswrapper[4881]: E0126 12:37:36.081910 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 12:37:36 crc kubenswrapper[4881]: E0126 12:37:36.082313 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 12:37:36 crc kubenswrapper[4881]: I0126 12:37:36.754313 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-csrkv_d24cc7d2-c2db-45ee-b405-fa56157f807c/kube-multus/1.log" Jan 26 12:37:36 crc kubenswrapper[4881]: I0126 12:37:36.754724 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-csrkv" event={"ID":"d24cc7d2-c2db-45ee-b405-fa56157f807c","Type":"ContainerStarted","Data":"3ececde13ab7d1af9a740578861d9b8810a114b31668f5c683af712e19dfac3f"} Jan 26 12:37:37 crc kubenswrapper[4881]: I0126 12:37:37.082464 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5zct6" Jan 26 12:37:37 crc kubenswrapper[4881]: E0126 12:37:37.082662 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5zct6" podUID="640554c2-37e2-425f-b182-aa9b9d6fa4d8" Jan 26 12:37:38 crc kubenswrapper[4881]: I0126 12:37:38.082221 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:37:38 crc kubenswrapper[4881]: I0126 12:37:38.082344 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 12:37:38 crc kubenswrapper[4881]: E0126 12:37:38.083988 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 12:37:38 crc kubenswrapper[4881]: I0126 12:37:38.084024 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 12:37:38 crc kubenswrapper[4881]: E0126 12:37:38.084186 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 12:37:38 crc kubenswrapper[4881]: E0126 12:37:38.084279 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 12:37:38 crc kubenswrapper[4881]: E0126 12:37:38.227977 4881 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 26 12:37:39 crc kubenswrapper[4881]: I0126 12:37:39.083005 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5zct6" Jan 26 12:37:39 crc kubenswrapper[4881]: E0126 12:37:39.083292 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5zct6" podUID="640554c2-37e2-425f-b182-aa9b9d6fa4d8" Jan 26 12:37:40 crc kubenswrapper[4881]: I0126 12:37:40.082259 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 12:37:40 crc kubenswrapper[4881]: I0126 12:37:40.082308 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 12:37:40 crc kubenswrapper[4881]: I0126 12:37:40.082388 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:37:40 crc kubenswrapper[4881]: E0126 12:37:40.082475 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 12:37:40 crc kubenswrapper[4881]: E0126 12:37:40.082701 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 12:37:40 crc kubenswrapper[4881]: E0126 12:37:40.082915 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 12:37:41 crc kubenswrapper[4881]: I0126 12:37:41.081486 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5zct6" Jan 26 12:37:41 crc kubenswrapper[4881]: E0126 12:37:41.081704 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5zct6" podUID="640554c2-37e2-425f-b182-aa9b9d6fa4d8" Jan 26 12:37:42 crc kubenswrapper[4881]: I0126 12:37:42.081599 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 12:37:42 crc kubenswrapper[4881]: I0126 12:37:42.081648 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:37:42 crc kubenswrapper[4881]: E0126 12:37:42.081743 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 12:37:42 crc kubenswrapper[4881]: I0126 12:37:42.081608 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 12:37:42 crc kubenswrapper[4881]: E0126 12:37:42.081844 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 12:37:42 crc kubenswrapper[4881]: E0126 12:37:42.082003 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 12:37:43 crc kubenswrapper[4881]: I0126 12:37:43.081793 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5zct6" Jan 26 12:37:43 crc kubenswrapper[4881]: E0126 12:37:43.082011 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5zct6" podUID="640554c2-37e2-425f-b182-aa9b9d6fa4d8" Jan 26 12:37:44 crc kubenswrapper[4881]: I0126 12:37:44.082196 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 12:37:44 crc kubenswrapper[4881]: I0126 12:37:44.082686 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:37:44 crc kubenswrapper[4881]: I0126 12:37:44.083261 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 12:37:44 crc kubenswrapper[4881]: I0126 12:37:44.085315 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 26 12:37:44 crc kubenswrapper[4881]: I0126 12:37:44.085369 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 26 12:37:44 crc kubenswrapper[4881]: I0126 12:37:44.086370 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 26 12:37:44 crc kubenswrapper[4881]: I0126 12:37:44.086700 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 26 12:37:45 crc kubenswrapper[4881]: I0126 12:37:45.081905 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5zct6" Jan 26 12:37:45 crc kubenswrapper[4881]: I0126 12:37:45.085379 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 26 12:37:45 crc kubenswrapper[4881]: I0126 12:37:45.087141 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.607449 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.654579 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-lnbpt"] Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.655294 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-lnbpt" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.655773 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-xvwh6"] Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.656494 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xvwh6" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.657667 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bpgdv"] Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.658334 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-tkq9v"] Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.658814 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-tkq9v" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.658961 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-b8l9s"] Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.660151 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bpgdv" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.661437 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-b8l9s" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.663767 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.664097 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.664315 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.664464 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.664328 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.666410 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-wm95t"] Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.667278 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wm95t" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.668085 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-2n5pc"] Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.668953 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-2n5pc" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.669620 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.669664 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.670298 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.670641 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.670829 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.671007 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.671561 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.671721 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.672002 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.672210 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.672375 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.673185 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.673330 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.673472 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.673866 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.674024 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.674288 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.674432 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.674578 4881 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-apiserver"/"etcd-serving-ca" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.674925 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.675069 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.675185 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.675417 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.675683 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.675801 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.675697 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.675973 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.677048 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.677160 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.677268 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.680129 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.685906 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.689922 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.690150 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.690291 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.690395 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.690426 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.690634 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 26 
12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.690814 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.690649 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.690655 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.691161 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.691216 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.694102 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-tkq9v"] Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.694735 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-xvwh6"] Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.695401 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.713770 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.717348 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.720321 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-wm95t"] Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.720383 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-lnbpt"] Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.723233 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-b8l9s"] Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.727655 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-ncfp9"] Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.728697 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-ncfp9" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.728825 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-rxhwr"] Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.729616 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rxhwr" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.730718 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.731472 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bpgdv"] Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.733946 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-s5qng"] Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.734616 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-2n5pc"] Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.734731 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-s5qng" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.741749 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.741823 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.742009 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.742202 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.742371 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.742546 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.742797 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.742948 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.743232 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.743378 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.750230 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.751312 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.751597 4881 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.751832 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.752102 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.752280 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.752378 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.752429 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.752287 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.752563 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.752569 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.754076 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.754315 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-s5qng"] Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.755956 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-ncfp9"] Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.757795 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.759077 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.766617 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c6a2377-64ec-4bf1-96a5-89faa8ce01f8-config\") pod \"controller-manager-879f6c89f-lnbpt\" (UID: \"7c6a2377-64ec-4bf1-96a5-89faa8ce01f8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lnbpt" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.766656 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ef2ceed1-1060-4aff-a9f7-573f60a80771-etcd-client\") pod \"apiserver-76f77b778f-b8l9s\" (UID: \"ef2ceed1-1060-4aff-a9f7-573f60a80771\") " pod="openshift-apiserver/apiserver-76f77b778f-b8l9s" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.766699 4881 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4575e882-c51a-4773-b719-bd25e8bbe760-serving-cert\") pod \"authentication-operator-69f744f599-tkq9v\" (UID: \"4575e882-c51a-4773-b719-bd25e8bbe760\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tkq9v" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.766723 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ef2ceed1-1060-4aff-a9f7-573f60a80771-audit-dir\") pod \"apiserver-76f77b778f-b8l9s\" (UID: \"ef2ceed1-1060-4aff-a9f7-573f60a80771\") " pod="openshift-apiserver/apiserver-76f77b778f-b8l9s" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.766745 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/1c3ab1d3-b6c8-46c7-8721-c8671d38ae03-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-2n5pc\" (UID: \"1c3ab1d3-b6c8-46c7-8721-c8671d38ae03\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-2n5pc" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.766793 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef2ceed1-1060-4aff-a9f7-573f60a80771-config\") pod \"apiserver-76f77b778f-b8l9s\" (UID: \"ef2ceed1-1060-4aff-a9f7-573f60a80771\") " pod="openshift-apiserver/apiserver-76f77b778f-b8l9s" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.766813 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e888d179-f59c-46d4-8c3b-c44aa10a248c-config\") pod \"openshift-apiserver-operator-796bbdcf4f-bpgdv\" (UID: \"e888d179-f59c-46d4-8c3b-c44aa10a248c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bpgdv" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.766878 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6cbdefcc-18eb-4de2-a642-466fb488712f-serving-cert\") pod \"route-controller-manager-6576b87f9c-xvwh6\" (UID: \"6cbdefcc-18eb-4de2-a642-466fb488712f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xvwh6" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.766900 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0d993eee-dedd-4a12-bec6-8e63232b007d-encryption-config\") pod \"apiserver-7bbb656c7d-wm95t\" (UID: \"0d993eee-dedd-4a12-bec6-8e63232b007d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wm95t" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.766939 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nlpz\" (UniqueName: \"kubernetes.io/projected/e888d179-f59c-46d4-8c3b-c44aa10a248c-kube-api-access-9nlpz\") pod \"openshift-apiserver-operator-796bbdcf4f-bpgdv\" (UID: \"e888d179-f59c-46d4-8c3b-c44aa10a248c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bpgdv" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.766961 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ef2ceed1-1060-4aff-a9f7-573f60a80771-serving-cert\") pod \"apiserver-76f77b778f-b8l9s\" (UID: \"ef2ceed1-1060-4aff-a9f7-573f60a80771\") " pod="openshift-apiserver/apiserver-76f77b778f-b8l9s" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.766981 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0d993eee-dedd-4a12-bec6-8e63232b007d-etcd-client\") pod \"apiserver-7bbb656c7d-wm95t\" (UID: \"0d993eee-dedd-4a12-bec6-8e63232b007d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wm95t" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.767030 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d993eee-dedd-4a12-bec6-8e63232b007d-serving-cert\") pod \"apiserver-7bbb656c7d-wm95t\" (UID: \"0d993eee-dedd-4a12-bec6-8e63232b007d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wm95t" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.767048 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0d993eee-dedd-4a12-bec6-8e63232b007d-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-wm95t\" (UID: \"0d993eee-dedd-4a12-bec6-8e63232b007d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wm95t" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.767066 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mj47q\" (UniqueName: \"kubernetes.io/projected/7c6a2377-64ec-4bf1-96a5-89faa8ce01f8-kube-api-access-mj47q\") pod \"controller-manager-879f6c89f-lnbpt\" (UID: \"7c6a2377-64ec-4bf1-96a5-89faa8ce01f8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lnbpt" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.767104 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ef2ceed1-1060-4aff-a9f7-573f60a80771-etcd-serving-ca\") pod \"apiserver-76f77b778f-b8l9s\" (UID: \"ef2ceed1-1060-4aff-a9f7-573f60a80771\") " pod="openshift-apiserver/apiserver-76f77b778f-b8l9s" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.767197 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0d993eee-dedd-4a12-bec6-8e63232b007d-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-wm95t\" (UID: \"0d993eee-dedd-4a12-bec6-8e63232b007d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wm95t" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.767248 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/ef2ceed1-1060-4aff-a9f7-573f60a80771-image-import-ca\") pod \"apiserver-76f77b778f-b8l9s\" (UID: \"ef2ceed1-1060-4aff-a9f7-573f60a80771\") " pod="openshift-apiserver/apiserver-76f77b778f-b8l9s" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.767285 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c3ab1d3-b6c8-46c7-8721-c8671d38ae03-config\") pod \"machine-api-operator-5694c8668f-2n5pc\" (UID: 
\"1c3ab1d3-b6c8-46c7-8721-c8671d38ae03\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-2n5pc" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.767408 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6cbdefcc-18eb-4de2-a642-466fb488712f-config\") pod \"route-controller-manager-6576b87f9c-xvwh6\" (UID: \"6cbdefcc-18eb-4de2-a642-466fb488712f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xvwh6" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.767454 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ef2ceed1-1060-4aff-a9f7-573f60a80771-encryption-config\") pod \"apiserver-76f77b778f-b8l9s\" (UID: \"ef2ceed1-1060-4aff-a9f7-573f60a80771\") " pod="openshift-apiserver/apiserver-76f77b778f-b8l9s" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.767490 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpwps\" (UniqueName: \"kubernetes.io/projected/6cbdefcc-18eb-4de2-a642-466fb488712f-kube-api-access-dpwps\") pod \"route-controller-manager-6576b87f9c-xvwh6\" (UID: \"6cbdefcc-18eb-4de2-a642-466fb488712f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xvwh6" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.767589 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1c3ab1d3-b6c8-46c7-8721-c8671d38ae03-images\") pod \"machine-api-operator-5694c8668f-2n5pc\" (UID: \"1c3ab1d3-b6c8-46c7-8721-c8671d38ae03\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-2n5pc" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.767634 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7c6a2377-64ec-4bf1-96a5-89faa8ce01f8-client-ca\") pod \"controller-manager-879f6c89f-lnbpt\" (UID: \"7c6a2377-64ec-4bf1-96a5-89faa8ce01f8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lnbpt" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.767664 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zg5hd\" (UniqueName: \"kubernetes.io/projected/0d993eee-dedd-4a12-bec6-8e63232b007d-kube-api-access-zg5hd\") pod \"apiserver-7bbb656c7d-wm95t\" (UID: \"0d993eee-dedd-4a12-bec6-8e63232b007d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wm95t" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.767706 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4575e882-c51a-4773-b719-bd25e8bbe760-service-ca-bundle\") pod \"authentication-operator-69f744f599-tkq9v\" (UID: \"4575e882-c51a-4773-b719-bd25e8bbe760\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tkq9v" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.767735 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7c6a2377-64ec-4bf1-96a5-89faa8ce01f8-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-lnbpt\" (UID: 
\"7c6a2377-64ec-4bf1-96a5-89faa8ce01f8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lnbpt" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.767765 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zbfm\" (UniqueName: \"kubernetes.io/projected/1c3ab1d3-b6c8-46c7-8721-c8671d38ae03-kube-api-access-4zbfm\") pod \"machine-api-operator-5694c8668f-2n5pc\" (UID: \"1c3ab1d3-b6c8-46c7-8721-c8671d38ae03\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-2n5pc" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.767788 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/ef2ceed1-1060-4aff-a9f7-573f60a80771-audit\") pod \"apiserver-76f77b778f-b8l9s\" (UID: \"ef2ceed1-1060-4aff-a9f7-573f60a80771\") " pod="openshift-apiserver/apiserver-76f77b778f-b8l9s" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.767813 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7c6a2377-64ec-4bf1-96a5-89faa8ce01f8-serving-cert\") pod \"controller-manager-879f6c89f-lnbpt\" (UID: \"7c6a2377-64ec-4bf1-96a5-89faa8ce01f8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lnbpt" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.767870 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnrtx\" (UniqueName: \"kubernetes.io/projected/ef2ceed1-1060-4aff-a9f7-573f60a80771-kube-api-access-lnrtx\") pod \"apiserver-76f77b778f-b8l9s\" (UID: \"ef2ceed1-1060-4aff-a9f7-573f60a80771\") " pod="openshift-apiserver/apiserver-76f77b778f-b8l9s" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.767893 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0d993eee-dedd-4a12-bec6-8e63232b007d-audit-policies\") pod \"apiserver-7bbb656c7d-wm95t\" (UID: \"0d993eee-dedd-4a12-bec6-8e63232b007d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wm95t" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.767919 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4575e882-c51a-4773-b719-bd25e8bbe760-config\") pod \"authentication-operator-69f744f599-tkq9v\" (UID: \"4575e882-c51a-4773-b719-bd25e8bbe760\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tkq9v" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.767939 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hw2t\" (UniqueName: \"kubernetes.io/projected/4575e882-c51a-4773-b719-bd25e8bbe760-kube-api-access-7hw2t\") pod \"authentication-operator-69f744f599-tkq9v\" (UID: \"4575e882-c51a-4773-b719-bd25e8bbe760\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tkq9v" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.767969 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e888d179-f59c-46d4-8c3b-c44aa10a248c-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-bpgdv\" (UID: \"e888d179-f59c-46d4-8c3b-c44aa10a248c\") " 
pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bpgdv" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.767992 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ef2ceed1-1060-4aff-a9f7-573f60a80771-node-pullsecrets\") pod \"apiserver-76f77b778f-b8l9s\" (UID: \"ef2ceed1-1060-4aff-a9f7-573f60a80771\") " pod="openshift-apiserver/apiserver-76f77b778f-b8l9s" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.768021 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0d993eee-dedd-4a12-bec6-8e63232b007d-audit-dir\") pod \"apiserver-7bbb656c7d-wm95t\" (UID: \"0d993eee-dedd-4a12-bec6-8e63232b007d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wm95t" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.768052 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6cbdefcc-18eb-4de2-a642-466fb488712f-client-ca\") pod \"route-controller-manager-6576b87f9c-xvwh6\" (UID: \"6cbdefcc-18eb-4de2-a642-466fb488712f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xvwh6" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.768075 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ef2ceed1-1060-4aff-a9f7-573f60a80771-trusted-ca-bundle\") pod \"apiserver-76f77b778f-b8l9s\" (UID: \"ef2ceed1-1060-4aff-a9f7-573f60a80771\") " pod="openshift-apiserver/apiserver-76f77b778f-b8l9s" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.768113 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4575e882-c51a-4773-b719-bd25e8bbe760-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-tkq9v\" (UID: \"4575e882-c51a-4773-b719-bd25e8bbe760\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tkq9v" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.868659 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99wc5\" (UniqueName: \"kubernetes.io/projected/b7d1fd4b-10f6-40b5-8276-3a2f6a19105b-kube-api-access-99wc5\") pod \"machine-approver-56656f9798-rxhwr\" (UID: \"b7d1fd4b-10f6-40b5-8276-3a2f6a19105b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rxhwr" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.868705 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4575e882-c51a-4773-b719-bd25e8bbe760-serving-cert\") pod \"authentication-operator-69f744f599-tkq9v\" (UID: \"4575e882-c51a-4773-b719-bd25e8bbe760\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tkq9v" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.868729 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ef2ceed1-1060-4aff-a9f7-573f60a80771-audit-dir\") pod \"apiserver-76f77b778f-b8l9s\" (UID: \"ef2ceed1-1060-4aff-a9f7-573f60a80771\") " pod="openshift-apiserver/apiserver-76f77b778f-b8l9s" Jan 26 12:37:51 crc kubenswrapper[4881]: 
I0126 12:37:51.868749 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/810e7137-f09f-4050-bb0d-b15c23c57ed0-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-ncfp9\" (UID: \"810e7137-f09f-4050-bb0d-b15c23c57ed0\") " pod="openshift-authentication/oauth-openshift-558db77b4-ncfp9" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.868773 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/1c3ab1d3-b6c8-46c7-8721-c8671d38ae03-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-2n5pc\" (UID: \"1c3ab1d3-b6c8-46c7-8721-c8671d38ae03\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-2n5pc" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.868792 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef2ceed1-1060-4aff-a9f7-573f60a80771-config\") pod \"apiserver-76f77b778f-b8l9s\" (UID: \"ef2ceed1-1060-4aff-a9f7-573f60a80771\") " pod="openshift-apiserver/apiserver-76f77b778f-b8l9s" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.868813 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6cbdefcc-18eb-4de2-a642-466fb488712f-serving-cert\") pod \"route-controller-manager-6576b87f9c-xvwh6\" (UID: \"6cbdefcc-18eb-4de2-a642-466fb488712f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xvwh6" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.868836 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0d993eee-dedd-4a12-bec6-8e63232b007d-encryption-config\") pod \"apiserver-7bbb656c7d-wm95t\" (UID: \"0d993eee-dedd-4a12-bec6-8e63232b007d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wm95t" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.868858 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e888d179-f59c-46d4-8c3b-c44aa10a248c-config\") pod \"openshift-apiserver-operator-796bbdcf4f-bpgdv\" (UID: \"e888d179-f59c-46d4-8c3b-c44aa10a248c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bpgdv" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.868875 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9nlpz\" (UniqueName: \"kubernetes.io/projected/e888d179-f59c-46d4-8c3b-c44aa10a248c-kube-api-access-9nlpz\") pod \"openshift-apiserver-operator-796bbdcf4f-bpgdv\" (UID: \"e888d179-f59c-46d4-8c3b-c44aa10a248c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bpgdv" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.868893 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ef2ceed1-1060-4aff-a9f7-573f60a80771-serving-cert\") pod \"apiserver-76f77b778f-b8l9s\" (UID: \"ef2ceed1-1060-4aff-a9f7-573f60a80771\") " pod="openshift-apiserver/apiserver-76f77b778f-b8l9s" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.868908 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/0d993eee-dedd-4a12-bec6-8e63232b007d-etcd-client\") pod \"apiserver-7bbb656c7d-wm95t\" (UID: \"0d993eee-dedd-4a12-bec6-8e63232b007d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wm95t" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.868924 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/b7d1fd4b-10f6-40b5-8276-3a2f6a19105b-machine-approver-tls\") pod \"machine-approver-56656f9798-rxhwr\" (UID: \"b7d1fd4b-10f6-40b5-8276-3a2f6a19105b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rxhwr" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.868947 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d993eee-dedd-4a12-bec6-8e63232b007d-serving-cert\") pod \"apiserver-7bbb656c7d-wm95t\" (UID: \"0d993eee-dedd-4a12-bec6-8e63232b007d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wm95t" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.868963 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0d993eee-dedd-4a12-bec6-8e63232b007d-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-wm95t\" (UID: \"0d993eee-dedd-4a12-bec6-8e63232b007d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wm95t" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.868980 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mj47q\" (UniqueName: \"kubernetes.io/projected/7c6a2377-64ec-4bf1-96a5-89faa8ce01f8-kube-api-access-mj47q\") pod \"controller-manager-879f6c89f-lnbpt\" (UID: \"7c6a2377-64ec-4bf1-96a5-89faa8ce01f8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lnbpt" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.868999 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b7d1fd4b-10f6-40b5-8276-3a2f6a19105b-auth-proxy-config\") pod \"machine-approver-56656f9798-rxhwr\" (UID: \"b7d1fd4b-10f6-40b5-8276-3a2f6a19105b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rxhwr" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.869017 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ef2ceed1-1060-4aff-a9f7-573f60a80771-etcd-serving-ca\") pod \"apiserver-76f77b778f-b8l9s\" (UID: \"ef2ceed1-1060-4aff-a9f7-573f60a80771\") " pod="openshift-apiserver/apiserver-76f77b778f-b8l9s" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.869033 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/22e750c9-8cfd-42bd-853e-c814eeb6d274-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-s5qng\" (UID: \"22e750c9-8cfd-42bd-853e-c814eeb6d274\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-s5qng" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.869059 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0d993eee-dedd-4a12-bec6-8e63232b007d-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-wm95t\" (UID: 
\"0d993eee-dedd-4a12-bec6-8e63232b007d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wm95t" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.869076 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/810e7137-f09f-4050-bb0d-b15c23c57ed0-audit-policies\") pod \"oauth-openshift-558db77b4-ncfp9\" (UID: \"810e7137-f09f-4050-bb0d-b15c23c57ed0\") " pod="openshift-authentication/oauth-openshift-558db77b4-ncfp9" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.869093 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/ef2ceed1-1060-4aff-a9f7-573f60a80771-image-import-ca\") pod \"apiserver-76f77b778f-b8l9s\" (UID: \"ef2ceed1-1060-4aff-a9f7-573f60a80771\") " pod="openshift-apiserver/apiserver-76f77b778f-b8l9s" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.869111 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c3ab1d3-b6c8-46c7-8721-c8671d38ae03-config\") pod \"machine-api-operator-5694c8668f-2n5pc\" (UID: \"1c3ab1d3-b6c8-46c7-8721-c8671d38ae03\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-2n5pc" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.869128 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvh7k\" (UniqueName: \"kubernetes.io/projected/22e750c9-8cfd-42bd-853e-c814eeb6d274-kube-api-access-nvh7k\") pod \"cluster-samples-operator-665b6dd947-s5qng\" (UID: \"22e750c9-8cfd-42bd-853e-c814eeb6d274\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-s5qng" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.869145 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/810e7137-f09f-4050-bb0d-b15c23c57ed0-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-ncfp9\" (UID: \"810e7137-f09f-4050-bb0d-b15c23c57ed0\") " pod="openshift-authentication/oauth-openshift-558db77b4-ncfp9" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.869168 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/810e7137-f09f-4050-bb0d-b15c23c57ed0-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-ncfp9\" (UID: \"810e7137-f09f-4050-bb0d-b15c23c57ed0\") " pod="openshift-authentication/oauth-openshift-558db77b4-ncfp9" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.869184 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/810e7137-f09f-4050-bb0d-b15c23c57ed0-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-ncfp9\" (UID: \"810e7137-f09f-4050-bb0d-b15c23c57ed0\") " pod="openshift-authentication/oauth-openshift-558db77b4-ncfp9" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.869214 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6cbdefcc-18eb-4de2-a642-466fb488712f-config\") pod \"route-controller-manager-6576b87f9c-xvwh6\" (UID: \"6cbdefcc-18eb-4de2-a642-466fb488712f\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xvwh6" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.869233 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ef2ceed1-1060-4aff-a9f7-573f60a80771-encryption-config\") pod \"apiserver-76f77b778f-b8l9s\" (UID: \"ef2ceed1-1060-4aff-a9f7-573f60a80771\") " pod="openshift-apiserver/apiserver-76f77b778f-b8l9s" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.869248 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/810e7137-f09f-4050-bb0d-b15c23c57ed0-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-ncfp9\" (UID: \"810e7137-f09f-4050-bb0d-b15c23c57ed0\") " pod="openshift-authentication/oauth-openshift-558db77b4-ncfp9" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.869264 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/810e7137-f09f-4050-bb0d-b15c23c57ed0-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-ncfp9\" (UID: \"810e7137-f09f-4050-bb0d-b15c23c57ed0\") " pod="openshift-authentication/oauth-openshift-558db77b4-ncfp9" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.869287 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dpwps\" (UniqueName: \"kubernetes.io/projected/6cbdefcc-18eb-4de2-a642-466fb488712f-kube-api-access-dpwps\") pod \"route-controller-manager-6576b87f9c-xvwh6\" (UID: \"6cbdefcc-18eb-4de2-a642-466fb488712f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xvwh6" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.869308 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1c3ab1d3-b6c8-46c7-8721-c8671d38ae03-images\") pod \"machine-api-operator-5694c8668f-2n5pc\" (UID: \"1c3ab1d3-b6c8-46c7-8721-c8671d38ae03\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-2n5pc" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.869323 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7c6a2377-64ec-4bf1-96a5-89faa8ce01f8-client-ca\") pod \"controller-manager-879f6c89f-lnbpt\" (UID: \"7c6a2377-64ec-4bf1-96a5-89faa8ce01f8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lnbpt" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.869349 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4575e882-c51a-4773-b719-bd25e8bbe760-service-ca-bundle\") pod \"authentication-operator-69f744f599-tkq9v\" (UID: \"4575e882-c51a-4773-b719-bd25e8bbe760\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tkq9v" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.869366 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zg5hd\" (UniqueName: \"kubernetes.io/projected/0d993eee-dedd-4a12-bec6-8e63232b007d-kube-api-access-zg5hd\") pod \"apiserver-7bbb656c7d-wm95t\" (UID: \"0d993eee-dedd-4a12-bec6-8e63232b007d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wm95t" Jan 26 12:37:51 
crc kubenswrapper[4881]: I0126 12:37:51.869382 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7c6a2377-64ec-4bf1-96a5-89faa8ce01f8-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-lnbpt\" (UID: \"7c6a2377-64ec-4bf1-96a5-89faa8ce01f8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lnbpt" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.869397 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4zbfm\" (UniqueName: \"kubernetes.io/projected/1c3ab1d3-b6c8-46c7-8721-c8671d38ae03-kube-api-access-4zbfm\") pod \"machine-api-operator-5694c8668f-2n5pc\" (UID: \"1c3ab1d3-b6c8-46c7-8721-c8671d38ae03\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-2n5pc" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.869415 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/810e7137-f09f-4050-bb0d-b15c23c57ed0-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-ncfp9\" (UID: \"810e7137-f09f-4050-bb0d-b15c23c57ed0\") " pod="openshift-authentication/oauth-openshift-558db77b4-ncfp9" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.869432 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/ef2ceed1-1060-4aff-a9f7-573f60a80771-audit\") pod \"apiserver-76f77b778f-b8l9s\" (UID: \"ef2ceed1-1060-4aff-a9f7-573f60a80771\") " pod="openshift-apiserver/apiserver-76f77b778f-b8l9s" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.869448 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7c6a2377-64ec-4bf1-96a5-89faa8ce01f8-serving-cert\") pod \"controller-manager-879f6c89f-lnbpt\" (UID: \"7c6a2377-64ec-4bf1-96a5-89faa8ce01f8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lnbpt" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.869467 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lnrtx\" (UniqueName: \"kubernetes.io/projected/ef2ceed1-1060-4aff-a9f7-573f60a80771-kube-api-access-lnrtx\") pod \"apiserver-76f77b778f-b8l9s\" (UID: \"ef2ceed1-1060-4aff-a9f7-573f60a80771\") " pod="openshift-apiserver/apiserver-76f77b778f-b8l9s" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.869829 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0d993eee-dedd-4a12-bec6-8e63232b007d-audit-policies\") pod \"apiserver-7bbb656c7d-wm95t\" (UID: \"0d993eee-dedd-4a12-bec6-8e63232b007d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wm95t" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.869854 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4575e882-c51a-4773-b719-bd25e8bbe760-config\") pod \"authentication-operator-69f744f599-tkq9v\" (UID: \"4575e882-c51a-4773-b719-bd25e8bbe760\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tkq9v" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.869870 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hw2t\" (UniqueName: 
\"kubernetes.io/projected/4575e882-c51a-4773-b719-bd25e8bbe760-kube-api-access-7hw2t\") pod \"authentication-operator-69f744f599-tkq9v\" (UID: \"4575e882-c51a-4773-b719-bd25e8bbe760\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tkq9v" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.869906 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e888d179-f59c-46d4-8c3b-c44aa10a248c-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-bpgdv\" (UID: \"e888d179-f59c-46d4-8c3b-c44aa10a248c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bpgdv" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.869923 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7d1fd4b-10f6-40b5-8276-3a2f6a19105b-config\") pod \"machine-approver-56656f9798-rxhwr\" (UID: \"b7d1fd4b-10f6-40b5-8276-3a2f6a19105b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rxhwr" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.869938 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/810e7137-f09f-4050-bb0d-b15c23c57ed0-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-ncfp9\" (UID: \"810e7137-f09f-4050-bb0d-b15c23c57ed0\") " pod="openshift-authentication/oauth-openshift-558db77b4-ncfp9" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.869956 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/810e7137-f09f-4050-bb0d-b15c23c57ed0-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-ncfp9\" (UID: \"810e7137-f09f-4050-bb0d-b15c23c57ed0\") " pod="openshift-authentication/oauth-openshift-558db77b4-ncfp9" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.869992 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ef2ceed1-1060-4aff-a9f7-573f60a80771-node-pullsecrets\") pod \"apiserver-76f77b778f-b8l9s\" (UID: \"ef2ceed1-1060-4aff-a9f7-573f60a80771\") " pod="openshift-apiserver/apiserver-76f77b778f-b8l9s" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.870010 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0d993eee-dedd-4a12-bec6-8e63232b007d-audit-dir\") pod \"apiserver-7bbb656c7d-wm95t\" (UID: \"0d993eee-dedd-4a12-bec6-8e63232b007d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wm95t" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.870027 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6cbdefcc-18eb-4de2-a642-466fb488712f-client-ca\") pod \"route-controller-manager-6576b87f9c-xvwh6\" (UID: \"6cbdefcc-18eb-4de2-a642-466fb488712f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xvwh6" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.870061 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ef2ceed1-1060-4aff-a9f7-573f60a80771-trusted-ca-bundle\") pod 
\"apiserver-76f77b778f-b8l9s\" (UID: \"ef2ceed1-1060-4aff-a9f7-573f60a80771\") " pod="openshift-apiserver/apiserver-76f77b778f-b8l9s" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.870079 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4575e882-c51a-4773-b719-bd25e8bbe760-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-tkq9v\" (UID: \"4575e882-c51a-4773-b719-bd25e8bbe760\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tkq9v" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.870097 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/810e7137-f09f-4050-bb0d-b15c23c57ed0-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-ncfp9\" (UID: \"810e7137-f09f-4050-bb0d-b15c23c57ed0\") " pod="openshift-authentication/oauth-openshift-558db77b4-ncfp9" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.870114 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhnx4\" (UniqueName: \"kubernetes.io/projected/810e7137-f09f-4050-bb0d-b15c23c57ed0-kube-api-access-jhnx4\") pod \"oauth-openshift-558db77b4-ncfp9\" (UID: \"810e7137-f09f-4050-bb0d-b15c23c57ed0\") " pod="openshift-authentication/oauth-openshift-558db77b4-ncfp9" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.870147 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/810e7137-f09f-4050-bb0d-b15c23c57ed0-audit-dir\") pod \"oauth-openshift-558db77b4-ncfp9\" (UID: \"810e7137-f09f-4050-bb0d-b15c23c57ed0\") " pod="openshift-authentication/oauth-openshift-558db77b4-ncfp9" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.870166 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c6a2377-64ec-4bf1-96a5-89faa8ce01f8-config\") pod \"controller-manager-879f6c89f-lnbpt\" (UID: \"7c6a2377-64ec-4bf1-96a5-89faa8ce01f8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lnbpt" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.870185 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ef2ceed1-1060-4aff-a9f7-573f60a80771-etcd-client\") pod \"apiserver-76f77b778f-b8l9s\" (UID: \"ef2ceed1-1060-4aff-a9f7-573f60a80771\") " pod="openshift-apiserver/apiserver-76f77b778f-b8l9s" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.870219 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/810e7137-f09f-4050-bb0d-b15c23c57ed0-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-ncfp9\" (UID: \"810e7137-f09f-4050-bb0d-b15c23c57ed0\") " pod="openshift-authentication/oauth-openshift-558db77b4-ncfp9" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.870891 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ef2ceed1-1060-4aff-a9f7-573f60a80771-audit-dir\") pod \"apiserver-76f77b778f-b8l9s\" (UID: \"ef2ceed1-1060-4aff-a9f7-573f60a80771\") " pod="openshift-apiserver/apiserver-76f77b778f-b8l9s" Jan 26 
12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.870951 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef2ceed1-1060-4aff-a9f7-573f60a80771-config\") pod \"apiserver-76f77b778f-b8l9s\" (UID: \"ef2ceed1-1060-4aff-a9f7-573f60a80771\") " pod="openshift-apiserver/apiserver-76f77b778f-b8l9s" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.871951 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4575e882-c51a-4773-b719-bd25e8bbe760-service-ca-bundle\") pod \"authentication-operator-69f744f599-tkq9v\" (UID: \"4575e882-c51a-4773-b719-bd25e8bbe760\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tkq9v" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.872177 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4575e882-c51a-4773-b719-bd25e8bbe760-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-tkq9v\" (UID: \"4575e882-c51a-4773-b719-bd25e8bbe760\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tkq9v" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.872235 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7c6a2377-64ec-4bf1-96a5-89faa8ce01f8-client-ca\") pod \"controller-manager-879f6c89f-lnbpt\" (UID: \"7c6a2377-64ec-4bf1-96a5-89faa8ce01f8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lnbpt" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.872389 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1c3ab1d3-b6c8-46c7-8721-c8671d38ae03-images\") pod \"machine-api-operator-5694c8668f-2n5pc\" (UID: \"1c3ab1d3-b6c8-46c7-8721-c8671d38ae03\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-2n5pc" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.873088 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ef2ceed1-1060-4aff-a9f7-573f60a80771-etcd-serving-ca\") pod \"apiserver-76f77b778f-b8l9s\" (UID: \"ef2ceed1-1060-4aff-a9f7-573f60a80771\") " pod="openshift-apiserver/apiserver-76f77b778f-b8l9s" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.873120 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6cbdefcc-18eb-4de2-a642-466fb488712f-client-ca\") pod \"route-controller-manager-6576b87f9c-xvwh6\" (UID: \"6cbdefcc-18eb-4de2-a642-466fb488712f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xvwh6" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.873328 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0d993eee-dedd-4a12-bec6-8e63232b007d-audit-policies\") pod \"apiserver-7bbb656c7d-wm95t\" (UID: \"0d993eee-dedd-4a12-bec6-8e63232b007d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wm95t" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.873741 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0d993eee-dedd-4a12-bec6-8e63232b007d-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-wm95t\" (UID: 
\"0d993eee-dedd-4a12-bec6-8e63232b007d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wm95t" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.873887 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e888d179-f59c-46d4-8c3b-c44aa10a248c-config\") pod \"openshift-apiserver-operator-796bbdcf4f-bpgdv\" (UID: \"e888d179-f59c-46d4-8c3b-c44aa10a248c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bpgdv" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.873988 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0d993eee-dedd-4a12-bec6-8e63232b007d-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-wm95t\" (UID: \"0d993eee-dedd-4a12-bec6-8e63232b007d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wm95t" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.875297 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6cbdefcc-18eb-4de2-a642-466fb488712f-config\") pod \"route-controller-manager-6576b87f9c-xvwh6\" (UID: \"6cbdefcc-18eb-4de2-a642-466fb488712f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xvwh6" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.876048 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c3ab1d3-b6c8-46c7-8721-c8671d38ae03-config\") pod \"machine-api-operator-5694c8668f-2n5pc\" (UID: \"1c3ab1d3-b6c8-46c7-8721-c8671d38ae03\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-2n5pc" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.876096 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0d993eee-dedd-4a12-bec6-8e63232b007d-audit-dir\") pod \"apiserver-7bbb656c7d-wm95t\" (UID: \"0d993eee-dedd-4a12-bec6-8e63232b007d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wm95t" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.878811 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ef2ceed1-1060-4aff-a9f7-573f60a80771-node-pullsecrets\") pod \"apiserver-76f77b778f-b8l9s\" (UID: \"ef2ceed1-1060-4aff-a9f7-573f60a80771\") " pod="openshift-apiserver/apiserver-76f77b778f-b8l9s" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.880102 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7c6a2377-64ec-4bf1-96a5-89faa8ce01f8-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-lnbpt\" (UID: \"7c6a2377-64ec-4bf1-96a5-89faa8ce01f8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lnbpt" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.881227 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c6a2377-64ec-4bf1-96a5-89faa8ce01f8-config\") pod \"controller-manager-879f6c89f-lnbpt\" (UID: \"7c6a2377-64ec-4bf1-96a5-89faa8ce01f8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lnbpt" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.882783 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4575e882-c51a-4773-b719-bd25e8bbe760-config\") pod 
\"authentication-operator-69f744f599-tkq9v\" (UID: \"4575e882-c51a-4773-b719-bd25e8bbe760\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tkq9v" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.882928 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ef2ceed1-1060-4aff-a9f7-573f60a80771-etcd-client\") pod \"apiserver-76f77b778f-b8l9s\" (UID: \"ef2ceed1-1060-4aff-a9f7-573f60a80771\") " pod="openshift-apiserver/apiserver-76f77b778f-b8l9s" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.883038 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e888d179-f59c-46d4-8c3b-c44aa10a248c-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-bpgdv\" (UID: \"e888d179-f59c-46d4-8c3b-c44aa10a248c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bpgdv" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.883591 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6cbdefcc-18eb-4de2-a642-466fb488712f-serving-cert\") pod \"route-controller-manager-6576b87f9c-xvwh6\" (UID: \"6cbdefcc-18eb-4de2-a642-466fb488712f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xvwh6" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.885032 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7c6a2377-64ec-4bf1-96a5-89faa8ce01f8-serving-cert\") pod \"controller-manager-879f6c89f-lnbpt\" (UID: \"7c6a2377-64ec-4bf1-96a5-89faa8ce01f8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lnbpt" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.885778 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/ef2ceed1-1060-4aff-a9f7-573f60a80771-audit\") pod \"apiserver-76f77b778f-b8l9s\" (UID: \"ef2ceed1-1060-4aff-a9f7-573f60a80771\") " pod="openshift-apiserver/apiserver-76f77b778f-b8l9s" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.892613 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/ef2ceed1-1060-4aff-a9f7-573f60a80771-image-import-ca\") pod \"apiserver-76f77b778f-b8l9s\" (UID: \"ef2ceed1-1060-4aff-a9f7-573f60a80771\") " pod="openshift-apiserver/apiserver-76f77b778f-b8l9s" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.892980 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ef2ceed1-1060-4aff-a9f7-573f60a80771-trusted-ca-bundle\") pod \"apiserver-76f77b778f-b8l9s\" (UID: \"ef2ceed1-1060-4aff-a9f7-573f60a80771\") " pod="openshift-apiserver/apiserver-76f77b778f-b8l9s" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.898260 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0d993eee-dedd-4a12-bec6-8e63232b007d-encryption-config\") pod \"apiserver-7bbb656c7d-wm95t\" (UID: \"0d993eee-dedd-4a12-bec6-8e63232b007d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wm95t" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.899986 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/ef2ceed1-1060-4aff-a9f7-573f60a80771-serving-cert\") pod \"apiserver-76f77b778f-b8l9s\" (UID: \"ef2ceed1-1060-4aff-a9f7-573f60a80771\") " pod="openshift-apiserver/apiserver-76f77b778f-b8l9s" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.904189 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/1c3ab1d3-b6c8-46c7-8721-c8671d38ae03-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-2n5pc\" (UID: \"1c3ab1d3-b6c8-46c7-8721-c8671d38ae03\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-2n5pc" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.904537 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4575e882-c51a-4773-b719-bd25e8bbe760-serving-cert\") pod \"authentication-operator-69f744f599-tkq9v\" (UID: \"4575e882-c51a-4773-b719-bd25e8bbe760\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tkq9v" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.904809 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ef2ceed1-1060-4aff-a9f7-573f60a80771-encryption-config\") pod \"apiserver-76f77b778f-b8l9s\" (UID: \"ef2ceed1-1060-4aff-a9f7-573f60a80771\") " pod="openshift-apiserver/apiserver-76f77b778f-b8l9s" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.908166 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mj47q\" (UniqueName: \"kubernetes.io/projected/7c6a2377-64ec-4bf1-96a5-89faa8ce01f8-kube-api-access-mj47q\") pod \"controller-manager-879f6c89f-lnbpt\" (UID: \"7c6a2377-64ec-4bf1-96a5-89faa8ce01f8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lnbpt" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.908339 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d993eee-dedd-4a12-bec6-8e63232b007d-serving-cert\") pod \"apiserver-7bbb656c7d-wm95t\" (UID: \"0d993eee-dedd-4a12-bec6-8e63232b007d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wm95t" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.909418 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9nlpz\" (UniqueName: \"kubernetes.io/projected/e888d179-f59c-46d4-8c3b-c44aa10a248c-kube-api-access-9nlpz\") pod \"openshift-apiserver-operator-796bbdcf4f-bpgdv\" (UID: \"e888d179-f59c-46d4-8c3b-c44aa10a248c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bpgdv" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.912336 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hw2t\" (UniqueName: \"kubernetes.io/projected/4575e882-c51a-4773-b719-bd25e8bbe760-kube-api-access-7hw2t\") pod \"authentication-operator-69f744f599-tkq9v\" (UID: \"4575e882-c51a-4773-b719-bd25e8bbe760\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tkq9v" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.912950 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0d993eee-dedd-4a12-bec6-8e63232b007d-etcd-client\") pod \"apiserver-7bbb656c7d-wm95t\" (UID: \"0d993eee-dedd-4a12-bec6-8e63232b007d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wm95t" Jan 26 
12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.925556 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-b5l77"] Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.926340 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-b5l77" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.938197 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnrtx\" (UniqueName: \"kubernetes.io/projected/ef2ceed1-1060-4aff-a9f7-573f60a80771-kube-api-access-lnrtx\") pod \"apiserver-76f77b778f-b8l9s\" (UID: \"ef2ceed1-1060-4aff-a9f7-573f60a80771\") " pod="openshift-apiserver/apiserver-76f77b778f-b8l9s" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.938590 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.939208 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.939298 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dpwps\" (UniqueName: \"kubernetes.io/projected/6cbdefcc-18eb-4de2-a642-466fb488712f-kube-api-access-dpwps\") pod \"route-controller-manager-6576b87f9c-xvwh6\" (UID: \"6cbdefcc-18eb-4de2-a642-466fb488712f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xvwh6" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.939330 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.940669 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-gtzx7"] Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.941849 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cfwgh"] Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.942418 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cfwgh" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.942681 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gtzx7" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.942684 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-68j9g"] Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.943308 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-68j9g" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.945360 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zg5hd\" (UniqueName: \"kubernetes.io/projected/0d993eee-dedd-4a12-bec6-8e63232b007d-kube-api-access-zg5hd\") pod \"apiserver-7bbb656c7d-wm95t\" (UID: \"0d993eee-dedd-4a12-bec6-8e63232b007d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wm95t" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.954198 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.954297 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4zbfm\" (UniqueName: \"kubernetes.io/projected/1c3ab1d3-b6c8-46c7-8721-c8671d38ae03-kube-api-access-4zbfm\") pod \"machine-api-operator-5694c8668f-2n5pc\" (UID: \"1c3ab1d3-b6c8-46c7-8721-c8671d38ae03\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-2n5pc" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.954414 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.954552 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.954634 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.954674 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.955827 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pv5pl"] Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.956328 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pv5pl" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.961593 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-cwb4s"] Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.962268 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-cwb4s" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.965908 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.966207 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.966500 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.968905 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-2fsrg"] Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.970353 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-2fsrg" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.972029 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/810e7137-f09f-4050-bb0d-b15c23c57ed0-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-ncfp9\" (UID: \"810e7137-f09f-4050-bb0d-b15c23c57ed0\") " pod="openshift-authentication/oauth-openshift-558db77b4-ncfp9" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.972085 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99wc5\" (UniqueName: \"kubernetes.io/projected/b7d1fd4b-10f6-40b5-8276-3a2f6a19105b-kube-api-access-99wc5\") pod \"machine-approver-56656f9798-rxhwr\" (UID: \"b7d1fd4b-10f6-40b5-8276-3a2f6a19105b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rxhwr" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.972108 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/810e7137-f09f-4050-bb0d-b15c23c57ed0-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-ncfp9\" (UID: \"810e7137-f09f-4050-bb0d-b15c23c57ed0\") " pod="openshift-authentication/oauth-openshift-558db77b4-ncfp9" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.972131 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/b7d1fd4b-10f6-40b5-8276-3a2f6a19105b-machine-approver-tls\") pod \"machine-approver-56656f9798-rxhwr\" (UID: \"b7d1fd4b-10f6-40b5-8276-3a2f6a19105b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rxhwr" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.972147 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b7d1fd4b-10f6-40b5-8276-3a2f6a19105b-auth-proxy-config\") pod \"machine-approver-56656f9798-rxhwr\" (UID: \"b7d1fd4b-10f6-40b5-8276-3a2f6a19105b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rxhwr" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.972170 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/22e750c9-8cfd-42bd-853e-c814eeb6d274-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-s5qng\" (UID: \"22e750c9-8cfd-42bd-853e-c814eeb6d274\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-s5qng" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.972188 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/810e7137-f09f-4050-bb0d-b15c23c57ed0-audit-policies\") pod \"oauth-openshift-558db77b4-ncfp9\" (UID: \"810e7137-f09f-4050-bb0d-b15c23c57ed0\") " pod="openshift-authentication/oauth-openshift-558db77b4-ncfp9" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.972210 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nvh7k\" (UniqueName: \"kubernetes.io/projected/22e750c9-8cfd-42bd-853e-c814eeb6d274-kube-api-access-nvh7k\") pod \"cluster-samples-operator-665b6dd947-s5qng\" (UID: \"22e750c9-8cfd-42bd-853e-c814eeb6d274\") " 
pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-s5qng" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.972227 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/810e7137-f09f-4050-bb0d-b15c23c57ed0-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-ncfp9\" (UID: \"810e7137-f09f-4050-bb0d-b15c23c57ed0\") " pod="openshift-authentication/oauth-openshift-558db77b4-ncfp9" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.972253 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/810e7137-f09f-4050-bb0d-b15c23c57ed0-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-ncfp9\" (UID: \"810e7137-f09f-4050-bb0d-b15c23c57ed0\") " pod="openshift-authentication/oauth-openshift-558db77b4-ncfp9" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.972269 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/810e7137-f09f-4050-bb0d-b15c23c57ed0-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-ncfp9\" (UID: \"810e7137-f09f-4050-bb0d-b15c23c57ed0\") " pod="openshift-authentication/oauth-openshift-558db77b4-ncfp9" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.972295 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/810e7137-f09f-4050-bb0d-b15c23c57ed0-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-ncfp9\" (UID: \"810e7137-f09f-4050-bb0d-b15c23c57ed0\") " pod="openshift-authentication/oauth-openshift-558db77b4-ncfp9" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.972310 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/810e7137-f09f-4050-bb0d-b15c23c57ed0-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-ncfp9\" (UID: \"810e7137-f09f-4050-bb0d-b15c23c57ed0\") " pod="openshift-authentication/oauth-openshift-558db77b4-ncfp9" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.972336 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/810e7137-f09f-4050-bb0d-b15c23c57ed0-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-ncfp9\" (UID: \"810e7137-f09f-4050-bb0d-b15c23c57ed0\") " pod="openshift-authentication/oauth-openshift-558db77b4-ncfp9" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.972363 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7d1fd4b-10f6-40b5-8276-3a2f6a19105b-config\") pod \"machine-approver-56656f9798-rxhwr\" (UID: \"b7d1fd4b-10f6-40b5-8276-3a2f6a19105b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rxhwr" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.972382 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/810e7137-f09f-4050-bb0d-b15c23c57ed0-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-ncfp9\" (UID: \"810e7137-f09f-4050-bb0d-b15c23c57ed0\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-ncfp9" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.972400 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/810e7137-f09f-4050-bb0d-b15c23c57ed0-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-ncfp9\" (UID: \"810e7137-f09f-4050-bb0d-b15c23c57ed0\") " pod="openshift-authentication/oauth-openshift-558db77b4-ncfp9" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.972418 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/810e7137-f09f-4050-bb0d-b15c23c57ed0-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-ncfp9\" (UID: \"810e7137-f09f-4050-bb0d-b15c23c57ed0\") " pod="openshift-authentication/oauth-openshift-558db77b4-ncfp9" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.972434 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhnx4\" (UniqueName: \"kubernetes.io/projected/810e7137-f09f-4050-bb0d-b15c23c57ed0-kube-api-access-jhnx4\") pod \"oauth-openshift-558db77b4-ncfp9\" (UID: \"810e7137-f09f-4050-bb0d-b15c23c57ed0\") " pod="openshift-authentication/oauth-openshift-558db77b4-ncfp9" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.972453 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/810e7137-f09f-4050-bb0d-b15c23c57ed0-audit-dir\") pod \"oauth-openshift-558db77b4-ncfp9\" (UID: \"810e7137-f09f-4050-bb0d-b15c23c57ed0\") " pod="openshift-authentication/oauth-openshift-558db77b4-ncfp9" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.972548 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/810e7137-f09f-4050-bb0d-b15c23c57ed0-audit-dir\") pod \"oauth-openshift-558db77b4-ncfp9\" (UID: \"810e7137-f09f-4050-bb0d-b15c23c57ed0\") " pod="openshift-authentication/oauth-openshift-558db77b4-ncfp9" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.975440 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7d1fd4b-10f6-40b5-8276-3a2f6a19105b-config\") pod \"machine-approver-56656f9798-rxhwr\" (UID: \"b7d1fd4b-10f6-40b5-8276-3a2f6a19105b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rxhwr" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.975864 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-c4vbw"] Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.976488 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mgcg9"] Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.976864 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mgcg9" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.976884 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/810e7137-f09f-4050-bb0d-b15c23c57ed0-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-ncfp9\" (UID: \"810e7137-f09f-4050-bb0d-b15c23c57ed0\") " pod="openshift-authentication/oauth-openshift-558db77b4-ncfp9" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.977104 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-c4vbw" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.979154 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.979792 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/810e7137-f09f-4050-bb0d-b15c23c57ed0-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-ncfp9\" (UID: \"810e7137-f09f-4050-bb0d-b15c23c57ed0\") " pod="openshift-authentication/oauth-openshift-558db77b4-ncfp9" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.979883 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b7d1fd4b-10f6-40b5-8276-3a2f6a19105b-auth-proxy-config\") pod \"machine-approver-56656f9798-rxhwr\" (UID: \"b7d1fd4b-10f6-40b5-8276-3a2f6a19105b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rxhwr" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.981190 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/810e7137-f09f-4050-bb0d-b15c23c57ed0-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-ncfp9\" (UID: \"810e7137-f09f-4050-bb0d-b15c23c57ed0\") " pod="openshift-authentication/oauth-openshift-558db77b4-ncfp9" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.984397 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-9cpqm"] Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.985630 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9cpqm" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.985679 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-fsrl8"] Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.986245 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-fsrl8" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.986781 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.987602 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/810e7137-f09f-4050-bb0d-b15c23c57ed0-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-ncfp9\" (UID: \"810e7137-f09f-4050-bb0d-b15c23c57ed0\") " pod="openshift-authentication/oauth-openshift-558db77b4-ncfp9" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.987786 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/810e7137-f09f-4050-bb0d-b15c23c57ed0-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-ncfp9\" (UID: \"810e7137-f09f-4050-bb0d-b15c23c57ed0\") " pod="openshift-authentication/oauth-openshift-558db77b4-ncfp9" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.991555 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-lnbpt" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.991611 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/810e7137-f09f-4050-bb0d-b15c23c57ed0-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-ncfp9\" (UID: \"810e7137-f09f-4050-bb0d-b15c23c57ed0\") " pod="openshift-authentication/oauth-openshift-558db77b4-ncfp9" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.994694 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/810e7137-f09f-4050-bb0d-b15c23c57ed0-audit-policies\") pod \"oauth-openshift-558db77b4-ncfp9\" (UID: \"810e7137-f09f-4050-bb0d-b15c23c57ed0\") " pod="openshift-authentication/oauth-openshift-558db77b4-ncfp9" Jan 26 12:37:51 crc kubenswrapper[4881]: I0126 12:37:51.996340 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/810e7137-f09f-4050-bb0d-b15c23c57ed0-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-ncfp9\" (UID: \"810e7137-f09f-4050-bb0d-b15c23c57ed0\") " pod="openshift-authentication/oauth-openshift-558db77b4-ncfp9" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:51.999956 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/810e7137-f09f-4050-bb0d-b15c23c57ed0-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-ncfp9\" (UID: \"810e7137-f09f-4050-bb0d-b15c23c57ed0\") " pod="openshift-authentication/oauth-openshift-558db77b4-ncfp9" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.001531 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/810e7137-f09f-4050-bb0d-b15c23c57ed0-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-ncfp9\" (UID: \"810e7137-f09f-4050-bb0d-b15c23c57ed0\") " pod="openshift-authentication/oauth-openshift-558db77b4-ncfp9" Jan 26 12:37:52 crc 
kubenswrapper[4881]: I0126 12:37:52.002272 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.003068 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/810e7137-f09f-4050-bb0d-b15c23c57ed0-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-ncfp9\" (UID: \"810e7137-f09f-4050-bb0d-b15c23c57ed0\") " pod="openshift-authentication/oauth-openshift-558db77b4-ncfp9" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.005995 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/22e750c9-8cfd-42bd-853e-c814eeb6d274-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-s5qng\" (UID: \"22e750c9-8cfd-42bd-853e-c814eeb6d274\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-s5qng" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.006435 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/810e7137-f09f-4050-bb0d-b15c23c57ed0-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-ncfp9\" (UID: \"810e7137-f09f-4050-bb0d-b15c23c57ed0\") " pod="openshift-authentication/oauth-openshift-558db77b4-ncfp9" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.013606 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-2xt9h"] Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.014247 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gmbhc"] Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.014614 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gmbhc" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.014855 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-2xt9h" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.024450 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-j7b4j"] Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.025169 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/b7d1fd4b-10f6-40b5-8276-3a2f6a19105b-machine-approver-tls\") pod \"machine-approver-56656f9798-rxhwr\" (UID: \"b7d1fd4b-10f6-40b5-8276-3a2f6a19105b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rxhwr" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.026082 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xvwh6" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.033171 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-tkq9v" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.064849 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-dr6gf"] Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.065194 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.065646 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bpgdv" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.065647 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-j7b4j" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.065772 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-vw8v9"] Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.066187 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-dr6gf" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.066455 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-vnnxb"] Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.066973 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-27qqg"] Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.067363 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vw8v9" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.067424 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-nlqwj"] Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.067730 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.067952 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-vnnxb" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.068784 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-27qqg" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.079938 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-b8l9s" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.082304 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.085108 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7c68c037-053a-43f1-a2d6-0a4387610916-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-pv5pl\" (UID: \"7c68c037-053a-43f1-a2d6-0a4387610916\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pv5pl" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.085171 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/07671cb3-5957-4f3f-9189-4e3e05d7c090-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-fsrl8\" (UID: \"07671cb3-5957-4f3f-9189-4e3e05d7c090\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-fsrl8" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.085197 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/07671cb3-5957-4f3f-9189-4e3e05d7c090-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-fsrl8\" (UID: \"07671cb3-5957-4f3f-9189-4e3e05d7c090\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-fsrl8" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.085230 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xl7f\" (UniqueName: \"kubernetes.io/projected/de07a342-44f0-45cc-a461-5fd5a70e34d9-kube-api-access-9xl7f\") pod \"console-f9d7485db-cwb4s\" (UID: \"de07a342-44f0-45cc-a461-5fd5a70e34d9\") " pod="openshift-console/console-f9d7485db-cwb4s" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.085257 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c68c037-053a-43f1-a2d6-0a4387610916-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-pv5pl\" (UID: \"7c68c037-053a-43f1-a2d6-0a4387610916\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pv5pl" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.085286 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1ea8ad00-fb4c-4f75-a1e6-1ea2bd64c8da-trusted-ca\") pod \"console-operator-58897d9998-2fsrg\" (UID: \"1ea8ad00-fb4c-4f75-a1e6-1ea2bd64c8da\") " pod="openshift-console-operator/console-operator-58897d9998-2fsrg" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.085310 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07671cb3-5957-4f3f-9189-4e3e05d7c090-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-fsrl8\" (UID: \"07671cb3-5957-4f3f-9189-4e3e05d7c090\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-fsrl8" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.085331 4881 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/de07a342-44f0-45cc-a461-5fd5a70e34d9-oauth-serving-cert\") pod \"console-f9d7485db-cwb4s\" (UID: \"de07a342-44f0-45cc-a461-5fd5a70e34d9\") " pod="openshift-console/console-f9d7485db-cwb4s" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.085351 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0a192f0e-fe6b-434b-a63e-7a61fcd9ca2f-serving-cert\") pod \"openshift-config-operator-7777fb866f-gtzx7\" (UID: \"0a192f0e-fe6b-434b-a63e-7a61fcd9ca2f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-gtzx7" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.085383 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ql6j\" (UniqueName: \"kubernetes.io/projected/993b16a8-4172-41dc-90fc-7d420d0a12f2-kube-api-access-4ql6j\") pod \"ingress-operator-5b745b69d9-9cpqm\" (UID: \"993b16a8-4172-41dc-90fc-7d420d0a12f2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9cpqm" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.085407 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/de07a342-44f0-45cc-a461-5fd5a70e34d9-service-ca\") pod \"console-f9d7485db-cwb4s\" (UID: \"de07a342-44f0-45cc-a461-5fd5a70e34d9\") " pod="openshift-console/console-f9d7485db-cwb4s" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.085430 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h676w\" (UniqueName: \"kubernetes.io/projected/1ea8ad00-fb4c-4f75-a1e6-1ea2bd64c8da-kube-api-access-h676w\") pod \"console-operator-58897d9998-2fsrg\" (UID: \"1ea8ad00-fb4c-4f75-a1e6-1ea2bd64c8da\") " pod="openshift-console-operator/console-operator-58897d9998-2fsrg" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.085452 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtx89\" (UniqueName: \"kubernetes.io/projected/29d30a67-fb25-4c95-8f8a-a77b27cd695f-kube-api-access-xtx89\") pod \"kube-storage-version-migrator-operator-b67b599dd-gmbhc\" (UID: \"29d30a67-fb25-4c95-8f8a-a77b27cd695f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gmbhc" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.085473 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/de07a342-44f0-45cc-a461-5fd5a70e34d9-console-config\") pod \"console-f9d7485db-cwb4s\" (UID: \"de07a342-44f0-45cc-a461-5fd5a70e34d9\") " pod="openshift-console/console-f9d7485db-cwb4s" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.085504 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29d30a67-fb25-4c95-8f8a-a77b27cd695f-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-gmbhc\" (UID: \"29d30a67-fb25-4c95-8f8a-a77b27cd695f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gmbhc" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 
12:37:52.085586 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxg2z\" (UniqueName: \"kubernetes.io/projected/c72452bc-3cf5-4c8a-a133-2789adbaa573-kube-api-access-vxg2z\") pod \"cluster-image-registry-operator-dc59b4c8b-cfwgh\" (UID: \"c72452bc-3cf5-4c8a-a133-2789adbaa573\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cfwgh" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.085615 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnl6p\" (UniqueName: \"kubernetes.io/projected/7c68c037-053a-43f1-a2d6-0a4387610916-kube-api-access-mnl6p\") pod \"openshift-controller-manager-operator-756b6f6bc6-pv5pl\" (UID: \"7c68c037-053a-43f1-a2d6-0a4387610916\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pv5pl" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.085640 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/0a192f0e-fe6b-434b-a63e-7a61fcd9ca2f-available-featuregates\") pod \"openshift-config-operator-7777fb866f-gtzx7\" (UID: \"0a192f0e-fe6b-434b-a63e-7a61fcd9ca2f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-gtzx7" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.085676 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bdc32e8d-e1c8-4415-9a39-6ddbc4cbca4c-metrics-certs\") pod \"router-default-5444994796-c4vbw\" (UID: \"bdc32e8d-e1c8-4415-9a39-6ddbc4cbca4c\") " pod="openshift-ingress/router-default-5444994796-c4vbw" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.085704 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9bdk\" (UniqueName: \"kubernetes.io/projected/30c09cc7-c747-494b-80ac-e4a780f65fb6-kube-api-access-r9bdk\") pod \"downloads-7954f5f757-68j9g\" (UID: \"30c09cc7-c747-494b-80ac-e4a780f65fb6\") " pod="openshift-console/downloads-7954f5f757-68j9g" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.085725 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/993b16a8-4172-41dc-90fc-7d420d0a12f2-bound-sa-token\") pod \"ingress-operator-5b745b69d9-9cpqm\" (UID: \"993b16a8-4172-41dc-90fc-7d420d0a12f2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9cpqm" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.085747 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m47fz\" (UniqueName: \"kubernetes.io/projected/21710e70-1118-4472-91e1-2c7c66e9fe75-kube-api-access-m47fz\") pod \"dns-default-b5l77\" (UID: \"21710e70-1118-4472-91e1-2c7c66e9fe75\") " pod="openshift-dns/dns-default-b5l77" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.085771 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnw54\" (UniqueName: \"kubernetes.io/projected/96eca703-29ae-4ec1-a961-e3303788da4f-kube-api-access-jnw54\") pod \"dns-operator-744455d44c-2xt9h\" (UID: \"96eca703-29ae-4ec1-a961-e3303788da4f\") " pod="openshift-dns-operator/dns-operator-744455d44c-2xt9h" Jan 26 12:37:52 crc 
kubenswrapper[4881]: I0126 12:37:52.085791 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/bdc32e8d-e1c8-4415-9a39-6ddbc4cbca4c-default-certificate\") pod \"router-default-5444994796-c4vbw\" (UID: \"bdc32e8d-e1c8-4415-9a39-6ddbc4cbca4c\") " pod="openshift-ingress/router-default-5444994796-c4vbw" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.085814 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bc3ce0ab-d53f-4504-b1be-09cd3629c5ae-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-mgcg9\" (UID: \"bc3ce0ab-d53f-4504-b1be-09cd3629c5ae\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mgcg9" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.085835 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29d30a67-fb25-4c95-8f8a-a77b27cd695f-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-gmbhc\" (UID: \"29d30a67-fb25-4c95-8f8a-a77b27cd695f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gmbhc" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.085870 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/993b16a8-4172-41dc-90fc-7d420d0a12f2-metrics-tls\") pod \"ingress-operator-5b745b69d9-9cpqm\" (UID: \"993b16a8-4172-41dc-90fc-7d420d0a12f2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9cpqm" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.085894 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c72452bc-3cf5-4c8a-a133-2789adbaa573-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-cfwgh\" (UID: \"c72452bc-3cf5-4c8a-a133-2789adbaa573\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cfwgh" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.085917 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/993b16a8-4172-41dc-90fc-7d420d0a12f2-trusted-ca\") pod \"ingress-operator-5b745b69d9-9cpqm\" (UID: \"993b16a8-4172-41dc-90fc-7d420d0a12f2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9cpqm" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.085938 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/de07a342-44f0-45cc-a461-5fd5a70e34d9-console-serving-cert\") pod \"console-f9d7485db-cwb4s\" (UID: \"de07a342-44f0-45cc-a461-5fd5a70e34d9\") " pod="openshift-console/console-f9d7485db-cwb4s" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.085959 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/de07a342-44f0-45cc-a461-5fd5a70e34d9-trusted-ca-bundle\") pod \"console-f9d7485db-cwb4s\" (UID: \"de07a342-44f0-45cc-a461-5fd5a70e34d9\") " pod="openshift-console/console-f9d7485db-cwb4s" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.085983 4881 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1ea8ad00-fb4c-4f75-a1e6-1ea2bd64c8da-serving-cert\") pod \"console-operator-58897d9998-2fsrg\" (UID: \"1ea8ad00-fb4c-4f75-a1e6-1ea2bd64c8da\") " pod="openshift-console-operator/console-operator-58897d9998-2fsrg" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.086004 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/bdc32e8d-e1c8-4415-9a39-6ddbc4cbca4c-stats-auth\") pod \"router-default-5444994796-c4vbw\" (UID: \"bdc32e8d-e1c8-4415-9a39-6ddbc4cbca4c\") " pod="openshift-ingress/router-default-5444994796-c4vbw" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.086028 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc3ce0ab-d53f-4504-b1be-09cd3629c5ae-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-mgcg9\" (UID: \"bc3ce0ab-d53f-4504-b1be-09cd3629c5ae\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mgcg9" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.086049 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ea8ad00-fb4c-4f75-a1e6-1ea2bd64c8da-config\") pod \"console-operator-58897d9998-2fsrg\" (UID: \"1ea8ad00-fb4c-4f75-a1e6-1ea2bd64c8da\") " pod="openshift-console-operator/console-operator-58897d9998-2fsrg" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.086070 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkhmn\" (UniqueName: \"kubernetes.io/projected/bdc32e8d-e1c8-4415-9a39-6ddbc4cbca4c-kube-api-access-dkhmn\") pod \"router-default-5444994796-c4vbw\" (UID: \"bdc32e8d-e1c8-4415-9a39-6ddbc4cbca4c\") " pod="openshift-ingress/router-default-5444994796-c4vbw" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.086104 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqkq8\" (UniqueName: \"kubernetes.io/projected/0a192f0e-fe6b-434b-a63e-7a61fcd9ca2f-kube-api-access-nqkq8\") pod \"openshift-config-operator-7777fb866f-gtzx7\" (UID: \"0a192f0e-fe6b-434b-a63e-7a61fcd9ca2f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-gtzx7" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.086125 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/21710e70-1118-4472-91e1-2c7c66e9fe75-metrics-tls\") pod \"dns-default-b5l77\" (UID: \"21710e70-1118-4472-91e1-2c7c66e9fe75\") " pod="openshift-dns/dns-default-b5l77" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.086148 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bdc32e8d-e1c8-4415-9a39-6ddbc4cbca4c-service-ca-bundle\") pod \"router-default-5444994796-c4vbw\" (UID: \"bdc32e8d-e1c8-4415-9a39-6ddbc4cbca4c\") " pod="openshift-ingress/router-default-5444994796-c4vbw" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.086170 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/bc3ce0ab-d53f-4504-b1be-09cd3629c5ae-config\") pod \"kube-apiserver-operator-766d6c64bb-mgcg9\" (UID: \"bc3ce0ab-d53f-4504-b1be-09cd3629c5ae\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mgcg9" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.087889 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/de07a342-44f0-45cc-a461-5fd5a70e34d9-console-oauth-config\") pod \"console-f9d7485db-cwb4s\" (UID: \"de07a342-44f0-45cc-a461-5fd5a70e34d9\") " pod="openshift-console/console-f9d7485db-cwb4s" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.087996 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96eca703-29ae-4ec1-a961-e3303788da4f-metrics-tls\") pod \"dns-operator-744455d44c-2xt9h\" (UID: \"96eca703-29ae-4ec1-a961-e3303788da4f\") " pod="openshift-dns-operator/dns-operator-744455d44c-2xt9h" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.088024 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/c72452bc-3cf5-4c8a-a133-2789adbaa573-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-cfwgh\" (UID: \"c72452bc-3cf5-4c8a-a133-2789adbaa573\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cfwgh" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.088083 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c72452bc-3cf5-4c8a-a133-2789adbaa573-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-cfwgh\" (UID: \"c72452bc-3cf5-4c8a-a133-2789adbaa573\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cfwgh" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.088111 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/21710e70-1118-4472-91e1-2c7c66e9fe75-config-volume\") pod \"dns-default-b5l77\" (UID: \"21710e70-1118-4472-91e1-2c7c66e9fe75\") " pod="openshift-dns/dns-default-b5l77" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.090099 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-nlqwj" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.090683 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wm95t" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.096124 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-2n5pc" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.098851 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.101256 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.101405 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cfwgh"] Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.101462 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-nqv96"] Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.102250 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2vlbm"] Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.102681 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-drv9q"] Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.102850 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nqv96" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.103046 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2vlbm" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.103073 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-tqtg4"] Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.103199 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-drv9q" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.103960 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xbdxs"] Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.104318 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-t87hc"] Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.104696 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-tqtg4" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.104711 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-gtzx7"] Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.104768 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-t87hc" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.104773 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-dmhnk"] Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.105278 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xbdxs" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.105813 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-dmhnk" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.107102 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-k8g4d"] Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.108216 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-k8g4d" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.108676 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-p7bh2"] Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.109176 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-p7bh2" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.115847 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dk6pz"] Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.117885 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.118889 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-972wv"] Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.119932 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490510-6h9v5"] Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.120357 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490510-6h9v5" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.119019 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dk6pz" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.120785 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-972wv" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.132987 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gmbhc"] Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.141374 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-68j9g"] Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.142487 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-b5l77"] Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.142748 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.156423 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dklx7"] Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.158257 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.158433 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-vw8v9"] Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.158860 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dklx7" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.159338 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-2fsrg"] Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.169686 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-fsrl8"] Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.170751 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-j7b4j"] Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.172141 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pv5pl"] Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.184460 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mgcg9"] Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.188901 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc3ce0ab-d53f-4504-b1be-09cd3629c5ae-config\") pod \"kube-apiserver-operator-766d6c64bb-mgcg9\" (UID: \"bc3ce0ab-d53f-4504-b1be-09cd3629c5ae\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mgcg9" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.188944 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bdc32e8d-e1c8-4415-9a39-6ddbc4cbca4c-service-ca-bundle\") pod \"router-default-5444994796-c4vbw\" (UID: \"bdc32e8d-e1c8-4415-9a39-6ddbc4cbca4c\") " pod="openshift-ingress/router-default-5444994796-c4vbw" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.188971 4881 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f659aacf-8ad1-4f5b-b6b6-ae9f0114d14f-cert\") pod \"ingress-canary-t87hc\" (UID: \"f659aacf-8ad1-4f5b-b6b6-ae9f0114d14f\") " pod="openshift-ingress-canary/ingress-canary-t87hc" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.188995 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/de07a342-44f0-45cc-a461-5fd5a70e34d9-console-oauth-config\") pod \"console-f9d7485db-cwb4s\" (UID: \"de07a342-44f0-45cc-a461-5fd5a70e34d9\") " pod="openshift-console/console-f9d7485db-cwb4s" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.189017 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/c72452bc-3cf5-4c8a-a133-2789adbaa573-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-cfwgh\" (UID: \"c72452bc-3cf5-4c8a-a133-2789adbaa573\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cfwgh" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.189040 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/012f52fd-cf18-4590-98de-2d52c5384600-etcd-service-ca\") pod \"etcd-operator-b45778765-vnnxb\" (UID: \"012f52fd-cf18-4590-98de-2d52c5384600\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vnnxb" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.189060 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7d9099c-13e2-4b7e-817e-9917ca9b28fb-config\") pod \"kube-controller-manager-operator-78b949d7b-27qqg\" (UID: \"d7d9099c-13e2-4b7e-817e-9917ca9b28fb\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-27qqg" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.189080 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hz8bj\" (UniqueName: \"kubernetes.io/projected/f94cc1c9-6219-48e2-8033-aecea365cacb-kube-api-access-hz8bj\") pod \"multus-admission-controller-857f4d67dd-nlqwj\" (UID: \"f94cc1c9-6219-48e2-8033-aecea365cacb\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-nlqwj" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.189104 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/21710e70-1118-4472-91e1-2c7c66e9fe75-config-volume\") pod \"dns-default-b5l77\" (UID: \"21710e70-1118-4472-91e1-2c7c66e9fe75\") " pod="openshift-dns/dns-default-b5l77" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.189125 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/07671cb3-5957-4f3f-9189-4e3e05d7c090-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-fsrl8\" (UID: \"07671cb3-5957-4f3f-9189-4e3e05d7c090\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-fsrl8" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.189146 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4b679\" (UniqueName: 
\"kubernetes.io/projected/f659aacf-8ad1-4f5b-b6b6-ae9f0114d14f-kube-api-access-4b679\") pod \"ingress-canary-t87hc\" (UID: \"f659aacf-8ad1-4f5b-b6b6-ae9f0114d14f\") " pod="openshift-ingress-canary/ingress-canary-t87hc" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.189168 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qj5bl\" (UniqueName: \"kubernetes.io/projected/d6b7645c-9920-4793-b6aa-9a6664cc93a0-kube-api-access-qj5bl\") pod \"control-plane-machine-set-operator-78cbb6b69f-p7bh2\" (UID: \"d6b7645c-9920-4793-b6aa-9a6664cc93a0\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-p7bh2" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.189192 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9knfl\" (UniqueName: \"kubernetes.io/projected/ec9077c6-4f92-4925-8efa-8f6351967ae7-kube-api-access-9knfl\") pod \"migrator-59844c95c7-j7b4j\" (UID: \"ec9077c6-4f92-4925-8efa-8f6351967ae7\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-j7b4j" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.189213 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1cd7d43a-b82c-423b-ac88-ac99d9b753aa-proxy-tls\") pod \"machine-config-operator-74547568cd-nqv96\" (UID: \"1cd7d43a-b82c-423b-ac88-ac99d9b753aa\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nqv96" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.189235 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7d9099c-13e2-4b7e-817e-9917ca9b28fb-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-27qqg\" (UID: \"d7d9099c-13e2-4b7e-817e-9917ca9b28fb\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-27qqg" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.189256 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f94cc1c9-6219-48e2-8033-aecea365cacb-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-nlqwj\" (UID: \"f94cc1c9-6219-48e2-8033-aecea365cacb\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-nlqwj" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.189280 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07671cb3-5957-4f3f-9189-4e3e05d7c090-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-fsrl8\" (UID: \"07671cb3-5957-4f3f-9189-4e3e05d7c090\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-fsrl8" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.189306 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7157505d-d18a-42a4-8037-96ad9a7825ce-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-drv9q\" (UID: \"7157505d-d18a-42a4-8037-96ad9a7825ce\") " pod="openshift-marketplace/marketplace-operator-79b997595-drv9q" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.189333 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7d9099c-13e2-4b7e-817e-9917ca9b28fb-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-27qqg\" (UID: \"d7d9099c-13e2-4b7e-817e-9917ca9b28fb\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-27qqg" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.189357 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/d6b7645c-9920-4793-b6aa-9a6664cc93a0-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-p7bh2\" (UID: \"d6b7645c-9920-4793-b6aa-9a6664cc93a0\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-p7bh2" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.189382 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/3a9e8ad0-5902-4e1c-b000-86c3003673f4-certs\") pod \"machine-config-server-dmhnk\" (UID: \"3a9e8ad0-5902-4e1c-b000-86c3003673f4\") " pod="openshift-machine-config-operator/machine-config-server-dmhnk" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.190299 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.194768 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-dr6gf"] Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.195630 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/21710e70-1118-4472-91e1-2c7c66e9fe75-config-volume\") pod \"dns-default-b5l77\" (UID: \"21710e70-1118-4472-91e1-2c7c66e9fe75\") " pod="openshift-dns/dns-default-b5l77" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.195947 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-2xt9h"] Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.196356 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/de07a342-44f0-45cc-a461-5fd5a70e34d9-console-oauth-config\") pod \"console-f9d7485db-cwb4s\" (UID: \"de07a342-44f0-45cc-a461-5fd5a70e34d9\") " pod="openshift-console/console-f9d7485db-cwb4s" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.197415 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.199042 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-cwb4s"] Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.199362 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-vnnxb"] Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.201535 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-t87hc"] Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.203646 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dklx7"] Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.204657 4881 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtmx8\" (UniqueName: \"kubernetes.io/projected/9b2b7357-ae8b-474d-942e-56c296ace395-kube-api-access-xtmx8\") pod \"machine-config-controller-84d6567774-vw8v9\" (UID: \"9b2b7357-ae8b-474d-942e-56c296ace395\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vw8v9" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.204812 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h676w\" (UniqueName: \"kubernetes.io/projected/1ea8ad00-fb4c-4f75-a1e6-1ea2bd64c8da-kube-api-access-h676w\") pod \"console-operator-58897d9998-2fsrg\" (UID: \"1ea8ad00-fb4c-4f75-a1e6-1ea2bd64c8da\") " pod="openshift-console-operator/console-operator-58897d9998-2fsrg" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.204850 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtx89\" (UniqueName: \"kubernetes.io/projected/29d30a67-fb25-4c95-8f8a-a77b27cd695f-kube-api-access-xtx89\") pod \"kube-storage-version-migrator-operator-b67b599dd-gmbhc\" (UID: \"29d30a67-fb25-4c95-8f8a-a77b27cd695f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gmbhc" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.204892 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29d30a67-fb25-4c95-8f8a-a77b27cd695f-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-gmbhc\" (UID: \"29d30a67-fb25-4c95-8f8a-a77b27cd695f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gmbhc" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.204916 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1cd7d43a-b82c-423b-ac88-ac99d9b753aa-auth-proxy-config\") pod \"machine-config-operator-74547568cd-nqv96\" (UID: \"1cd7d43a-b82c-423b-ac88-ac99d9b753aa\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nqv96" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.204945 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mnl6p\" (UniqueName: \"kubernetes.io/projected/7c68c037-053a-43f1-a2d6-0a4387610916-kube-api-access-mnl6p\") pod \"openshift-controller-manager-operator-756b6f6bc6-pv5pl\" (UID: \"7c68c037-053a-43f1-a2d6-0a4387610916\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pv5pl" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.204969 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/012f52fd-cf18-4590-98de-2d52c5384600-config\") pod \"etcd-operator-b45778765-vnnxb\" (UID: \"012f52fd-cf18-4590-98de-2d52c5384600\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vnnxb" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.205062 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bdc32e8d-e1c8-4415-9a39-6ddbc4cbca4c-metrics-certs\") pod \"router-default-5444994796-c4vbw\" (UID: \"bdc32e8d-e1c8-4415-9a39-6ddbc4cbca4c\") " pod="openshift-ingress/router-default-5444994796-c4vbw" Jan 26 12:37:52 crc 
kubenswrapper[4881]: I0126 12:37:52.205089 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9b2b7357-ae8b-474d-942e-56c296ace395-proxy-tls\") pod \"machine-config-controller-84d6567774-vw8v9\" (UID: \"9b2b7357-ae8b-474d-942e-56c296ace395\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vw8v9" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.205111 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sz2dn\" (UniqueName: \"kubernetes.io/projected/7157505d-d18a-42a4-8037-96ad9a7825ce-kube-api-access-sz2dn\") pod \"marketplace-operator-79b997595-drv9q\" (UID: \"7157505d-d18a-42a4-8037-96ad9a7825ce\") " pod="openshift-marketplace/marketplace-operator-79b997595-drv9q" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.205141 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9bdk\" (UniqueName: \"kubernetes.io/projected/30c09cc7-c747-494b-80ac-e4a780f65fb6-kube-api-access-r9bdk\") pod \"downloads-7954f5f757-68j9g\" (UID: \"30c09cc7-c747-494b-80ac-e4a780f65fb6\") " pod="openshift-console/downloads-7954f5f757-68j9g" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.205162 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1cd7d43a-b82c-423b-ac88-ac99d9b753aa-images\") pod \"machine-config-operator-74547568cd-nqv96\" (UID: \"1cd7d43a-b82c-423b-ac88-ac99d9b753aa\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nqv96" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.205190 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/993b16a8-4172-41dc-90fc-7d420d0a12f2-bound-sa-token\") pod \"ingress-operator-5b745b69d9-9cpqm\" (UID: \"993b16a8-4172-41dc-90fc-7d420d0a12f2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9cpqm" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.205210 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m47fz\" (UniqueName: \"kubernetes.io/projected/21710e70-1118-4472-91e1-2c7c66e9fe75-kube-api-access-m47fz\") pod \"dns-default-b5l77\" (UID: \"21710e70-1118-4472-91e1-2c7c66e9fe75\") " pod="openshift-dns/dns-default-b5l77" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.205233 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqfwc\" (UniqueName: \"kubernetes.io/projected/012f52fd-cf18-4590-98de-2d52c5384600-kube-api-access-hqfwc\") pod \"etcd-operator-b45778765-vnnxb\" (UID: \"012f52fd-cf18-4590-98de-2d52c5384600\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vnnxb" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.205256 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/993b16a8-4172-41dc-90fc-7d420d0a12f2-metrics-tls\") pod \"ingress-operator-5b745b69d9-9cpqm\" (UID: \"993b16a8-4172-41dc-90fc-7d420d0a12f2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9cpqm" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.205281 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/993b16a8-4172-41dc-90fc-7d420d0a12f2-trusted-ca\") pod \"ingress-operator-5b745b69d9-9cpqm\" (UID: \"993b16a8-4172-41dc-90fc-7d420d0a12f2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9cpqm" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.205304 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1ea8ad00-fb4c-4f75-a1e6-1ea2bd64c8da-serving-cert\") pod \"console-operator-58897d9998-2fsrg\" (UID: \"1ea8ad00-fb4c-4f75-a1e6-1ea2bd64c8da\") " pod="openshift-console-operator/console-operator-58897d9998-2fsrg" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.205326 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/de07a342-44f0-45cc-a461-5fd5a70e34d9-console-serving-cert\") pod \"console-f9d7485db-cwb4s\" (UID: \"de07a342-44f0-45cc-a461-5fd5a70e34d9\") " pod="openshift-console/console-f9d7485db-cwb4s" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.205347 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/de07a342-44f0-45cc-a461-5fd5a70e34d9-trusted-ca-bundle\") pod \"console-f9d7485db-cwb4s\" (UID: \"de07a342-44f0-45cc-a461-5fd5a70e34d9\") " pod="openshift-console/console-f9d7485db-cwb4s" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.205371 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvk4k\" (UniqueName: \"kubernetes.io/projected/517de9f8-a681-40a3-bd6b-8009ae963398-kube-api-access-lvk4k\") pod \"service-ca-9c57cc56f-k8g4d\" (UID: \"517de9f8-a681-40a3-bd6b-8009ae963398\") " pod="openshift-service-ca/service-ca-9c57cc56f-k8g4d" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.205399 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ea8ad00-fb4c-4f75-a1e6-1ea2bd64c8da-config\") pod \"console-operator-58897d9998-2fsrg\" (UID: \"1ea8ad00-fb4c-4f75-a1e6-1ea2bd64c8da\") " pod="openshift-console-operator/console-operator-58897d9998-2fsrg" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.205421 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dkhmn\" (UniqueName: \"kubernetes.io/projected/bdc32e8d-e1c8-4415-9a39-6ddbc4cbca4c-kube-api-access-dkhmn\") pod \"router-default-5444994796-c4vbw\" (UID: \"bdc32e8d-e1c8-4415-9a39-6ddbc4cbca4c\") " pod="openshift-ingress/router-default-5444994796-c4vbw" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.205443 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqkq8\" (UniqueName: \"kubernetes.io/projected/0a192f0e-fe6b-434b-a63e-7a61fcd9ca2f-kube-api-access-nqkq8\") pod \"openshift-config-operator-7777fb866f-gtzx7\" (UID: \"0a192f0e-fe6b-434b-a63e-7a61fcd9ca2f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-gtzx7" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.205465 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/21710e70-1118-4472-91e1-2c7c66e9fe75-metrics-tls\") pod \"dns-default-b5l77\" (UID: \"21710e70-1118-4472-91e1-2c7c66e9fe75\") " pod="openshift-dns/dns-default-b5l77" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.205488 
4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/012f52fd-cf18-4590-98de-2d52c5384600-serving-cert\") pod \"etcd-operator-b45778765-vnnxb\" (UID: \"012f52fd-cf18-4590-98de-2d52c5384600\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vnnxb" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.205533 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96eca703-29ae-4ec1-a961-e3303788da4f-metrics-tls\") pod \"dns-operator-744455d44c-2xt9h\" (UID: \"96eca703-29ae-4ec1-a961-e3303788da4f\") " pod="openshift-dns-operator/dns-operator-744455d44c-2xt9h" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.205575 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c72452bc-3cf5-4c8a-a133-2789adbaa573-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-cfwgh\" (UID: \"c72452bc-3cf5-4c8a-a133-2789adbaa573\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cfwgh" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.205605 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/517de9f8-a681-40a3-bd6b-8009ae963398-signing-key\") pod \"service-ca-9c57cc56f-k8g4d\" (UID: \"517de9f8-a681-40a3-bd6b-8009ae963398\") " pod="openshift-service-ca/service-ca-9c57cc56f-k8g4d" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.205637 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7c68c037-053a-43f1-a2d6-0a4387610916-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-pv5pl\" (UID: \"7c68c037-053a-43f1-a2d6-0a4387610916\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pv5pl" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.205671 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/07671cb3-5957-4f3f-9189-4e3e05d7c090-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-fsrl8\" (UID: \"07671cb3-5957-4f3f-9189-4e3e05d7c090\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-fsrl8" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.205732 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-nqv96"] Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.205694 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xl7f\" (UniqueName: \"kubernetes.io/projected/de07a342-44f0-45cc-a461-5fd5a70e34d9-kube-api-access-9xl7f\") pod \"console-f9d7485db-cwb4s\" (UID: \"de07a342-44f0-45cc-a461-5fd5a70e34d9\") " pod="openshift-console/console-f9d7485db-cwb4s" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.211930 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c68c037-053a-43f1-a2d6-0a4387610916-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-pv5pl\" (UID: \"7c68c037-053a-43f1-a2d6-0a4387610916\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pv5pl" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.211993 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfmld\" (UniqueName: \"kubernetes.io/projected/1cd7d43a-b82c-423b-ac88-ac99d9b753aa-kube-api-access-dfmld\") pod \"machine-config-operator-74547568cd-nqv96\" (UID: \"1cd7d43a-b82c-423b-ac88-ac99d9b753aa\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nqv96" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.212055 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1ea8ad00-fb4c-4f75-a1e6-1ea2bd64c8da-trusted-ca\") pod \"console-operator-58897d9998-2fsrg\" (UID: \"1ea8ad00-fb4c-4f75-a1e6-1ea2bd64c8da\") " pod="openshift-console-operator/console-operator-58897d9998-2fsrg" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.212090 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/de07a342-44f0-45cc-a461-5fd5a70e34d9-oauth-serving-cert\") pod \"console-f9d7485db-cwb4s\" (UID: \"de07a342-44f0-45cc-a461-5fd5a70e34d9\") " pod="openshift-console/console-f9d7485db-cwb4s" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.212124 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/012f52fd-cf18-4590-98de-2d52c5384600-etcd-client\") pod \"etcd-operator-b45778765-vnnxb\" (UID: \"012f52fd-cf18-4590-98de-2d52c5384600\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vnnxb" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.212181 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4ql6j\" (UniqueName: \"kubernetes.io/projected/993b16a8-4172-41dc-90fc-7d420d0a12f2-kube-api-access-4ql6j\") pod \"ingress-operator-5b745b69d9-9cpqm\" (UID: \"993b16a8-4172-41dc-90fc-7d420d0a12f2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9cpqm" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.212219 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0a192f0e-fe6b-434b-a63e-7a61fcd9ca2f-serving-cert\") pod \"openshift-config-operator-7777fb866f-gtzx7\" (UID: \"0a192f0e-fe6b-434b-a63e-7a61fcd9ca2f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-gtzx7" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.212259 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9b2b7357-ae8b-474d-942e-56c296ace395-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-vw8v9\" (UID: \"9b2b7357-ae8b-474d-942e-56c296ace395\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vw8v9" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.212299 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/de07a342-44f0-45cc-a461-5fd5a70e34d9-service-ca\") pod \"console-f9d7485db-cwb4s\" (UID: \"de07a342-44f0-45cc-a461-5fd5a70e34d9\") " pod="openshift-console/console-f9d7485db-cwb4s" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 
12:37:52.212325 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/012f52fd-cf18-4590-98de-2d52c5384600-etcd-ca\") pod \"etcd-operator-b45778765-vnnxb\" (UID: \"012f52fd-cf18-4590-98de-2d52c5384600\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vnnxb" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.212359 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/de07a342-44f0-45cc-a461-5fd5a70e34d9-console-config\") pod \"console-f9d7485db-cwb4s\" (UID: \"de07a342-44f0-45cc-a461-5fd5a70e34d9\") " pod="openshift-console/console-f9d7485db-cwb4s" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.212399 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxg2z\" (UniqueName: \"kubernetes.io/projected/c72452bc-3cf5-4c8a-a133-2789adbaa573-kube-api-access-vxg2z\") pod \"cluster-image-registry-operator-dc59b4c8b-cfwgh\" (UID: \"c72452bc-3cf5-4c8a-a133-2789adbaa573\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cfwgh" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.212596 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/0a192f0e-fe6b-434b-a63e-7a61fcd9ca2f-available-featuregates\") pod \"openshift-config-operator-7777fb866f-gtzx7\" (UID: \"0a192f0e-fe6b-434b-a63e-7a61fcd9ca2f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-gtzx7" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.212633 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/517de9f8-a681-40a3-bd6b-8009ae963398-signing-cabundle\") pod \"service-ca-9c57cc56f-k8g4d\" (UID: \"517de9f8-a681-40a3-bd6b-8009ae963398\") " pod="openshift-service-ca/service-ca-9c57cc56f-k8g4d" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.212667 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7157505d-d18a-42a4-8037-96ad9a7825ce-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-drv9q\" (UID: \"7157505d-d18a-42a4-8037-96ad9a7825ce\") " pod="openshift-marketplace/marketplace-operator-79b997595-drv9q" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.213265 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/3a9e8ad0-5902-4e1c-b000-86c3003673f4-node-bootstrap-token\") pod \"machine-config-server-dmhnk\" (UID: \"3a9e8ad0-5902-4e1c-b000-86c3003673f4\") " pod="openshift-machine-config-operator/machine-config-server-dmhnk" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.213330 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bc3ce0ab-d53f-4504-b1be-09cd3629c5ae-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-mgcg9\" (UID: \"bc3ce0ab-d53f-4504-b1be-09cd3629c5ae\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mgcg9" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.213648 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-jnw54\" (UniqueName: \"kubernetes.io/projected/96eca703-29ae-4ec1-a961-e3303788da4f-kube-api-access-jnw54\") pod \"dns-operator-744455d44c-2xt9h\" (UID: \"96eca703-29ae-4ec1-a961-e3303788da4f\") " pod="openshift-dns-operator/dns-operator-744455d44c-2xt9h" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.213815 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/bdc32e8d-e1c8-4415-9a39-6ddbc4cbca4c-default-certificate\") pod \"router-default-5444994796-c4vbw\" (UID: \"bdc32e8d-e1c8-4415-9a39-6ddbc4cbca4c\") " pod="openshift-ingress/router-default-5444994796-c4vbw" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.213850 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29d30a67-fb25-4c95-8f8a-a77b27cd695f-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-gmbhc\" (UID: \"29d30a67-fb25-4c95-8f8a-a77b27cd695f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gmbhc" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.213984 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c72452bc-3cf5-4c8a-a133-2789adbaa573-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-cfwgh\" (UID: \"c72452bc-3cf5-4c8a-a133-2789adbaa573\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cfwgh" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.214018 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfx64\" (UniqueName: \"kubernetes.io/projected/3a9e8ad0-5902-4e1c-b000-86c3003673f4-kube-api-access-kfx64\") pod \"machine-config-server-dmhnk\" (UID: \"3a9e8ad0-5902-4e1c-b000-86c3003673f4\") " pod="openshift-machine-config-operator/machine-config-server-dmhnk" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.214208 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc3ce0ab-d53f-4504-b1be-09cd3629c5ae-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-mgcg9\" (UID: \"bc3ce0ab-d53f-4504-b1be-09cd3629c5ae\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mgcg9" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.214359 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/bdc32e8d-e1c8-4415-9a39-6ddbc4cbca4c-stats-auth\") pod \"router-default-5444994796-c4vbw\" (UID: \"bdc32e8d-e1c8-4415-9a39-6ddbc4cbca4c\") " pod="openshift-ingress/router-default-5444994796-c4vbw" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.216386 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c68c037-053a-43f1-a2d6-0a4387610916-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-pv5pl\" (UID: \"7c68c037-053a-43f1-a2d6-0a4387610916\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pv5pl" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.218152 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/de07a342-44f0-45cc-a461-5fd5a70e34d9-oauth-serving-cert\") pod 
\"console-f9d7485db-cwb4s\" (UID: \"de07a342-44f0-45cc-a461-5fd5a70e34d9\") " pod="openshift-console/console-f9d7485db-cwb4s" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.222740 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-tqtg4"] Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.222786 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-9cpqm"] Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.223188 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/de07a342-44f0-45cc-a461-5fd5a70e34d9-service-ca\") pod \"console-f9d7485db-cwb4s\" (UID: \"de07a342-44f0-45cc-a461-5fd5a70e34d9\") " pod="openshift-console/console-f9d7485db-cwb4s" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.223876 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/0a192f0e-fe6b-434b-a63e-7a61fcd9ca2f-available-featuregates\") pod \"openshift-config-operator-7777fb866f-gtzx7\" (UID: \"0a192f0e-fe6b-434b-a63e-7a61fcd9ca2f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-gtzx7" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.228407 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/c72452bc-3cf5-4c8a-a133-2789adbaa573-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-cfwgh\" (UID: \"c72452bc-3cf5-4c8a-a133-2789adbaa573\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cfwgh" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.228877 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/de07a342-44f0-45cc-a461-5fd5a70e34d9-trusted-ca-bundle\") pod \"console-f9d7485db-cwb4s\" (UID: \"de07a342-44f0-45cc-a461-5fd5a70e34d9\") " pod="openshift-console/console-f9d7485db-cwb4s" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.228876 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7c68c037-053a-43f1-a2d6-0a4387610916-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-pv5pl\" (UID: \"7c68c037-053a-43f1-a2d6-0a4387610916\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pv5pl" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.229900 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c72452bc-3cf5-4c8a-a133-2789adbaa573-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-cfwgh\" (UID: \"c72452bc-3cf5-4c8a-a133-2789adbaa573\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cfwgh" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.230229 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.230434 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-drv9q"] Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.231351 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/0a192f0e-fe6b-434b-a63e-7a61fcd9ca2f-serving-cert\") pod \"openshift-config-operator-7777fb866f-gtzx7\" (UID: \"0a192f0e-fe6b-434b-a63e-7a61fcd9ca2f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-gtzx7" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.231702 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/de07a342-44f0-45cc-a461-5fd5a70e34d9-console-serving-cert\") pod \"console-f9d7485db-cwb4s\" (UID: \"de07a342-44f0-45cc-a461-5fd5a70e34d9\") " pod="openshift-console/console-f9d7485db-cwb4s" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.231770 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-p7bh2"] Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.231862 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/21710e70-1118-4472-91e1-2c7c66e9fe75-metrics-tls\") pod \"dns-default-b5l77\" (UID: \"21710e70-1118-4472-91e1-2c7c66e9fe75\") " pod="openshift-dns/dns-default-b5l77" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.233025 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-27qqg"] Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.234373 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/de07a342-44f0-45cc-a461-5fd5a70e34d9-console-config\") pod \"console-f9d7485db-cwb4s\" (UID: \"de07a342-44f0-45cc-a461-5fd5a70e34d9\") " pod="openshift-console/console-f9d7485db-cwb4s" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.234452 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-nlqwj"] Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.235754 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2vlbm"] Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.237021 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-k8g4d"] Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.237339 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.238255 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490510-6h9v5"] Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.239901 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xbdxs"] Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.241262 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-972wv"] Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.242627 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dk6pz"] Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.252044 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1ea8ad00-fb4c-4f75-a1e6-1ea2bd64c8da-serving-cert\") pod 
\"console-operator-58897d9998-2fsrg\" (UID: \"1ea8ad00-fb4c-4f75-a1e6-1ea2bd64c8da\") " pod="openshift-console-operator/console-operator-58897d9998-2fsrg" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.258815 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.292584 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.297488 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.298738 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1ea8ad00-fb4c-4f75-a1e6-1ea2bd64c8da-trusted-ca\") pod \"console-operator-58897d9998-2fsrg\" (UID: \"1ea8ad00-fb4c-4f75-a1e6-1ea2bd64c8da\") " pod="openshift-console-operator/console-operator-58897d9998-2fsrg" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.316439 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9b2b7357-ae8b-474d-942e-56c296ace395-proxy-tls\") pod \"machine-config-controller-84d6567774-vw8v9\" (UID: \"9b2b7357-ae8b-474d-942e-56c296ace395\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vw8v9" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.316480 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sz2dn\" (UniqueName: \"kubernetes.io/projected/7157505d-d18a-42a4-8037-96ad9a7825ce-kube-api-access-sz2dn\") pod \"marketplace-operator-79b997595-drv9q\" (UID: \"7157505d-d18a-42a4-8037-96ad9a7825ce\") " pod="openshift-marketplace/marketplace-operator-79b997595-drv9q" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.316689 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1cd7d43a-b82c-423b-ac88-ac99d9b753aa-images\") pod \"machine-config-operator-74547568cd-nqv96\" (UID: \"1cd7d43a-b82c-423b-ac88-ac99d9b753aa\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nqv96" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.316713 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqfwc\" (UniqueName: \"kubernetes.io/projected/012f52fd-cf18-4590-98de-2d52c5384600-kube-api-access-hqfwc\") pod \"etcd-operator-b45778765-vnnxb\" (UID: \"012f52fd-cf18-4590-98de-2d52c5384600\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vnnxb" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.316898 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvk4k\" (UniqueName: \"kubernetes.io/projected/517de9f8-a681-40a3-bd6b-8009ae963398-kube-api-access-lvk4k\") pod \"service-ca-9c57cc56f-k8g4d\" (UID: \"517de9f8-a681-40a3-bd6b-8009ae963398\") " pod="openshift-service-ca/service-ca-9c57cc56f-k8g4d" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.316936 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/012f52fd-cf18-4590-98de-2d52c5384600-serving-cert\") pod \"etcd-operator-b45778765-vnnxb\" (UID: \"012f52fd-cf18-4590-98de-2d52c5384600\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-vnnxb" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.316976 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/517de9f8-a681-40a3-bd6b-8009ae963398-signing-key\") pod \"service-ca-9c57cc56f-k8g4d\" (UID: \"517de9f8-a681-40a3-bd6b-8009ae963398\") " pod="openshift-service-ca/service-ca-9c57cc56f-k8g4d" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.317018 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dfmld\" (UniqueName: \"kubernetes.io/projected/1cd7d43a-b82c-423b-ac88-ac99d9b753aa-kube-api-access-dfmld\") pod \"machine-config-operator-74547568cd-nqv96\" (UID: \"1cd7d43a-b82c-423b-ac88-ac99d9b753aa\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nqv96" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.317033 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/012f52fd-cf18-4590-98de-2d52c5384600-etcd-client\") pod \"etcd-operator-b45778765-vnnxb\" (UID: \"012f52fd-cf18-4590-98de-2d52c5384600\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vnnxb" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.317065 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9b2b7357-ae8b-474d-942e-56c296ace395-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-vw8v9\" (UID: \"9b2b7357-ae8b-474d-942e-56c296ace395\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vw8v9" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.317088 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/012f52fd-cf18-4590-98de-2d52c5384600-etcd-ca\") pod \"etcd-operator-b45778765-vnnxb\" (UID: \"012f52fd-cf18-4590-98de-2d52c5384600\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vnnxb" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.317119 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/517de9f8-a681-40a3-bd6b-8009ae963398-signing-cabundle\") pod \"service-ca-9c57cc56f-k8g4d\" (UID: \"517de9f8-a681-40a3-bd6b-8009ae963398\") " pod="openshift-service-ca/service-ca-9c57cc56f-k8g4d" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.317143 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7157505d-d18a-42a4-8037-96ad9a7825ce-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-drv9q\" (UID: \"7157505d-d18a-42a4-8037-96ad9a7825ce\") " pod="openshift-marketplace/marketplace-operator-79b997595-drv9q" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.317165 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/3a9e8ad0-5902-4e1c-b000-86c3003673f4-node-bootstrap-token\") pod \"machine-config-server-dmhnk\" (UID: \"3a9e8ad0-5902-4e1c-b000-86c3003673f4\") " pod="openshift-machine-config-operator/machine-config-server-dmhnk" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.317210 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-kfx64\" (UniqueName: \"kubernetes.io/projected/3a9e8ad0-5902-4e1c-b000-86c3003673f4-kube-api-access-kfx64\") pod \"machine-config-server-dmhnk\" (UID: \"3a9e8ad0-5902-4e1c-b000-86c3003673f4\") " pod="openshift-machine-config-operator/machine-config-server-dmhnk" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.317259 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f659aacf-8ad1-4f5b-b6b6-ae9f0114d14f-cert\") pod \"ingress-canary-t87hc\" (UID: \"f659aacf-8ad1-4f5b-b6b6-ae9f0114d14f\") " pod="openshift-ingress-canary/ingress-canary-t87hc" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.317282 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/012f52fd-cf18-4590-98de-2d52c5384600-etcd-service-ca\") pod \"etcd-operator-b45778765-vnnxb\" (UID: \"012f52fd-cf18-4590-98de-2d52c5384600\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vnnxb" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.317298 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7d9099c-13e2-4b7e-817e-9917ca9b28fb-config\") pod \"kube-controller-manager-operator-78b949d7b-27qqg\" (UID: \"d7d9099c-13e2-4b7e-817e-9917ca9b28fb\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-27qqg" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.317316 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hz8bj\" (UniqueName: \"kubernetes.io/projected/f94cc1c9-6219-48e2-8033-aecea365cacb-kube-api-access-hz8bj\") pod \"multus-admission-controller-857f4d67dd-nlqwj\" (UID: \"f94cc1c9-6219-48e2-8033-aecea365cacb\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-nlqwj" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.317339 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4b679\" (UniqueName: \"kubernetes.io/projected/f659aacf-8ad1-4f5b-b6b6-ae9f0114d14f-kube-api-access-4b679\") pod \"ingress-canary-t87hc\" (UID: \"f659aacf-8ad1-4f5b-b6b6-ae9f0114d14f\") " pod="openshift-ingress-canary/ingress-canary-t87hc" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.317367 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qj5bl\" (UniqueName: \"kubernetes.io/projected/d6b7645c-9920-4793-b6aa-9a6664cc93a0-kube-api-access-qj5bl\") pod \"control-plane-machine-set-operator-78cbb6b69f-p7bh2\" (UID: \"d6b7645c-9920-4793-b6aa-9a6664cc93a0\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-p7bh2" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.317389 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9knfl\" (UniqueName: \"kubernetes.io/projected/ec9077c6-4f92-4925-8efa-8f6351967ae7-kube-api-access-9knfl\") pod \"migrator-59844c95c7-j7b4j\" (UID: \"ec9077c6-4f92-4925-8efa-8f6351967ae7\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-j7b4j" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.317408 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f94cc1c9-6219-48e2-8033-aecea365cacb-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-nlqwj\" (UID: 
\"f94cc1c9-6219-48e2-8033-aecea365cacb\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-nlqwj" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.317425 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1cd7d43a-b82c-423b-ac88-ac99d9b753aa-proxy-tls\") pod \"machine-config-operator-74547568cd-nqv96\" (UID: \"1cd7d43a-b82c-423b-ac88-ac99d9b753aa\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nqv96" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.317441 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7d9099c-13e2-4b7e-817e-9917ca9b28fb-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-27qqg\" (UID: \"d7d9099c-13e2-4b7e-817e-9917ca9b28fb\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-27qqg" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.317463 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7157505d-d18a-42a4-8037-96ad9a7825ce-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-drv9q\" (UID: \"7157505d-d18a-42a4-8037-96ad9a7825ce\") " pod="openshift-marketplace/marketplace-operator-79b997595-drv9q" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.317480 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7d9099c-13e2-4b7e-817e-9917ca9b28fb-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-27qqg\" (UID: \"d7d9099c-13e2-4b7e-817e-9917ca9b28fb\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-27qqg" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.317497 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/d6b7645c-9920-4793-b6aa-9a6664cc93a0-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-p7bh2\" (UID: \"d6b7645c-9920-4793-b6aa-9a6664cc93a0\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-p7bh2" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.318992 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9b2b7357-ae8b-474d-942e-56c296ace395-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-vw8v9\" (UID: \"9b2b7357-ae8b-474d-942e-56c296ace395\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vw8v9" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.321347 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/3a9e8ad0-5902-4e1c-b000-86c3003673f4-certs\") pod \"machine-config-server-dmhnk\" (UID: \"3a9e8ad0-5902-4e1c-b000-86c3003673f4\") " pod="openshift-machine-config-operator/machine-config-server-dmhnk" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.321395 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtmx8\" (UniqueName: \"kubernetes.io/projected/9b2b7357-ae8b-474d-942e-56c296ace395-kube-api-access-xtmx8\") pod \"machine-config-controller-84d6567774-vw8v9\" (UID: 
\"9b2b7357-ae8b-474d-942e-56c296ace395\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vw8v9" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.321441 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1cd7d43a-b82c-423b-ac88-ac99d9b753aa-auth-proxy-config\") pod \"machine-config-operator-74547568cd-nqv96\" (UID: \"1cd7d43a-b82c-423b-ac88-ac99d9b753aa\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nqv96" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.321484 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/012f52fd-cf18-4590-98de-2d52c5384600-config\") pod \"etcd-operator-b45778765-vnnxb\" (UID: \"012f52fd-cf18-4590-98de-2d52c5384600\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vnnxb" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.322594 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.323394 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1cd7d43a-b82c-423b-ac88-ac99d9b753aa-auth-proxy-config\") pod \"machine-config-operator-74547568cd-nqv96\" (UID: \"1cd7d43a-b82c-423b-ac88-ac99d9b753aa\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nqv96" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.326215 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ea8ad00-fb4c-4f75-a1e6-1ea2bd64c8da-config\") pod \"console-operator-58897d9998-2fsrg\" (UID: \"1ea8ad00-fb4c-4f75-a1e6-1ea2bd64c8da\") " pod="openshift-console-operator/console-operator-58897d9998-2fsrg" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.349378 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.355150 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-lnbpt"] Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.357669 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.378152 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.388916 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-tkq9v"] Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.398762 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.418427 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.427363 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/bc3ce0ab-d53f-4504-b1be-09cd3629c5ae-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-mgcg9\" (UID: \"bc3ce0ab-d53f-4504-b1be-09cd3629c5ae\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mgcg9" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.444369 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.452176 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc3ce0ab-d53f-4504-b1be-09cd3629c5ae-config\") pod \"kube-apiserver-operator-766d6c64bb-mgcg9\" (UID: \"bc3ce0ab-d53f-4504-b1be-09cd3629c5ae\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mgcg9" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.458480 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.466167 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bdc32e8d-e1c8-4415-9a39-6ddbc4cbca4c-metrics-certs\") pod \"router-default-5444994796-c4vbw\" (UID: \"bdc32e8d-e1c8-4415-9a39-6ddbc4cbca4c\") " pod="openshift-ingress/router-default-5444994796-c4vbw" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.478195 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.498289 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.509534 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/bdc32e8d-e1c8-4415-9a39-6ddbc4cbca4c-default-certificate\") pod \"router-default-5444994796-c4vbw\" (UID: \"bdc32e8d-e1c8-4415-9a39-6ddbc4cbca4c\") " pod="openshift-ingress/router-default-5444994796-c4vbw" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.519286 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.529422 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/bdc32e8d-e1c8-4415-9a39-6ddbc4cbca4c-stats-auth\") pod \"router-default-5444994796-c4vbw\" (UID: \"bdc32e8d-e1c8-4415-9a39-6ddbc4cbca4c\") " pod="openshift-ingress/router-default-5444994796-c4vbw" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.538234 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.541008 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bdc32e8d-e1c8-4415-9a39-6ddbc4cbca4c-service-ca-bundle\") pod \"router-default-5444994796-c4vbw\" (UID: \"bdc32e8d-e1c8-4415-9a39-6ddbc4cbca4c\") " pod="openshift-ingress/router-default-5444994796-c4vbw" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.557759 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 
12:37:52.577199 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.587303 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.600479 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.618373 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.630995 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/993b16a8-4172-41dc-90fc-7d420d0a12f2-metrics-tls\") pod \"ingress-operator-5b745b69d9-9cpqm\" (UID: \"993b16a8-4172-41dc-90fc-7d420d0a12f2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9cpqm" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.643388 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.645898 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/993b16a8-4172-41dc-90fc-7d420d0a12f2-trusted-ca\") pod \"ingress-operator-5b745b69d9-9cpqm\" (UID: \"993b16a8-4172-41dc-90fc-7d420d0a12f2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9cpqm" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.657056 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.678724 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.699027 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-2n5pc"] Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.701458 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.701672 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-xvwh6"] Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.705007 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-b8l9s"] Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.711816 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/07671cb3-5957-4f3f-9189-4e3e05d7c090-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-fsrl8\" (UID: \"07671cb3-5957-4f3f-9189-4e3e05d7c090\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-fsrl8" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.719069 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.738237 4881 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.746304 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07671cb3-5957-4f3f-9189-4e3e05d7c090-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-fsrl8\" (UID: \"07671cb3-5957-4f3f-9189-4e3e05d7c090\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-fsrl8" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.781923 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-wm95t"] Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.781963 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bpgdv"] Jan 26 12:37:52 crc kubenswrapper[4881]: W0126 12:37:52.786704 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode888d179_f59c_46d4_8c3b_c44aa10a248c.slice/crio-b61584557a6907cf16ffe6e367dc6983248522b269d4cc739bf12b41cebc746a WatchSource:0}: Error finding container b61584557a6907cf16ffe6e367dc6983248522b269d4cc739bf12b41cebc746a: Status 404 returned error can't find the container with id b61584557a6907cf16ffe6e367dc6983248522b269d4cc739bf12b41cebc746a Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.791953 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhnx4\" (UniqueName: \"kubernetes.io/projected/810e7137-f09f-4050-bb0d-b15c23c57ed0-kube-api-access-jhnx4\") pod \"oauth-openshift-558db77b4-ncfp9\" (UID: \"810e7137-f09f-4050-bb0d-b15c23c57ed0\") " pod="openshift-authentication/oauth-openshift-558db77b4-ncfp9" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.799128 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvh7k\" (UniqueName: \"kubernetes.io/projected/22e750c9-8cfd-42bd-853e-c814eeb6d274-kube-api-access-nvh7k\") pod \"cluster-samples-operator-665b6dd947-s5qng\" (UID: \"22e750c9-8cfd-42bd-853e-c814eeb6d274\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-s5qng" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.810566 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-2n5pc" event={"ID":"1c3ab1d3-b6c8-46c7-8721-c8671d38ae03","Type":"ContainerStarted","Data":"ba3354e4eac0f8905813cba1d4727d3621631c4a837f5ec10abcc442a171fc6d"} Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.811390 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-b8l9s" event={"ID":"ef2ceed1-1060-4aff-a9f7-573f60a80771","Type":"ContainerStarted","Data":"9d422cb227b3bd4381f1e9d6ed5769df1c3d7267b5bc4eb442f3606c203c8539"} Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.812238 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bpgdv" event={"ID":"e888d179-f59c-46d4-8c3b-c44aa10a248c","Type":"ContainerStarted","Data":"b61584557a6907cf16ffe6e367dc6983248522b269d4cc739bf12b41cebc746a"} Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.813000 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wm95t" 
event={"ID":"0d993eee-dedd-4a12-bec6-8e63232b007d","Type":"ContainerStarted","Data":"88faf14c68e42c3f77aa7a5ddbf51d5495d6f34f3990d6870ff0df82b2b1e6f0"} Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.813768 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xvwh6" event={"ID":"6cbdefcc-18eb-4de2-a642-466fb488712f","Type":"ContainerStarted","Data":"bace95764aff918136174e2489cd0cb0073eed5f80c23b9611c632f34b0ceb9e"} Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.814719 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-tkq9v" event={"ID":"4575e882-c51a-4773-b719-bd25e8bbe760","Type":"ContainerStarted","Data":"2e30c3b3328de4d63322ec3b33a1d2869e843d47571a07d943cdb73b2c1ce556"} Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.814749 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-tkq9v" event={"ID":"4575e882-c51a-4773-b719-bd25e8bbe760","Type":"ContainerStarted","Data":"3611bbc461ed6ef453bc26dfbe90847377c769caa2f37bf576d4648785c810f7"} Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.814752 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99wc5\" (UniqueName: \"kubernetes.io/projected/b7d1fd4b-10f6-40b5-8276-3a2f6a19105b-kube-api-access-99wc5\") pod \"machine-approver-56656f9798-rxhwr\" (UID: \"b7d1fd4b-10f6-40b5-8276-3a2f6a19105b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rxhwr" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.816134 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-lnbpt" event={"ID":"7c6a2377-64ec-4bf1-96a5-89faa8ce01f8","Type":"ContainerStarted","Data":"9d22555f261365c3254aaaee7b65c47880485fc7a5368e1c8a3ebb019845e205"} Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.816175 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-lnbpt" event={"ID":"7c6a2377-64ec-4bf1-96a5-89faa8ce01f8","Type":"ContainerStarted","Data":"5081f145beb9008d7e2131ae87b735fca910ab7dc2aba702dfcac53aa448d919"} Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.816402 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-lnbpt" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.817362 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.820128 4881 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-lnbpt container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.820202 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-lnbpt" podUID="7c6a2377-64ec-4bf1-96a5-89faa8ce01f8" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.837435 4881 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.858994 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.870239 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29d30a67-fb25-4c95-8f8a-a77b27cd695f-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-gmbhc\" (UID: \"29d30a67-fb25-4c95-8f8a-a77b27cd695f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gmbhc" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.882816 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.898121 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.908351 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29d30a67-fb25-4c95-8f8a-a77b27cd695f-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-gmbhc\" (UID: \"29d30a67-fb25-4c95-8f8a-a77b27cd695f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gmbhc" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.918065 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.938118 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.958298 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.964032 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96eca703-29ae-4ec1-a961-e3303788da4f-metrics-tls\") pod \"dns-operator-744455d44c-2xt9h\" (UID: \"96eca703-29ae-4ec1-a961-e3303788da4f\") " pod="openshift-dns-operator/dns-operator-744455d44c-2xt9h" Jan 26 12:37:52 crc kubenswrapper[4881]: I0126 12:37:52.978220 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 26 12:37:53 crc kubenswrapper[4881]: I0126 12:37:53.005091 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-ncfp9" Jan 26 12:37:53 crc kubenswrapper[4881]: I0126 12:37:53.014942 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rxhwr" Jan 26 12:37:53 crc kubenswrapper[4881]: I0126 12:37:53.017169 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 26 12:37:53 crc kubenswrapper[4881]: I0126 12:37:53.019685 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-s5qng" Jan 26 12:37:53 crc kubenswrapper[4881]: I0126 12:37:53.040183 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 26 12:37:53 crc kubenswrapper[4881]: I0126 12:37:53.058850 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 26 12:37:53 crc kubenswrapper[4881]: I0126 12:37:53.076131 4881 request.go:700] Waited for 1.009715971s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/secrets?fieldSelector=metadata.name%3Dinstallation-pull-secrets&limit=500&resourceVersion=0 Jan 26 12:37:53 crc kubenswrapper[4881]: I0126 12:37:53.078826 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 26 12:37:53 crc kubenswrapper[4881]: I0126 12:37:53.097693 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 26 12:37:53 crc kubenswrapper[4881]: I0126 12:37:53.118836 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 26 12:37:53 crc kubenswrapper[4881]: I0126 12:37:53.137940 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 26 12:37:53 crc kubenswrapper[4881]: I0126 12:37:53.144346 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9b2b7357-ae8b-474d-942e-56c296ace395-proxy-tls\") pod \"machine-config-controller-84d6567774-vw8v9\" (UID: \"9b2b7357-ae8b-474d-942e-56c296ace395\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vw8v9" Jan 26 12:37:53 crc kubenswrapper[4881]: I0126 12:37:53.159099 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 26 12:37:53 crc kubenswrapper[4881]: I0126 12:37:53.179903 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 26 12:37:53 crc kubenswrapper[4881]: I0126 12:37:53.197349 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 26 12:37:53 crc kubenswrapper[4881]: I0126 12:37:53.202060 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/012f52fd-cf18-4590-98de-2d52c5384600-etcd-client\") pod \"etcd-operator-b45778765-vnnxb\" (UID: \"012f52fd-cf18-4590-98de-2d52c5384600\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vnnxb" Jan 26 12:37:53 crc kubenswrapper[4881]: I0126 12:37:53.220247 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 26 12:37:53 crc kubenswrapper[4881]: I0126 12:37:53.238702 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 26 12:37:53 crc kubenswrapper[4881]: I0126 12:37:53.248810 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/012f52fd-cf18-4590-98de-2d52c5384600-serving-cert\") pod \"etcd-operator-b45778765-vnnxb\" (UID: \"012f52fd-cf18-4590-98de-2d52c5384600\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vnnxb" Jan 26 12:37:53 crc kubenswrapper[4881]: I0126 12:37:53.257533 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 26 12:37:53 crc kubenswrapper[4881]: I0126 12:37:53.260352 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/012f52fd-cf18-4590-98de-2d52c5384600-config\") pod \"etcd-operator-b45778765-vnnxb\" (UID: \"012f52fd-cf18-4590-98de-2d52c5384600\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vnnxb" Jan 26 12:37:53 crc kubenswrapper[4881]: I0126 12:37:53.263338 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-ncfp9"] Jan 26 12:37:53 crc kubenswrapper[4881]: I0126 12:37:53.278356 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 26 12:37:53 crc kubenswrapper[4881]: I0126 12:37:53.280428 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/012f52fd-cf18-4590-98de-2d52c5384600-etcd-ca\") pod \"etcd-operator-b45778765-vnnxb\" (UID: \"012f52fd-cf18-4590-98de-2d52c5384600\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vnnxb" Jan 26 12:37:53 crc kubenswrapper[4881]: W0126 12:37:53.281204 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod810e7137_f09f_4050_bb0d_b15c23c57ed0.slice/crio-6b7577ebd1770f16fb303ca492e97aeb2307c3ce11d333fa51d1a3f27f2a3604 WatchSource:0}: Error finding container 6b7577ebd1770f16fb303ca492e97aeb2307c3ce11d333fa51d1a3f27f2a3604: Status 404 returned error can't find the container with id 6b7577ebd1770f16fb303ca492e97aeb2307c3ce11d333fa51d1a3f27f2a3604 Jan 26 12:37:53 crc kubenswrapper[4881]: I0126 12:37:53.287038 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-s5qng"] Jan 26 12:37:53 crc kubenswrapper[4881]: I0126 12:37:53.299814 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 26 12:37:53 crc kubenswrapper[4881]: I0126 12:37:53.309276 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/012f52fd-cf18-4590-98de-2d52c5384600-etcd-service-ca\") pod \"etcd-operator-b45778765-vnnxb\" (UID: \"012f52fd-cf18-4590-98de-2d52c5384600\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vnnxb" Jan 26 12:37:53 crc kubenswrapper[4881]: E0126 12:37:53.317168 4881 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/machine-config-operator-images: failed to sync configmap cache: timed out waiting for the condition Jan 26 12:37:53 crc kubenswrapper[4881]: E0126 12:37:53.317291 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1cd7d43a-b82c-423b-ac88-ac99d9b753aa-images podName:1cd7d43a-b82c-423b-ac88-ac99d9b753aa nodeName:}" failed. No retries permitted until 2026-01-26 12:37:53.81725845 +0000 UTC m=+146.296568476 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/1cd7d43a-b82c-423b-ac88-ac99d9b753aa-images") pod "machine-config-operator-74547568cd-nqv96" (UID: "1cd7d43a-b82c-423b-ac88-ac99d9b753aa") : failed to sync configmap cache: timed out waiting for the condition Jan 26 12:37:53 crc kubenswrapper[4881]: E0126 12:37:53.317509 4881 secret.go:188] Couldn't get secret openshift-machine-config-operator/node-bootstrapper-token: failed to sync secret cache: timed out waiting for the condition Jan 26 12:37:53 crc kubenswrapper[4881]: E0126 12:37:53.317561 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3a9e8ad0-5902-4e1c-b000-86c3003673f4-node-bootstrap-token podName:3a9e8ad0-5902-4e1c-b000-86c3003673f4 nodeName:}" failed. No retries permitted until 2026-01-26 12:37:53.817553887 +0000 UTC m=+146.296863913 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-bootstrap-token" (UniqueName: "kubernetes.io/secret/3a9e8ad0-5902-4e1c-b000-86c3003673f4-node-bootstrap-token") pod "machine-config-server-dmhnk" (UID: "3a9e8ad0-5902-4e1c-b000-86c3003673f4") : failed to sync secret cache: timed out waiting for the condition Jan 26 12:37:53 crc kubenswrapper[4881]: E0126 12:37:53.317585 4881 secret.go:188] Couldn't get secret openshift-service-ca/signing-key: failed to sync secret cache: timed out waiting for the condition Jan 26 12:37:53 crc kubenswrapper[4881]: E0126 12:37:53.317636 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/517de9f8-a681-40a3-bd6b-8009ae963398-signing-key podName:517de9f8-a681-40a3-bd6b-8009ae963398 nodeName:}" failed. No retries permitted until 2026-01-26 12:37:53.817607158 +0000 UTC m=+146.296917184 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "signing-key" (UniqueName: "kubernetes.io/secret/517de9f8-a681-40a3-bd6b-8009ae963398-signing-key") pod "service-ca-9c57cc56f-k8g4d" (UID: "517de9f8-a681-40a3-bd6b-8009ae963398") : failed to sync secret cache: timed out waiting for the condition Jan 26 12:37:53 crc kubenswrapper[4881]: E0126 12:37:53.317925 4881 configmap.go:193] Couldn't get configMap openshift-kube-controller-manager-operator/kube-controller-manager-operator-config: failed to sync configmap cache: timed out waiting for the condition Jan 26 12:37:53 crc kubenswrapper[4881]: E0126 12:37:53.318019 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d7d9099c-13e2-4b7e-817e-9917ca9b28fb-config podName:d7d9099c-13e2-4b7e-817e-9917ca9b28fb nodeName:}" failed. No retries permitted until 2026-01-26 12:37:53.817992537 +0000 UTC m=+146.297302763 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/d7d9099c-13e2-4b7e-817e-9917ca9b28fb-config") pod "kube-controller-manager-operator-78b949d7b-27qqg" (UID: "d7d9099c-13e2-4b7e-817e-9917ca9b28fb") : failed to sync configmap cache: timed out waiting for the condition Jan 26 12:37:53 crc kubenswrapper[4881]: E0126 12:37:53.318055 4881 secret.go:188] Couldn't get secret openshift-ingress-canary/canary-serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 26 12:37:53 crc kubenswrapper[4881]: E0126 12:37:53.318099 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f659aacf-8ad1-4f5b-b6b6-ae9f0114d14f-cert podName:f659aacf-8ad1-4f5b-b6b6-ae9f0114d14f nodeName:}" failed. 
No retries permitted until 2026-01-26 12:37:53.818088899 +0000 UTC m=+146.297399295 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f659aacf-8ad1-4f5b-b6b6-ae9f0114d14f-cert") pod "ingress-canary-t87hc" (UID: "f659aacf-8ad1-4f5b-b6b6-ae9f0114d14f") : failed to sync secret cache: timed out waiting for the condition Jan 26 12:37:53 crc kubenswrapper[4881]: E0126 12:37:53.318128 4881 secret.go:188] Couldn't get secret openshift-multus/multus-admission-controller-secret: failed to sync secret cache: timed out waiting for the condition Jan 26 12:37:53 crc kubenswrapper[4881]: E0126 12:37:53.318160 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f94cc1c9-6219-48e2-8033-aecea365cacb-webhook-certs podName:f94cc1c9-6219-48e2-8033-aecea365cacb nodeName:}" failed. No retries permitted until 2026-01-26 12:37:53.818151431 +0000 UTC m=+146.297461517 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/f94cc1c9-6219-48e2-8033-aecea365cacb-webhook-certs") pod "multus-admission-controller-857f4d67dd-nlqwj" (UID: "f94cc1c9-6219-48e2-8033-aecea365cacb") : failed to sync secret cache: timed out waiting for the condition Jan 26 12:37:53 crc kubenswrapper[4881]: E0126 12:37:53.318188 4881 configmap.go:193] Couldn't get configMap openshift-service-ca/signing-cabundle: failed to sync configmap cache: timed out waiting for the condition Jan 26 12:37:53 crc kubenswrapper[4881]: E0126 12:37:53.318202 4881 secret.go:188] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: failed to sync secret cache: timed out waiting for the condition Jan 26 12:37:53 crc kubenswrapper[4881]: E0126 12:37:53.318230 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/517de9f8-a681-40a3-bd6b-8009ae963398-signing-cabundle podName:517de9f8-a681-40a3-bd6b-8009ae963398 nodeName:}" failed. No retries permitted until 2026-01-26 12:37:53.818221472 +0000 UTC m=+146.297531498 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "signing-cabundle" (UniqueName: "kubernetes.io/configmap/517de9f8-a681-40a3-bd6b-8009ae963398-signing-cabundle") pod "service-ca-9c57cc56f-k8g4d" (UID: "517de9f8-a681-40a3-bd6b-8009ae963398") : failed to sync configmap cache: timed out waiting for the condition Jan 26 12:37:53 crc kubenswrapper[4881]: E0126 12:37:53.318254 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1cd7d43a-b82c-423b-ac88-ac99d9b753aa-proxy-tls podName:1cd7d43a-b82c-423b-ac88-ac99d9b753aa nodeName:}" failed. No retries permitted until 2026-01-26 12:37:53.818243853 +0000 UTC m=+146.297554089 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/1cd7d43a-b82c-423b-ac88-ac99d9b753aa-proxy-tls") pod "machine-config-operator-74547568cd-nqv96" (UID: "1cd7d43a-b82c-423b-ac88-ac99d9b753aa") : failed to sync secret cache: timed out waiting for the condition Jan 26 12:37:53 crc kubenswrapper[4881]: I0126 12:37:53.318426 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 26 12:37:53 crc kubenswrapper[4881]: E0126 12:37:53.318714 4881 secret.go:188] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: failed to sync secret cache: timed out waiting for the condition Jan 26 12:37:53 crc kubenswrapper[4881]: E0126 12:37:53.318789 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7157505d-d18a-42a4-8037-96ad9a7825ce-marketplace-operator-metrics podName:7157505d-d18a-42a4-8037-96ad9a7825ce nodeName:}" failed. No retries permitted until 2026-01-26 12:37:53.818766976 +0000 UTC m=+146.298077212 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/7157505d-d18a-42a4-8037-96ad9a7825ce-marketplace-operator-metrics") pod "marketplace-operator-79b997595-drv9q" (UID: "7157505d-d18a-42a4-8037-96ad9a7825ce") : failed to sync secret cache: timed out waiting for the condition Jan 26 12:37:53 crc kubenswrapper[4881]: E0126 12:37:53.319531 4881 configmap.go:193] Couldn't get configMap openshift-marketplace/marketplace-trusted-ca: failed to sync configmap cache: timed out waiting for the condition Jan 26 12:37:53 crc kubenswrapper[4881]: E0126 12:37:53.319549 4881 secret.go:188] Couldn't get secret openshift-kube-controller-manager-operator/kube-controller-manager-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 26 12:37:53 crc kubenswrapper[4881]: E0126 12:37:53.319580 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7157505d-d18a-42a4-8037-96ad9a7825ce-marketplace-trusted-ca podName:7157505d-d18a-42a4-8037-96ad9a7825ce nodeName:}" failed. No retries permitted until 2026-01-26 12:37:53.819567154 +0000 UTC m=+146.298877370 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "marketplace-trusted-ca" (UniqueName: "kubernetes.io/configmap/7157505d-d18a-42a4-8037-96ad9a7825ce-marketplace-trusted-ca") pod "marketplace-operator-79b997595-drv9q" (UID: "7157505d-d18a-42a4-8037-96ad9a7825ce") : failed to sync configmap cache: timed out waiting for the condition Jan 26 12:37:53 crc kubenswrapper[4881]: E0126 12:37:53.319603 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d7d9099c-13e2-4b7e-817e-9917ca9b28fb-serving-cert podName:d7d9099c-13e2-4b7e-817e-9917ca9b28fb nodeName:}" failed. No retries permitted until 2026-01-26 12:37:53.819592935 +0000 UTC m=+146.298903181 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/d7d9099c-13e2-4b7e-817e-9917ca9b28fb-serving-cert") pod "kube-controller-manager-operator-78b949d7b-27qqg" (UID: "d7d9099c-13e2-4b7e-817e-9917ca9b28fb") : failed to sync secret cache: timed out waiting for the condition Jan 26 12:37:53 crc kubenswrapper[4881]: E0126 12:37:53.319607 4881 secret.go:188] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: failed to sync secret cache: timed out waiting for the condition Jan 26 12:37:53 crc kubenswrapper[4881]: E0126 12:37:53.319664 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d6b7645c-9920-4793-b6aa-9a6664cc93a0-control-plane-machine-set-operator-tls podName:d6b7645c-9920-4793-b6aa-9a6664cc93a0 nodeName:}" failed. No retries permitted until 2026-01-26 12:37:53.819634756 +0000 UTC m=+146.298945002 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/d6b7645c-9920-4793-b6aa-9a6664cc93a0-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-78cbb6b69f-p7bh2" (UID: "d6b7645c-9920-4793-b6aa-9a6664cc93a0") : failed to sync secret cache: timed out waiting for the condition Jan 26 12:37:53 crc kubenswrapper[4881]: E0126 12:37:53.322878 4881 secret.go:188] Couldn't get secret openshift-machine-config-operator/machine-config-server-tls: failed to sync secret cache: timed out waiting for the condition Jan 26 12:37:53 crc kubenswrapper[4881]: E0126 12:37:53.322958 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3a9e8ad0-5902-4e1c-b000-86c3003673f4-certs podName:3a9e8ad0-5902-4e1c-b000-86c3003673f4 nodeName:}" failed. No retries permitted until 2026-01-26 12:37:53.822936875 +0000 UTC m=+146.302246901 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "certs" (UniqueName: "kubernetes.io/secret/3a9e8ad0-5902-4e1c-b000-86c3003673f4-certs") pod "machine-config-server-dmhnk" (UID: "3a9e8ad0-5902-4e1c-b000-86c3003673f4") : failed to sync secret cache: timed out waiting for the condition
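
The nestedpendingoperations.go:348 entries above illustrate the kubelet's per-volume retry backoff: every failed MountVolume.SetUp is rescheduled after durationBeforeRetry, which starts at 500ms and doubles on repeated failure up to the 2m2s cap visible further down this log. A minimal Go sketch of that doubling backoff follows; the function and constant names are illustrative, and only the 500ms/2m2s values are taken from these entries (this is not kubelet's actual code):

    // Sketch: doubling per-operation backoff in the spirit of the
    // "No retries permitted until ... (durationBeforeRetry 500ms)" entries.
    // Names and structure are illustrative assumptions, not kubelet source.
    package main

    import (
    	"errors"
    	"fmt"
    	"time"
    )

    const (
    	initialBackoff = 500 * time.Millisecond        // first durationBeforeRetry in the log
    	maxBackoff     = 2*time.Minute + 2*time.Second // the 2m2s cap seen later in the log
    )

    func retryWithBackoff(op func() error) {
    	backoff := initialBackoff
    	for {
    		if err := op(); err == nil {
    			return
    		}
    		fmt.Printf("operation failed; no retries permitted for %v\n", backoff)
    		time.Sleep(backoff)
    		if backoff *= 2; backoff > maxBackoff {
    			backoff = maxBackoff
    		}
    	}
    }

    func main() {
    	attempts := 0
    	retryWithBackoff(func() error {
    		if attempts++; attempts < 4 { // succeed on the 4th try, like a cache that syncs late
    			return errors.New("failed to sync secret cache: timed out waiting for the condition")
    		}
    		return nil
    	})
    }

The point of the doubling is visible in this log: transient failures during startup retry quickly at 500ms, while a persistently failing operation (the CSI teardown below) is pushed out to the cap.
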
Jan 26 12:37:53 crc kubenswrapper[4881]: I0126 12:37:53.337979 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 26 12:37:53 crc kubenswrapper[4881]: I0126 12:37:53.359175 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 26 12:37:53 crc kubenswrapper[4881]: I0126 12:37:53.377908 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 26 12:37:53 crc kubenswrapper[4881]: I0126 12:37:53.398059 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 26 12:37:53 crc kubenswrapper[4881]: I0126 12:37:53.418190 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 26 12:37:53 crc kubenswrapper[4881]: I0126 12:37:53.437254 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 26 12:37:53 crc kubenswrapper[4881]: I0126 12:37:53.458723 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 26 12:37:53 crc kubenswrapper[4881]: I0126 12:37:53.477702 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 26 12:37:53 crc kubenswrapper[4881]: I0126 12:37:53.497287 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 26 12:37:53 crc kubenswrapper[4881]: I0126 12:37:53.517921 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 26 12:37:53 crc kubenswrapper[4881]: I0126 12:37:53.537885 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 26 12:37:53 crc kubenswrapper[4881]: I0126 12:37:53.557170 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 26 12:37:53 crc kubenswrapper[4881]: I0126 12:37:53.577372 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 26 12:37:53 crc kubenswrapper[4881]: I0126 12:37:53.596828 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 26 12:37:53 crc kubenswrapper[4881]: I0126 12:37:53.617905 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 26 12:37:53 crc kubenswrapper[4881]: I0126 12:37:53.638063 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 26 12:37:53 crc kubenswrapper[4881]: I0126 12:37:53.663834 4881 reflector.go:368] Caches
populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 26 12:37:53 crc kubenswrapper[4881]: I0126 12:37:53.677326 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 26 12:37:53 crc kubenswrapper[4881]: I0126 12:37:53.698735 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 26 12:37:53 crc kubenswrapper[4881]: I0126 12:37:53.718987 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 26 12:37:53 crc kubenswrapper[4881]: I0126 12:37:53.738747 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 26 12:37:53 crc kubenswrapper[4881]: I0126 12:37:53.757408 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 26 12:37:53 crc kubenswrapper[4881]: I0126 12:37:53.793904 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 26 12:37:53 crc kubenswrapper[4881]: I0126 12:37:53.798208 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 26 12:37:53 crc kubenswrapper[4881]: I0126 12:37:53.820112 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 26 12:37:53 crc kubenswrapper[4881]: I0126 12:37:53.832888 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rxhwr" event={"ID":"b7d1fd4b-10f6-40b5-8276-3a2f6a19105b","Type":"ContainerStarted","Data":"c67f57a978fdf027a68abd9f24b0d8f0cf721767bb85520a74aabd92a6efe9bf"} Jan 26 12:37:53 crc kubenswrapper[4881]: I0126 12:37:53.833157 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rxhwr" event={"ID":"b7d1fd4b-10f6-40b5-8276-3a2f6a19105b","Type":"ContainerStarted","Data":"cea18a9b76b7373adf611a506556e218c4e2a5481571f873b60d13add3cf71d5"} Jan 26 12:37:53 crc kubenswrapper[4881]: I0126 12:37:53.833174 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rxhwr" event={"ID":"b7d1fd4b-10f6-40b5-8276-3a2f6a19105b","Type":"ContainerStarted","Data":"ae732752c344cbcb4110ff321cdc60458583f50d68fe4665bee5308ebe5da56a"} Jan 26 12:37:53 crc kubenswrapper[4881]: I0126 12:37:53.835314 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-2n5pc" event={"ID":"1c3ab1d3-b6c8-46c7-8721-c8671d38ae03","Type":"ContainerStarted","Data":"989f3f4bea24cf537d45204be8e8bdcb423f41e6d001f7540f02626b8d6d67a0"} Jan 26 12:37:53 crc kubenswrapper[4881]: I0126 12:37:53.835371 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-2n5pc" event={"ID":"1c3ab1d3-b6c8-46c7-8721-c8671d38ae03","Type":"ContainerStarted","Data":"b60af51119e26f0fb69304c4b57501a8f204bb0f1ea7c9c36a933994a02555d3"} Jan 26 12:37:53 crc kubenswrapper[4881]: I0126 12:37:53.837298 4881 generic.go:334] "Generic (PLEG): container finished" podID="ef2ceed1-1060-4aff-a9f7-573f60a80771" containerID="e58e4a38a62c605716624c650924a2f6983f3a9e091179769fbcc2cde86b5914" exitCode=0 Jan 26 
12:37:53 crc kubenswrapper[4881]: I0126 12:37:53.837493 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 26 12:37:53 crc kubenswrapper[4881]: I0126 12:37:53.837724 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-b8l9s" event={"ID":"ef2ceed1-1060-4aff-a9f7-573f60a80771","Type":"ContainerDied","Data":"e58e4a38a62c605716624c650924a2f6983f3a9e091179769fbcc2cde86b5914"} Jan 26 12:37:53 crc kubenswrapper[4881]: I0126 12:37:53.840164 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bpgdv" event={"ID":"e888d179-f59c-46d4-8c3b-c44aa10a248c","Type":"ContainerStarted","Data":"acdb0fad78db525883974b1e829b92df45492f7d59c190f634eda97673dc9d85"} Jan 26 12:37:53 crc kubenswrapper[4881]: I0126 12:37:53.844812 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-s5qng" event={"ID":"22e750c9-8cfd-42bd-853e-c814eeb6d274","Type":"ContainerStarted","Data":"f1ba826687ec945dcb099a0a66b7cb07d5d2c5454d504e82f4cddf85ff0af32c"} Jan 26 12:37:53 crc kubenswrapper[4881]: I0126 12:37:53.844880 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-s5qng" event={"ID":"22e750c9-8cfd-42bd-853e-c814eeb6d274","Type":"ContainerStarted","Data":"63f6ef796f270871b9d2b05d6c4c94bf3e08a51f6c855b3a696e25753ef4bff4"} Jan 26 12:37:53 crc kubenswrapper[4881]: I0126 12:37:53.844898 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-s5qng" event={"ID":"22e750c9-8cfd-42bd-853e-c814eeb6d274","Type":"ContainerStarted","Data":"5a8b67c1a117eae75b81d9deea51d4a6ee5d30271ea9855904638028b33febe1"} Jan 26 12:37:53 crc kubenswrapper[4881]: I0126 12:37:53.847765 4881 generic.go:334] "Generic (PLEG): container finished" podID="0d993eee-dedd-4a12-bec6-8e63232b007d" containerID="a225b8ab4e5f2b9ff3a220570853045574dbd5d860bbcfcd8cbea4dddd0c9545" exitCode=0 Jan 26 12:37:53 crc kubenswrapper[4881]: I0126 12:37:53.847856 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wm95t" event={"ID":"0d993eee-dedd-4a12-bec6-8e63232b007d","Type":"ContainerDied","Data":"a225b8ab4e5f2b9ff3a220570853045574dbd5d860bbcfcd8cbea4dddd0c9545"} Jan 26 12:37:53 crc kubenswrapper[4881]: I0126 12:37:53.851303 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xvwh6" event={"ID":"6cbdefcc-18eb-4de2-a642-466fb488712f","Type":"ContainerStarted","Data":"0bd7b41902cef1ceb7762cc5b5047bafde653c4ebb721990f204dc39579d16a0"} Jan 26 12:37:53 crc kubenswrapper[4881]: I0126 12:37:53.851641 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xvwh6" Jan 26 12:37:53 crc kubenswrapper[4881]: I0126 12:37:53.854869 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/517de9f8-a681-40a3-bd6b-8009ae963398-signing-cabundle\") pod \"service-ca-9c57cc56f-k8g4d\" (UID: \"517de9f8-a681-40a3-bd6b-8009ae963398\") " pod="openshift-service-ca/service-ca-9c57cc56f-k8g4d" Jan 26 12:37:53 crc kubenswrapper[4881]: I0126 12:37:53.854931 4881 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7157505d-d18a-42a4-8037-96ad9a7825ce-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-drv9q\" (UID: \"7157505d-d18a-42a4-8037-96ad9a7825ce\") " pod="openshift-marketplace/marketplace-operator-79b997595-drv9q" Jan 26 12:37:53 crc kubenswrapper[4881]: I0126 12:37:53.854962 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/3a9e8ad0-5902-4e1c-b000-86c3003673f4-node-bootstrap-token\") pod \"machine-config-server-dmhnk\" (UID: \"3a9e8ad0-5902-4e1c-b000-86c3003673f4\") " pod="openshift-machine-config-operator/machine-config-server-dmhnk" Jan 26 12:37:53 crc kubenswrapper[4881]: I0126 12:37:53.855018 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f659aacf-8ad1-4f5b-b6b6-ae9f0114d14f-cert\") pod \"ingress-canary-t87hc\" (UID: \"f659aacf-8ad1-4f5b-b6b6-ae9f0114d14f\") " pod="openshift-ingress-canary/ingress-canary-t87hc" Jan 26 12:37:53 crc kubenswrapper[4881]: I0126 12:37:53.855046 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7d9099c-13e2-4b7e-817e-9917ca9b28fb-config\") pod \"kube-controller-manager-operator-78b949d7b-27qqg\" (UID: \"d7d9099c-13e2-4b7e-817e-9917ca9b28fb\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-27qqg" Jan 26 12:37:53 crc kubenswrapper[4881]: I0126 12:37:53.855110 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1cd7d43a-b82c-423b-ac88-ac99d9b753aa-proxy-tls\") pod \"machine-config-operator-74547568cd-nqv96\" (UID: \"1cd7d43a-b82c-423b-ac88-ac99d9b753aa\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nqv96" Jan 26 12:37:53 crc kubenswrapper[4881]: I0126 12:37:53.855130 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7d9099c-13e2-4b7e-817e-9917ca9b28fb-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-27qqg\" (UID: \"d7d9099c-13e2-4b7e-817e-9917ca9b28fb\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-27qqg" Jan 26 12:37:53 crc kubenswrapper[4881]: I0126 12:37:53.855153 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f94cc1c9-6219-48e2-8033-aecea365cacb-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-nlqwj\" (UID: \"f94cc1c9-6219-48e2-8033-aecea365cacb\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-nlqwj" Jan 26 12:37:53 crc kubenswrapper[4881]: I0126 12:37:53.855195 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7157505d-d18a-42a4-8037-96ad9a7825ce-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-drv9q\" (UID: \"7157505d-d18a-42a4-8037-96ad9a7825ce\") " pod="openshift-marketplace/marketplace-operator-79b997595-drv9q" Jan 26 12:37:53 crc kubenswrapper[4881]: I0126 12:37:53.855228 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/d6b7645c-9920-4793-b6aa-9a6664cc93a0-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-p7bh2\" (UID: \"d6b7645c-9920-4793-b6aa-9a6664cc93a0\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-p7bh2" Jan 26 12:37:53 crc kubenswrapper[4881]: I0126 12:37:53.855255 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/3a9e8ad0-5902-4e1c-b000-86c3003673f4-certs\") pod \"machine-config-server-dmhnk\" (UID: \"3a9e8ad0-5902-4e1c-b000-86c3003673f4\") " pod="openshift-machine-config-operator/machine-config-server-dmhnk" Jan 26 12:37:53 crc kubenswrapper[4881]: I0126 12:37:53.855338 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1cd7d43a-b82c-423b-ac88-ac99d9b753aa-images\") pod \"machine-config-operator-74547568cd-nqv96\" (UID: \"1cd7d43a-b82c-423b-ac88-ac99d9b753aa\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nqv96" Jan 26 12:37:53 crc kubenswrapper[4881]: I0126 12:37:53.855476 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/517de9f8-a681-40a3-bd6b-8009ae963398-signing-key\") pod \"service-ca-9c57cc56f-k8g4d\" (UID: \"517de9f8-a681-40a3-bd6b-8009ae963398\") " pod="openshift-service-ca/service-ca-9c57cc56f-k8g4d" Jan 26 12:37:53 crc kubenswrapper[4881]: I0126 12:37:53.857453 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7157505d-d18a-42a4-8037-96ad9a7825ce-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-drv9q\" (UID: \"7157505d-d18a-42a4-8037-96ad9a7825ce\") " pod="openshift-marketplace/marketplace-operator-79b997595-drv9q" Jan 26 12:37:53 crc kubenswrapper[4881]: I0126 12:37:53.857658 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-ncfp9" event={"ID":"810e7137-f09f-4050-bb0d-b15c23c57ed0","Type":"ContainerStarted","Data":"cc08cf80492430a8bba56fe74d858c50bd26db39ba8a171a6d582ef8dfbb2a88"} Jan 26 12:37:53 crc kubenswrapper[4881]: I0126 12:37:53.857734 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-ncfp9" event={"ID":"810e7137-f09f-4050-bb0d-b15c23c57ed0","Type":"ContainerStarted","Data":"6b7577ebd1770f16fb303ca492e97aeb2307c3ce11d333fa51d1a3f27f2a3604"} Jan 26 12:37:53 crc kubenswrapper[4881]: I0126 12:37:53.858335 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7d9099c-13e2-4b7e-817e-9917ca9b28fb-config\") pod \"kube-controller-manager-operator-78b949d7b-27qqg\" (UID: \"d7d9099c-13e2-4b7e-817e-9917ca9b28fb\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-27qqg" Jan 26 12:37:53 crc kubenswrapper[4881]: I0126 12:37:53.858456 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1cd7d43a-b82c-423b-ac88-ac99d9b753aa-images\") pod \"machine-config-operator-74547568cd-nqv96\" (UID: \"1cd7d43a-b82c-423b-ac88-ac99d9b753aa\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nqv96" Jan 26 12:37:53 crc kubenswrapper[4881]: I0126 12:37:53.858631 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-authentication/oauth-openshift-558db77b4-ncfp9" Jan 26 12:37:53 crc kubenswrapper[4881]: I0126 12:37:53.858738 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xvwh6" Jan 26 12:37:53 crc kubenswrapper[4881]: I0126 12:37:53.864537 4881 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-ncfp9 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.12:6443/healthz\": dial tcp 10.217.0.12:6443: connect: connection refused" start-of-body= Jan 26 12:37:53 crc kubenswrapper[4881]: I0126 12:37:53.864585 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-ncfp9" podUID="810e7137-f09f-4050-bb0d-b15c23c57ed0" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.12:6443/healthz\": dial tcp 10.217.0.12:6443: connect: connection refused" Jan 26 12:37:53 crc kubenswrapper[4881]: I0126 12:37:53.864683 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 26 12:37:53 crc kubenswrapper[4881]: I0126 12:37:53.865872 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1cd7d43a-b82c-423b-ac88-ac99d9b753aa-proxy-tls\") pod \"machine-config-operator-74547568cd-nqv96\" (UID: \"1cd7d43a-b82c-423b-ac88-ac99d9b753aa\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nqv96" Jan 26 12:37:53 crc kubenswrapper[4881]: I0126 12:37:53.865872 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7157505d-d18a-42a4-8037-96ad9a7825ce-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-drv9q\" (UID: \"7157505d-d18a-42a4-8037-96ad9a7825ce\") " pod="openshift-marketplace/marketplace-operator-79b997595-drv9q" Jan 26 12:37:53 crc kubenswrapper[4881]: I0126 12:37:53.868578 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-lnbpt" Jan 26 12:37:53 crc kubenswrapper[4881]: I0126 12:37:53.872150 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f94cc1c9-6219-48e2-8033-aecea365cacb-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-nlqwj\" (UID: \"f94cc1c9-6219-48e2-8033-aecea365cacb\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-nlqwj" Jan 26 12:37:53 crc kubenswrapper[4881]: I0126 12:37:53.881491 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7d9099c-13e2-4b7e-817e-9917ca9b28fb-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-27qqg\" (UID: \"d7d9099c-13e2-4b7e-817e-9917ca9b28fb\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-27qqg" Jan 26 12:37:53 crc kubenswrapper[4881]: I0126 12:37:53.883718 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 26 12:37:53 crc kubenswrapper[4881]: I0126 12:37:53.893971 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/f659aacf-8ad1-4f5b-b6b6-ae9f0114d14f-cert\") pod \"ingress-canary-t87hc\" (UID: \"f659aacf-8ad1-4f5b-b6b6-ae9f0114d14f\") " pod="openshift-ingress-canary/ingress-canary-t87hc" Jan 26 12:37:53 crc kubenswrapper[4881]: I0126 12:37:53.912354 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 26 12:37:53 crc kubenswrapper[4881]: I0126 12:37:53.925272 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 26 12:37:53 crc kubenswrapper[4881]: I0126 12:37:53.933367 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/3a9e8ad0-5902-4e1c-b000-86c3003673f4-certs\") pod \"machine-config-server-dmhnk\" (UID: \"3a9e8ad0-5902-4e1c-b000-86c3003673f4\") " pod="openshift-machine-config-operator/machine-config-server-dmhnk" Jan 26 12:37:53 crc kubenswrapper[4881]: I0126 12:37:53.938178 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 26 12:37:53 crc kubenswrapper[4881]: I0126 12:37:53.958668 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 26 12:37:53 crc kubenswrapper[4881]: I0126 12:37:53.982873 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 26 12:37:53 crc kubenswrapper[4881]: I0126 12:37:53.984450 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/3a9e8ad0-5902-4e1c-b000-86c3003673f4-node-bootstrap-token\") pod \"machine-config-server-dmhnk\" (UID: \"3a9e8ad0-5902-4e1c-b000-86c3003673f4\") " pod="openshift-machine-config-operator/machine-config-server-dmhnk" Jan 26 12:37:53 crc kubenswrapper[4881]: I0126 12:37:53.999609 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 26 12:37:54 crc kubenswrapper[4881]: I0126 12:37:54.018491 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 26 12:37:54 crc kubenswrapper[4881]: I0126 12:37:54.033276 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/517de9f8-a681-40a3-bd6b-8009ae963398-signing-key\") pod \"service-ca-9c57cc56f-k8g4d\" (UID: \"517de9f8-a681-40a3-bd6b-8009ae963398\") " pod="openshift-service-ca/service-ca-9c57cc56f-k8g4d" Jan 26 12:37:54 crc kubenswrapper[4881]: I0126 12:37:54.041764 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 26 12:37:54 crc kubenswrapper[4881]: I0126 12:37:54.047779 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/517de9f8-a681-40a3-bd6b-8009ae963398-signing-cabundle\") pod \"service-ca-9c57cc56f-k8g4d\" (UID: \"517de9f8-a681-40a3-bd6b-8009ae963398\") " pod="openshift-service-ca/service-ca-9c57cc56f-k8g4d" Jan 26 12:37:54 crc kubenswrapper[4881]: I0126 12:37:54.058797 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 26 12:37:54 crc kubenswrapper[4881]: I0126 12:37:54.059039 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:37:54 crc kubenswrapper[4881]: E0126 12:37:54.059178 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:39:56.059155257 +0000 UTC m=+268.538465283 (durationBeforeRetry 2m2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:37:54 crc kubenswrapper[4881]: I0126 12:37:54.059645 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 12:37:54 crc kubenswrapper[4881]: I0126 12:37:54.059765 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 12:37:54 crc kubenswrapper[4881]: I0126 12:37:54.060436 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:37:54 crc kubenswrapper[4881]: I0126 12:37:54.060732 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:37:54 crc kubenswrapper[4881]: I0126 12:37:54.061920 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:37:54 crc kubenswrapper[4881]: I0126 12:37:54.063195 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod 
\"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 12:37:54 crc kubenswrapper[4881]: I0126 12:37:54.063189 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 12:37:54 crc kubenswrapper[4881]: I0126 12:37:54.069022 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:37:54 crc kubenswrapper[4881]: I0126 12:37:54.077878 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 26 12:37:54 crc kubenswrapper[4881]: I0126 12:37:54.094183 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/d6b7645c-9920-4793-b6aa-9a6664cc93a0-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-p7bh2\" (UID: \"d6b7645c-9920-4793-b6aa-9a6664cc93a0\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-p7bh2" Jan 26 12:37:54 crc kubenswrapper[4881]: I0126 12:37:54.096740 4881 request.go:700] Waited for 1.987356631s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-api/secrets?fieldSelector=metadata.name%3Dcontrol-plane-machine-set-operator-dockercfg-k9rxt&limit=500&resourceVersion=0 Jan 26 12:37:54 crc kubenswrapper[4881]: I0126 12:37:54.100823 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 26 12:37:54 crc kubenswrapper[4881]: I0126 12:37:54.138413 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 26 12:37:54 crc kubenswrapper[4881]: I0126 12:37:54.159154 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 26 12:37:54 crc kubenswrapper[4881]: I0126 12:37:54.180905 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 26 12:37:54 crc kubenswrapper[4881]: I0126 12:37:54.199920 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 26 12:37:54 crc kubenswrapper[4881]: I0126 12:37:54.225075 4881 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 26 12:37:54 crc kubenswrapper[4881]: I0126 12:37:54.242068 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 26 12:37:54 crc kubenswrapper[4881]: I0126 12:37:54.259297 4881 reflector.go:368] Caches populated 
for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 26 12:37:54 crc kubenswrapper[4881]: I0126 12:37:54.290958 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/07671cb3-5957-4f3f-9189-4e3e05d7c090-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-fsrl8\" (UID: \"07671cb3-5957-4f3f-9189-4e3e05d7c090\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-fsrl8" Jan 26 12:37:54 crc kubenswrapper[4881]: I0126 12:37:54.308089 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:37:54 crc kubenswrapper[4881]: I0126 12:37:54.322981 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h676w\" (UniqueName: \"kubernetes.io/projected/1ea8ad00-fb4c-4f75-a1e6-1ea2bd64c8da-kube-api-access-h676w\") pod \"console-operator-58897d9998-2fsrg\" (UID: \"1ea8ad00-fb4c-4f75-a1e6-1ea2bd64c8da\") " pod="openshift-console-operator/console-operator-58897d9998-2fsrg" Jan 26 12:37:54 crc kubenswrapper[4881]: I0126 12:37:54.325689 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 12:37:54 crc kubenswrapper[4881]: I0126 12:37:54.338578 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 12:37:54 crc kubenswrapper[4881]: I0126 12:37:54.345453 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4ql6j\" (UniqueName: \"kubernetes.io/projected/993b16a8-4172-41dc-90fc-7d420d0a12f2-kube-api-access-4ql6j\") pod \"ingress-operator-5b745b69d9-9cpqm\" (UID: \"993b16a8-4172-41dc-90fc-7d420d0a12f2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9cpqm" Jan 26 12:37:54 crc kubenswrapper[4881]: I0126 12:37:54.357369 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqkq8\" (UniqueName: \"kubernetes.io/projected/0a192f0e-fe6b-434b-a63e-7a61fcd9ca2f-kube-api-access-nqkq8\") pod \"openshift-config-operator-7777fb866f-gtzx7\" (UID: \"0a192f0e-fe6b-434b-a63e-7a61fcd9ca2f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-gtzx7" Jan 26 12:37:54 crc kubenswrapper[4881]: I0126 12:37:54.372620 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c72452bc-3cf5-4c8a-a133-2789adbaa573-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-cfwgh\" (UID: \"c72452bc-3cf5-4c8a-a133-2789adbaa573\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cfwgh" Jan 26 12:37:54 crc kubenswrapper[4881]: I0126 12:37:54.397230 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xtx89\" (UniqueName: \"kubernetes.io/projected/29d30a67-fb25-4c95-8f8a-a77b27cd695f-kube-api-access-xtx89\") pod \"kube-storage-version-migrator-operator-b67b599dd-gmbhc\" (UID: \"29d30a67-fb25-4c95-8f8a-a77b27cd695f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gmbhc" Jan 26 12:37:54 crc kubenswrapper[4881]: I0126 12:37:54.411762 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-fsrl8" Jan 26 12:37:54 crc kubenswrapper[4881]: I0126 12:37:54.416944 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mnl6p\" (UniqueName: \"kubernetes.io/projected/7c68c037-053a-43f1-a2d6-0a4387610916-kube-api-access-mnl6p\") pod \"openshift-controller-manager-operator-756b6f6bc6-pv5pl\" (UID: \"7c68c037-053a-43f1-a2d6-0a4387610916\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pv5pl" Jan 26 12:37:54 crc kubenswrapper[4881]: I0126 12:37:54.418225 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gmbhc" Jan 26 12:37:54 crc kubenswrapper[4881]: I0126 12:37:54.436349 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9bdk\" (UniqueName: \"kubernetes.io/projected/30c09cc7-c747-494b-80ac-e4a780f65fb6-kube-api-access-r9bdk\") pod \"downloads-7954f5f757-68j9g\" (UID: \"30c09cc7-c747-494b-80ac-e4a780f65fb6\") " pod="openshift-console/downloads-7954f5f757-68j9g" Jan 26 12:37:54 crc kubenswrapper[4881]: I0126 12:37:54.468981 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/993b16a8-4172-41dc-90fc-7d420d0a12f2-bound-sa-token\") pod \"ingress-operator-5b745b69d9-9cpqm\" (UID: \"993b16a8-4172-41dc-90fc-7d420d0a12f2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9cpqm" Jan 26 12:37:54 crc kubenswrapper[4881]: I0126 12:37:54.490178 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gtzx7" Jan 26 12:37:54 crc kubenswrapper[4881]: I0126 12:37:54.495331 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m47fz\" (UniqueName: \"kubernetes.io/projected/21710e70-1118-4472-91e1-2c7c66e9fe75-kube-api-access-m47fz\") pod \"dns-default-b5l77\" (UID: \"21710e70-1118-4472-91e1-2c7c66e9fe75\") " pod="openshift-dns/dns-default-b5l77" Jan 26 12:37:54 crc kubenswrapper[4881]: I0126 12:37:54.501254 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxg2z\" (UniqueName: \"kubernetes.io/projected/c72452bc-3cf5-4c8a-a133-2789adbaa573-kube-api-access-vxg2z\") pod \"cluster-image-registry-operator-dc59b4c8b-cfwgh\" (UID: \"c72452bc-3cf5-4c8a-a133-2789adbaa573\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cfwgh" Jan 26 12:37:54 crc kubenswrapper[4881]: I0126 12:37:54.515872 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-68j9g" Jan 26 12:37:54 crc kubenswrapper[4881]: I0126 12:37:54.523883 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bc3ce0ab-d53f-4504-b1be-09cd3629c5ae-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-mgcg9\" (UID: \"bc3ce0ab-d53f-4504-b1be-09cd3629c5ae\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mgcg9" Jan 26 12:37:54 crc kubenswrapper[4881]: I0126 12:37:54.524230 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pv5pl" Jan 26 12:37:54 crc kubenswrapper[4881]: I0126 12:37:54.569528 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jnw54\" (UniqueName: \"kubernetes.io/projected/96eca703-29ae-4ec1-a961-e3303788da4f-kube-api-access-jnw54\") pod \"dns-operator-744455d44c-2xt9h\" (UID: \"96eca703-29ae-4ec1-a961-e3303788da4f\") " pod="openshift-dns-operator/dns-operator-744455d44c-2xt9h" Jan 26 12:37:54 crc kubenswrapper[4881]: I0126 12:37:54.583933 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dkhmn\" (UniqueName: \"kubernetes.io/projected/bdc32e8d-e1c8-4415-9a39-6ddbc4cbca4c-kube-api-access-dkhmn\") pod \"router-default-5444994796-c4vbw\" (UID: \"bdc32e8d-e1c8-4415-9a39-6ddbc4cbca4c\") " pod="openshift-ingress/router-default-5444994796-c4vbw" Jan 26 12:37:54 crc kubenswrapper[4881]: I0126 12:37:54.584208 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-2fsrg" Jan 26 12:37:54 crc kubenswrapper[4881]: I0126 12:37:54.599748 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xl7f\" (UniqueName: \"kubernetes.io/projected/de07a342-44f0-45cc-a461-5fd5a70e34d9-kube-api-access-9xl7f\") pod \"console-f9d7485db-cwb4s\" (UID: \"de07a342-44f0-45cc-a461-5fd5a70e34d9\") " pod="openshift-console/console-f9d7485db-cwb4s" Jan 26 12:37:54 crc kubenswrapper[4881]: I0126 12:37:54.609905 4881 csr.go:261] certificate signing request csr-hlwwp is approved, waiting to be issued Jan 26 12:37:54 crc kubenswrapper[4881]: I0126 12:37:54.623917 4881 csr.go:257] certificate signing request csr-hlwwp is issued Jan 26 12:37:54 crc kubenswrapper[4881]: I0126 12:37:54.624560 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sz2dn\" (UniqueName: \"kubernetes.io/projected/7157505d-d18a-42a4-8037-96ad9a7825ce-kube-api-access-sz2dn\") pod \"marketplace-operator-79b997595-drv9q\" (UID: \"7157505d-d18a-42a4-8037-96ad9a7825ce\") " pod="openshift-marketplace/marketplace-operator-79b997595-drv9q" Jan 26 12:37:54 crc kubenswrapper[4881]: I0126 12:37:54.628927 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqfwc\" (UniqueName: \"kubernetes.io/projected/012f52fd-cf18-4590-98de-2d52c5384600-kube-api-access-hqfwc\") pod \"etcd-operator-b45778765-vnnxb\" (UID: \"012f52fd-cf18-4590-98de-2d52c5384600\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vnnxb" Jan 26 12:37:54 crc kubenswrapper[4881]: I0126 12:37:54.655125 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mgcg9" Jan 26 12:37:54 crc kubenswrapper[4881]: I0126 12:37:54.659945 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvk4k\" (UniqueName: \"kubernetes.io/projected/517de9f8-a681-40a3-bd6b-8009ae963398-kube-api-access-lvk4k\") pod \"service-ca-9c57cc56f-k8g4d\" (UID: \"517de9f8-a681-40a3-bd6b-8009ae963398\") " pod="openshift-service-ca/service-ca-9c57cc56f-k8g4d" Jan 26 12:37:54 crc kubenswrapper[4881]: I0126 12:37:54.676641 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfmld\" (UniqueName: \"kubernetes.io/projected/1cd7d43a-b82c-423b-ac88-ac99d9b753aa-kube-api-access-dfmld\") pod \"machine-config-operator-74547568cd-nqv96\" (UID: \"1cd7d43a-b82c-423b-ac88-ac99d9b753aa\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nqv96" Jan 26 12:37:54 crc kubenswrapper[4881]: I0126 12:37:54.682134 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-b5l77" Jan 26 12:37:54 crc kubenswrapper[4881]: I0126 12:37:54.683710 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-c4vbw" Jan 26 12:37:54 crc kubenswrapper[4881]: I0126 12:37:54.692891 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-drv9q" Jan 26 12:37:54 crc kubenswrapper[4881]: I0126 12:37:54.697479 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9cpqm" Jan 26 12:37:54 crc kubenswrapper[4881]: I0126 12:37:54.698679 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qj5bl\" (UniqueName: \"kubernetes.io/projected/d6b7645c-9920-4793-b6aa-9a6664cc93a0-kube-api-access-qj5bl\") pod \"control-plane-machine-set-operator-78cbb6b69f-p7bh2\" (UID: \"d6b7645c-9920-4793-b6aa-9a6664cc93a0\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-p7bh2" Jan 26 12:37:54 crc kubenswrapper[4881]: I0126 12:37:54.714376 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kfx64\" (UniqueName: \"kubernetes.io/projected/3a9e8ad0-5902-4e1c-b000-86c3003673f4-kube-api-access-kfx64\") pod \"machine-config-server-dmhnk\" (UID: \"3a9e8ad0-5902-4e1c-b000-86c3003673f4\") " pod="openshift-machine-config-operator/machine-config-server-dmhnk" Jan 26 12:37:54 crc kubenswrapper[4881]: I0126 12:37:54.722783 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-dmhnk" Jan 26 12:37:54 crc kubenswrapper[4881]: I0126 12:37:54.726562 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cfwgh" Jan 26 12:37:54 crc kubenswrapper[4881]: I0126 12:37:54.726988 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-2xt9h" Jan 26 12:37:54 crc kubenswrapper[4881]: I0126 12:37:54.732917 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hz8bj\" (UniqueName: \"kubernetes.io/projected/f94cc1c9-6219-48e2-8033-aecea365cacb-kube-api-access-hz8bj\") pod \"multus-admission-controller-857f4d67dd-nlqwj\" (UID: \"f94cc1c9-6219-48e2-8033-aecea365cacb\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-nlqwj" Jan 26 12:37:54 crc kubenswrapper[4881]: I0126 12:37:54.733164 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-k8g4d" Jan 26 12:37:54 crc kubenswrapper[4881]: I0126 12:37:54.738357 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4b679\" (UniqueName: \"kubernetes.io/projected/f659aacf-8ad1-4f5b-b6b6-ae9f0114d14f-kube-api-access-4b679\") pod \"ingress-canary-t87hc\" (UID: \"f659aacf-8ad1-4f5b-b6b6-ae9f0114d14f\") " pod="openshift-ingress-canary/ingress-canary-t87hc" Jan 26 12:37:54 crc kubenswrapper[4881]: I0126 12:37:54.738496 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-p7bh2" Jan 26 12:37:54 crc kubenswrapper[4881]: I0126 12:37:54.775231 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9knfl\" (UniqueName: \"kubernetes.io/projected/ec9077c6-4f92-4925-8efa-8f6351967ae7-kube-api-access-9knfl\") pod \"migrator-59844c95c7-j7b4j\" (UID: \"ec9077c6-4f92-4925-8efa-8f6351967ae7\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-j7b4j" Jan 26 12:37:54 crc kubenswrapper[4881]: I0126 12:37:54.780987 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7d9099c-13e2-4b7e-817e-9917ca9b28fb-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-27qqg\" (UID: \"d7d9099c-13e2-4b7e-817e-9917ca9b28fb\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-27qqg" Jan 26 12:37:54 crc kubenswrapper[4881]: I0126 12:37:54.791866 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fwlbz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 12:37:54 crc kubenswrapper[4881]: I0126 12:37:54.791910 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 12:37:54 crc kubenswrapper[4881]: I0126 12:37:54.806694 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xtmx8\" (UniqueName: \"kubernetes.io/projected/9b2b7357-ae8b-474d-942e-56c296ace395-kube-api-access-xtmx8\") pod \"machine-config-controller-84d6567774-vw8v9\" (UID: \"9b2b7357-ae8b-474d-942e-56c296ace395\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vw8v9" Jan 26 12:37:54 crc kubenswrapper[4881]: I0126 12:37:54.852566 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-cwb4s" Jan 26 12:37:54 crc kubenswrapper[4881]: I0126 12:37:54.876102 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-fsrl8"] Jan 26 12:37:54 crc kubenswrapper[4881]: I0126 12:37:54.900501 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5tsq\" (UniqueName: \"kubernetes.io/projected/72d21973-62df-4217-880d-04c600804b8d-kube-api-access-s5tsq\") pod \"package-server-manager-789f6589d5-dk6pz\" (UID: \"72d21973-62df-4217-880d-04c600804b8d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dk6pz" Jan 26 12:37:54 crc kubenswrapper[4881]: I0126 12:37:54.901676 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvhvk\" (UniqueName: \"kubernetes.io/projected/9df00c6a-36e4-454c-9bd0-bf7be360fedf-kube-api-access-nvhvk\") pod \"catalog-operator-68c6474976-xbdxs\" (UID: \"9df00c6a-36e4-454c-9bd0-bf7be360fedf\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xbdxs" Jan 26 12:37:54 crc kubenswrapper[4881]: I0126 12:37:54.901723 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b92eec64-c286-4244-9e62-a5cd7ab680ae-registry-certificates\") pod \"image-registry-697d97f7c8-dr6gf\" (UID: \"b92eec64-c286-4244-9e62-a5cd7ab680ae\") " pod="openshift-image-registry/image-registry-697d97f7c8-dr6gf" Jan 26 12:37:54 crc kubenswrapper[4881]: I0126 12:37:54.901752 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7742150d-7cf8-487e-a375-a39ce7caa256-srv-cert\") pod \"olm-operator-6b444d44fb-2vlbm\" (UID: \"7742150d-7cf8-487e-a375-a39ce7caa256\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2vlbm" Jan 26 12:37:54 crc kubenswrapper[4881]: I0126 12:37:54.901781 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40d1e9ff-e409-4d1c-a6ba-a795e9926379-config\") pod \"service-ca-operator-777779d784-tqtg4\" (UID: \"40d1e9ff-e409-4d1c-a6ba-a795e9926379\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-tqtg4" Jan 26 12:37:54 crc kubenswrapper[4881]: I0126 12:37:54.901826 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/72d21973-62df-4217-880d-04c600804b8d-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-dk6pz\" (UID: \"72d21973-62df-4217-880d-04c600804b8d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dk6pz" Jan 26 12:37:54 crc kubenswrapper[4881]: I0126 12:37:54.901843 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/9df00c6a-36e4-454c-9bd0-bf7be360fedf-srv-cert\") pod \"catalog-operator-68c6474976-xbdxs\" (UID: \"9df00c6a-36e4-454c-9bd0-bf7be360fedf\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xbdxs" Jan 26 12:37:54 crc kubenswrapper[4881]: I0126 12:37:54.901876 4881 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b92eec64-c286-4244-9e62-a5cd7ab680ae-installation-pull-secrets\") pod \"image-registry-697d97f7c8-dr6gf\" (UID: \"b92eec64-c286-4244-9e62-a5cd7ab680ae\") " pod="openshift-image-registry/image-registry-697d97f7c8-dr6gf" Jan 26 12:37:54 crc kubenswrapper[4881]: I0126 12:37:54.901895 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/9df00c6a-36e4-454c-9bd0-bf7be360fedf-profile-collector-cert\") pod \"catalog-operator-68c6474976-xbdxs\" (UID: \"9df00c6a-36e4-454c-9bd0-bf7be360fedf\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xbdxs" Jan 26 12:37:54 crc kubenswrapper[4881]: I0126 12:37:54.901931 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7742150d-7cf8-487e-a375-a39ce7caa256-profile-collector-cert\") pod \"olm-operator-6b444d44fb-2vlbm\" (UID: \"7742150d-7cf8-487e-a375-a39ce7caa256\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2vlbm" Jan 26 12:37:54 crc kubenswrapper[4881]: I0126 12:37:54.901973 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b92eec64-c286-4244-9e62-a5cd7ab680ae-bound-sa-token\") pod \"image-registry-697d97f7c8-dr6gf\" (UID: \"b92eec64-c286-4244-9e62-a5cd7ab680ae\") " pod="openshift-image-registry/image-registry-697d97f7c8-dr6gf" Jan 26 12:37:54 crc kubenswrapper[4881]: I0126 12:37:54.901996 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phz6c\" (UniqueName: \"kubernetes.io/projected/7742150d-7cf8-487e-a375-a39ce7caa256-kube-api-access-phz6c\") pod \"olm-operator-6b444d44fb-2vlbm\" (UID: \"7742150d-7cf8-487e-a375-a39ce7caa256\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2vlbm" Jan 26 12:37:54 crc kubenswrapper[4881]: I0126 12:37:54.902040 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dr6gf\" (UID: \"b92eec64-c286-4244-9e62-a5cd7ab680ae\") " pod="openshift-image-registry/image-registry-697d97f7c8-dr6gf" Jan 26 12:37:54 crc kubenswrapper[4881]: I0126 12:37:54.902073 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2s5xb\" (UniqueName: \"kubernetes.io/projected/40d1e9ff-e409-4d1c-a6ba-a795e9926379-kube-api-access-2s5xb\") pod \"service-ca-operator-777779d784-tqtg4\" (UID: \"40d1e9ff-e409-4d1c-a6ba-a795e9926379\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-tqtg4" Jan 26 12:37:54 crc kubenswrapper[4881]: I0126 12:37:54.902115 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b92eec64-c286-4244-9e62-a5cd7ab680ae-registry-tls\") pod \"image-registry-697d97f7c8-dr6gf\" (UID: \"b92eec64-c286-4244-9e62-a5cd7ab680ae\") " pod="openshift-image-registry/image-registry-697d97f7c8-dr6gf" Jan 26 12:37:54 crc kubenswrapper[4881]: I0126 12:37:54.902131 4881 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9nzc\" (UniqueName: \"kubernetes.io/projected/b92eec64-c286-4244-9e62-a5cd7ab680ae-kube-api-access-w9nzc\") pod \"image-registry-697d97f7c8-dr6gf\" (UID: \"b92eec64-c286-4244-9e62-a5cd7ab680ae\") " pod="openshift-image-registry/image-registry-697d97f7c8-dr6gf" Jan 26 12:37:54 crc kubenswrapper[4881]: I0126 12:37:54.902156 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/40d1e9ff-e409-4d1c-a6ba-a795e9926379-serving-cert\") pod \"service-ca-operator-777779d784-tqtg4\" (UID: \"40d1e9ff-e409-4d1c-a6ba-a795e9926379\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-tqtg4" Jan 26 12:37:54 crc kubenswrapper[4881]: I0126 12:37:54.902172 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b92eec64-c286-4244-9e62-a5cd7ab680ae-ca-trust-extracted\") pod \"image-registry-697d97f7c8-dr6gf\" (UID: \"b92eec64-c286-4244-9e62-a5cd7ab680ae\") " pod="openshift-image-registry/image-registry-697d97f7c8-dr6gf" Jan 26 12:37:54 crc kubenswrapper[4881]: I0126 12:37:54.902186 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b92eec64-c286-4244-9e62-a5cd7ab680ae-trusted-ca\") pod \"image-registry-697d97f7c8-dr6gf\" (UID: \"b92eec64-c286-4244-9e62-a5cd7ab680ae\") " pod="openshift-image-registry/image-registry-697d97f7c8-dr6gf" Jan 26 12:37:54 crc kubenswrapper[4881]: E0126 12:37:54.905000 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:37:55.404983636 +0000 UTC m=+147.884293662 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dr6gf" (UID: "b92eec64-c286-4244-9e62-a5cd7ab680ae") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:37:54 crc kubenswrapper[4881]: I0126 12:37:54.920157 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"f400efc15084b32952b77891bd2e538f460c2ee7a9a208b34544e34b50bea482"} Jan 26 12:37:54 crc kubenswrapper[4881]: I0126 12:37:54.922855 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-vnnxb" Jan 26 12:37:54 crc kubenswrapper[4881]: I0126 12:37:54.924905 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"675da9ba2144bfb64e80859064fd9094274a1fe74a4bbd7b3fed41d96c86bd4e"} Jan 26 12:37:54 crc kubenswrapper[4881]: I0126 12:37:54.942806 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wm95t" event={"ID":"0d993eee-dedd-4a12-bec6-8e63232b007d","Type":"ContainerStarted","Data":"5c16ba644d3a022b741331548b559d9d3bfe4bb14ca7383280c54df10e683dc4"} Jan 26 12:37:54 crc kubenswrapper[4881]: I0126 12:37:54.958716 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-27qqg" Jan 26 12:37:54 crc kubenswrapper[4881]: I0126 12:37:54.961745 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-b8l9s" event={"ID":"ef2ceed1-1060-4aff-a9f7-573f60a80771","Type":"ContainerStarted","Data":"7cfb0880530efe55981fdbcc811702b8dd72d0c5e340c0a70155f217902a1feb"} Jan 26 12:37:54 crc kubenswrapper[4881]: I0126 12:37:54.961820 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-b8l9s" event={"ID":"ef2ceed1-1060-4aff-a9f7-573f60a80771","Type":"ContainerStarted","Data":"7189ded44adda30b1d02378fa92798f04d16e8d173067ff9387c6752960b737c"} Jan 26 12:37:54 crc kubenswrapper[4881]: I0126 12:37:54.969736 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-nlqwj" Jan 26 12:37:54 crc kubenswrapper[4881]: I0126 12:37:54.975003 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nqv96" Jan 26 12:37:55 crc kubenswrapper[4881]: I0126 12:37:55.006048 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:37:55 crc kubenswrapper[4881]: I0126 12:37:55.006347 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b92eec64-c286-4244-9e62-a5cd7ab680ae-bound-sa-token\") pod \"image-registry-697d97f7c8-dr6gf\" (UID: \"b92eec64-c286-4244-9e62-a5cd7ab680ae\") " pod="openshift-image-registry/image-registry-697d97f7c8-dr6gf" Jan 26 12:37:55 crc kubenswrapper[4881]: I0126 12:37:55.006370 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-phz6c\" (UniqueName: \"kubernetes.io/projected/7742150d-7cf8-487e-a375-a39ce7caa256-kube-api-access-phz6c\") pod \"olm-operator-6b444d44fb-2vlbm\" (UID: \"7742150d-7cf8-487e-a375-a39ce7caa256\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2vlbm" Jan 26 12:37:55 crc kubenswrapper[4881]: I0126 12:37:55.006509 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/ed9c86f6-30f0-43fc-87fb-d3497d8a8357-socket-dir\") pod \"csi-hostpathplugin-972wv\" (UID: \"ed9c86f6-30f0-43fc-87fb-d3497d8a8357\") " pod="hostpath-provisioner/csi-hostpathplugin-972wv" Jan 26 12:37:55 crc kubenswrapper[4881]: I0126 12:37:55.006573 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2s5xb\" (UniqueName: \"kubernetes.io/projected/40d1e9ff-e409-4d1c-a6ba-a795e9926379-kube-api-access-2s5xb\") pod \"service-ca-operator-777779d784-tqtg4\" (UID: \"40d1e9ff-e409-4d1c-a6ba-a795e9926379\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-tqtg4" Jan 26 12:37:55 crc kubenswrapper[4881]: I0126 12:37:55.006618 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3bf4018e-9078-4293-8f8e-f6ab7567943a-config-volume\") pod \"collect-profiles-29490510-6h9v5\" (UID: \"3bf4018e-9078-4293-8f8e-f6ab7567943a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490510-6h9v5" Jan 26 12:37:55 crc kubenswrapper[4881]: I0126 12:37:55.006674 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/62f5dd6c-c6a2-4e70-ba5b-727c20098526-apiservice-cert\") pod \"packageserver-d55dfcdfc-dklx7\" (UID: \"62f5dd6c-c6a2-4e70-ba5b-727c20098526\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dklx7" Jan 26 12:37:55 crc kubenswrapper[4881]: I0126 12:37:55.006750 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b92eec64-c286-4244-9e62-a5cd7ab680ae-registry-tls\") pod \"image-registry-697d97f7c8-dr6gf\" (UID: \"b92eec64-c286-4244-9e62-a5cd7ab680ae\") " pod="openshift-image-registry/image-registry-697d97f7c8-dr6gf" Jan 26 12:37:55 crc kubenswrapper[4881]: I0126 12:37:55.006766 4881 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9nzc\" (UniqueName: \"kubernetes.io/projected/b92eec64-c286-4244-9e62-a5cd7ab680ae-kube-api-access-w9nzc\") pod \"image-registry-697d97f7c8-dr6gf\" (UID: \"b92eec64-c286-4244-9e62-a5cd7ab680ae\") " pod="openshift-image-registry/image-registry-697d97f7c8-dr6gf" Jan 26 12:37:55 crc kubenswrapper[4881]: I0126 12:37:55.006782 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/62f5dd6c-c6a2-4e70-ba5b-727c20098526-webhook-cert\") pod \"packageserver-d55dfcdfc-dklx7\" (UID: \"62f5dd6c-c6a2-4e70-ba5b-727c20098526\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dklx7" Jan 26 12:37:55 crc kubenswrapper[4881]: I0126 12:37:55.006832 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4m5l7\" (UniqueName: \"kubernetes.io/projected/62f5dd6c-c6a2-4e70-ba5b-727c20098526-kube-api-access-4m5l7\") pod \"packageserver-d55dfcdfc-dklx7\" (UID: \"62f5dd6c-c6a2-4e70-ba5b-727c20098526\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dklx7" Jan 26 12:37:55 crc kubenswrapper[4881]: I0126 12:37:55.006890 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/40d1e9ff-e409-4d1c-a6ba-a795e9926379-serving-cert\") pod \"service-ca-operator-777779d784-tqtg4\" (UID: \"40d1e9ff-e409-4d1c-a6ba-a795e9926379\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-tqtg4" Jan 26 12:37:55 crc kubenswrapper[4881]: I0126 12:37:55.006911 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jj8mc\" (UniqueName: \"kubernetes.io/projected/ed9c86f6-30f0-43fc-87fb-d3497d8a8357-kube-api-access-jj8mc\") pod \"csi-hostpathplugin-972wv\" (UID: \"ed9c86f6-30f0-43fc-87fb-d3497d8a8357\") " pod="hostpath-provisioner/csi-hostpathplugin-972wv" Jan 26 12:37:55 crc kubenswrapper[4881]: I0126 12:37:55.006940 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b92eec64-c286-4244-9e62-a5cd7ab680ae-ca-trust-extracted\") pod \"image-registry-697d97f7c8-dr6gf\" (UID: \"b92eec64-c286-4244-9e62-a5cd7ab680ae\") " pod="openshift-image-registry/image-registry-697d97f7c8-dr6gf" Jan 26 12:37:55 crc kubenswrapper[4881]: I0126 12:37:55.006976 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b92eec64-c286-4244-9e62-a5cd7ab680ae-trusted-ca\") pod \"image-registry-697d97f7c8-dr6gf\" (UID: \"b92eec64-c286-4244-9e62-a5cd7ab680ae\") " pod="openshift-image-registry/image-registry-697d97f7c8-dr6gf" Jan 26 12:37:55 crc kubenswrapper[4881]: I0126 12:37:55.007046 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5tsq\" (UniqueName: \"kubernetes.io/projected/72d21973-62df-4217-880d-04c600804b8d-kube-api-access-s5tsq\") pod \"package-server-manager-789f6589d5-dk6pz\" (UID: \"72d21973-62df-4217-880d-04c600804b8d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dk6pz" Jan 26 12:37:55 crc kubenswrapper[4881]: I0126 12:37:55.007132 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nvhvk\" (UniqueName: 
\"kubernetes.io/projected/9df00c6a-36e4-454c-9bd0-bf7be360fedf-kube-api-access-nvhvk\") pod \"catalog-operator-68c6474976-xbdxs\" (UID: \"9df00c6a-36e4-454c-9bd0-bf7be360fedf\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xbdxs" Jan 26 12:37:55 crc kubenswrapper[4881]: I0126 12:37:55.007175 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/ed9c86f6-30f0-43fc-87fb-d3497d8a8357-mountpoint-dir\") pod \"csi-hostpathplugin-972wv\" (UID: \"ed9c86f6-30f0-43fc-87fb-d3497d8a8357\") " pod="hostpath-provisioner/csi-hostpathplugin-972wv" Jan 26 12:37:55 crc kubenswrapper[4881]: I0126 12:37:55.007243 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zc866\" (UniqueName: \"kubernetes.io/projected/3bf4018e-9078-4293-8f8e-f6ab7567943a-kube-api-access-zc866\") pod \"collect-profiles-29490510-6h9v5\" (UID: \"3bf4018e-9078-4293-8f8e-f6ab7567943a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490510-6h9v5" Jan 26 12:37:55 crc kubenswrapper[4881]: I0126 12:37:55.007278 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b92eec64-c286-4244-9e62-a5cd7ab680ae-registry-certificates\") pod \"image-registry-697d97f7c8-dr6gf\" (UID: \"b92eec64-c286-4244-9e62-a5cd7ab680ae\") " pod="openshift-image-registry/image-registry-697d97f7c8-dr6gf" Jan 26 12:37:55 crc kubenswrapper[4881]: I0126 12:37:55.007301 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/ed9c86f6-30f0-43fc-87fb-d3497d8a8357-plugins-dir\") pod \"csi-hostpathplugin-972wv\" (UID: \"ed9c86f6-30f0-43fc-87fb-d3497d8a8357\") " pod="hostpath-provisioner/csi-hostpathplugin-972wv" Jan 26 12:37:55 crc kubenswrapper[4881]: I0126 12:37:55.007355 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/ed9c86f6-30f0-43fc-87fb-d3497d8a8357-registration-dir\") pod \"csi-hostpathplugin-972wv\" (UID: \"ed9c86f6-30f0-43fc-87fb-d3497d8a8357\") " pod="hostpath-provisioner/csi-hostpathplugin-972wv" Jan 26 12:37:55 crc kubenswrapper[4881]: I0126 12:37:55.007381 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7742150d-7cf8-487e-a375-a39ce7caa256-srv-cert\") pod \"olm-operator-6b444d44fb-2vlbm\" (UID: \"7742150d-7cf8-487e-a375-a39ce7caa256\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2vlbm" Jan 26 12:37:55 crc kubenswrapper[4881]: I0126 12:37:55.007432 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40d1e9ff-e409-4d1c-a6ba-a795e9926379-config\") pod \"service-ca-operator-777779d784-tqtg4\" (UID: \"40d1e9ff-e409-4d1c-a6ba-a795e9926379\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-tqtg4" Jan 26 12:37:55 crc kubenswrapper[4881]: I0126 12:37:55.007504 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/62f5dd6c-c6a2-4e70-ba5b-727c20098526-tmpfs\") pod \"packageserver-d55dfcdfc-dklx7\" (UID: \"62f5dd6c-c6a2-4e70-ba5b-727c20098526\") " 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dklx7" Jan 26 12:37:55 crc kubenswrapper[4881]: I0126 12:37:55.007557 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3bf4018e-9078-4293-8f8e-f6ab7567943a-secret-volume\") pod \"collect-profiles-29490510-6h9v5\" (UID: \"3bf4018e-9078-4293-8f8e-f6ab7567943a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490510-6h9v5" Jan 26 12:37:55 crc kubenswrapper[4881]: I0126 12:37:55.007649 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/72d21973-62df-4217-880d-04c600804b8d-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-dk6pz\" (UID: \"72d21973-62df-4217-880d-04c600804b8d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dk6pz" Jan 26 12:37:55 crc kubenswrapper[4881]: I0126 12:37:55.007674 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/9df00c6a-36e4-454c-9bd0-bf7be360fedf-srv-cert\") pod \"catalog-operator-68c6474976-xbdxs\" (UID: \"9df00c6a-36e4-454c-9bd0-bf7be360fedf\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xbdxs" Jan 26 12:37:55 crc kubenswrapper[4881]: I0126 12:37:55.007849 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b92eec64-c286-4244-9e62-a5cd7ab680ae-installation-pull-secrets\") pod \"image-registry-697d97f7c8-dr6gf\" (UID: \"b92eec64-c286-4244-9e62-a5cd7ab680ae\") " pod="openshift-image-registry/image-registry-697d97f7c8-dr6gf" Jan 26 12:37:55 crc kubenswrapper[4881]: I0126 12:37:55.007895 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/9df00c6a-36e4-454c-9bd0-bf7be360fedf-profile-collector-cert\") pod \"catalog-operator-68c6474976-xbdxs\" (UID: \"9df00c6a-36e4-454c-9bd0-bf7be360fedf\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xbdxs" Jan 26 12:37:55 crc kubenswrapper[4881]: I0126 12:37:55.007910 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/ed9c86f6-30f0-43fc-87fb-d3497d8a8357-csi-data-dir\") pod \"csi-hostpathplugin-972wv\" (UID: \"ed9c86f6-30f0-43fc-87fb-d3497d8a8357\") " pod="hostpath-provisioner/csi-hostpathplugin-972wv" Jan 26 12:37:55 crc kubenswrapper[4881]: I0126 12:37:55.007998 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7742150d-7cf8-487e-a375-a39ce7caa256-profile-collector-cert\") pod \"olm-operator-6b444d44fb-2vlbm\" (UID: \"7742150d-7cf8-487e-a375-a39ce7caa256\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2vlbm" Jan 26 12:37:55 crc kubenswrapper[4881]: E0126 12:37:55.010846 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:37:55.510778879 +0000 UTC m=+147.990088905 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:37:55 crc kubenswrapper[4881]: I0126 12:37:55.015822 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b92eec64-c286-4244-9e62-a5cd7ab680ae-registry-certificates\") pod \"image-registry-697d97f7c8-dr6gf\" (UID: \"b92eec64-c286-4244-9e62-a5cd7ab680ae\") " pod="openshift-image-registry/image-registry-697d97f7c8-dr6gf" Jan 26 12:37:55 crc kubenswrapper[4881]: I0126 12:37:55.018426 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7742150d-7cf8-487e-a375-a39ce7caa256-profile-collector-cert\") pod \"olm-operator-6b444d44fb-2vlbm\" (UID: \"7742150d-7cf8-487e-a375-a39ce7caa256\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2vlbm" Jan 26 12:37:55 crc kubenswrapper[4881]: I0126 12:37:55.024718 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b92eec64-c286-4244-9e62-a5cd7ab680ae-ca-trust-extracted\") pod \"image-registry-697d97f7c8-dr6gf\" (UID: \"b92eec64-c286-4244-9e62-a5cd7ab680ae\") " pod="openshift-image-registry/image-registry-697d97f7c8-dr6gf" Jan 26 12:37:55 crc kubenswrapper[4881]: I0126 12:37:55.025576 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b92eec64-c286-4244-9e62-a5cd7ab680ae-trusted-ca\") pod \"image-registry-697d97f7c8-dr6gf\" (UID: \"b92eec64-c286-4244-9e62-a5cd7ab680ae\") " pod="openshift-image-registry/image-registry-697d97f7c8-dr6gf" Jan 26 12:37:55 crc kubenswrapper[4881]: I0126 12:37:55.027231 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-t87hc" Jan 26 12:37:55 crc kubenswrapper[4881]: I0126 12:37:55.028709 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40d1e9ff-e409-4d1c-a6ba-a795e9926379-config\") pod \"service-ca-operator-777779d784-tqtg4\" (UID: \"40d1e9ff-e409-4d1c-a6ba-a795e9926379\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-tqtg4" Jan 26 12:37:55 crc kubenswrapper[4881]: I0126 12:37:55.036723 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/72d21973-62df-4217-880d-04c600804b8d-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-dk6pz\" (UID: \"72d21973-62df-4217-880d-04c600804b8d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dk6pz" Jan 26 12:37:55 crc kubenswrapper[4881]: I0126 12:37:55.041953 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7742150d-7cf8-487e-a375-a39ce7caa256-srv-cert\") pod \"olm-operator-6b444d44fb-2vlbm\" (UID: \"7742150d-7cf8-487e-a375-a39ce7caa256\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2vlbm" Jan 26 12:37:55 crc kubenswrapper[4881]: I0126 12:37:55.043686 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/9df00c6a-36e4-454c-9bd0-bf7be360fedf-profile-collector-cert\") pod \"catalog-operator-68c6474976-xbdxs\" (UID: \"9df00c6a-36e4-454c-9bd0-bf7be360fedf\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xbdxs" Jan 26 12:37:55 crc kubenswrapper[4881]: I0126 12:37:55.049269 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-j7b4j" Jan 26 12:37:55 crc kubenswrapper[4881]: I0126 12:37:55.050933 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/9df00c6a-36e4-454c-9bd0-bf7be360fedf-srv-cert\") pod \"catalog-operator-68c6474976-xbdxs\" (UID: \"9df00c6a-36e4-454c-9bd0-bf7be360fedf\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xbdxs" Jan 26 12:37:55 crc kubenswrapper[4881]: I0126 12:37:55.052990 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b92eec64-c286-4244-9e62-a5cd7ab680ae-registry-tls\") pod \"image-registry-697d97f7c8-dr6gf\" (UID: \"b92eec64-c286-4244-9e62-a5cd7ab680ae\") " pod="openshift-image-registry/image-registry-697d97f7c8-dr6gf" Jan 26 12:37:55 crc kubenswrapper[4881]: I0126 12:37:55.055956 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/40d1e9ff-e409-4d1c-a6ba-a795e9926379-serving-cert\") pod \"service-ca-operator-777779d784-tqtg4\" (UID: \"40d1e9ff-e409-4d1c-a6ba-a795e9926379\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-tqtg4" Jan 26 12:37:55 crc kubenswrapper[4881]: I0126 12:37:55.059922 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b92eec64-c286-4244-9e62-a5cd7ab680ae-installation-pull-secrets\") pod \"image-registry-697d97f7c8-dr6gf\" (UID: \"b92eec64-c286-4244-9e62-a5cd7ab680ae\") " pod="openshift-image-registry/image-registry-697d97f7c8-dr6gf" Jan 26 12:37:55 crc kubenswrapper[4881]: I0126 12:37:55.078345 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b92eec64-c286-4244-9e62-a5cd7ab680ae-bound-sa-token\") pod \"image-registry-697d97f7c8-dr6gf\" (UID: \"b92eec64-c286-4244-9e62-a5cd7ab680ae\") " pod="openshift-image-registry/image-registry-697d97f7c8-dr6gf" Jan 26 12:37:55 crc kubenswrapper[4881]: I0126 12:37:55.095330 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-phz6c\" (UniqueName: \"kubernetes.io/projected/7742150d-7cf8-487e-a375-a39ce7caa256-kube-api-access-phz6c\") pod \"olm-operator-6b444d44fb-2vlbm\" (UID: \"7742150d-7cf8-487e-a375-a39ce7caa256\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2vlbm" Jan 26 12:37:55 crc kubenswrapper[4881]: I0126 12:37:55.125982 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/ed9c86f6-30f0-43fc-87fb-d3497d8a8357-mountpoint-dir\") pod \"csi-hostpathplugin-972wv\" (UID: \"ed9c86f6-30f0-43fc-87fb-d3497d8a8357\") " pod="hostpath-provisioner/csi-hostpathplugin-972wv" Jan 26 12:37:55 crc kubenswrapper[4881]: I0126 12:37:55.126078 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zc866\" (UniqueName: \"kubernetes.io/projected/3bf4018e-9078-4293-8f8e-f6ab7567943a-kube-api-access-zc866\") pod \"collect-profiles-29490510-6h9v5\" (UID: \"3bf4018e-9078-4293-8f8e-f6ab7567943a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490510-6h9v5" Jan 26 12:37:55 crc kubenswrapper[4881]: I0126 12:37:55.126108 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" 
(UniqueName: \"kubernetes.io/host-path/ed9c86f6-30f0-43fc-87fb-d3497d8a8357-plugins-dir\") pod \"csi-hostpathplugin-972wv\" (UID: \"ed9c86f6-30f0-43fc-87fb-d3497d8a8357\") " pod="hostpath-provisioner/csi-hostpathplugin-972wv" Jan 26 12:37:55 crc kubenswrapper[4881]: I0126 12:37:55.126150 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/ed9c86f6-30f0-43fc-87fb-d3497d8a8357-registration-dir\") pod \"csi-hostpathplugin-972wv\" (UID: \"ed9c86f6-30f0-43fc-87fb-d3497d8a8357\") " pod="hostpath-provisioner/csi-hostpathplugin-972wv" Jan 26 12:37:55 crc kubenswrapper[4881]: I0126 12:37:55.126227 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/62f5dd6c-c6a2-4e70-ba5b-727c20098526-tmpfs\") pod \"packageserver-d55dfcdfc-dklx7\" (UID: \"62f5dd6c-c6a2-4e70-ba5b-727c20098526\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dklx7" Jan 26 12:37:55 crc kubenswrapper[4881]: I0126 12:37:55.126282 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3bf4018e-9078-4293-8f8e-f6ab7567943a-secret-volume\") pod \"collect-profiles-29490510-6h9v5\" (UID: \"3bf4018e-9078-4293-8f8e-f6ab7567943a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490510-6h9v5" Jan 26 12:37:55 crc kubenswrapper[4881]: I0126 12:37:55.126544 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/ed9c86f6-30f0-43fc-87fb-d3497d8a8357-csi-data-dir\") pod \"csi-hostpathplugin-972wv\" (UID: \"ed9c86f6-30f0-43fc-87fb-d3497d8a8357\") " pod="hostpath-provisioner/csi-hostpathplugin-972wv" Jan 26 12:37:55 crc kubenswrapper[4881]: I0126 12:37:55.126800 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dr6gf\" (UID: \"b92eec64-c286-4244-9e62-a5cd7ab680ae\") " pod="openshift-image-registry/image-registry-697d97f7c8-dr6gf" Jan 26 12:37:55 crc kubenswrapper[4881]: I0126 12:37:55.126826 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/ed9c86f6-30f0-43fc-87fb-d3497d8a8357-socket-dir\") pod \"csi-hostpathplugin-972wv\" (UID: \"ed9c86f6-30f0-43fc-87fb-d3497d8a8357\") " pod="hostpath-provisioner/csi-hostpathplugin-972wv" Jan 26 12:37:55 crc kubenswrapper[4881]: I0126 12:37:55.126910 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3bf4018e-9078-4293-8f8e-f6ab7567943a-config-volume\") pod \"collect-profiles-29490510-6h9v5\" (UID: \"3bf4018e-9078-4293-8f8e-f6ab7567943a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490510-6h9v5" Jan 26 12:37:55 crc kubenswrapper[4881]: I0126 12:37:55.126971 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/62f5dd6c-c6a2-4e70-ba5b-727c20098526-apiservice-cert\") pod \"packageserver-d55dfcdfc-dklx7\" (UID: \"62f5dd6c-c6a2-4e70-ba5b-727c20098526\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dklx7" Jan 26 12:37:55 crc kubenswrapper[4881]: I0126 
12:37:55.127014 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/62f5dd6c-c6a2-4e70-ba5b-727c20098526-webhook-cert\") pod \"packageserver-d55dfcdfc-dklx7\" (UID: \"62f5dd6c-c6a2-4e70-ba5b-727c20098526\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dklx7" Jan 26 12:37:55 crc kubenswrapper[4881]: I0126 12:37:55.127037 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4m5l7\" (UniqueName: \"kubernetes.io/projected/62f5dd6c-c6a2-4e70-ba5b-727c20098526-kube-api-access-4m5l7\") pod \"packageserver-d55dfcdfc-dklx7\" (UID: \"62f5dd6c-c6a2-4e70-ba5b-727c20098526\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dklx7" Jan 26 12:37:55 crc kubenswrapper[4881]: I0126 12:37:55.127095 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jj8mc\" (UniqueName: \"kubernetes.io/projected/ed9c86f6-30f0-43fc-87fb-d3497d8a8357-kube-api-access-jj8mc\") pod \"csi-hostpathplugin-972wv\" (UID: \"ed9c86f6-30f0-43fc-87fb-d3497d8a8357\") " pod="hostpath-provisioner/csi-hostpathplugin-972wv" Jan 26 12:37:55 crc kubenswrapper[4881]: I0126 12:37:55.127544 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/ed9c86f6-30f0-43fc-87fb-d3497d8a8357-csi-data-dir\") pod \"csi-hostpathplugin-972wv\" (UID: \"ed9c86f6-30f0-43fc-87fb-d3497d8a8357\") " pod="hostpath-provisioner/csi-hostpathplugin-972wv" Jan 26 12:37:55 crc kubenswrapper[4881]: I0126 12:37:55.128542 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/ed9c86f6-30f0-43fc-87fb-d3497d8a8357-mountpoint-dir\") pod \"csi-hostpathplugin-972wv\" (UID: \"ed9c86f6-30f0-43fc-87fb-d3497d8a8357\") " pod="hostpath-provisioner/csi-hostpathplugin-972wv" Jan 26 12:37:55 crc kubenswrapper[4881]: I0126 12:37:55.130732 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/ed9c86f6-30f0-43fc-87fb-d3497d8a8357-plugins-dir\") pod \"csi-hostpathplugin-972wv\" (UID: \"ed9c86f6-30f0-43fc-87fb-d3497d8a8357\") " pod="hostpath-provisioner/csi-hostpathplugin-972wv" Jan 26 12:37:55 crc kubenswrapper[4881]: I0126 12:37:55.131220 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/ed9c86f6-30f0-43fc-87fb-d3497d8a8357-registration-dir\") pod \"csi-hostpathplugin-972wv\" (UID: \"ed9c86f6-30f0-43fc-87fb-d3497d8a8357\") " pod="hostpath-provisioner/csi-hostpathplugin-972wv" Jan 26 12:37:55 crc kubenswrapper[4881]: I0126 12:37:55.132092 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/62f5dd6c-c6a2-4e70-ba5b-727c20098526-tmpfs\") pod \"packageserver-d55dfcdfc-dklx7\" (UID: \"62f5dd6c-c6a2-4e70-ba5b-727c20098526\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dklx7" Jan 26 12:37:55 crc kubenswrapper[4881]: I0126 12:37:55.135059 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3bf4018e-9078-4293-8f8e-f6ab7567943a-config-volume\") pod \"collect-profiles-29490510-6h9v5\" (UID: \"3bf4018e-9078-4293-8f8e-f6ab7567943a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490510-6h9v5" Jan 26 12:37:55 
crc kubenswrapper[4881]: I0126 12:37:55.138686 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3bf4018e-9078-4293-8f8e-f6ab7567943a-secret-volume\") pod \"collect-profiles-29490510-6h9v5\" (UID: \"3bf4018e-9078-4293-8f8e-f6ab7567943a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490510-6h9v5" Jan 26 12:37:55 crc kubenswrapper[4881]: I0126 12:37:55.140740 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gmbhc"] Jan 26 12:37:55 crc kubenswrapper[4881]: I0126 12:37:55.141167 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vw8v9" Jan 26 12:37:55 crc kubenswrapper[4881]: I0126 12:37:55.141304 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/ed9c86f6-30f0-43fc-87fb-d3497d8a8357-socket-dir\") pod \"csi-hostpathplugin-972wv\" (UID: \"ed9c86f6-30f0-43fc-87fb-d3497d8a8357\") " pod="hostpath-provisioner/csi-hostpathplugin-972wv" Jan 26 12:37:55 crc kubenswrapper[4881]: I0126 12:37:55.141994 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvhvk\" (UniqueName: \"kubernetes.io/projected/9df00c6a-36e4-454c-9bd0-bf7be360fedf-kube-api-access-nvhvk\") pod \"catalog-operator-68c6474976-xbdxs\" (UID: \"9df00c6a-36e4-454c-9bd0-bf7be360fedf\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xbdxs" Jan 26 12:37:55 crc kubenswrapper[4881]: E0126 12:37:55.155675 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:37:55.655640982 +0000 UTC m=+148.134951008 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dr6gf" (UID: "b92eec64-c286-4244-9e62-a5cd7ab680ae") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:37:55 crc kubenswrapper[4881]: I0126 12:37:55.158323 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2s5xb\" (UniqueName: \"kubernetes.io/projected/40d1e9ff-e409-4d1c-a6ba-a795e9926379-kube-api-access-2s5xb\") pod \"service-ca-operator-777779d784-tqtg4\" (UID: \"40d1e9ff-e409-4d1c-a6ba-a795e9926379\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-tqtg4" Jan 26 12:37:55 crc kubenswrapper[4881]: I0126 12:37:55.163716 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/62f5dd6c-c6a2-4e70-ba5b-727c20098526-webhook-cert\") pod \"packageserver-d55dfcdfc-dklx7\" (UID: \"62f5dd6c-c6a2-4e70-ba5b-727c20098526\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dklx7" Jan 26 12:37:55 crc kubenswrapper[4881]: I0126 12:37:55.166555 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/62f5dd6c-c6a2-4e70-ba5b-727c20098526-apiservice-cert\") pod \"packageserver-d55dfcdfc-dklx7\" (UID: \"62f5dd6c-c6a2-4e70-ba5b-727c20098526\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dklx7" Jan 26 12:37:55 crc kubenswrapper[4881]: I0126 12:37:55.195309 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5tsq\" (UniqueName: \"kubernetes.io/projected/72d21973-62df-4217-880d-04c600804b8d-kube-api-access-s5tsq\") pod \"package-server-manager-789f6589d5-dk6pz\" (UID: \"72d21973-62df-4217-880d-04c600804b8d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dk6pz" Jan 26 12:37:55 crc kubenswrapper[4881]: I0126 12:37:55.197375 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9nzc\" (UniqueName: \"kubernetes.io/projected/b92eec64-c286-4244-9e62-a5cd7ab680ae-kube-api-access-w9nzc\") pod \"image-registry-697d97f7c8-dr6gf\" (UID: \"b92eec64-c286-4244-9e62-a5cd7ab680ae\") " pod="openshift-image-registry/image-registry-697d97f7c8-dr6gf" Jan 26 12:37:55 crc kubenswrapper[4881]: I0126 12:37:55.229714 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:37:55 crc kubenswrapper[4881]: E0126 12:37:55.230042 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:37:55.730022769 +0000 UTC m=+148.209332795 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:37:55 crc kubenswrapper[4881]: I0126 12:37:55.248900 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4m5l7\" (UniqueName: \"kubernetes.io/projected/62f5dd6c-c6a2-4e70-ba5b-727c20098526-kube-api-access-4m5l7\") pod \"packageserver-d55dfcdfc-dklx7\" (UID: \"62f5dd6c-c6a2-4e70-ba5b-727c20098526\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dklx7" Jan 26 12:37:55 crc kubenswrapper[4881]: I0126 12:37:55.252290 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jj8mc\" (UniqueName: \"kubernetes.io/projected/ed9c86f6-30f0-43fc-87fb-d3497d8a8357-kube-api-access-jj8mc\") pod \"csi-hostpathplugin-972wv\" (UID: \"ed9c86f6-30f0-43fc-87fb-d3497d8a8357\") " pod="hostpath-provisioner/csi-hostpathplugin-972wv" Jan 26 12:37:55 crc kubenswrapper[4881]: I0126 12:37:55.263545 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zc866\" (UniqueName: \"kubernetes.io/projected/3bf4018e-9078-4293-8f8e-f6ab7567943a-kube-api-access-zc866\") pod \"collect-profiles-29490510-6h9v5\" (UID: \"3bf4018e-9078-4293-8f8e-f6ab7567943a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490510-6h9v5" Jan 26 12:37:55 crc kubenswrapper[4881]: I0126 12:37:55.283699 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2vlbm" Jan 26 12:37:55 crc kubenswrapper[4881]: I0126 12:37:55.299589 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-tqtg4" Jan 26 12:37:55 crc kubenswrapper[4881]: I0126 12:37:55.324243 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xbdxs" Jan 26 12:37:55 crc kubenswrapper[4881]: I0126 12:37:55.332677 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dr6gf\" (UID: \"b92eec64-c286-4244-9e62-a5cd7ab680ae\") " pod="openshift-image-registry/image-registry-697d97f7c8-dr6gf" Jan 26 12:37:55 crc kubenswrapper[4881]: E0126 12:37:55.333002 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:37:55.832989095 +0000 UTC m=+148.312299121 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dr6gf" (UID: "b92eec64-c286-4244-9e62-a5cd7ab680ae") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:37:55 crc kubenswrapper[4881]: I0126 12:37:55.353195 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490510-6h9v5" Jan 26 12:37:55 crc kubenswrapper[4881]: I0126 12:37:55.355759 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dk6pz" Jan 26 12:37:55 crc kubenswrapper[4881]: I0126 12:37:55.378080 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-972wv" Jan 26 12:37:55 crc kubenswrapper[4881]: I0126 12:37:55.383844 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dklx7" Jan 26 12:37:55 crc kubenswrapper[4881]: I0126 12:37:55.446713 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:37:55 crc kubenswrapper[4881]: E0126 12:37:55.447701 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:37:55.94768067 +0000 UTC m=+148.426990696 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:37:55 crc kubenswrapper[4881]: I0126 12:37:55.449754 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-ncfp9" Jan 26 12:37:55 crc kubenswrapper[4881]: I0126 12:37:55.548684 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dr6gf\" (UID: \"b92eec64-c286-4244-9e62-a5cd7ab680ae\") " pod="openshift-image-registry/image-registry-697d97f7c8-dr6gf" Jan 26 12:37:55 crc kubenswrapper[4881]: E0126 12:37:55.549081 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:37:56.04906504 +0000 UTC m=+148.528375066 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dr6gf" (UID: "b92eec64-c286-4244-9e62-a5cd7ab680ae") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:37:55 crc kubenswrapper[4881]: I0126 12:37:55.580299 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bpgdv" podStartSLOduration=128.580274281 podStartE2EDuration="2m8.580274281s" podCreationTimestamp="2026-01-26 12:35:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:37:55.57564145 +0000 UTC m=+148.054951496" watchObservedRunningTime="2026-01-26 12:37:55.580274281 +0000 UTC m=+148.059584307" Jan 26 12:37:55 crc kubenswrapper[4881]: I0126 12:37:55.625799 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-26 12:32:54 +0000 UTC, rotation deadline is 2026-10-12 02:46:30.933873871 +0000 UTC Jan 26 12:37:55 crc kubenswrapper[4881]: I0126 12:37:55.625888 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6206h8m35.307988387s for next certificate rotation Jan 26 12:37:55 crc kubenswrapper[4881]: I0126 12:37:55.660039 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:37:55 crc kubenswrapper[4881]: E0126 12:37:55.660453 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:37:56.160437436 +0000 UTC m=+148.639747462 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:37:55 crc kubenswrapper[4881]: I0126 12:37:55.682907 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rxhwr" podStartSLOduration=128.682888719 podStartE2EDuration="2m8.682888719s" podCreationTimestamp="2026-01-26 12:35:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:37:55.612970558 +0000 UTC m=+148.092280584" watchObservedRunningTime="2026-01-26 12:37:55.682888719 +0000 UTC m=+148.162198745" Jan 26 12:37:55 crc kubenswrapper[4881]: I0126 12:37:55.683811 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-gtzx7"] Jan 26 12:37:55 crc kubenswrapper[4881]: I0126 12:37:55.766228 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dr6gf\" (UID: \"b92eec64-c286-4244-9e62-a5cd7ab680ae\") " pod="openshift-image-registry/image-registry-697d97f7c8-dr6gf" Jan 26 12:37:55 crc kubenswrapper[4881]: E0126 12:37:55.766665 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:37:56.266650099 +0000 UTC m=+148.745960125 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dr6gf" (UID: "b92eec64-c286-4244-9e62-a5cd7ab680ae") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:37:55 crc kubenswrapper[4881]: I0126 12:37:55.867104 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:37:55 crc kubenswrapper[4881]: E0126 12:37:55.867663 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:37:56.367648289 +0000 UTC m=+148.846958315 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:37:55 crc kubenswrapper[4881]: I0126 12:37:55.983530 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dr6gf\" (UID: \"b92eec64-c286-4244-9e62-a5cd7ab680ae\") " pod="openshift-image-registry/image-registry-697d97f7c8-dr6gf" Jan 26 12:37:55 crc kubenswrapper[4881]: E0126 12:37:55.983801 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:37:56.483789029 +0000 UTC m=+148.963099055 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dr6gf" (UID: "b92eec64-c286-4244-9e62-a5cd7ab680ae") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:37:56 crc kubenswrapper[4881]: I0126 12:37:56.040994 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"507f26a271fdb16a2fec33a567a40d886a4133864f320d9639bdcd7882233f06"} Jan 26 12:37:56 crc kubenswrapper[4881]: I0126 12:37:56.041038 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"c363f68f1cab38bbdf7fc2a7f0a189e93e0179ce746e8966bc963ed97e5d9e85"} Jan 26 12:37:56 crc kubenswrapper[4881]: I0126 12:37:56.041733 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 12:37:56 crc kubenswrapper[4881]: I0126 12:37:56.086053 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:37:56 crc kubenswrapper[4881]: E0126 12:37:56.086671 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:37:56.586648722 +0000 UTC m=+149.065958748 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:37:56 crc kubenswrapper[4881]: I0126 12:37:56.086731 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dr6gf\" (UID: \"b92eec64-c286-4244-9e62-a5cd7ab680ae\") " pod="openshift-image-registry/image-registry-697d97f7c8-dr6gf" Jan 26 12:37:56 crc kubenswrapper[4881]: E0126 12:37:56.090774 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:37:56.59075996 +0000 UTC m=+149.070069986 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dr6gf" (UID: "b92eec64-c286-4244-9e62-a5cd7ab680ae") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:37:56 crc kubenswrapper[4881]: I0126 12:37:56.109228 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-ncfp9" podStartSLOduration=129.10921348 podStartE2EDuration="2m9.10921348s" podCreationTimestamp="2026-01-26 12:35:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:37:56.108746198 +0000 UTC m=+148.588056224" watchObservedRunningTime="2026-01-26 12:37:56.10921348 +0000 UTC m=+148.588523506" Jan 26 12:37:56 crc kubenswrapper[4881]: I0126 12:37:56.111588 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-fsrl8" event={"ID":"07671cb3-5957-4f3f-9189-4e3e05d7c090","Type":"ContainerStarted","Data":"c7dc290b085a62549be2bddb7a799a61bd83315e92a0318d7a0587155e115369"} Jan 26 12:37:56 crc kubenswrapper[4881]: I0126 12:37:56.127862 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gtzx7" event={"ID":"0a192f0e-fe6b-434b-a63e-7a61fcd9ca2f","Type":"ContainerStarted","Data":"ce9d48f1d38a5b54fb8cb2c12afdd8cea79cc7c1612bfa4b0529c4eb24862fbc"} Jan 26 12:37:56 crc kubenswrapper[4881]: I0126 12:37:56.144773 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-c4vbw" event={"ID":"bdc32e8d-e1c8-4415-9a39-6ddbc4cbca4c","Type":"ContainerStarted","Data":"118dbee383e41839a1c6272f3b28aa9909fbe4d11b85d8aa985154d677496f6e"} Jan 26 12:37:56 crc kubenswrapper[4881]: I0126 12:37:56.144820 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-c4vbw" 
event={"ID":"bdc32e8d-e1c8-4415-9a39-6ddbc4cbca4c","Type":"ContainerStarted","Data":"91d76af188a1e616b4238add0f4355eb44fec330c6af58089e18b441e94f45f6"} Jan 26 12:37:56 crc kubenswrapper[4881]: I0126 12:37:56.157450 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"3cd0cdec09a3a3d5c09caa6b139cd4a5c88ebf5e163d0e04d1a10d00f64dabc4"} Jan 26 12:37:56 crc kubenswrapper[4881]: I0126 12:37:56.187227 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:37:56 crc kubenswrapper[4881]: E0126 12:37:56.187630 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:37:56.687614572 +0000 UTC m=+149.166924588 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:37:56 crc kubenswrapper[4881]: I0126 12:37:56.189679 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-dmhnk" event={"ID":"3a9e8ad0-5902-4e1c-b000-86c3003673f4","Type":"ContainerStarted","Data":"46ae43fcb7c241b4823760b37460e4df2573040a21a40a5c9485622b99a586ad"} Jan 26 12:37:56 crc kubenswrapper[4881]: I0126 12:37:56.189724 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-dmhnk" event={"ID":"3a9e8ad0-5902-4e1c-b000-86c3003673f4","Type":"ContainerStarted","Data":"78a4d89e885b25f6695499bcc66e62114ca182d6eb1c6006cb582954f35bce7e"} Jan 26 12:37:56 crc kubenswrapper[4881]: I0126 12:37:56.194401 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"bca3cb81e5b768402c90709c3c34a63cb717618e89738e499f07462681d9f8f7"} Jan 26 12:37:56 crc kubenswrapper[4881]: I0126 12:37:56.197998 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gmbhc" event={"ID":"29d30a67-fb25-4c95-8f8a-a77b27cd695f","Type":"ContainerStarted","Data":"c5e28983b2716923fed3c26f23906142f38a320cf7cc8d5fcd8bba4b86108b9d"} Jan 26 12:37:56 crc kubenswrapper[4881]: I0126 12:37:56.220044 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-68j9g"] Jan 26 12:37:56 crc kubenswrapper[4881]: I0126 12:37:56.222458 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pv5pl"] Jan 26 12:37:56 crc 
kubenswrapper[4881]: I0126 12:37:56.331822 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wm95t" podStartSLOduration=128.331800878 podStartE2EDuration="2m8.331800878s" podCreationTimestamp="2026-01-26 12:35:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:37:56.318349258 +0000 UTC m=+148.797659284" watchObservedRunningTime="2026-01-26 12:37:56.331800878 +0000 UTC m=+148.811110904" Jan 26 12:37:56 crc kubenswrapper[4881]: I0126 12:37:56.333465 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dr6gf\" (UID: \"b92eec64-c286-4244-9e62-a5cd7ab680ae\") " pod="openshift-image-registry/image-registry-697d97f7c8-dr6gf" Jan 26 12:37:56 crc kubenswrapper[4881]: E0126 12:37:56.350217 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:37:56.850197025 +0000 UTC m=+149.329507051 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dr6gf" (UID: "b92eec64-c286-4244-9e62-a5cd7ab680ae") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:37:56 crc kubenswrapper[4881]: I0126 12:37:56.397803 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-b8l9s" podStartSLOduration=129.397781316 podStartE2EDuration="2m9.397781316s" podCreationTimestamp="2026-01-26 12:35:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:37:56.39542213 +0000 UTC m=+148.874732166" watchObservedRunningTime="2026-01-26 12:37:56.397781316 +0000 UTC m=+148.877091332" Jan 26 12:37:56 crc kubenswrapper[4881]: I0126 12:37:56.401215 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-b5l77"] Jan 26 12:37:56 crc kubenswrapper[4881]: I0126 12:37:56.421603 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-2fsrg"] Jan 26 12:37:56 crc kubenswrapper[4881]: I0126 12:37:56.429406 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mgcg9"] Jan 26 12:37:56 crc kubenswrapper[4881]: I0126 12:37:56.448131 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-2n5pc" podStartSLOduration=128.448114842 podStartE2EDuration="2m8.448114842s" podCreationTimestamp="2026-01-26 12:35:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:37:56.4379521 +0000 UTC m=+148.917262126" watchObservedRunningTime="2026-01-26 12:37:56.448114842 +0000 UTC m=+148.927424858" Jan 26 
12:37:56 crc kubenswrapper[4881]: I0126 12:37:56.460668 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:37:56 crc kubenswrapper[4881]: E0126 12:37:56.461069 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:37:56.961050899 +0000 UTC m=+149.440360925 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:37:56 crc kubenswrapper[4881]: I0126 12:37:56.486405 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-tkq9v" podStartSLOduration=129.486385951 podStartE2EDuration="2m9.486385951s" podCreationTimestamp="2026-01-26 12:35:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:37:56.484413665 +0000 UTC m=+148.963723691" watchObservedRunningTime="2026-01-26 12:37:56.486385951 +0000 UTC m=+148.965695977" Jan 26 12:37:56 crc kubenswrapper[4881]: I0126 12:37:56.572019 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dr6gf\" (UID: \"b92eec64-c286-4244-9e62-a5cd7ab680ae\") " pod="openshift-image-registry/image-registry-697d97f7c8-dr6gf" Jan 26 12:37:56 crc kubenswrapper[4881]: E0126 12:37:56.572611 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:37:57.0725967 +0000 UTC m=+149.551906726 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dr6gf" (UID: "b92eec64-c286-4244-9e62-a5cd7ab680ae") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:37:56 crc kubenswrapper[4881]: I0126 12:37:56.607116 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xvwh6" podStartSLOduration=128.607097329 podStartE2EDuration="2m8.607097329s" podCreationTimestamp="2026-01-26 12:35:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:37:56.606573417 +0000 UTC m=+149.085883453" watchObservedRunningTime="2026-01-26 12:37:56.607097329 +0000 UTC m=+149.086407355" Jan 26 12:37:56 crc kubenswrapper[4881]: I0126 12:37:56.673673 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:37:56 crc kubenswrapper[4881]: E0126 12:37:56.674004 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:37:57.173989789 +0000 UTC m=+149.653299815 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:37:56 crc kubenswrapper[4881]: I0126 12:37:56.684009 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-c4vbw" Jan 26 12:37:56 crc kubenswrapper[4881]: I0126 12:37:56.700210 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-s5qng" podStartSLOduration=129.700193732 podStartE2EDuration="2m9.700193732s" podCreationTimestamp="2026-01-26 12:35:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:37:56.660075748 +0000 UTC m=+149.139385774" watchObservedRunningTime="2026-01-26 12:37:56.700193732 +0000 UTC m=+149.179503758" Jan 26 12:37:56 crc kubenswrapper[4881]: I0126 12:37:56.701588 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-lnbpt" podStartSLOduration=128.701582715 podStartE2EDuration="2m8.701582715s" podCreationTimestamp="2026-01-26 12:35:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:37:56.699256419 +0000 UTC m=+149.178566445" watchObservedRunningTime="2026-01-26 12:37:56.701582715 +0000 UTC m=+149.180892741" Jan 26 12:37:56 crc kubenswrapper[4881]: I0126 12:37:56.732631 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-nqv96"] Jan 26 12:37:56 crc kubenswrapper[4881]: I0126 12:37:56.739630 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-cwb4s"] Jan 26 12:37:56 crc kubenswrapper[4881]: I0126 12:37:56.744639 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-c4vbw" podStartSLOduration=128.744629637 podStartE2EDuration="2m8.744629637s" podCreationTimestamp="2026-01-26 12:35:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:37:56.73674874 +0000 UTC m=+149.216058766" watchObservedRunningTime="2026-01-26 12:37:56.744629637 +0000 UTC m=+149.223939653" Jan 26 12:37:56 crc kubenswrapper[4881]: I0126 12:37:56.775471 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dr6gf\" (UID: \"b92eec64-c286-4244-9e62-a5cd7ab680ae\") " pod="openshift-image-registry/image-registry-697d97f7c8-dr6gf" Jan 26 12:37:56 crc kubenswrapper[4881]: E0126 12:37:56.775771 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-26 12:37:57.275757287 +0000 UTC m=+149.755067313 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dr6gf" (UID: "b92eec64-c286-4244-9e62-a5cd7ab680ae") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:37:56 crc kubenswrapper[4881]: I0126 12:37:56.777616 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gmbhc" podStartSLOduration=128.777603731 podStartE2EDuration="2m8.777603731s" podCreationTimestamp="2026-01-26 12:35:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:37:56.773598045 +0000 UTC m=+149.252908071" watchObservedRunningTime="2026-01-26 12:37:56.777603731 +0000 UTC m=+149.256913757" Jan 26 12:37:56 crc kubenswrapper[4881]: I0126 12:37:56.790809 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-j7b4j"] Jan 26 12:37:56 crc kubenswrapper[4881]: I0126 12:37:56.810544 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-p7bh2"] Jan 26 12:37:56 crc kubenswrapper[4881]: I0126 12:37:56.811412 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-dmhnk" podStartSLOduration=5.811395554 podStartE2EDuration="5.811395554s" podCreationTimestamp="2026-01-26 12:37:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:37:56.808013454 +0000 UTC m=+149.287323480" watchObservedRunningTime="2026-01-26 12:37:56.811395554 +0000 UTC m=+149.290705580" Jan 26 12:37:56 crc kubenswrapper[4881]: I0126 12:37:56.876286 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:37:56 crc kubenswrapper[4881]: E0126 12:37:56.876447 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:37:57.376421159 +0000 UTC m=+149.855731185 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:37:56 crc kubenswrapper[4881]: I0126 12:37:56.876571 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dr6gf\" (UID: \"b92eec64-c286-4244-9e62-a5cd7ab680ae\") " pod="openshift-image-registry/image-registry-697d97f7c8-dr6gf" Jan 26 12:37:56 crc kubenswrapper[4881]: E0126 12:37:56.876934 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:37:57.376920271 +0000 UTC m=+149.856230297 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dr6gf" (UID: "b92eec64-c286-4244-9e62-a5cd7ab680ae") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:37:56 crc kubenswrapper[4881]: I0126 12:37:56.928375 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-2xt9h"] Jan 26 12:37:56 crc kubenswrapper[4881]: I0126 12:37:56.938774 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-9cpqm"] Jan 26 12:37:56 crc kubenswrapper[4881]: I0126 12:37:56.978127 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:37:56 crc kubenswrapper[4881]: E0126 12:37:56.978336 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:37:57.478301709 +0000 UTC m=+149.957611745 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:37:56 crc kubenswrapper[4881]: I0126 12:37:56.978457 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dr6gf\" (UID: \"b92eec64-c286-4244-9e62-a5cd7ab680ae\") " pod="openshift-image-registry/image-registry-697d97f7c8-dr6gf" Jan 26 12:37:56 crc kubenswrapper[4881]: E0126 12:37:56.978781 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:37:57.478770711 +0000 UTC m=+149.958080747 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dr6gf" (UID: "b92eec64-c286-4244-9e62-a5cd7ab680ae") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:37:57 crc kubenswrapper[4881]: I0126 12:37:57.079852 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:37:57 crc kubenswrapper[4881]: E0126 12:37:57.080014 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:37:57.579991656 +0000 UTC m=+150.059301692 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:37:57 crc kubenswrapper[4881]: I0126 12:37:57.080077 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dr6gf\" (UID: \"b92eec64-c286-4244-9e62-a5cd7ab680ae\") " pod="openshift-image-registry/image-registry-697d97f7c8-dr6gf" Jan 26 12:37:57 crc kubenswrapper[4881]: E0126 12:37:57.080448 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:37:57.580436956 +0000 UTC m=+150.059746982 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dr6gf" (UID: "b92eec64-c286-4244-9e62-a5cd7ab680ae") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:37:57 crc kubenswrapper[4881]: I0126 12:37:57.085647 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-b8l9s" Jan 26 12:37:57 crc kubenswrapper[4881]: I0126 12:37:57.085724 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-b8l9s" Jan 26 12:37:57 crc kubenswrapper[4881]: I0126 12:37:57.091433 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wm95t" Jan 26 12:37:57 crc kubenswrapper[4881]: I0126 12:37:57.091496 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wm95t" Jan 26 12:37:57 crc kubenswrapper[4881]: I0126 12:37:57.101091 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wm95t" Jan 26 12:37:57 crc kubenswrapper[4881]: I0126 12:37:57.147401 4881 patch_prober.go:28] interesting pod/router-default-5444994796-c4vbw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 12:37:57 crc kubenswrapper[4881]: [-]has-synced failed: reason withheld Jan 26 12:37:57 crc kubenswrapper[4881]: [+]process-running ok Jan 26 12:37:57 crc kubenswrapper[4881]: healthz check failed Jan 26 12:37:57 crc kubenswrapper[4881]: I0126 12:37:57.147479 4881 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-c4vbw" podUID="bdc32e8d-e1c8-4415-9a39-6ddbc4cbca4c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 12:37:57 crc kubenswrapper[4881]: 
I0126 12:37:57.158846 4881 patch_prober.go:28] interesting pod/apiserver-76f77b778f-b8l9s container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 26 12:37:57 crc kubenswrapper[4881]: [+]log ok Jan 26 12:37:57 crc kubenswrapper[4881]: [+]etcd ok Jan 26 12:37:57 crc kubenswrapper[4881]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 26 12:37:57 crc kubenswrapper[4881]: [+]poststarthook/generic-apiserver-start-informers ok Jan 26 12:37:57 crc kubenswrapper[4881]: [+]poststarthook/max-in-flight-filter ok Jan 26 12:37:57 crc kubenswrapper[4881]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 26 12:37:57 crc kubenswrapper[4881]: [+]poststarthook/image.openshift.io-apiserver-caches ok Jan 26 12:37:57 crc kubenswrapper[4881]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Jan 26 12:37:57 crc kubenswrapper[4881]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Jan 26 12:37:57 crc kubenswrapper[4881]: [+]poststarthook/project.openshift.io-projectcache ok Jan 26 12:37:57 crc kubenswrapper[4881]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Jan 26 12:37:57 crc kubenswrapper[4881]: [+]poststarthook/openshift.io-startinformers ok Jan 26 12:37:57 crc kubenswrapper[4881]: [+]poststarthook/openshift.io-restmapperupdater ok Jan 26 12:37:57 crc kubenswrapper[4881]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 26 12:37:57 crc kubenswrapper[4881]: livez check failed Jan 26 12:37:57 crc kubenswrapper[4881]: I0126 12:37:57.158911 4881 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-b8l9s" podUID="ef2ceed1-1060-4aff-a9f7-573f60a80771" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 12:37:57 crc kubenswrapper[4881]: W0126 12:37:57.166192 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1cd7d43a_b82c_423b_ac88_ac99d9b753aa.slice/crio-1b7d904b8f137d713591ae573d7c86bee4deb9aed6a4f326d5e038dcfffa9e9e WatchSource:0}: Error finding container 1b7d904b8f137d713591ae573d7c86bee4deb9aed6a4f326d5e038dcfffa9e9e: Status 404 returned error can't find the container with id 1b7d904b8f137d713591ae573d7c86bee4deb9aed6a4f326d5e038dcfffa9e9e Jan 26 12:37:57 crc kubenswrapper[4881]: W0126 12:37:57.168794 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podde07a342_44f0_45cc_a461_5fd5a70e34d9.slice/crio-43a5c54a12bf8ec7474414eff104a73582e9ce3c94ecaa73d5e943e49b4f9a27 WatchSource:0}: Error finding container 43a5c54a12bf8ec7474414eff104a73582e9ce3c94ecaa73d5e943e49b4f9a27: Status 404 returned error can't find the container with id 43a5c54a12bf8ec7474414eff104a73582e9ce3c94ecaa73d5e943e49b4f9a27 Jan 26 12:37:57 crc kubenswrapper[4881]: I0126 12:37:57.184503 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:37:57 crc kubenswrapper[4881]: E0126 12:37:57.185097 4881 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:37:57.685072553 +0000 UTC m=+150.164382579 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:37:57 crc kubenswrapper[4881]: I0126 12:37:57.204177 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-vw8v9"] Jan 26 12:37:57 crc kubenswrapper[4881]: I0126 12:37:57.211983 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-nlqwj"] Jan 26 12:37:57 crc kubenswrapper[4881]: W0126 12:37:57.276053 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf94cc1c9_6219_48e2_8033_aecea365cacb.slice/crio-17400a09027924477f3b51d3f497070d2ff8484e7701546aefc6d9ca7f98476c WatchSource:0}: Error finding container 17400a09027924477f3b51d3f497070d2ff8484e7701546aefc6d9ca7f98476c: Status 404 returned error can't find the container with id 17400a09027924477f3b51d3f497070d2ff8484e7701546aefc6d9ca7f98476c Jan 26 12:37:57 crc kubenswrapper[4881]: I0126 12:37:57.276487 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cfwgh"] Jan 26 12:37:57 crc kubenswrapper[4881]: I0126 12:37:57.276542 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-68j9g" event={"ID":"30c09cc7-c747-494b-80ac-e4a780f65fb6","Type":"ContainerStarted","Data":"82309568b5ac99c7c9a91d1c3888739f37a4b44826abdeaeba49f66b7e73454c"} Jan 26 12:37:57 crc kubenswrapper[4881]: I0126 12:37:57.283402 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-t87hc"] Jan 26 12:37:57 crc kubenswrapper[4881]: I0126 12:37:57.289916 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dr6gf\" (UID: \"b92eec64-c286-4244-9e62-a5cd7ab680ae\") " pod="openshift-image-registry/image-registry-697d97f7c8-dr6gf" Jan 26 12:37:57 crc kubenswrapper[4881]: E0126 12:37:57.290292 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:37:57.790276062 +0000 UTC m=+150.269586088 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dr6gf" (UID: "b92eec64-c286-4244-9e62-a5cd7ab680ae") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:37:57 crc kubenswrapper[4881]: I0126 12:37:57.290962 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-j7b4j" event={"ID":"ec9077c6-4f92-4925-8efa-8f6351967ae7","Type":"ContainerStarted","Data":"d3ca6044a47fed6db73e513db496bdb41599e61d2663edea0c3e71aa1a587eff"} Jan 26 12:37:57 crc kubenswrapper[4881]: I0126 12:37:57.298652 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490510-6h9v5"] Jan 26 12:37:57 crc kubenswrapper[4881]: I0126 12:37:57.298839 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gmbhc" event={"ID":"29d30a67-fb25-4c95-8f8a-a77b27cd695f","Type":"ContainerStarted","Data":"77d3ab6c6116fbc8417b014b0cac74a0d9c2f515755ef0ca65a0f97c766fe95f"} Jan 26 12:37:57 crc kubenswrapper[4881]: I0126 12:37:57.317571 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-k8g4d"] Jan 26 12:37:57 crc kubenswrapper[4881]: I0126 12:37:57.318368 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-2xt9h" event={"ID":"96eca703-29ae-4ec1-a961-e3303788da4f","Type":"ContainerStarted","Data":"28446daab21c776e5916fde10f7424f38a5fec830f8c4fe9777b56e057e518f4"} Jan 26 12:37:57 crc kubenswrapper[4881]: I0126 12:37:57.322853 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-drv9q"] Jan 26 12:37:57 crc kubenswrapper[4881]: I0126 12:37:57.323695 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-27qqg"] Jan 26 12:37:57 crc kubenswrapper[4881]: I0126 12:37:57.326393 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dklx7"] Jan 26 12:37:57 crc kubenswrapper[4881]: I0126 12:37:57.330506 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-972wv"] Jan 26 12:37:57 crc kubenswrapper[4881]: I0126 12:37:57.330601 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xbdxs"] Jan 26 12:37:57 crc kubenswrapper[4881]: I0126 12:37:57.333827 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-2fsrg" event={"ID":"1ea8ad00-fb4c-4f75-a1e6-1ea2bd64c8da","Type":"ContainerStarted","Data":"4de1124f2bf607023cb2824a4739221240a71a256091e06629cb527e1cc32ff9"} Jan 26 12:37:57 crc kubenswrapper[4881]: I0126 12:37:57.336565 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dk6pz"] Jan 26 12:37:57 crc kubenswrapper[4881]: I0126 12:37:57.336609 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-cwb4s" 
event={"ID":"de07a342-44f0-45cc-a461-5fd5a70e34d9","Type":"ContainerStarted","Data":"43a5c54a12bf8ec7474414eff104a73582e9ce3c94ecaa73d5e943e49b4f9a27"} Jan 26 12:37:57 crc kubenswrapper[4881]: I0126 12:37:57.338613 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-fsrl8" event={"ID":"07671cb3-5957-4f3f-9189-4e3e05d7c090","Type":"ContainerStarted","Data":"d8be9f52de037689ef09a09557132b4e39a1096ec885dbf86483bf1504960c59"} Jan 26 12:37:57 crc kubenswrapper[4881]: W0126 12:37:57.340124 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7157505d_d18a_42a4_8037_96ad9a7825ce.slice/crio-729dd497508ee615581b3a039285e39d222671d48a4b30a28f9c71feb7f79f3c WatchSource:0}: Error finding container 729dd497508ee615581b3a039285e39d222671d48a4b30a28f9c71feb7f79f3c: Status 404 returned error can't find the container with id 729dd497508ee615581b3a039285e39d222671d48a4b30a28f9c71feb7f79f3c Jan 26 12:37:57 crc kubenswrapper[4881]: W0126 12:37:57.340432 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod517de9f8_a681_40a3_bd6b_8009ae963398.slice/crio-beceb6e3608b91c4c451975503697b5205c931f98abe1d0701cf3cee9fd1afe3 WatchSource:0}: Error finding container beceb6e3608b91c4c451975503697b5205c931f98abe1d0701cf3cee9fd1afe3: Status 404 returned error can't find the container with id beceb6e3608b91c4c451975503697b5205c931f98abe1d0701cf3cee9fd1afe3 Jan 26 12:37:57 crc kubenswrapper[4881]: I0126 12:37:57.343946 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2vlbm"] Jan 26 12:37:57 crc kubenswrapper[4881]: I0126 12:37:57.345022 4881 generic.go:334] "Generic (PLEG): container finished" podID="0a192f0e-fe6b-434b-a63e-7a61fcd9ca2f" containerID="1947703ca580987fe095cd790eb26cf9873e40bfd960e2369590aac0d87c2a15" exitCode=0 Jan 26 12:37:57 crc kubenswrapper[4881]: I0126 12:37:57.345114 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gtzx7" event={"ID":"0a192f0e-fe6b-434b-a63e-7a61fcd9ca2f","Type":"ContainerDied","Data":"1947703ca580987fe095cd790eb26cf9873e40bfd960e2369590aac0d87c2a15"} Jan 26 12:37:57 crc kubenswrapper[4881]: I0126 12:37:57.350540 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-tqtg4"] Jan 26 12:37:57 crc kubenswrapper[4881]: I0126 12:37:57.351311 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-vnnxb"] Jan 26 12:37:57 crc kubenswrapper[4881]: I0126 12:37:57.391346 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:37:57 crc kubenswrapper[4881]: E0126 12:37:57.391820 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:37:57.891790705 +0000 UTC m=+150.371100731 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:37:57 crc kubenswrapper[4881]: I0126 12:37:57.392244 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dr6gf\" (UID: \"b92eec64-c286-4244-9e62-a5cd7ab680ae\") " pod="openshift-image-registry/image-registry-697d97f7c8-dr6gf" Jan 26 12:37:57 crc kubenswrapper[4881]: E0126 12:37:57.392680 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:37:57.892672045 +0000 UTC m=+150.371982071 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dr6gf" (UID: "b92eec64-c286-4244-9e62-a5cd7ab680ae") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:37:57 crc kubenswrapper[4881]: I0126 12:37:57.403683 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-p7bh2" event={"ID":"d6b7645c-9920-4793-b6aa-9a6664cc93a0","Type":"ContainerStarted","Data":"d54dbbd2ddffcd208f32903ce176e4ffe55b6a226b12466a956158f8d3916063"} Jan 26 12:37:57 crc kubenswrapper[4881]: I0126 12:37:57.429935 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nqv96" event={"ID":"1cd7d43a-b82c-423b-ac88-ac99d9b753aa","Type":"ContainerStarted","Data":"1b7d904b8f137d713591ae573d7c86bee4deb9aed6a4f326d5e038dcfffa9e9e"} Jan 26 12:37:57 crc kubenswrapper[4881]: I0126 12:37:57.431507 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-fsrl8" podStartSLOduration=129.431492948 podStartE2EDuration="2m9.431492948s" podCreationTimestamp="2026-01-26 12:35:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:37:57.398828852 +0000 UTC m=+149.878138898" watchObservedRunningTime="2026-01-26 12:37:57.431492948 +0000 UTC m=+149.910802974" Jan 26 12:37:57 crc kubenswrapper[4881]: W0126 12:37:57.448992 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod40d1e9ff_e409_4d1c_a6ba_a795e9926379.slice/crio-189d46c7e7aded1431788629c532c15fccf3c26808449d0030c6d436f6b449b6 WatchSource:0}: Error finding container 189d46c7e7aded1431788629c532c15fccf3c26808449d0030c6d436f6b449b6: Status 404 returned error can't find the container with id 
189d46c7e7aded1431788629c532c15fccf3c26808449d0030c6d436f6b449b6 Jan 26 12:37:57 crc kubenswrapper[4881]: I0126 12:37:57.453249 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pv5pl" event={"ID":"7c68c037-053a-43f1-a2d6-0a4387610916","Type":"ContainerStarted","Data":"89b6e0d64bd33ba3f87c5f758c1bd6cc4ab0b30a8f002c8871c29b8c2f7f51c9"} Jan 26 12:37:57 crc kubenswrapper[4881]: I0126 12:37:57.466428 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mgcg9" event={"ID":"bc3ce0ab-d53f-4504-b1be-09cd3629c5ae","Type":"ContainerStarted","Data":"0268553ad160f5cbeaff95111a60c8832d7c170e1dbba0b47945bcae06050c98"} Jan 26 12:37:57 crc kubenswrapper[4881]: I0126 12:37:57.474148 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-b5l77" event={"ID":"21710e70-1118-4472-91e1-2c7c66e9fe75","Type":"ContainerStarted","Data":"bed9e54c42cbbe2fe70954559a1c38538721b7c627fa700cbd2ab5fd5f6f1245"} Jan 26 12:37:57 crc kubenswrapper[4881]: I0126 12:37:57.475431 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pv5pl" podStartSLOduration=129.475418181 podStartE2EDuration="2m9.475418181s" podCreationTimestamp="2026-01-26 12:35:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:37:57.474458409 +0000 UTC m=+149.953768445" watchObservedRunningTime="2026-01-26 12:37:57.475418181 +0000 UTC m=+149.954728207" Jan 26 12:37:57 crc kubenswrapper[4881]: W0126 12:37:57.480561 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9df00c6a_36e4_454c_9bd0_bf7be360fedf.slice/crio-394272e7abe2b99237a32bed07ae273d47ea46b32eda64fc0e64a85126864a5d WatchSource:0}: Error finding container 394272e7abe2b99237a32bed07ae273d47ea46b32eda64fc0e64a85126864a5d: Status 404 returned error can't find the container with id 394272e7abe2b99237a32bed07ae273d47ea46b32eda64fc0e64a85126864a5d Jan 26 12:37:57 crc kubenswrapper[4881]: I0126 12:37:57.489372 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wm95t" Jan 26 12:37:57 crc kubenswrapper[4881]: I0126 12:37:57.493136 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:37:57 crc kubenswrapper[4881]: E0126 12:37:57.494392 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:37:57.994375392 +0000 UTC m=+150.473685418 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:37:57 crc kubenswrapper[4881]: I0126 12:37:57.595140 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dr6gf\" (UID: \"b92eec64-c286-4244-9e62-a5cd7ab680ae\") " pod="openshift-image-registry/image-registry-697d97f7c8-dr6gf" Jan 26 12:37:57 crc kubenswrapper[4881]: E0126 12:37:57.597091 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:37:58.097079492 +0000 UTC m=+150.576389518 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dr6gf" (UID: "b92eec64-c286-4244-9e62-a5cd7ab680ae") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:37:57 crc kubenswrapper[4881]: I0126 12:37:57.687914 4881 patch_prober.go:28] interesting pod/router-default-5444994796-c4vbw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 12:37:57 crc kubenswrapper[4881]: [-]has-synced failed: reason withheld Jan 26 12:37:57 crc kubenswrapper[4881]: [+]process-running ok Jan 26 12:37:57 crc kubenswrapper[4881]: healthz check failed Jan 26 12:37:57 crc kubenswrapper[4881]: I0126 12:37:57.687989 4881 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-c4vbw" podUID="bdc32e8d-e1c8-4415-9a39-6ddbc4cbca4c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 12:37:57 crc kubenswrapper[4881]: I0126 12:37:57.696190 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:37:57 crc kubenswrapper[4881]: E0126 12:37:57.696338 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:37:58.19629812 +0000 UTC m=+150.675608146 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:37:57 crc kubenswrapper[4881]: I0126 12:37:57.696506 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dr6gf\" (UID: \"b92eec64-c286-4244-9e62-a5cd7ab680ae\") " pod="openshift-image-registry/image-registry-697d97f7c8-dr6gf" Jan 26 12:37:57 crc kubenswrapper[4881]: E0126 12:37:57.696777 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:37:58.196770311 +0000 UTC m=+150.676080327 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dr6gf" (UID: "b92eec64-c286-4244-9e62-a5cd7ab680ae") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:37:57 crc kubenswrapper[4881]: I0126 12:37:57.797995 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:37:57 crc kubenswrapper[4881]: E0126 12:37:57.798161 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:37:58.29813636 +0000 UTC m=+150.777446386 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:37:57 crc kubenswrapper[4881]: I0126 12:37:57.798325 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dr6gf\" (UID: \"b92eec64-c286-4244-9e62-a5cd7ab680ae\") " pod="openshift-image-registry/image-registry-697d97f7c8-dr6gf" Jan 26 12:37:57 crc kubenswrapper[4881]: E0126 12:37:57.798645 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:37:58.298637041 +0000 UTC m=+150.777947067 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dr6gf" (UID: "b92eec64-c286-4244-9e62-a5cd7ab680ae") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:37:57 crc kubenswrapper[4881]: I0126 12:37:57.900148 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:37:57 crc kubenswrapper[4881]: E0126 12:37:57.900674 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:37:58.400649365 +0000 UTC m=+150.879959421 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:37:58 crc kubenswrapper[4881]: I0126 12:37:58.001864 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dr6gf\" (UID: \"b92eec64-c286-4244-9e62-a5cd7ab680ae\") " pod="openshift-image-registry/image-registry-697d97f7c8-dr6gf" Jan 26 12:37:58 crc kubenswrapper[4881]: E0126 12:37:58.002175 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:37:58.502159798 +0000 UTC m=+150.981469824 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dr6gf" (UID: "b92eec64-c286-4244-9e62-a5cd7ab680ae") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:37:58 crc kubenswrapper[4881]: I0126 12:37:58.103332 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:37:58 crc kubenswrapper[4881]: E0126 12:37:58.103692 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:37:58.603674959 +0000 UTC m=+151.082984985 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:37:58 crc kubenswrapper[4881]: I0126 12:37:58.209350 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dr6gf\" (UID: \"b92eec64-c286-4244-9e62-a5cd7ab680ae\") " pod="openshift-image-registry/image-registry-697d97f7c8-dr6gf" Jan 26 12:37:58 crc kubenswrapper[4881]: E0126 12:37:58.209708 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:37:58.709693039 +0000 UTC m=+151.189003065 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dr6gf" (UID: "b92eec64-c286-4244-9e62-a5cd7ab680ae") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:37:58 crc kubenswrapper[4881]: I0126 12:37:58.310790 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:37:58 crc kubenswrapper[4881]: E0126 12:37:58.311445 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:37:58.811426775 +0000 UTC m=+151.290736801 (durationBeforeRetry 500ms). 
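
Note the cadence: each rejected operation arms a "no retries permitted until" deadline 500ms out (the durationBeforeRetry), while the volume reconciler keeps re-queuing the work on its own roughly 100ms loop, so the same pair of records lands about ten times a second without the backoff ever being violated. A toy version of that gate, all names invented rather than taken from nestedpendingoperations.go:

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // retryGate is a toy version of the "no retries permitted until <t>"
    // bookkeeping in the nestedpendingoperations records above.
    type retryGate struct {
        notBefore time.Time
        delay     time.Duration
    }

    var errNotRegistered = errors.New("driver not registered")

    func (g *retryGate) attempt(now time.Time, op func() error) error {
        if now.Before(g.notBefore) {
            return fmt.Errorf("no retries permitted until %s", g.notBefore.Format(time.StampMilli))
        }
        if err := op(); err != nil {
            g.notBefore = now.Add(g.delay) // arm the 500ms gate on failure
            return err
        }
        return nil
    }

    func main() {
        g := &retryGate{delay: 500 * time.Millisecond}
        mount := func() error { return errNotRegistered }
        fmt.Println(g.attempt(time.Now(), mount)) // fails and arms the gate
        fmt.Println(g.attempt(time.Now(), mount)) // rejected: deadline not yet reached
    }
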
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:37:58 crc kubenswrapper[4881]: I0126 12:37:58.412493 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dr6gf\" (UID: \"b92eec64-c286-4244-9e62-a5cd7ab680ae\") " pod="openshift-image-registry/image-registry-697d97f7c8-dr6gf" Jan 26 12:37:58 crc kubenswrapper[4881]: E0126 12:37:58.412984 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:37:58.912963498 +0000 UTC m=+151.392273524 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dr6gf" (UID: "b92eec64-c286-4244-9e62-a5cd7ab680ae") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:37:58 crc kubenswrapper[4881]: I0126 12:37:58.482959 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-k8g4d" event={"ID":"517de9f8-a681-40a3-bd6b-8009ae963398","Type":"ContainerStarted","Data":"beceb6e3608b91c4c451975503697b5205c931f98abe1d0701cf3cee9fd1afe3"} Jan 26 12:37:58 crc kubenswrapper[4881]: I0126 12:37:58.484658 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490510-6h9v5" event={"ID":"3bf4018e-9078-4293-8f8e-f6ab7567943a","Type":"ContainerStarted","Data":"6071074a8d15a1df9ea0a1b0161e1f98baf676354d80109d36d61a4203d2f419"} Jan 26 12:37:58 crc kubenswrapper[4881]: I0126 12:37:58.487321 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-vnnxb" event={"ID":"012f52fd-cf18-4590-98de-2d52c5384600","Type":"ContainerStarted","Data":"9d685d873cc28debf838b380b34d2c348eea1ff8d9a9314350c460c92cc976e4"} Jan 26 12:37:58 crc kubenswrapper[4881]: I0126 12:37:58.492801 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-j7b4j" event={"ID":"ec9077c6-4f92-4925-8efa-8f6351967ae7","Type":"ContainerStarted","Data":"e8e88e467c5ba6258640b38a11cf60c6c3f7bf5b9ff66affd0c39d72bec2ed2b"} Jan 26 12:37:58 crc kubenswrapper[4881]: I0126 12:37:58.494444 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pv5pl" event={"ID":"7c68c037-053a-43f1-a2d6-0a4387610916","Type":"ContainerStarted","Data":"7d86cc4c8279c30cc10395484d3689d82ad13cdb81f536a8b49aee125a64cb4f"} Jan 26 12:37:58 crc kubenswrapper[4881]: I0126 12:37:58.499710 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-console/downloads-7954f5f757-68j9g" event={"ID":"30c09cc7-c747-494b-80ac-e4a780f65fb6","Type":"ContainerStarted","Data":"66741b56934d02cfcb3c03fb3dbc30ff8014d98c8eacaa7f4767e85f662d8b18"} Jan 26 12:37:58 crc kubenswrapper[4881]: I0126 12:37:58.500761 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-68j9g" Jan 26 12:37:58 crc kubenswrapper[4881]: I0126 12:37:58.505881 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-27qqg" event={"ID":"d7d9099c-13e2-4b7e-817e-9917ca9b28fb","Type":"ContainerStarted","Data":"a18fef768f1419957ea1ab2d6efaea0dc73d3dee765d9ecaeadfe98cd6aa9328"} Jan 26 12:37:58 crc kubenswrapper[4881]: I0126 12:37:58.506922 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-972wv" event={"ID":"ed9c86f6-30f0-43fc-87fb-d3497d8a8357","Type":"ContainerStarted","Data":"6d248e5f5052160ce5d45f3b6fdc22e110957a286d417ad4a2e2b769678df3c0"} Jan 26 12:37:58 crc kubenswrapper[4881]: I0126 12:37:58.510307 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mgcg9" event={"ID":"bc3ce0ab-d53f-4504-b1be-09cd3629c5ae","Type":"ContainerStarted","Data":"a4d64ddf63174d3abab31b84dea9eb942de35e37dd90ce4cb7f60612af0d31ac"} Jan 26 12:37:58 crc kubenswrapper[4881]: I0126 12:37:58.512307 4881 patch_prober.go:28] interesting pod/downloads-7954f5f757-68j9g container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" start-of-body= Jan 26 12:37:58 crc kubenswrapper[4881]: I0126 12:37:58.512377 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-68j9g" podUID="30c09cc7-c747-494b-80ac-e4a780f65fb6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" Jan 26 12:37:58 crc kubenswrapper[4881]: I0126 12:37:58.512927 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:37:58 crc kubenswrapper[4881]: E0126 12:37:58.513076 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:37:59.013051096 +0000 UTC m=+151.492361122 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:37:58 crc kubenswrapper[4881]: I0126 12:37:58.513174 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dr6gf\" (UID: \"b92eec64-c286-4244-9e62-a5cd7ab680ae\") " pod="openshift-image-registry/image-registry-697d97f7c8-dr6gf" Jan 26 12:37:58 crc kubenswrapper[4881]: E0126 12:37:58.513553 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:37:59.013540128 +0000 UTC m=+151.492850154 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dr6gf" (UID: "b92eec64-c286-4244-9e62-a5cd7ab680ae") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:37:58 crc kubenswrapper[4881]: I0126 12:37:58.514124 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xbdxs" event={"ID":"9df00c6a-36e4-454c-9bd0-bf7be360fedf","Type":"ContainerStarted","Data":"394272e7abe2b99237a32bed07ae273d47ea46b32eda64fc0e64a85126864a5d"} Jan 26 12:37:58 crc kubenswrapper[4881]: I0126 12:37:58.521036 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-nlqwj" event={"ID":"f94cc1c9-6219-48e2-8033-aecea365cacb","Type":"ContainerStarted","Data":"17400a09027924477f3b51d3f497070d2ff8484e7701546aefc6d9ca7f98476c"} Jan 26 12:37:58 crc kubenswrapper[4881]: I0126 12:37:58.523543 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-p7bh2" event={"ID":"d6b7645c-9920-4793-b6aa-9a6664cc93a0","Type":"ContainerStarted","Data":"0c04b15cb9ebd74162e447dbdf6ac15229da4a7750a6b284cdad7e8a1a7f8954"} Jan 26 12:37:58 crc kubenswrapper[4881]: I0126 12:37:58.529193 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9cpqm" event={"ID":"993b16a8-4172-41dc-90fc-7d420d0a12f2","Type":"ContainerStarted","Data":"31c36682c0a7efd7f6276571ce3ba27923cf7603380e594650b50f2e70b999b0"} Jan 26 12:37:58 crc kubenswrapper[4881]: I0126 12:37:58.534189 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-b5l77" event={"ID":"21710e70-1118-4472-91e1-2c7c66e9fe75","Type":"ContainerStarted","Data":"4311b4e105bdb43177746998b163e770a127863dd7f8e7f709e84dc8f6e161c9"} Jan 26 12:37:58 crc kubenswrapper[4881]: I0126 12:37:58.542551 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2vlbm" event={"ID":"7742150d-7cf8-487e-a375-a39ce7caa256","Type":"ContainerStarted","Data":"185c4f47f6b9f5b9244e4d6480b36d05ebcac6889303fbb2df808c6774a1c604"} Jan 26 12:37:58 crc kubenswrapper[4881]: I0126 12:37:58.559544 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-tqtg4" event={"ID":"40d1e9ff-e409-4d1c-a6ba-a795e9926379","Type":"ContainerStarted","Data":"189d46c7e7aded1431788629c532c15fccf3c26808449d0030c6d436f6b449b6"} Jan 26 12:37:58 crc kubenswrapper[4881]: I0126 12:37:58.565870 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dk6pz" event={"ID":"72d21973-62df-4217-880d-04c600804b8d","Type":"ContainerStarted","Data":"bff574edf60eb86e022b2c22af02829f20659acf91ec56767b0e29f9bcde591e"} Jan 26 12:37:58 crc kubenswrapper[4881]: I0126 12:37:58.572088 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-drv9q" event={"ID":"7157505d-d18a-42a4-8037-96ad9a7825ce","Type":"ContainerStarted","Data":"729dd497508ee615581b3a039285e39d222671d48a4b30a28f9c71feb7f79f3c"} Jan 26 12:37:58 crc kubenswrapper[4881]: I0126 12:37:58.573037 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dklx7" event={"ID":"62f5dd6c-c6a2-4e70-ba5b-727c20098526","Type":"ContainerStarted","Data":"4935db936f6f44fbb897bd0896e498a86651ce5d4b7b5d46f7ecb27f1f114a51"} Jan 26 12:37:58 crc kubenswrapper[4881]: I0126 12:37:58.573864 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-t87hc" event={"ID":"f659aacf-8ad1-4f5b-b6b6-ae9f0114d14f","Type":"ContainerStarted","Data":"2ce310cebc56005e30d33ef8f83cacaaab46be14f253f30498e785e316c212ec"} Jan 26 12:37:58 crc kubenswrapper[4881]: I0126 12:37:58.574606 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cfwgh" event={"ID":"c72452bc-3cf5-4c8a-a133-2789adbaa573","Type":"ContainerStarted","Data":"d0747f29c1bf13a6c7dca8925137c3a7bea70a11d70b411ca450059733b405cb"} Jan 26 12:37:58 crc kubenswrapper[4881]: I0126 12:37:58.593168 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vw8v9" event={"ID":"9b2b7357-ae8b-474d-942e-56c296ace395","Type":"ContainerStarted","Data":"409429c2423ff421c9bb6df8f5513c1c9f6d58567922caa4b3b0a219db0a95d5"} Jan 26 12:37:58 crc kubenswrapper[4881]: I0126 12:37:58.614459 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:37:58 crc kubenswrapper[4881]: E0126 12:37:58.614917 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:37:59.114888716 +0000 UTC m=+151.594198732 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:37:58 crc kubenswrapper[4881]: I0126 12:37:58.615452 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dr6gf\" (UID: \"b92eec64-c286-4244-9e62-a5cd7ab680ae\") " pod="openshift-image-registry/image-registry-697d97f7c8-dr6gf" Jan 26 12:37:58 crc kubenswrapper[4881]: E0126 12:37:58.618300 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:37:59.118280327 +0000 UTC m=+151.597590353 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dr6gf" (UID: "b92eec64-c286-4244-9e62-a5cd7ab680ae") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:37:58 crc kubenswrapper[4881]: I0126 12:37:58.620413 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mgcg9" podStartSLOduration=130.620396347 podStartE2EDuration="2m10.620396347s" podCreationTimestamp="2026-01-26 12:35:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:37:58.611021984 +0000 UTC m=+151.090332010" watchObservedRunningTime="2026-01-26 12:37:58.620396347 +0000 UTC m=+151.099706373" Jan 26 12:37:58 crc kubenswrapper[4881]: I0126 12:37:58.640690 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-68j9g" podStartSLOduration=130.640671929 podStartE2EDuration="2m10.640671929s" podCreationTimestamp="2026-01-26 12:35:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:37:58.639032129 +0000 UTC m=+151.118342155" watchObservedRunningTime="2026-01-26 12:37:58.640671929 +0000 UTC m=+151.119981955" Jan 26 12:37:58 crc kubenswrapper[4881]: I0126 12:37:58.664069 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-p7bh2" podStartSLOduration=130.664050204 podStartE2EDuration="2m10.664050204s" podCreationTimestamp="2026-01-26 12:35:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:37:58.663272956 +0000 UTC m=+151.142583002" watchObservedRunningTime="2026-01-26 12:37:58.664050204 +0000 UTC m=+151.143360230" Jan 26 12:37:58 crc kubenswrapper[4881]: 
I0126 12:37:58.687719 4881 patch_prober.go:28] interesting pod/router-default-5444994796-c4vbw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 12:37:58 crc kubenswrapper[4881]: [-]has-synced failed: reason withheld Jan 26 12:37:58 crc kubenswrapper[4881]: [+]process-running ok Jan 26 12:37:58 crc kubenswrapper[4881]: healthz check failed Jan 26 12:37:58 crc kubenswrapper[4881]: I0126 12:37:58.687769 4881 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-c4vbw" podUID="bdc32e8d-e1c8-4415-9a39-6ddbc4cbca4c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 12:37:58 crc kubenswrapper[4881]: I0126 12:37:58.716584 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:37:58 crc kubenswrapper[4881]: E0126 12:37:58.716819 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:37:59.216792498 +0000 UTC m=+151.696102524 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:37:58 crc kubenswrapper[4881]: I0126 12:37:58.717146 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dr6gf\" (UID: \"b92eec64-c286-4244-9e62-a5cd7ab680ae\") " pod="openshift-image-registry/image-registry-697d97f7c8-dr6gf" Jan 26 12:37:58 crc kubenswrapper[4881]: E0126 12:37:58.719071 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:37:59.219054612 +0000 UTC m=+151.698364638 (durationBeforeRetry 500ms). 
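
The router's startup probe output follows the aggregated-healthz convention: each named sub-check reports [+] or [-], and any failure turns the whole endpoint into HTTP 500, the statuscode the prober then records. A minimal handler imitating that report format; the check names are copied from the log, everything else here is invented:

    package main

    import (
        "fmt"
        "net/http"
    )

    // check is a named health check; the [+]/[-] report below imitates the
    // router's healthz output quoted above, not its actual source.
    type check struct {
        name string
        fn   func() error
    }

    func healthzHandler(checks []check) http.HandlerFunc {
        return func(w http.ResponseWriter, r *http.Request) {
            failed := false
            body := ""
            for _, c := range checks {
                if err := c.fn(); err != nil {
                    failed = true
                    body += fmt.Sprintf("[-]%s failed: reason withheld\n", c.name)
                } else {
                    body += fmt.Sprintf("[+]%s ok\n", c.name)
                }
            }
            if failed {
                // The probe sees this as "HTTP probe failed with statuscode: 500".
                http.Error(w, body+"healthz check failed", http.StatusInternalServerError)
                return
            }
            fmt.Fprint(w, body+"ok\n")
        }
    }

    func main() {
        checks := []check{
            {"backend-http", func() error { return fmt.Errorf("reason withheld") }},
            {"has-synced", func() error { return fmt.Errorf("reason withheld") }},
            {"process-running", func() error { return nil }},
        }
        http.Handle("/healthz", healthzHandler(checks))
        _ = http.ListenAndServe("127.0.0.1:8081", nil) // arbitrary local port for the sketch
    }
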
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dr6gf" (UID: "b92eec64-c286-4244-9e62-a5cd7ab680ae") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:37:58 crc kubenswrapper[4881]: I0126 12:37:58.818444 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:37:58 crc kubenswrapper[4881]: E0126 12:37:58.819556 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:37:59.319465067 +0000 UTC m=+151.798775123 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:37:58 crc kubenswrapper[4881]: I0126 12:37:58.921577 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dr6gf\" (UID: \"b92eec64-c286-4244-9e62-a5cd7ab680ae\") " pod="openshift-image-registry/image-registry-697d97f7c8-dr6gf" Jan 26 12:37:58 crc kubenswrapper[4881]: E0126 12:37:58.921887 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:37:59.42187381 +0000 UTC m=+151.901183836 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dr6gf" (UID: "b92eec64-c286-4244-9e62-a5cd7ab680ae") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:37:59 crc kubenswrapper[4881]: I0126 12:37:59.023145 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:37:59 crc kubenswrapper[4881]: E0126 12:37:59.023316 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:37:59.52329015 +0000 UTC m=+152.002600176 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:37:59 crc kubenswrapper[4881]: I0126 12:37:59.023602 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dr6gf\" (UID: \"b92eec64-c286-4244-9e62-a5cd7ab680ae\") " pod="openshift-image-registry/image-registry-697d97f7c8-dr6gf" Jan 26 12:37:59 crc kubenswrapper[4881]: E0126 12:37:59.023912 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:37:59.523901484 +0000 UTC m=+152.003211510 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dr6gf" (UID: "b92eec64-c286-4244-9e62-a5cd7ab680ae") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:37:59 crc kubenswrapper[4881]: I0126 12:37:59.124450 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:37:59 crc kubenswrapper[4881]: E0126 12:37:59.124602 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:37:59.624581537 +0000 UTC m=+152.103891563 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:37:59 crc kubenswrapper[4881]: I0126 12:37:59.124833 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dr6gf\" (UID: \"b92eec64-c286-4244-9e62-a5cd7ab680ae\") " pod="openshift-image-registry/image-registry-697d97f7c8-dr6gf" Jan 26 12:37:59 crc kubenswrapper[4881]: E0126 12:37:59.125123 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:37:59.6251162 +0000 UTC m=+152.104426216 (durationBeforeRetry 500ms). 
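
Both directions of the retry loop trace back to a single reconciler pass over desired versus actual state: pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 is still mounted for the old pod UID 8f668bae-612b-4b75-9490-919e737c6a3b, which is being torn down, and is wanted by its replacement image-registry-697d97f7c8-dr6gf (UID b92eec64-c286-4244-9e62-a5cd7ab680ae), so every pass queues one UnmountVolume and one MountVolume and both die on the unregistered driver. A toy pass under those assumptions, with invented types:

    package main

    import (
        "fmt"
        "time"
    )

    // world maps volume name -> pod UID; a crude stand-in for the desired and
    // actual state-of-world caches behind reconciler_common.go.
    type world map[string]string

    func reconcile(desired, actual world, tick time.Time) {
        for vol, uid := range actual {
            if desired[vol] != uid {
                fmt.Printf("%s UnmountVolume started for %q pod %q\n", tick.Format("15:04:05.000"), vol, uid)
            }
        }
        for vol, uid := range desired {
            if actual[vol] != uid {
                fmt.Printf("%s MountVolume started for %q pod %q\n", tick.Format("15:04:05.000"), vol, uid)
            }
        }
    }

    func main() {
        vol := "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8"
        actual := world{vol: "8f668bae-612b-4b75-9490-919e737c6a3b"}  // old registry pod, terminating
        desired := world{vol: "b92eec64-c286-4244-9e62-a5cd7ab680ae"} // image-registry-697d97f7c8-dr6gf
        // While the driver is unregistered both operations keep failing, so the
        // same pair of lines recurs on every pass, ~100ms apart in the log.
        for i := 0; i < 3; i++ {
            reconcile(desired, actual, time.Now())
            time.Sleep(100 * time.Millisecond)
        }
    }
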
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dr6gf" (UID: "b92eec64-c286-4244-9e62-a5cd7ab680ae") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:37:59 crc kubenswrapper[4881]: I0126 12:37:59.226254 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:37:59 crc kubenswrapper[4881]: E0126 12:37:59.226437 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:37:59.726420117 +0000 UTC m=+152.205730143 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:37:59 crc kubenswrapper[4881]: I0126 12:37:59.226632 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dr6gf\" (UID: \"b92eec64-c286-4244-9e62-a5cd7ab680ae\") " pod="openshift-image-registry/image-registry-697d97f7c8-dr6gf" Jan 26 12:37:59 crc kubenswrapper[4881]: E0126 12:37:59.226912 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:37:59.726905298 +0000 UTC m=+152.206215324 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dr6gf" (UID: "b92eec64-c286-4244-9e62-a5cd7ab680ae") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:37:59 crc kubenswrapper[4881]: I0126 12:37:59.327827 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:37:59 crc kubenswrapper[4881]: E0126 12:37:59.328031 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:37:59.82800537 +0000 UTC m=+152.307315396 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:37:59 crc kubenswrapper[4881]: I0126 12:37:59.328340 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dr6gf\" (UID: \"b92eec64-c286-4244-9e62-a5cd7ab680ae\") " pod="openshift-image-registry/image-registry-697d97f7c8-dr6gf" Jan 26 12:37:59 crc kubenswrapper[4881]: E0126 12:37:59.328639 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:37:59.828628025 +0000 UTC m=+152.307938051 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dr6gf" (UID: "b92eec64-c286-4244-9e62-a5cd7ab680ae") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:37:59 crc kubenswrapper[4881]: I0126 12:37:59.428879 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:37:59 crc kubenswrapper[4881]: E0126 12:37:59.429069 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:37:59.929031661 +0000 UTC m=+152.408341717 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:37:59 crc kubenswrapper[4881]: I0126 12:37:59.429152 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dr6gf\" (UID: \"b92eec64-c286-4244-9e62-a5cd7ab680ae\") " pod="openshift-image-registry/image-registry-697d97f7c8-dr6gf" Jan 26 12:37:59 crc kubenswrapper[4881]: E0126 12:37:59.429461 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:37:59.929446081 +0000 UTC m=+152.408756107 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dr6gf" (UID: "b92eec64-c286-4244-9e62-a5cd7ab680ae") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:37:59 crc kubenswrapper[4881]: I0126 12:37:59.531164 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:37:59 crc kubenswrapper[4881]: E0126 12:37:59.531387 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:38:00.031356893 +0000 UTC m=+152.510666949 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:37:59 crc kubenswrapper[4881]: I0126 12:37:59.531611 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dr6gf\" (UID: \"b92eec64-c286-4244-9e62-a5cd7ab680ae\") " pod="openshift-image-registry/image-registry-697d97f7c8-dr6gf" Jan 26 12:37:59 crc kubenswrapper[4881]: E0126 12:37:59.532053 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:38:00.032036198 +0000 UTC m=+152.511346254 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dr6gf" (UID: "b92eec64-c286-4244-9e62-a5cd7ab680ae") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:37:59 crc kubenswrapper[4881]: I0126 12:37:59.599976 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-k8g4d" event={"ID":"517de9f8-a681-40a3-bd6b-8009ae963398","Type":"ContainerStarted","Data":"a40745ebb79ae69374ab36cb1088656f1b2f0122a8286f35b7f45ced370ebe11"} Jan 26 12:37:59 crc kubenswrapper[4881]: I0126 12:37:59.601319 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cfwgh" event={"ID":"c72452bc-3cf5-4c8a-a133-2789adbaa573","Type":"ContainerStarted","Data":"7c7afab2348246d701263ada5de43f496bc73f240f423e8d79dd032ad736c20c"} Jan 26 12:37:59 crc kubenswrapper[4881]: I0126 12:37:59.603022 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-2fsrg" event={"ID":"1ea8ad00-fb4c-4f75-a1e6-1ea2bd64c8da","Type":"ContainerStarted","Data":"2fec699057b24a03ab4b37808008b0f456cafd2db8b4cdf92a5a04eb93ec8276"} Jan 26 12:37:59 crc kubenswrapper[4881]: I0126 12:37:59.603227 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-2fsrg" Jan 26 12:37:59 crc kubenswrapper[4881]: I0126 12:37:59.607075 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-2xt9h" event={"ID":"96eca703-29ae-4ec1-a961-e3303788da4f","Type":"ContainerStarted","Data":"40912548b6c7d8681b3081a1a06d996cec3dbae70b9d6800b522bbfe3c83b175"} Jan 26 12:37:59 crc kubenswrapper[4881]: I0126 12:37:59.608579 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490510-6h9v5" event={"ID":"3bf4018e-9078-4293-8f8e-f6ab7567943a","Type":"ContainerStarted","Data":"25bb4b559ced09ec8791787a057e14ef76ba787285a6ce95bdbf493b84cdfd08"} Jan 26 12:37:59 crc kubenswrapper[4881]: I0126 12:37:59.611354 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-cwb4s" event={"ID":"de07a342-44f0-45cc-a461-5fd5a70e34d9","Type":"ContainerStarted","Data":"53d04472eabb286f5412f60fb4e75154e1dad697ac8892a4ab7a02bc83a6bc9e"} Jan 26 12:37:59 crc kubenswrapper[4881]: I0126 12:37:59.619204 4881 patch_prober.go:28] interesting pod/console-operator-58897d9998-2fsrg container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.19:8443/readyz\": dial tcp 10.217.0.19:8443: connect: connection refused" start-of-body= Jan 26 12:37:59 crc kubenswrapper[4881]: I0126 12:37:59.619364 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-2fsrg" podUID="1ea8ad00-fb4c-4f75-a1e6-1ea2bd64c8da" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.19:8443/readyz\": dial tcp 10.217.0.19:8443: connect: connection refused" Jan 26 12:37:59 crc kubenswrapper[4881]: I0126 12:37:59.621603 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dklx7" event={"ID":"62f5dd6c-c6a2-4e70-ba5b-727c20098526","Type":"ContainerStarted","Data":"e22ab1e6ee6a1d81e47ac37fc55cd28a28dba745c07942163d5ebd06062cc967"} Jan 26 12:37:59 crc kubenswrapper[4881]: I0126 12:37:59.621817 4881 patch_prober.go:28] interesting pod/downloads-7954f5f757-68j9g container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" start-of-body= Jan 26 12:37:59 crc kubenswrapper[4881]: I0126 12:37:59.621861 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-68j9g" podUID="30c09cc7-c747-494b-80ac-e4a780f65fb6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" Jan 26 12:37:59 crc kubenswrapper[4881]: I0126 12:37:59.632764 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:37:59 crc kubenswrapper[4881]: E0126 12:37:59.632952 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:38:00.132925806 +0000 UTC m=+152.612235832 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:37:59 crc kubenswrapper[4881]: I0126 12:37:59.633151 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dr6gf\" (UID: \"b92eec64-c286-4244-9e62-a5cd7ab680ae\") " pod="openshift-image-registry/image-registry-697d97f7c8-dr6gf" Jan 26 12:37:59 crc kubenswrapper[4881]: E0126 12:37:59.633567 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:38:00.133551591 +0000 UTC m=+152.612861617 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dr6gf" (UID: "b92eec64-c286-4244-9e62-a5cd7ab680ae") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:37:59 crc kubenswrapper[4881]: I0126 12:37:59.633602 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cfwgh" podStartSLOduration=131.633586772 podStartE2EDuration="2m11.633586772s" podCreationTimestamp="2026-01-26 12:35:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:37:59.632454515 +0000 UTC m=+152.111764541" watchObservedRunningTime="2026-01-26 12:37:59.633586772 +0000 UTC m=+152.112896798" Jan 26 12:37:59 crc kubenswrapper[4881]: I0126 12:37:59.687713 4881 patch_prober.go:28] interesting pod/router-default-5444994796-c4vbw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 12:37:59 crc kubenswrapper[4881]: [-]has-synced failed: reason withheld Jan 26 12:37:59 crc kubenswrapper[4881]: [+]process-running ok Jan 26 12:37:59 crc kubenswrapper[4881]: healthz check failed Jan 26 12:37:59 crc kubenswrapper[4881]: I0126 12:37:59.687776 4881 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-c4vbw" podUID="bdc32e8d-e1c8-4415-9a39-6ddbc4cbca4c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 12:37:59 crc kubenswrapper[4881]: I0126 12:37:59.734688 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:37:59 crc kubenswrapper[4881]: E0126 12:37:59.734881 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:38:00.234851948 +0000 UTC m=+152.714161984 (durationBeforeRetry 500ms). 
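
The pod_startup_latency_tracker records are plain arithmetic: podStartSLOduration is the observed running time minus podCreationTimestamp, which is why the operator pods created at 12:35:48 all report roughly 2m10-11s the moment their containers finally start. Checking the cluster-image-registry-operator numbers from the entry above:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Values copied from the cluster-image-registry-operator entry above;
        // the SLO duration is simply the difference of the two timestamps.
        created, _ := time.Parse(time.RFC3339, "2026-01-26T12:35:48Z")
        running, _ := time.Parse(time.RFC3339Nano, "2026-01-26T12:37:59.633586772Z")
        fmt.Println(running.Sub(created)) // 2m11.633586772s = the logged 131.633586772s
    }
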
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:37:59 crc kubenswrapper[4881]: I0126 12:37:59.735267 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dr6gf\" (UID: \"b92eec64-c286-4244-9e62-a5cd7ab680ae\") " pod="openshift-image-registry/image-registry-697d97f7c8-dr6gf" Jan 26 12:37:59 crc kubenswrapper[4881]: E0126 12:37:59.735738 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:38:00.235716688 +0000 UTC m=+152.715026784 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dr6gf" (UID: "b92eec64-c286-4244-9e62-a5cd7ab680ae") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:37:59 crc kubenswrapper[4881]: I0126 12:37:59.836206 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:37:59 crc kubenswrapper[4881]: E0126 12:37:59.836450 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:38:00.336398911 +0000 UTC m=+152.815708947 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:37:59 crc kubenswrapper[4881]: I0126 12:37:59.837029 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dr6gf\" (UID: \"b92eec64-c286-4244-9e62-a5cd7ab680ae\") " pod="openshift-image-registry/image-registry-697d97f7c8-dr6gf" Jan 26 12:37:59 crc kubenswrapper[4881]: E0126 12:37:59.837701 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:38:00.337676081 +0000 UTC m=+152.816986277 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dr6gf" (UID: "b92eec64-c286-4244-9e62-a5cd7ab680ae") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:37:59 crc kubenswrapper[4881]: I0126 12:37:59.938715 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:37:59 crc kubenswrapper[4881]: E0126 12:37:59.942199 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:38:00.442164533 +0000 UTC m=+152.921474559 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:38:00 crc kubenswrapper[4881]: I0126 12:38:00.043393 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dr6gf\" (UID: \"b92eec64-c286-4244-9e62-a5cd7ab680ae\") " pod="openshift-image-registry/image-registry-697d97f7c8-dr6gf" Jan 26 12:38:00 crc kubenswrapper[4881]: E0126 12:38:00.043863 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:38:00.54384699 +0000 UTC m=+153.023157016 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dr6gf" (UID: "b92eec64-c286-4244-9e62-a5cd7ab680ae") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:38:00 crc kubenswrapper[4881]: I0126 12:38:00.148062 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:38:00 crc kubenswrapper[4881]: E0126 12:38:00.148448 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:38:00.648425194 +0000 UTC m=+153.127735220 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
[Four identical retry cycles elided (12:38:00.249, 12:38:00.350, 12:38:00.452, 12:38:00.553): each MountVolume.MountDevice and UnmountVolume.TearDown attempt for pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 fails with the same "driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers" error and is requeued with durationBeforeRetry 500ms.]
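
Every failure in this burst has the same root cause: the kubelet cannot resolve the driver name kubevirt.io.hostpath-provisioner to a registered CSI plugin endpoint, because the hostpath provisioner pod has not started yet and so has never registered itself. Below is a minimal Go sketch of the name-to-endpoint lookup pattern that produces this class of error; it is illustrative only, not kubelet source, and the registry type, its fields, and the socket path are assumptions.

    // Minimal sketch (not kubelet source): a name-indexed CSI driver
    // registry that yields "driver name X not found" before registration.
    package main

    import (
        "fmt"
        "sync"
    )

    type csiDriverRegistry struct {
        mu      sync.RWMutex
        drivers map[string]string // driver name -> unix socket path (assumed layout)
    }

    func (r *csiDriverRegistry) endpoint(name string) (string, error) {
        r.mu.RLock()
        defer r.mu.RUnlock()
        ep, ok := r.drivers[name]
        if !ok {
            // Mirrors the error text repeated throughout this log.
            return "", fmt.Errorf("driver name %s not found in the list of registered CSI drivers", name)
        }
        return ep, nil
    }

    func (r *csiDriverRegistry) register(name, ep string) {
        r.mu.Lock()
        defer r.mu.Unlock()
        r.drivers[name] = ep
    }

    func main() {
        reg := &csiDriverRegistry{drivers: map[string]string{}}

        // Before the hostpath plugin pod registers itself: every lookup fails.
        if _, err := reg.endpoint("kubevirt.io.hostpath-provisioner"); err != nil {
            fmt.Println("mount attempt:", err)
        }

        // After registration (normally done over the kubelet's
        // plugin-registration socket), the same lookup succeeds.
        reg.register("kubevirt.io.hostpath-provisioner", "/var/lib/kubelet/plugins/csi-hostpath/csi.sock") // hypothetical path
        if ep, err := reg.endpoint("kubevirt.io.hostpath-provisioner"); err == nil {
            fmt.Println("mount attempt: driver endpoint", ep)
        }
    }
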
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:38:00 crc kubenswrapper[4881]: I0126 12:38:00.553879 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dr6gf\" (UID: \"b92eec64-c286-4244-9e62-a5cd7ab680ae\") " pod="openshift-image-registry/image-registry-697d97f7c8-dr6gf" Jan 26 12:38:00 crc kubenswrapper[4881]: E0126 12:38:00.554175 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:38:01.054166135 +0000 UTC m=+153.533476161 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dr6gf" (UID: "b92eec64-c286-4244-9e62-a5cd7ab680ae") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:38:00 crc kubenswrapper[4881]: I0126 12:38:00.625283 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-b5l77" event={"ID":"21710e70-1118-4472-91e1-2c7c66e9fe75","Type":"ContainerStarted","Data":"5d655ac2fc45249b16d9a4de6279e494cfe2036d46262dddc68cb9bbc640483c"} Jan 26 12:38:00 crc kubenswrapper[4881]: I0126 12:38:00.625543 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-b5l77" Jan 26 12:38:00 crc kubenswrapper[4881]: I0126 12:38:00.626962 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-t87hc" event={"ID":"f659aacf-8ad1-4f5b-b6b6-ae9f0114d14f","Type":"ContainerStarted","Data":"923002ccc4f28fe6b62b9a625c1be79c974beac3013c9cf46084028ef81d3fc8"} Jan 26 12:38:00 crc kubenswrapper[4881]: I0126 12:38:00.629293 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gtzx7" event={"ID":"0a192f0e-fe6b-434b-a63e-7a61fcd9ca2f","Type":"ContainerStarted","Data":"fc506659cc9e1552cdd9225487ac969c7ae3e1e33b6d3ae0fb1d5c38079b04bb"} Jan 26 12:38:00 crc kubenswrapper[4881]: I0126 12:38:00.629432 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gtzx7" Jan 26 12:38:00 crc kubenswrapper[4881]: I0126 12:38:00.631024 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-nlqwj" event={"ID":"f94cc1c9-6219-48e2-8033-aecea365cacb","Type":"ContainerStarted","Data":"90aa63a8adc6faab680560f4f163b3b93a71dd06bfc83e774a21abe340d35e0a"} Jan 26 12:38:00 crc kubenswrapper[4881]: I0126 12:38:00.632972 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-service-ca-operator/service-ca-operator-777779d784-tqtg4" event={"ID":"40d1e9ff-e409-4d1c-a6ba-a795e9926379","Type":"ContainerStarted","Data":"4470b5d3abded03b05b486efafe46dcad5d861984546d8073f5043ed1607860e"} Jan 26 12:38:00 crc kubenswrapper[4881]: I0126 12:38:00.635355 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vw8v9" event={"ID":"9b2b7357-ae8b-474d-942e-56c296ace395","Type":"ContainerStarted","Data":"911691fec9c235f790ac0de5d01fb3f1b2d9cbe98982956a66f9a4b27683d1c4"} Jan 26 12:38:00 crc kubenswrapper[4881]: I0126 12:38:00.637115 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nqv96" event={"ID":"1cd7d43a-b82c-423b-ac88-ac99d9b753aa","Type":"ContainerStarted","Data":"244b7feb7a1772e5ab92b11a62b590cbe1b8b92d177bb94b1e67bc7ed9c9a86e"} Jan 26 12:38:00 crc kubenswrapper[4881]: I0126 12:38:00.638859 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9cpqm" event={"ID":"993b16a8-4172-41dc-90fc-7d420d0a12f2","Type":"ContainerStarted","Data":"6021529d5ae00d6d57ad81bfbbbdb9db46764929870cd316c358da7ca0299266"} Jan 26 12:38:00 crc kubenswrapper[4881]: I0126 12:38:00.641677 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-j7b4j" event={"ID":"ec9077c6-4f92-4925-8efa-8f6351967ae7","Type":"ContainerStarted","Data":"6bc16af97f3c823212e4e338ef77d89d15a8f1218f798186f7fa0f9f2b943c05"} Jan 26 12:38:00 crc kubenswrapper[4881]: I0126 12:38:00.643783 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xbdxs" event={"ID":"9df00c6a-36e4-454c-9bd0-bf7be360fedf","Type":"ContainerStarted","Data":"ad8e54e375d23e6080b00a86fe149dffd2d6ee39bb82d1986b8019a835d9c6d9"} Jan 26 12:38:00 crc kubenswrapper[4881]: I0126 12:38:00.643912 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xbdxs" Jan 26 12:38:00 crc kubenswrapper[4881]: I0126 12:38:00.647376 4881 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-xbdxs container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.43:8443/healthz\": dial tcp 10.217.0.43:8443: connect: connection refused" start-of-body= Jan 26 12:38:00 crc kubenswrapper[4881]: I0126 12:38:00.647437 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xbdxs" podUID="9df00c6a-36e4-454c-9bd0-bf7be360fedf" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.43:8443/healthz\": dial tcp 10.217.0.43:8443: connect: connection refused" Jan 26 12:38:00 crc kubenswrapper[4881]: I0126 12:38:00.649156 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2vlbm" event={"ID":"7742150d-7cf8-487e-a375-a39ce7caa256","Type":"ContainerStarted","Data":"ba9142283897555393625694796a9e63258c0482e0988b6c0a9498f786eed867"} Jan 26 12:38:00 crc kubenswrapper[4881]: I0126 12:38:00.649558 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-2fsrg" podStartSLOduration=132.649543451 podStartE2EDuration="2m12.649543451s" 
podCreationTimestamp="2026-01-26 12:35:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:37:59.661485485 +0000 UTC m=+152.140795531" watchObservedRunningTime="2026-01-26 12:38:00.649543451 +0000 UTC m=+153.128853477" Jan 26 12:38:00 crc kubenswrapper[4881]: I0126 12:38:00.649628 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2vlbm" Jan 26 12:38:00 crc kubenswrapper[4881]: I0126 12:38:00.651457 4881 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-2vlbm container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused" start-of-body= Jan 26 12:38:00 crc kubenswrapper[4881]: I0126 12:38:00.651587 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2vlbm" podUID="7742150d-7cf8-487e-a375-a39ce7caa256" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused" Jan 26 12:38:00 crc kubenswrapper[4881]: I0126 12:38:00.654587 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:38:00 crc kubenswrapper[4881]: E0126 12:38:00.655562 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:38:01.155497663 +0000 UTC m=+153.634807729 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:38:00 crc kubenswrapper[4881]: I0126 12:38:00.657196 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dk6pz" event={"ID":"72d21973-62df-4217-880d-04c600804b8d","Type":"ContainerStarted","Data":"c54a06402930038f99d67a814b2b36118055aacd42e633df99189501e5cffdd7"} Jan 26 12:38:00 crc kubenswrapper[4881]: I0126 12:38:00.659803 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-drv9q" event={"ID":"7157505d-d18a-42a4-8037-96ad9a7825ce","Type":"ContainerStarted","Data":"02ecd3da27e858bf8b74970c57d8876aca3a0ab2dd7be59babeaeafbe580b60e"} Jan 26 12:38:00 crc kubenswrapper[4881]: I0126 12:38:00.660343 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-drv9q" Jan 26 12:38:00 crc kubenswrapper[4881]: I0126 12:38:00.662856 4881 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-drv9q container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.32:8080/healthz\": dial tcp 10.217.0.32:8080: connect: connection refused" start-of-body= Jan 26 12:38:00 crc kubenswrapper[4881]: I0126 12:38:00.662963 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-drv9q" podUID="7157505d-d18a-42a4-8037-96ad9a7825ce" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.32:8080/healthz\": dial tcp 10.217.0.32:8080: connect: connection refused" Jan 26 12:38:00 crc kubenswrapper[4881]: I0126 12:38:00.667022 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-27qqg" event={"ID":"d7d9099c-13e2-4b7e-817e-9917ca9b28fb","Type":"ContainerStarted","Data":"2d073dda2cebc4125e97bf4274d436ccb9d79d9150fcac14e19109b9c14a3b65"} Jan 26 12:38:00 crc kubenswrapper[4881]: I0126 12:38:00.669192 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-2xt9h" event={"ID":"96eca703-29ae-4ec1-a961-e3303788da4f","Type":"ContainerStarted","Data":"e52c1e58d1d865301d592be9c4538739932c803717b46451636c899bada08e52"} Jan 26 12:38:00 crc kubenswrapper[4881]: I0126 12:38:00.672574 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-vnnxb" event={"ID":"012f52fd-cf18-4590-98de-2d52c5384600","Type":"ContainerStarted","Data":"0722c01a192aa5f95618221179c6f024cc7e4dad0cbdc6bff5d24e62463ebaa8"} Jan 26 12:38:00 crc kubenswrapper[4881]: I0126 12:38:00.672901 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-b5l77" podStartSLOduration=9.672890526 podStartE2EDuration="9.672890526s" podCreationTimestamp="2026-01-26 12:37:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 
12:38:00.646336995 +0000 UTC m=+153.125647061" watchObservedRunningTime="2026-01-26 12:38:00.672890526 +0000 UTC m=+153.152200552" Jan 26 12:38:00 crc kubenswrapper[4881]: I0126 12:38:00.673202 4881 patch_prober.go:28] interesting pod/console-operator-58897d9998-2fsrg container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.19:8443/readyz\": dial tcp 10.217.0.19:8443: connect: connection refused" start-of-body= Jan 26 12:38:00 crc kubenswrapper[4881]: I0126 12:38:00.673256 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-2fsrg" podUID="1ea8ad00-fb4c-4f75-a1e6-1ea2bd64c8da" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.19:8443/readyz\": dial tcp 10.217.0.19:8443: connect: connection refused" Jan 26 12:38:00 crc kubenswrapper[4881]: I0126 12:38:00.673339 4881 patch_prober.go:28] interesting pod/downloads-7954f5f757-68j9g container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" start-of-body= Jan 26 12:38:00 crc kubenswrapper[4881]: I0126 12:38:00.673392 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-68j9g" podUID="30c09cc7-c747-494b-80ac-e4a780f65fb6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" Jan 26 12:38:00 crc kubenswrapper[4881]: I0126 12:38:00.673345 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dklx7" Jan 26 12:38:00 crc kubenswrapper[4881]: I0126 12:38:00.674837 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gtzx7" podStartSLOduration=133.674830982 podStartE2EDuration="2m13.674830982s" podCreationTimestamp="2026-01-26 12:35:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:38:00.672693132 +0000 UTC m=+153.152003198" watchObservedRunningTime="2026-01-26 12:38:00.674830982 +0000 UTC m=+153.154141018" Jan 26 12:38:00 crc kubenswrapper[4881]: I0126 12:38:00.675439 4881 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-dklx7 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.36:5443/healthz\": dial tcp 10.217.0.36:5443: connect: connection refused" start-of-body= Jan 26 12:38:00 crc kubenswrapper[4881]: I0126 12:38:00.675501 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dklx7" podUID="62f5dd6c-c6a2-4e70-ba5b-727c20098526" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.36:5443/healthz\": dial tcp 10.217.0.36:5443: connect: connection refused" Jan 26 12:38:00 crc kubenswrapper[4881]: I0126 12:38:00.693304 4881 patch_prober.go:28] interesting pod/router-default-5444994796-c4vbw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 12:38:00 crc kubenswrapper[4881]: [-]has-synced failed: reason withheld Jan 26 12:38:00 crc 
kubenswrapper[4881]: [+]process-running ok Jan 26 12:38:00 crc kubenswrapper[4881]: healthz check failed Jan 26 12:38:00 crc kubenswrapper[4881]: I0126 12:38:00.693410 4881 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-c4vbw" podUID="bdc32e8d-e1c8-4415-9a39-6ddbc4cbca4c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 12:38:00 crc kubenswrapper[4881]: I0126 12:38:00.709921 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-j7b4j" podStartSLOduration=132.709901865 podStartE2EDuration="2m12.709901865s" podCreationTimestamp="2026-01-26 12:35:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:38:00.709715741 +0000 UTC m=+153.189025777" watchObservedRunningTime="2026-01-26 12:38:00.709901865 +0000 UTC m=+153.189211891" Jan 26 12:38:00 crc kubenswrapper[4881]: I0126 12:38:00.711079 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xbdxs" podStartSLOduration=132.711072863 podStartE2EDuration="2m12.711072863s" podCreationTimestamp="2026-01-26 12:35:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:38:00.690104515 +0000 UTC m=+153.169414541" watchObservedRunningTime="2026-01-26 12:38:00.711072863 +0000 UTC m=+153.190382889" Jan 26 12:38:00 crc kubenswrapper[4881]: I0126 12:38:00.728569 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-tqtg4" podStartSLOduration=132.728549309 podStartE2EDuration="2m12.728549309s" podCreationTimestamp="2026-01-26 12:35:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:38:00.725406065 +0000 UTC m=+153.204716101" watchObservedRunningTime="2026-01-26 12:38:00.728549309 +0000 UTC m=+153.207859335" Jan 26 12:38:00 crc kubenswrapper[4881]: I0126 12:38:00.749887 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-t87hc" podStartSLOduration=9.749867365 podStartE2EDuration="9.749867365s" podCreationTimestamp="2026-01-26 12:37:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:38:00.748020362 +0000 UTC m=+153.227330378" watchObservedRunningTime="2026-01-26 12:38:00.749867365 +0000 UTC m=+153.229177391" Jan 26 12:38:00 crc kubenswrapper[4881]: I0126 12:38:00.757204 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dr6gf\" (UID: \"b92eec64-c286-4244-9e62-a5cd7ab680ae\") " pod="openshift-image-registry/image-registry-697d97f7c8-dr6gf" Jan 26 12:38:00 crc kubenswrapper[4881]: E0126 12:38:00.759246 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-26 12:38:01.259228608 +0000 UTC m=+153.738538734 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dr6gf" (UID: "b92eec64-c286-4244-9e62-a5cd7ab680ae") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:38:00 crc kubenswrapper[4881]: I0126 12:38:00.768470 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-drv9q" podStartSLOduration=132.768437726 podStartE2EDuration="2m12.768437726s" podCreationTimestamp="2026-01-26 12:35:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:38:00.766624864 +0000 UTC m=+153.245934880" watchObservedRunningTime="2026-01-26 12:38:00.768437726 +0000 UTC m=+153.247747752" Jan 26 12:38:00 crc kubenswrapper[4881]: I0126 12:38:00.787975 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-k8g4d" podStartSLOduration=132.78795475 podStartE2EDuration="2m12.78795475s" podCreationTimestamp="2026-01-26 12:35:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:38:00.783705149 +0000 UTC m=+153.263015185" watchObservedRunningTime="2026-01-26 12:38:00.78795475 +0000 UTC m=+153.267264776" Jan 26 12:38:00 crc kubenswrapper[4881]: I0126 12:38:00.809459 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-vnnxb" podStartSLOduration=132.809443611 podStartE2EDuration="2m12.809443611s" podCreationTimestamp="2026-01-26 12:35:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:38:00.80729484 +0000 UTC m=+153.286604866" watchObservedRunningTime="2026-01-26 12:38:00.809443611 +0000 UTC m=+153.288753637" Jan 26 12:38:00 crc kubenswrapper[4881]: I0126 12:38:00.859135 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:38:00 crc kubenswrapper[4881]: E0126 12:38:00.859535 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:38:01.35949352 +0000 UTC m=+153.838803546 (durationBeforeRetry 500ms). 
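
The pod_startup_latency_tracker entries are internally consistent: when firstStartedPulling and lastFinishedPulling are the zero time (no image pull was observed), podStartSLOduration is simply watchObservedRunningTime minus podCreationTimestamp. The console-operator entry above checks out, as a few lines of Go confirm; the timestamps are copied from the log and everything else is scaffolding.

    // Recomputes podStartSLOduration for console-operator-58897d9998-2fsrg
    // from the two timestamps logged above.
    package main

    import (
        "fmt"
        "time"
    )

    func mustParse(layout, value string) time.Time {
        t, err := time.Parse(layout, value)
        if err != nil {
            panic(err)
        }
        return t
    }

    func main() {
        // Go's default time.Time formatting, which is what these entries use.
        const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
        created := mustParse(layout, "2026-01-26 12:35:48 +0000 UTC")
        running := mustParse(layout, "2026-01-26 12:38:00.649543451 +0000 UTC")
        // Prints 2m12.649543451s, i.e. podStartSLOduration=132.649543451.
        fmt.Println(running.Sub(created))
    }
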
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:38:00 crc kubenswrapper[4881]: I0126 12:38:00.863719 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-27qqg" podStartSLOduration=132.86370264 podStartE2EDuration="2m12.86370264s" podCreationTimestamp="2026-01-26 12:35:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:38:00.841739889 +0000 UTC m=+153.321049915" watchObservedRunningTime="2026-01-26 12:38:00.86370264 +0000 UTC m=+153.343012666" Jan 26 12:38:00 crc kubenswrapper[4881]: I0126 12:38:00.866150 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2vlbm" podStartSLOduration=132.866141068 podStartE2EDuration="2m12.866141068s" podCreationTimestamp="2026-01-26 12:35:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:38:00.864699204 +0000 UTC m=+153.344009230" watchObservedRunningTime="2026-01-26 12:38:00.866141068 +0000 UTC m=+153.345451094" Jan 26 12:38:00 crc kubenswrapper[4881]: I0126 12:38:00.888883 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29490510-6h9v5" podStartSLOduration=133.888863128 podStartE2EDuration="2m13.888863128s" podCreationTimestamp="2026-01-26 12:35:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:38:00.888599572 +0000 UTC m=+153.367909608" watchObservedRunningTime="2026-01-26 12:38:00.888863128 +0000 UTC m=+153.368173154" Jan 26 12:38:00 crc kubenswrapper[4881]: I0126 12:38:00.910433 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-cwb4s" podStartSLOduration=132.91041205 podStartE2EDuration="2m12.91041205s" podCreationTimestamp="2026-01-26 12:35:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:38:00.908598577 +0000 UTC m=+153.387908603" watchObservedRunningTime="2026-01-26 12:38:00.91041205 +0000 UTC m=+153.389722076" Jan 26 12:38:00 crc kubenswrapper[4881]: I0126 12:38:00.954486 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-2xt9h" podStartSLOduration=132.954468267 podStartE2EDuration="2m12.954468267s" podCreationTimestamp="2026-01-26 12:35:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:38:00.952780427 +0000 UTC m=+153.432090453" watchObservedRunningTime="2026-01-26 12:38:00.954468267 +0000 UTC m=+153.433778293" Jan 26 12:38:00 crc kubenswrapper[4881]: I0126 12:38:00.960614 4881 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dr6gf\" (UID: \"b92eec64-c286-4244-9e62-a5cd7ab680ae\") " pod="openshift-image-registry/image-registry-697d97f7c8-dr6gf" Jan 26 12:38:00 crc kubenswrapper[4881]: E0126 12:38:00.960974 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:38:01.460943671 +0000 UTC m=+153.940253697 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dr6gf" (UID: "b92eec64-c286-4244-9e62-a5cd7ab680ae") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:38:00 crc kubenswrapper[4881]: I0126 12:38:00.998596 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dklx7" podStartSLOduration=132.998576965 podStartE2EDuration="2m12.998576965s" podCreationTimestamp="2026-01-26 12:35:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:38:00.997859338 +0000 UTC m=+153.477169364" watchObservedRunningTime="2026-01-26 12:38:00.998576965 +0000 UTC m=+153.477886991" Jan 26 12:38:01 crc kubenswrapper[4881]: I0126 12:38:01.061829 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:38:01 crc kubenswrapper[4881]: E0126 12:38:01.062077 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:38:01.562041503 +0000 UTC m=+154.041351529 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
[The identical retry cycle repeats at 12:38:01.062, 12:38:01.164, 12:38:01.265, 12:38:01.366, 12:38:01.468 and 12:38:01.569, each attempt again deferred 500 ms; duplicate entries elided.]
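
Noise aside, the turning point arrives just below: PLEG reports a ContainerStarted event for hostpath-provisioner/csi-hostpathplugin-972wv, the very plugin whose absence is failing every mount and unmount above. Each PLEG entry carries a small JSON payload; the sketch below parses the csi-hostpathplugin one verbatim. The struct is illustrative, with field names taken straight from the log.

    // Parses one "SyncLoop (PLEG): event for pod" payload from this log.
    package main

    import (
        "encoding/json"
        "fmt"
    )

    type podLifecycleEvent struct {
        ID   string `json:"ID"`   // pod UID
        Type string `json:"Type"` // e.g. ContainerStarted
        Data string `json:"Data"` // container ID for ContainerStarted events
    }

    func main() {
        raw := `{"ID":"ed9c86f6-30f0-43fc-87fb-d3497d8a8357","Type":"ContainerStarted","Data":"11cb3d0711a0b4e39ab1bb32b181d37d6639449dc20864c0d6f9a0abe008d860"}`
        var ev podLifecycleEvent
        if err := json.Unmarshal([]byte(raw), &ev); err != nil {
            panic(err)
        }
        fmt.Printf("pod %s: %s, container %.12s...\n", ev.ID, ev.Type, ev.Data)
    }
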
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:38:01 crc kubenswrapper[4881]: I0126 12:38:01.670201 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dr6gf\" (UID: \"b92eec64-c286-4244-9e62-a5cd7ab680ae\") " pod="openshift-image-registry/image-registry-697d97f7c8-dr6gf" Jan 26 12:38:01 crc kubenswrapper[4881]: E0126 12:38:01.670585 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:38:02.170572212 +0000 UTC m=+154.649882228 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dr6gf" (UID: "b92eec64-c286-4244-9e62-a5cd7ab680ae") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:38:01 crc kubenswrapper[4881]: I0126 12:38:01.678630 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9cpqm" event={"ID":"993b16a8-4172-41dc-90fc-7d420d0a12f2","Type":"ContainerStarted","Data":"7cb3be15b307b7f61f7624f9c32ca68713917ad6808a085a77aef011b6267195"} Jan 26 12:38:01 crc kubenswrapper[4881]: I0126 12:38:01.681013 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-972wv" event={"ID":"ed9c86f6-30f0-43fc-87fb-d3497d8a8357","Type":"ContainerStarted","Data":"11cb3d0711a0b4e39ab1bb32b181d37d6639449dc20864c0d6f9a0abe008d860"} Jan 26 12:38:01 crc kubenswrapper[4881]: I0126 12:38:01.682738 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-nlqwj" event={"ID":"f94cc1c9-6219-48e2-8033-aecea365cacb","Type":"ContainerStarted","Data":"21076b081e5b316115db11f3ee76a533a24398c43d3f105bdcc37e188a4c7df1"} Jan 26 12:38:01 crc kubenswrapper[4881]: I0126 12:38:01.685660 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dk6pz" event={"ID":"72d21973-62df-4217-880d-04c600804b8d","Type":"ContainerStarted","Data":"7ac87d6ff5e2024d4cde246b518dfac54ee9ba5f1bfa1ddf18a23782a8e4b1cb"} Jan 26 12:38:01 crc kubenswrapper[4881]: I0126 12:38:01.685764 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dk6pz" Jan 26 12:38:01 crc kubenswrapper[4881]: I0126 12:38:01.687282 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nqv96" 
event={"ID":"1cd7d43a-b82c-423b-ac88-ac99d9b753aa","Type":"ContainerStarted","Data":"ae8d34e65f7fc1414d726b3045faa6f1a80ec38071cf6322959203f4c58c2bac"} Jan 26 12:38:01 crc kubenswrapper[4881]: I0126 12:38:01.687551 4881 patch_prober.go:28] interesting pod/router-default-5444994796-c4vbw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 12:38:01 crc kubenswrapper[4881]: [-]has-synced failed: reason withheld Jan 26 12:38:01 crc kubenswrapper[4881]: [+]process-running ok Jan 26 12:38:01 crc kubenswrapper[4881]: healthz check failed Jan 26 12:38:01 crc kubenswrapper[4881]: I0126 12:38:01.687611 4881 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-c4vbw" podUID="bdc32e8d-e1c8-4415-9a39-6ddbc4cbca4c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 12:38:01 crc kubenswrapper[4881]: I0126 12:38:01.688809 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vw8v9" event={"ID":"9b2b7357-ae8b-474d-942e-56c296ace395","Type":"ContainerStarted","Data":"5fa1b9d996be4112c496b97062875d03d9819a6797d49b3f0b4a7fe3f767322e"} Jan 26 12:38:01 crc kubenswrapper[4881]: I0126 12:38:01.690094 4881 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-xbdxs container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.43:8443/healthz\": dial tcp 10.217.0.43:8443: connect: connection refused" start-of-body= Jan 26 12:38:01 crc kubenswrapper[4881]: I0126 12:38:01.690111 4881 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-2vlbm container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused" start-of-body= Jan 26 12:38:01 crc kubenswrapper[4881]: I0126 12:38:01.690137 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xbdxs" podUID="9df00c6a-36e4-454c-9bd0-bf7be360fedf" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.43:8443/healthz\": dial tcp 10.217.0.43:8443: connect: connection refused" Jan 26 12:38:01 crc kubenswrapper[4881]: I0126 12:38:01.690153 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2vlbm" podUID="7742150d-7cf8-487e-a375-a39ce7caa256" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused" Jan 26 12:38:01 crc kubenswrapper[4881]: I0126 12:38:01.690118 4881 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-dklx7 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.36:5443/healthz\": dial tcp 10.217.0.36:5443: connect: connection refused" start-of-body= Jan 26 12:38:01 crc kubenswrapper[4881]: I0126 12:38:01.690196 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dklx7" podUID="62f5dd6c-c6a2-4e70-ba5b-727c20098526" containerName="packageserver" probeResult="failure" output="Get 
\"https://10.217.0.36:5443/healthz\": dial tcp 10.217.0.36:5443: connect: connection refused" Jan 26 12:38:01 crc kubenswrapper[4881]: I0126 12:38:01.690046 4881 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-drv9q container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.32:8080/healthz\": dial tcp 10.217.0.32:8080: connect: connection refused" start-of-body= Jan 26 12:38:01 crc kubenswrapper[4881]: I0126 12:38:01.690505 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-drv9q" podUID="7157505d-d18a-42a4-8037-96ad9a7825ce" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.32:8080/healthz\": dial tcp 10.217.0.32:8080: connect: connection refused" Jan 26 12:38:01 crc kubenswrapper[4881]: I0126 12:38:01.718683 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9cpqm" podStartSLOduration=133.718658185 podStartE2EDuration="2m13.718658185s" podCreationTimestamp="2026-01-26 12:35:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:38:01.716350469 +0000 UTC m=+154.195660495" watchObservedRunningTime="2026-01-26 12:38:01.718658185 +0000 UTC m=+154.197968211" Jan 26 12:38:01 crc kubenswrapper[4881]: I0126 12:38:01.742981 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dk6pz" podStartSLOduration=133.742963562 podStartE2EDuration="2m13.742963562s" podCreationTimestamp="2026-01-26 12:35:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:38:01.742658805 +0000 UTC m=+154.221968831" watchObservedRunningTime="2026-01-26 12:38:01.742963562 +0000 UTC m=+154.222273588" Jan 26 12:38:01 crc kubenswrapper[4881]: I0126 12:38:01.770959 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:38:01 crc kubenswrapper[4881]: E0126 12:38:01.772255 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:38:02.272236758 +0000 UTC m=+154.751546784 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:38:01 crc kubenswrapper[4881]: I0126 12:38:01.784400 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nqv96" podStartSLOduration=133.784379936 podStartE2EDuration="2m13.784379936s" podCreationTimestamp="2026-01-26 12:35:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:38:01.782696977 +0000 UTC m=+154.262007013" watchObservedRunningTime="2026-01-26 12:38:01.784379936 +0000 UTC m=+154.263689962" Jan 26 12:38:01 crc kubenswrapper[4881]: I0126 12:38:01.818674 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-nlqwj" podStartSLOduration=133.81865903 podStartE2EDuration="2m13.81865903s" podCreationTimestamp="2026-01-26 12:35:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:38:01.817566545 +0000 UTC m=+154.296876581" watchObservedRunningTime="2026-01-26 12:38:01.81865903 +0000 UTC m=+154.297969056" Jan 26 12:38:01 crc kubenswrapper[4881]: I0126 12:38:01.873511 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dr6gf\" (UID: \"b92eec64-c286-4244-9e62-a5cd7ab680ae\") " pod="openshift-image-registry/image-registry-697d97f7c8-dr6gf" Jan 26 12:38:01 crc kubenswrapper[4881]: E0126 12:38:01.873899 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:38:02.373884353 +0000 UTC m=+154.853194379 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dr6gf" (UID: "b92eec64-c286-4244-9e62-a5cd7ab680ae") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:38:01 crc kubenswrapper[4881]: I0126 12:38:01.974408 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:38:01 crc kubenswrapper[4881]: E0126 12:38:01.974723 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:38:02.474708479 +0000 UTC m=+154.954018495 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:38:02 crc kubenswrapper[4881]: I0126 12:38:02.075807 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dr6gf\" (UID: \"b92eec64-c286-4244-9e62-a5cd7ab680ae\") " pod="openshift-image-registry/image-registry-697d97f7c8-dr6gf" Jan 26 12:38:02 crc kubenswrapper[4881]: E0126 12:38:02.076209 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:38:02.57619326 +0000 UTC m=+155.055503286 (durationBeforeRetry 500ms). 
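
"No retries permitted until ... (durationBeforeRetry 500ms)" is per-operation backoff bookkeeping: after a failure the volume's operation is parked until a deadline, and the reconciler's roughly 100 ms re-queues, visible in the timestamps above, are rejected until that deadline passes. The sketch below models that bookkeeping under the simplifying assumption of a constant 500 ms backoff as observed here; the real logic lives in nestedpendingoperations.go and is more involved.

    // Toy model of the mount/unmount retry gate seen throughout this log.
    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    type pendingOp struct {
        retryAfter time.Time
        backoff    time.Duration
    }

    func (p *pendingOp) tryRun(now time.Time, op func() error) error {
        if now.Before(p.retryAfter) {
            return fmt.Errorf("no retries permitted until %s (durationBeforeRetry %s)",
                p.retryAfter.Format("15:04:05.000"), p.backoff)
        }
        if err := op(); err != nil {
            p.retryAfter = now.Add(p.backoff) // park the operation
            return err
        }
        return nil
    }

    func main() {
        mount := func() error {
            return errors.New("driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers")
        }
        p := &pendingOp{backoff: 500 * time.Millisecond}
        start := time.Now()
        // The reconciler re-queues about every 100 ms, as in the log.
        for tick := 0; tick < 8; tick++ {
            now := start.Add(time.Duration(tick) * 100 * time.Millisecond)
            if err := p.tryRun(now, mount); err != nil {
                fmt.Printf("t+%3dms: %v\n", tick*100, err)
            }
        }
    }
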
Jan 26 12:38:02 crc kubenswrapper[4881]: I0126 12:38:02.117195 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-b8l9s"
Jan 26 12:38:02 crc kubenswrapper[4881]: I0126 12:38:02.129395 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-b8l9s"
Jan 26 12:38:02 crc kubenswrapper[4881]: I0126 12:38:02.147605 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vw8v9" podStartSLOduration=134.147590566 podStartE2EDuration="2m14.147590566s" podCreationTimestamp="2026-01-26 12:35:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:38:01.848640313 +0000 UTC m=+154.327950339" watchObservedRunningTime="2026-01-26 12:38:02.147590566 +0000 UTC m=+154.626900582"
[... the identical UnmountVolume.TearDown / MountVolume.MountDevice retry cycle for pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 repeats on every reconciler pass from 12:38:02.176451 through 12:38:02.685827 (roughly every 100 ms, each failure requeued with durationBeforeRetry 500ms), always with the same cause: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers ...]
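The repeated failure itself is just a registry miss: kubelet keeps an in-memory map of CSI plugins registered on the node, and until kubevirt.io.hostpath-provisioner registers its socket, every newCsiDriverClient lookup fails fast with exactly the message above. A toy sketch of such a lookup, with illustrative types:

```go
package main

import (
	"fmt"
	"sync"
)

// driverRegistry is a toy stand-in for kubelet's registry of CSI plugins on
// this node. Before the hostpath-provisioner pod registers, lookups fail the
// same way the log does. Illustrative, not kubelet's real structure.
type driverRegistry struct {
	mu      sync.RWMutex
	drivers map[string]string // driver name -> endpoint (socket path)
}

func (r *driverRegistry) client(name string) (string, error) {
	r.mu.RLock()
	defer r.mu.RUnlock()
	ep, ok := r.drivers[name]
	if !ok {
		return "", fmt.Errorf("driver name %s not found in the list of registered CSI drivers", name)
	}
	return ep, nil
}

func main() {
	reg := &driverRegistry{drivers: map[string]string{}}
	if _, err := reg.client("kubevirt.io.hostpath-provisioner"); err != nil {
		fmt.Println(err) // fails until the driver registers
	}
	reg.mu.Lock()
	reg.drivers["kubevirt.io.hostpath-provisioner"] = "/var/lib/kubelet/plugins/csi-hostpath/csi.sock"
	reg.mu.Unlock()
	ep, _ := reg.client("kubevirt.io.hostpath-provisioner")
	fmt.Println("endpoint:", ep)
}
```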
Jan 26 12:38:02 crc kubenswrapper[4881]: I0126 12:38:02.688871 4881 patch_prober.go:28] interesting pod/router-default-5444994796-c4vbw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 26 12:38:02 crc kubenswrapper[4881]: [-]has-synced failed: reason withheld
Jan 26 12:38:02 crc kubenswrapper[4881]: [+]process-running ok
Jan 26 12:38:02 crc kubenswrapper[4881]: healthz check failed
Jan 26 12:38:02 crc kubenswrapper[4881]: I0126 12:38:02.688951 4881 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-c4vbw" podUID="bdc32e8d-e1c8-4415-9a39-6ddbc4cbca4c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 26 12:38:02 crc kubenswrapper[4881]: I0126 12:38:02.694081 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-972wv" event={"ID":"ed9c86f6-30f0-43fc-87fb-d3497d8a8357","Type":"ContainerStarted","Data":"966442a32b280f70bf3e5d511b983b26041022d2c3aa3136925deb2ecfe0ce13"}
Jan 26 12:38:02 crc kubenswrapper[4881]: I0126 12:38:02.711084 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2vlbm"
[... the same retry cycle continues from 12:38:02.786428 through 12:38:03.193448, still failing because the CSI driver is unregistered ...]
Jan 26 12:38:03 crc kubenswrapper[4881]: I0126 12:38:03.197821 4881 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock"
[... one last UnmountVolume attempt at 12:38:03.294342 still fails and is requeued until 12:38:03.794606 ...]
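The plugin_watcher line is the turning point: the csi-hostpathplugin containers that just started (PLEG events above) have created a registration socket under /var/lib/kubelet/plugins_registry, and kubelet's plugin watcher notices it. A minimal sketch of that watch-a-directory-for-sockets pattern using fsnotify; the path and handling are illustrative, not kubelet's plugin_watcher implementation:

```go
package main

import (
	"log"
	"path/filepath"

	"github.com/fsnotify/fsnotify"
)

// Watch a plugin registry directory for new *.sock files, the way kubelet
// notices /var/lib/kubelet/plugins_registry/<driver>-reg.sock appearing.
func main() {
	w, err := fsnotify.NewWatcher()
	if err != nil {
		log.Fatal(err)
	}
	defer w.Close()

	if err := w.Add("/var/lib/kubelet/plugins_registry"); err != nil {
		log.Fatal(err)
	}
	for {
		select {
		case ev := <-w.Events:
			// A Create event for a socket would trigger registration with
			// the plugin manager's desired state of the world.
			if ev.Op&fsnotify.Create != 0 && filepath.Ext(ev.Name) == ".sock" {
				log.Printf("adding socket path to desired state cache: %s", ev.Name)
			}
		case err := <-w.Errors:
			log.Println("watch error:", err)
		}
	}
}
```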
Jan 26 12:38:03 crc kubenswrapper[4881]: I0126 12:38:03.328314 4881 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-26T12:38:03.197842581Z","Handler":null,"Name":""}
Jan 26 12:38:03 crc kubenswrapper[4881]: I0126 12:38:03.352451 4881 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
Jan 26 12:38:03 crc kubenswrapper[4881]: I0126 12:38:03.352487 4881 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
Jan 26 12:38:03 crc kubenswrapper[4881]: I0126 12:38:03.395887 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dr6gf\" (UID: \"b92eec64-c286-4244-9e62-a5cd7ab680ae\") " pod="openshift-image-registry/image-registry-697d97f7c8-dr6gf"
Jan 26 12:38:03 crc kubenswrapper[4881]: I0126 12:38:03.399446 4881 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 26 12:38:03 crc kubenswrapper[4881]: I0126 12:38:03.399493 4881 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dr6gf\" (UID: \"b92eec64-c286-4244-9e62-a5cd7ab680ae\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-dr6gf"
Jan 26 12:38:03 crc kubenswrapper[4881]: I0126 12:38:03.449012 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dr6gf\" (UID: \"b92eec64-c286-4244-9e62-a5cd7ab680ae\") " pod="openshift-image-registry/image-registry-697d97f7c8-dr6gf"
Jan 26 12:38:03 crc kubenswrapper[4881]: I0126 12:38:03.496822 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 12:38:03 crc kubenswrapper[4881]: I0126 12:38:03.496984 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-dr6gf"
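Note the csi_attacher line: this driver does not advertise the STAGE_UNSTAGE_VOLUME node capability, so kubelet records MountDevice as a no-op success and proceeds straight to SetUp. A hedged sketch of that capability probe using the CSI spec's Go bindings; the socket path is taken from the log, while the dialing details are illustrative and assume grpc-go v1.63+:

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	csi "github.com/container-storage-interface/spec/lib/go/csi"
	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

// hasStageUnstage asks the driver's node service whether it implements
// NodeStage/NodeUnstage. If not, a kubelet-like caller can skip the
// device-staging step, as the "Skipping MountDevice..." line shows.
func hasStageUnstage(ctx context.Context, node csi.NodeClient) (bool, error) {
	resp, err := node.NodeGetCapabilities(ctx, &csi.NodeGetCapabilitiesRequest{})
	if err != nil {
		return false, err
	}
	for _, c := range resp.GetCapabilities() {
		if c.GetRpc().GetType() == csi.NodeServiceCapability_RPC_STAGE_UNSTAGE_VOLUME {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	conn, err := grpc.NewClient("unix:///var/lib/kubelet/plugins/csi-hostpath/csi.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()
	ok, err := hasStageUnstage(ctx, csi.NewNodeClient(conn))
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("STAGE_UNSTAGE_VOLUME:", ok)
}
```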
Jan 26 12:38:03 crc kubenswrapper[4881]: I0126 12:38:03.497734 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gtzx7"
Jan 26 12:38:03 crc kubenswrapper[4881]: I0126 12:38:03.510956 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue ""
Jan 26 12:38:03 crc kubenswrapper[4881]: I0126 12:38:03.608956 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-qqnhh"]
Jan 26 12:38:03 crc kubenswrapper[4881]: I0126 12:38:03.611251 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qqnhh"
Jan 26 12:38:03 crc kubenswrapper[4881]: I0126 12:38:03.614153 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Jan 26 12:38:03 crc kubenswrapper[4881]: I0126 12:38:03.625021 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qqnhh"]
Jan 26 12:38:03 crc kubenswrapper[4881]: I0126 12:38:03.688491 4881 patch_prober.go:28] interesting pod/router-default-5444994796-c4vbw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 26 12:38:03 crc kubenswrapper[4881]: [-]has-synced failed: reason withheld
Jan 26 12:38:03 crc kubenswrapper[4881]: [+]process-running ok
Jan 26 12:38:03 crc kubenswrapper[4881]: healthz check failed
Jan 26 12:38:03 crc kubenswrapper[4881]: I0126 12:38:03.688575 4881 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-c4vbw" podUID="bdc32e8d-e1c8-4415-9a39-6ddbc4cbca4c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 26 12:38:03 crc kubenswrapper[4881]: I0126 12:38:03.701106 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/85dcb696-76f6-47f5-aaef-12b0ebc2d8c1-catalog-content\") pod \"certified-operators-qqnhh\" (UID: \"85dcb696-76f6-47f5-aaef-12b0ebc2d8c1\") " pod="openshift-marketplace/certified-operators-qqnhh"
Jan 26 12:38:03 crc kubenswrapper[4881]: I0126 12:38:03.701170 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/85dcb696-76f6-47f5-aaef-12b0ebc2d8c1-utilities\") pod \"certified-operators-qqnhh\" (UID: \"85dcb696-76f6-47f5-aaef-12b0ebc2d8c1\") " pod="openshift-marketplace/certified-operators-qqnhh"
Jan 26 12:38:03 crc kubenswrapper[4881]: I0126 12:38:03.701237 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njvsv\" (UniqueName: \"kubernetes.io/projected/85dcb696-76f6-47f5-aaef-12b0ebc2d8c1-kube-api-access-njvsv\") pod \"certified-operators-qqnhh\" (UID: \"85dcb696-76f6-47f5-aaef-12b0ebc2d8c1\") " pod="openshift-marketplace/certified-operators-qqnhh"
Jan 26 12:38:03 crc kubenswrapper[4881]: I0126 12:38:03.702908 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-972wv" event={"ID":"ed9c86f6-30f0-43fc-87fb-d3497d8a8357","Type":"ContainerStarted","Data":"0d09b4e05b72b99048c58362b1a85d26ce63aff691d59cb05dde15958ca6527f"}
Jan 26 12:38:03 crc kubenswrapper[4881]: I0126 12:38:03.702947 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-972wv" event={"ID":"ed9c86f6-30f0-43fc-87fb-d3497d8a8357","Type":"ContainerStarted","Data":"bbdc6eec25e51fe74c5f869bcd7f92a554b465ef90d6a2b8216c1b5999ef3ba9"}
Jan 26 12:38:03 crc kubenswrapper[4881]: I0126 12:38:03.727923 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-972wv" podStartSLOduration=12.727899546 podStartE2EDuration="12.727899546s" podCreationTimestamp="2026-01-26 12:37:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:38:03.72260267 +0000 UTC m=+156.201912696" watchObservedRunningTime="2026-01-26 12:38:03.727899546 +0000 UTC m=+156.207209572"
Jan 26 12:38:03 crc kubenswrapper[4881]: I0126 12:38:03.760232 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-dr6gf"]
Jan 26 12:38:03 crc kubenswrapper[4881]: I0126 12:38:03.802306 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/85dcb696-76f6-47f5-aaef-12b0ebc2d8c1-catalog-content\") pod \"certified-operators-qqnhh\" (UID: \"85dcb696-76f6-47f5-aaef-12b0ebc2d8c1\") " pod="openshift-marketplace/certified-operators-qqnhh"
Jan 26 12:38:03 crc kubenswrapper[4881]: I0126 12:38:03.802665 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/85dcb696-76f6-47f5-aaef-12b0ebc2d8c1-utilities\") pod \"certified-operators-qqnhh\" (UID: \"85dcb696-76f6-47f5-aaef-12b0ebc2d8c1\") " pod="openshift-marketplace/certified-operators-qqnhh"
Jan 26 12:38:03 crc kubenswrapper[4881]: I0126 12:38:03.802706 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njvsv\" (UniqueName: \"kubernetes.io/projected/85dcb696-76f6-47f5-aaef-12b0ebc2d8c1-kube-api-access-njvsv\") pod \"certified-operators-qqnhh\" (UID: \"85dcb696-76f6-47f5-aaef-12b0ebc2d8c1\") " pod="openshift-marketplace/certified-operators-qqnhh"
Jan 26 12:38:03 crc kubenswrapper[4881]: I0126 12:38:03.804321 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/85dcb696-76f6-47f5-aaef-12b0ebc2d8c1-catalog-content\") pod \"certified-operators-qqnhh\" (UID: \"85dcb696-76f6-47f5-aaef-12b0ebc2d8c1\") " pod="openshift-marketplace/certified-operators-qqnhh"
Jan 26 12:38:03 crc kubenswrapper[4881]: I0126 12:38:03.804937 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/85dcb696-76f6-47f5-aaef-12b0ebc2d8c1-utilities\") pod \"certified-operators-qqnhh\" (UID: \"85dcb696-76f6-47f5-aaef-12b0ebc2d8c1\") " pod="openshift-marketplace/certified-operators-qqnhh"
Jan 26 12:38:03 crc kubenswrapper[4881]: I0126 12:38:03.805476 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-lxbt8"]
Jan 26 12:38:03 crc kubenswrapper[4881]: I0126 12:38:03.808486 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lxbt8"
Jan 26 12:38:03 crc kubenswrapper[4881]: I0126 12:38:03.815112 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Jan 26 12:38:03 crc kubenswrapper[4881]: I0126 12:38:03.818052 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lxbt8"]
Jan 26 12:38:03 crc kubenswrapper[4881]: I0126 12:38:03.826819 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njvsv\" (UniqueName: \"kubernetes.io/projected/85dcb696-76f6-47f5-aaef-12b0ebc2d8c1-kube-api-access-njvsv\") pod \"certified-operators-qqnhh\" (UID: \"85dcb696-76f6-47f5-aaef-12b0ebc2d8c1\") " pod="openshift-marketplace/certified-operators-qqnhh"
Jan 26 12:38:03 crc kubenswrapper[4881]: I0126 12:38:03.904536 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19d4e6cf-8b9f-45ce-b93a-af4e9957b93e-catalog-content\") pod \"community-operators-lxbt8\" (UID: \"19d4e6cf-8b9f-45ce-b93a-af4e9957b93e\") " pod="openshift-marketplace/community-operators-lxbt8"
Jan 26 12:38:03 crc kubenswrapper[4881]: I0126 12:38:03.904683 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19d4e6cf-8b9f-45ce-b93a-af4e9957b93e-utilities\") pod \"community-operators-lxbt8\" (UID: \"19d4e6cf-8b9f-45ce-b93a-af4e9957b93e\") " pod="openshift-marketplace/community-operators-lxbt8"
Jan 26 12:38:03 crc kubenswrapper[4881]: I0126 12:38:03.904776 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hknp5\" (UniqueName: \"kubernetes.io/projected/19d4e6cf-8b9f-45ce-b93a-af4e9957b93e-kube-api-access-hknp5\") pod \"community-operators-lxbt8\" (UID: \"19d4e6cf-8b9f-45ce-b93a-af4e9957b93e\") " pod="openshift-marketplace/community-operators-lxbt8"
Jan 26 12:38:03 crc kubenswrapper[4881]: I0126 12:38:03.936399 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qqnhh"
Jan 26 12:38:04 crc kubenswrapper[4881]: I0126 12:38:04.001009 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-clfp2"]
Jan 26 12:38:04 crc kubenswrapper[4881]: I0126 12:38:04.002036 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-clfp2"
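Each of these marketplace pods walks the same volume-manager progression visible in the surrounding lines: VerifyControllerAttachedVolume, then MountVolume, then MountVolume.SetUp succeeded, one step per reconciler pass. A toy desired-state/actual-state loop showing that progression; types and step names are illustrative, not kubelet's real data structures:

```go
package main

import "fmt"

// A desired-state / actual-state reconcile loop in the spirit of kubelet's
// volume manager: each pass advances every desired volume one step,
// producing the Verify -> Mount -> SetUp progression seen in the log.
func main() {
	desired := []string{"catalog-content", "utilities", "kube-api-access-njvsv"}
	steps := []string{
		"operationExecutor.VerifyControllerAttachedVolume started",
		"operationExecutor.MountVolume started",
		"MountVolume.SetUp succeeded",
	}
	actual := map[string]int{} // volume -> number of completed steps

	for pass := 0; pass < len(steps); pass++ {
		for _, v := range desired {
			if actual[v] < len(steps) {
				fmt.Printf("%s for volume %q\n", steps[actual[v]], v)
				actual[v]++
			}
		}
	}
}
```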
Jan 26 12:38:04 crc kubenswrapper[4881]: I0126 12:38:04.005313 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19d4e6cf-8b9f-45ce-b93a-af4e9957b93e-utilities\") pod \"community-operators-lxbt8\" (UID: \"19d4e6cf-8b9f-45ce-b93a-af4e9957b93e\") " pod="openshift-marketplace/community-operators-lxbt8"
Jan 26 12:38:04 crc kubenswrapper[4881]: I0126 12:38:04.005391 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hknp5\" (UniqueName: \"kubernetes.io/projected/19d4e6cf-8b9f-45ce-b93a-af4e9957b93e-kube-api-access-hknp5\") pod \"community-operators-lxbt8\" (UID: \"19d4e6cf-8b9f-45ce-b93a-af4e9957b93e\") " pod="openshift-marketplace/community-operators-lxbt8"
Jan 26 12:38:04 crc kubenswrapper[4881]: I0126 12:38:04.005416 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19d4e6cf-8b9f-45ce-b93a-af4e9957b93e-catalog-content\") pod \"community-operators-lxbt8\" (UID: \"19d4e6cf-8b9f-45ce-b93a-af4e9957b93e\") " pod="openshift-marketplace/community-operators-lxbt8"
Jan 26 12:38:04 crc kubenswrapper[4881]: I0126 12:38:04.005850 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19d4e6cf-8b9f-45ce-b93a-af4e9957b93e-catalog-content\") pod \"community-operators-lxbt8\" (UID: \"19d4e6cf-8b9f-45ce-b93a-af4e9957b93e\") " pod="openshift-marketplace/community-operators-lxbt8"
Jan 26 12:38:04 crc kubenswrapper[4881]: I0126 12:38:04.005916 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19d4e6cf-8b9f-45ce-b93a-af4e9957b93e-utilities\") pod \"community-operators-lxbt8\" (UID: \"19d4e6cf-8b9f-45ce-b93a-af4e9957b93e\") " pod="openshift-marketplace/community-operators-lxbt8"
Jan 26 12:38:04 crc kubenswrapper[4881]: I0126 12:38:04.016214 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-clfp2"]
Jan 26 12:38:04 crc kubenswrapper[4881]: I0126 12:38:04.037235 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hknp5\" (UniqueName: \"kubernetes.io/projected/19d4e6cf-8b9f-45ce-b93a-af4e9957b93e-kube-api-access-hknp5\") pod \"community-operators-lxbt8\" (UID: \"19d4e6cf-8b9f-45ce-b93a-af4e9957b93e\") " pod="openshift-marketplace/community-operators-lxbt8"
Jan 26 12:38:04 crc kubenswrapper[4881]: I0126 12:38:04.090414 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes"
Jan 26 12:38:04 crc kubenswrapper[4881]: I0126 12:38:04.106256 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b1e10a4-4d08-4638-ac73-10c521806268-utilities\") pod \"certified-operators-clfp2\" (UID: \"6b1e10a4-4d08-4638-ac73-10c521806268\") " pod="openshift-marketplace/certified-operators-clfp2"
Jan 26 12:38:04 crc kubenswrapper[4881]: I0126 12:38:04.106302 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blpfm\" (UniqueName: \"kubernetes.io/projected/6b1e10a4-4d08-4638-ac73-10c521806268-kube-api-access-blpfm\") pod \"certified-operators-clfp2\" (UID: \"6b1e10a4-4d08-4638-ac73-10c521806268\") " pod="openshift-marketplace/certified-operators-clfp2"
Jan 26 12:38:04 crc kubenswrapper[4881]: I0126 12:38:04.106349 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b1e10a4-4d08-4638-ac73-10c521806268-catalog-content\") pod \"certified-operators-clfp2\" (UID: \"6b1e10a4-4d08-4638-ac73-10c521806268\") " pod="openshift-marketplace/certified-operators-clfp2"
Jan 26 12:38:04 crc kubenswrapper[4881]: I0126 12:38:04.143393 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lxbt8"
Jan 26 12:38:04 crc kubenswrapper[4881]: I0126 12:38:04.207308 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b1e10a4-4d08-4638-ac73-10c521806268-catalog-content\") pod \"certified-operators-clfp2\" (UID: \"6b1e10a4-4d08-4638-ac73-10c521806268\") " pod="openshift-marketplace/certified-operators-clfp2"
Jan 26 12:38:04 crc kubenswrapper[4881]: I0126 12:38:04.207417 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b1e10a4-4d08-4638-ac73-10c521806268-utilities\") pod \"certified-operators-clfp2\" (UID: \"6b1e10a4-4d08-4638-ac73-10c521806268\") " pod="openshift-marketplace/certified-operators-clfp2"
Jan 26 12:38:04 crc kubenswrapper[4881]: I0126 12:38:04.207446 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-blpfm\" (UniqueName: \"kubernetes.io/projected/6b1e10a4-4d08-4638-ac73-10c521806268-kube-api-access-blpfm\") pod \"certified-operators-clfp2\" (UID: \"6b1e10a4-4d08-4638-ac73-10c521806268\") " pod="openshift-marketplace/certified-operators-clfp2"
Jan 26 12:38:04 crc kubenswrapper[4881]: I0126 12:38:04.208342 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b1e10a4-4d08-4638-ac73-10c521806268-catalog-content\") pod \"certified-operators-clfp2\" (UID: \"6b1e10a4-4d08-4638-ac73-10c521806268\") " pod="openshift-marketplace/certified-operators-clfp2"
Jan 26 12:38:04 crc kubenswrapper[4881]: I0126 12:38:04.208501 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b1e10a4-4d08-4638-ac73-10c521806268-utilities\") pod \"certified-operators-clfp2\" (UID: \"6b1e10a4-4d08-4638-ac73-10c521806268\") " pod="openshift-marketplace/certified-operators-clfp2"
Jan 26 12:38:04 crc kubenswrapper[4881]: I0126 12:38:04.211331 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-xl7pw"]
Jan 26 12:38:04 crc kubenswrapper[4881]: I0126 12:38:04.212312 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xl7pw"
Jan 26 12:38:04 crc kubenswrapper[4881]: I0126 12:38:04.262160 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-blpfm\" (UniqueName: \"kubernetes.io/projected/6b1e10a4-4d08-4638-ac73-10c521806268-kube-api-access-blpfm\") pod \"certified-operators-clfp2\" (UID: \"6b1e10a4-4d08-4638-ac73-10c521806268\") " pod="openshift-marketplace/certified-operators-clfp2"
Jan 26 12:38:04 crc kubenswrapper[4881]: I0126 12:38:04.308248 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55l5k\" (UniqueName: \"kubernetes.io/projected/01f8afd2-ae02-4313-b007-d61725d9df50-kube-api-access-55l5k\") pod \"community-operators-xl7pw\" (UID: \"01f8afd2-ae02-4313-b007-d61725d9df50\") " pod="openshift-marketplace/community-operators-xl7pw"
Jan 26 12:38:04 crc kubenswrapper[4881]: I0126 12:38:04.308294 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01f8afd2-ae02-4313-b007-d61725d9df50-utilities\") pod \"community-operators-xl7pw\" (UID: \"01f8afd2-ae02-4313-b007-d61725d9df50\") " pod="openshift-marketplace/community-operators-xl7pw"
Jan 26 12:38:04 crc kubenswrapper[4881]: I0126 12:38:04.308318 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01f8afd2-ae02-4313-b007-d61725d9df50-catalog-content\") pod \"community-operators-xl7pw\" (UID: \"01f8afd2-ae02-4313-b007-d61725d9df50\") " pod="openshift-marketplace/community-operators-xl7pw"
Jan 26 12:38:04 crc kubenswrapper[4881]: I0126 12:38:04.318654 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xl7pw"]
Jan 26 12:38:04 crc kubenswrapper[4881]: I0126 12:38:04.320079 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-clfp2"
Jan 26 12:38:04 crc kubenswrapper[4881]: I0126 12:38:04.406377 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qqnhh"]
Jan 26 12:38:04 crc kubenswrapper[4881]: I0126 12:38:04.409132 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55l5k\" (UniqueName: \"kubernetes.io/projected/01f8afd2-ae02-4313-b007-d61725d9df50-kube-api-access-55l5k\") pod \"community-operators-xl7pw\" (UID: \"01f8afd2-ae02-4313-b007-d61725d9df50\") " pod="openshift-marketplace/community-operators-xl7pw"
Jan 26 12:38:04 crc kubenswrapper[4881]: I0126 12:38:04.409159 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01f8afd2-ae02-4313-b007-d61725d9df50-utilities\") pod \"community-operators-xl7pw\" (UID: \"01f8afd2-ae02-4313-b007-d61725d9df50\") " pod="openshift-marketplace/community-operators-xl7pw"
Jan 26 12:38:04 crc kubenswrapper[4881]: I0126 12:38:04.409183 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01f8afd2-ae02-4313-b007-d61725d9df50-catalog-content\") pod \"community-operators-xl7pw\" (UID: \"01f8afd2-ae02-4313-b007-d61725d9df50\") " pod="openshift-marketplace/community-operators-xl7pw"
Jan 26 12:38:04 crc kubenswrapper[4881]: I0126 12:38:04.409890 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01f8afd2-ae02-4313-b007-d61725d9df50-catalog-content\") pod \"community-operators-xl7pw\" (UID: \"01f8afd2-ae02-4313-b007-d61725d9df50\") " pod="openshift-marketplace/community-operators-xl7pw"
Jan 26 12:38:04 crc kubenswrapper[4881]: I0126 12:38:04.409921 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01f8afd2-ae02-4313-b007-d61725d9df50-utilities\") pod \"community-operators-xl7pw\" (UID: \"01f8afd2-ae02-4313-b007-d61725d9df50\") " pod="openshift-marketplace/community-operators-xl7pw"
Jan 26 12:38:04 crc kubenswrapper[4881]: I0126 12:38:04.441350 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55l5k\" (UniqueName: \"kubernetes.io/projected/01f8afd2-ae02-4313-b007-d61725d9df50-kube-api-access-55l5k\") pod \"community-operators-xl7pw\" (UID: \"01f8afd2-ae02-4313-b007-d61725d9df50\") " pod="openshift-marketplace/community-operators-xl7pw"
Jan 26 12:38:04 crc kubenswrapper[4881]: I0126 12:38:04.514273 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lxbt8"]
Jan 26 12:38:04 crc kubenswrapper[4881]: I0126 12:38:04.516924 4881 patch_prober.go:28] interesting pod/downloads-7954f5f757-68j9g container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" start-of-body=
Jan 26 12:38:04 crc kubenswrapper[4881]: I0126 12:38:04.516943 4881 patch_prober.go:28] interesting pod/downloads-7954f5f757-68j9g container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" start-of-body=
Jan 26 12:38:04 crc kubenswrapper[4881]: I0126 12:38:04.516966 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-68j9g" podUID="30c09cc7-c747-494b-80ac-e4a780f65fb6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused"
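The router's startup probe keeps failing with an aggregated healthz body ([-]backend-http, [-]has-synced, [+]process-running), while the console downloads pod is flatly refusing TCP connections. A small sketch of the healthz-style aggregation that produces bodies of that shape; the handler and port are illustrative, not the router's actual code:

```go
package main

import (
	"fmt"
	"net/http"
)

// check is one named health check; ok=false yields a "[-]" line and an
// overall HTTP 500, matching the probe output captured in the log.
type check struct {
	name string
	ok   bool
}

func healthz(checks []check) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		healthy := true
		body := ""
		for _, c := range checks {
			mark, suffix := "[+]", "ok"
			if !c.ok {
				mark, suffix = "[-]", "failed: reason withheld"
				healthy = false
			}
			body += fmt.Sprintf("%s%s %s\n", mark, c.name, suffix)
		}
		if !healthy {
			w.WriteHeader(http.StatusInternalServerError)
			body += "healthz check failed\n"
		}
		fmt.Fprint(w, body)
	}
}

func main() {
	http.Handle("/healthz", healthz([]check{
		{"backend-http", false},
		{"has-synced", false},
		{"process-running", true},
	}))
	http.ListenAndServe(":8080", nil) // probe target; port illustrative
}
```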
Jan 26 12:38:04 crc kubenswrapper[4881]: I0126 12:38:04.517020 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-68j9g" podUID="30c09cc7-c747-494b-80ac-e4a780f65fb6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused"
Jan 26 12:38:04 crc kubenswrapper[4881]: I0126 12:38:04.537671 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xl7pw"
Jan 26 12:38:04 crc kubenswrapper[4881]: I0126 12:38:04.583655 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-clfp2"]
Jan 26 12:38:04 crc kubenswrapper[4881]: I0126 12:38:04.684272 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-c4vbw"
Jan 26 12:38:04 crc kubenswrapper[4881]: I0126 12:38:04.687673 4881 patch_prober.go:28] interesting pod/router-default-5444994796-c4vbw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 26 12:38:04 crc kubenswrapper[4881]: [-]has-synced failed: reason withheld
Jan 26 12:38:04 crc kubenswrapper[4881]: [+]process-running ok
Jan 26 12:38:04 crc kubenswrapper[4881]: healthz check failed
Jan 26 12:38:04 crc kubenswrapper[4881]: I0126 12:38:04.687776 4881 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-c4vbw" podUID="bdc32e8d-e1c8-4415-9a39-6ddbc4cbca4c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 26 12:38:04 crc kubenswrapper[4881]: I0126 12:38:04.696301 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-drv9q"
Jan 26 12:38:04 crc kubenswrapper[4881]: I0126 12:38:04.723976 4881 generic.go:334] "Generic (PLEG): container finished" podID="3bf4018e-9078-4293-8f8e-f6ab7567943a" containerID="25bb4b559ced09ec8791787a057e14ef76ba787285a6ce95bdbf493b84cdfd08" exitCode=0
Jan 26 12:38:04 crc kubenswrapper[4881]: I0126 12:38:04.724034 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490510-6h9v5" event={"ID":"3bf4018e-9078-4293-8f8e-f6ab7567943a","Type":"ContainerDied","Data":"25bb4b559ced09ec8791787a057e14ef76ba787285a6ce95bdbf493b84cdfd08"}
Jan 26 12:38:04 crc kubenswrapper[4881]: I0126 12:38:04.726236 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qqnhh" event={"ID":"85dcb696-76f6-47f5-aaef-12b0ebc2d8c1","Type":"ContainerStarted","Data":"562c117ab29d18dc2139fb6a236d0ccd26b40a690b1e89e5c4be41b1489c8ffe"}
Jan 26 12:38:04 crc kubenswrapper[4881]: I0126 12:38:04.727274 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-dr6gf" event={"ID":"b92eec64-c286-4244-9e62-a5cd7ab680ae","Type":"ContainerStarted","Data":"13f18f4170beb7314d9e9fa89f236544a7d706858108b2108994b76731980774"}
Jan 26 12:38:04 crc kubenswrapper[4881]: I0126 12:38:04.727297 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-dr6gf" event={"ID":"b92eec64-c286-4244-9e62-a5cd7ab680ae","Type":"ContainerStarted","Data":"017beffecd221a67757f7fb92c13b5580236c43e5d1ddb9b81298b02fbe7e0e2"}
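The pod_startup_latency_tracker lines scattered through this log (one for image-registry appears just below) are plain timestamp arithmetic: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp. A quick check of the reported image-registry duration, assuming nothing beyond the printed fields:

```go
package main

import (
	"fmt"
	"time"
)

// Reproduce podStartE2EDuration from the two timestamps the tracker prints.
func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, err := time.Parse(layout, "2026-01-26 12:35:48 +0000 UTC")
	if err != nil {
		panic(err)
	}
	running, err := time.Parse(layout, "2026-01-26 12:38:04.771941444 +0000 UTC")
	if err != nil {
		panic(err)
	}
	fmt.Println(running.Sub(created)) // 2m16.771941444s, matching podStartE2EDuration
}
```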
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-dr6gf" event={"ID":"b92eec64-c286-4244-9e62-a5cd7ab680ae","Type":"ContainerStarted","Data":"017beffecd221a67757f7fb92c13b5580236c43e5d1ddb9b81298b02fbe7e0e2"} Jan 26 12:38:04 crc kubenswrapper[4881]: I0126 12:38:04.730854 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-dr6gf" Jan 26 12:38:04 crc kubenswrapper[4881]: I0126 12:38:04.735932 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lxbt8" event={"ID":"19d4e6cf-8b9f-45ce-b93a-af4e9957b93e","Type":"ContainerStarted","Data":"1f9bc593b7bd37b48c2c758b54edcb987d11e7a7888c834e60f56fbd51586664"} Jan 26 12:38:04 crc kubenswrapper[4881]: I0126 12:38:04.736854 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-clfp2" event={"ID":"6b1e10a4-4d08-4638-ac73-10c521806268","Type":"ContainerStarted","Data":"eecd6e8d21c14de27488e3ddad865424c4c78405996129be0792bab2796bbb51"} Jan 26 12:38:04 crc kubenswrapper[4881]: I0126 12:38:04.758990 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xl7pw"] Jan 26 12:38:04 crc kubenswrapper[4881]: I0126 12:38:04.771963 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-dr6gf" podStartSLOduration=136.771941444 podStartE2EDuration="2m16.771941444s" podCreationTimestamp="2026-01-26 12:35:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:38:04.771872892 +0000 UTC m=+157.251182918" watchObservedRunningTime="2026-01-26 12:38:04.771941444 +0000 UTC m=+157.251251470" Jan 26 12:38:04 crc kubenswrapper[4881]: W0126 12:38:04.772630 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod01f8afd2_ae02_4313_b007_d61725d9df50.slice/crio-6aa1526d85f25e41c62eb4361e6872d8a656d7f26ee24431cb41f6c58f019332 WatchSource:0}: Error finding container 6aa1526d85f25e41c62eb4361e6872d8a656d7f26ee24431cb41f6c58f019332: Status 404 returned error can't find the container with id 6aa1526d85f25e41c62eb4361e6872d8a656d7f26ee24431cb41f6c58f019332 Jan 26 12:38:04 crc kubenswrapper[4881]: I0126 12:38:04.856695 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-cwb4s" Jan 26 12:38:04 crc kubenswrapper[4881]: I0126 12:38:04.856745 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-cwb4s" Jan 26 12:38:04 crc kubenswrapper[4881]: I0126 12:38:04.858692 4881 patch_prober.go:28] interesting pod/console-f9d7485db-cwb4s container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.26:8443/health\": dial tcp 10.217.0.26:8443: connect: connection refused" start-of-body= Jan 26 12:38:04 crc kubenswrapper[4881]: I0126 12:38:04.858736 4881 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-cwb4s" podUID="de07a342-44f0-45cc-a461-5fd5a70e34d9" containerName="console" probeResult="failure" output="Get \"https://10.217.0.26:8443/health\": dial tcp 10.217.0.26:8443: connect: connection refused" Jan 26 12:38:04 crc kubenswrapper[4881]: I0126 12:38:04.998397 4881 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-2fsrg" Jan 26 12:38:05 crc kubenswrapper[4881]: I0126 12:38:05.329387 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xbdxs" Jan 26 12:38:05 crc kubenswrapper[4881]: I0126 12:38:05.392189 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dklx7" Jan 26 12:38:05 crc kubenswrapper[4881]: I0126 12:38:05.597433 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-lxw6m"] Jan 26 12:38:05 crc kubenswrapper[4881]: I0126 12:38:05.598597 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lxw6m" Jan 26 12:38:05 crc kubenswrapper[4881]: I0126 12:38:05.600885 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 26 12:38:05 crc kubenswrapper[4881]: I0126 12:38:05.610140 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lxw6m"] Jan 26 12:38:05 crc kubenswrapper[4881]: I0126 12:38:05.656555 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 26 12:38:05 crc kubenswrapper[4881]: I0126 12:38:05.657308 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 12:38:05 crc kubenswrapper[4881]: I0126 12:38:05.660232 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 26 12:38:05 crc kubenswrapper[4881]: I0126 12:38:05.660290 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 26 12:38:05 crc kubenswrapper[4881]: I0126 12:38:05.669684 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 26 12:38:05 crc kubenswrapper[4881]: I0126 12:38:05.688054 4881 patch_prober.go:28] interesting pod/router-default-5444994796-c4vbw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 12:38:05 crc kubenswrapper[4881]: [-]has-synced failed: reason withheld Jan 26 12:38:05 crc kubenswrapper[4881]: [+]process-running ok Jan 26 12:38:05 crc kubenswrapper[4881]: healthz check failed Jan 26 12:38:05 crc kubenswrapper[4881]: I0126 12:38:05.688124 4881 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-c4vbw" podUID="bdc32e8d-e1c8-4415-9a39-6ddbc4cbca4c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 12:38:05 crc kubenswrapper[4881]: I0126 12:38:05.728557 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3d919a5e-c4ab-428d-a692-c6c188ed0a2c-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"3d919a5e-c4ab-428d-a692-c6c188ed0a2c\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 12:38:05 crc kubenswrapper[4881]: I0126 12:38:05.728627 4881 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3d919a5e-c4ab-428d-a692-c6c188ed0a2c-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"3d919a5e-c4ab-428d-a692-c6c188ed0a2c\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 12:38:05 crc kubenswrapper[4881]: I0126 12:38:05.728652 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67ea8d33-d11e-420e-b566-8d0c2301ce94-utilities\") pod \"redhat-marketplace-lxw6m\" (UID: \"67ea8d33-d11e-420e-b566-8d0c2301ce94\") " pod="openshift-marketplace/redhat-marketplace-lxw6m" Jan 26 12:38:05 crc kubenswrapper[4881]: I0126 12:38:05.728688 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67ea8d33-d11e-420e-b566-8d0c2301ce94-catalog-content\") pod \"redhat-marketplace-lxw6m\" (UID: \"67ea8d33-d11e-420e-b566-8d0c2301ce94\") " pod="openshift-marketplace/redhat-marketplace-lxw6m" Jan 26 12:38:05 crc kubenswrapper[4881]: I0126 12:38:05.728840 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbwfz\" (UniqueName: \"kubernetes.io/projected/67ea8d33-d11e-420e-b566-8d0c2301ce94-kube-api-access-vbwfz\") pod \"redhat-marketplace-lxw6m\" (UID: \"67ea8d33-d11e-420e-b566-8d0c2301ce94\") " pod="openshift-marketplace/redhat-marketplace-lxw6m" Jan 26 12:38:05 crc kubenswrapper[4881]: I0126 12:38:05.742750 4881 generic.go:334] "Generic (PLEG): container finished" podID="19d4e6cf-8b9f-45ce-b93a-af4e9957b93e" containerID="2e0fe96a4a93d74438f783b72424fe8f2a774d62290fec43775a79bf01808033" exitCode=0 Jan 26 12:38:05 crc kubenswrapper[4881]: I0126 12:38:05.743010 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lxbt8" event={"ID":"19d4e6cf-8b9f-45ce-b93a-af4e9957b93e","Type":"ContainerDied","Data":"2e0fe96a4a93d74438f783b72424fe8f2a774d62290fec43775a79bf01808033"} Jan 26 12:38:05 crc kubenswrapper[4881]: I0126 12:38:05.744228 4881 generic.go:334] "Generic (PLEG): container finished" podID="6b1e10a4-4d08-4638-ac73-10c521806268" containerID="115a83c77b8be44eb5b53df29f7c22911084b02847330d80f678606c33d2a968" exitCode=0 Jan 26 12:38:05 crc kubenswrapper[4881]: I0126 12:38:05.744286 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-clfp2" event={"ID":"6b1e10a4-4d08-4638-ac73-10c521806268","Type":"ContainerDied","Data":"115a83c77b8be44eb5b53df29f7c22911084b02847330d80f678606c33d2a968"} Jan 26 12:38:05 crc kubenswrapper[4881]: I0126 12:38:05.744298 4881 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 12:38:05 crc kubenswrapper[4881]: I0126 12:38:05.748815 4881 generic.go:334] "Generic (PLEG): container finished" podID="85dcb696-76f6-47f5-aaef-12b0ebc2d8c1" containerID="8a876ee18c128f1b9135115f0183a594bdacb4a6aefd9c3c441c54ce91b72daf" exitCode=0 Jan 26 12:38:05 crc kubenswrapper[4881]: I0126 12:38:05.748881 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qqnhh" event={"ID":"85dcb696-76f6-47f5-aaef-12b0ebc2d8c1","Type":"ContainerDied","Data":"8a876ee18c128f1b9135115f0183a594bdacb4a6aefd9c3c441c54ce91b72daf"} Jan 26 12:38:05 crc kubenswrapper[4881]: I0126 12:38:05.769080 4881 generic.go:334] 
"Generic (PLEG): container finished" podID="01f8afd2-ae02-4313-b007-d61725d9df50" containerID="1cd52bea0c0e3ed211d48074a6459e3f03cc2277cdf61d4a316b407d8f1007a7" exitCode=0 Jan 26 12:38:05 crc kubenswrapper[4881]: I0126 12:38:05.769966 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xl7pw" event={"ID":"01f8afd2-ae02-4313-b007-d61725d9df50","Type":"ContainerDied","Data":"1cd52bea0c0e3ed211d48074a6459e3f03cc2277cdf61d4a316b407d8f1007a7"} Jan 26 12:38:05 crc kubenswrapper[4881]: I0126 12:38:05.770000 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xl7pw" event={"ID":"01f8afd2-ae02-4313-b007-d61725d9df50","Type":"ContainerStarted","Data":"6aa1526d85f25e41c62eb4361e6872d8a656d7f26ee24431cb41f6c58f019332"} Jan 26 12:38:05 crc kubenswrapper[4881]: I0126 12:38:05.830576 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3d919a5e-c4ab-428d-a692-c6c188ed0a2c-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"3d919a5e-c4ab-428d-a692-c6c188ed0a2c\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 12:38:05 crc kubenswrapper[4881]: I0126 12:38:05.830921 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67ea8d33-d11e-420e-b566-8d0c2301ce94-utilities\") pod \"redhat-marketplace-lxw6m\" (UID: \"67ea8d33-d11e-420e-b566-8d0c2301ce94\") " pod="openshift-marketplace/redhat-marketplace-lxw6m" Jan 26 12:38:05 crc kubenswrapper[4881]: I0126 12:38:05.830960 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67ea8d33-d11e-420e-b566-8d0c2301ce94-catalog-content\") pod \"redhat-marketplace-lxw6m\" (UID: \"67ea8d33-d11e-420e-b566-8d0c2301ce94\") " pod="openshift-marketplace/redhat-marketplace-lxw6m" Jan 26 12:38:05 crc kubenswrapper[4881]: I0126 12:38:05.831032 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vbwfz\" (UniqueName: \"kubernetes.io/projected/67ea8d33-d11e-420e-b566-8d0c2301ce94-kube-api-access-vbwfz\") pod \"redhat-marketplace-lxw6m\" (UID: \"67ea8d33-d11e-420e-b566-8d0c2301ce94\") " pod="openshift-marketplace/redhat-marketplace-lxw6m" Jan 26 12:38:05 crc kubenswrapper[4881]: I0126 12:38:05.831103 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3d919a5e-c4ab-428d-a692-c6c188ed0a2c-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"3d919a5e-c4ab-428d-a692-c6c188ed0a2c\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 12:38:05 crc kubenswrapper[4881]: I0126 12:38:05.831643 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67ea8d33-d11e-420e-b566-8d0c2301ce94-utilities\") pod \"redhat-marketplace-lxw6m\" (UID: \"67ea8d33-d11e-420e-b566-8d0c2301ce94\") " pod="openshift-marketplace/redhat-marketplace-lxw6m" Jan 26 12:38:05 crc kubenswrapper[4881]: I0126 12:38:05.831860 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67ea8d33-d11e-420e-b566-8d0c2301ce94-catalog-content\") pod \"redhat-marketplace-lxw6m\" (UID: \"67ea8d33-d11e-420e-b566-8d0c2301ce94\") " 
pod="openshift-marketplace/redhat-marketplace-lxw6m" Jan 26 12:38:05 crc kubenswrapper[4881]: I0126 12:38:05.831969 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3d919a5e-c4ab-428d-a692-c6c188ed0a2c-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"3d919a5e-c4ab-428d-a692-c6c188ed0a2c\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 12:38:05 crc kubenswrapper[4881]: I0126 12:38:05.853577 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3d919a5e-c4ab-428d-a692-c6c188ed0a2c-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"3d919a5e-c4ab-428d-a692-c6c188ed0a2c\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 12:38:05 crc kubenswrapper[4881]: I0126 12:38:05.857292 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbwfz\" (UniqueName: \"kubernetes.io/projected/67ea8d33-d11e-420e-b566-8d0c2301ce94-kube-api-access-vbwfz\") pod \"redhat-marketplace-lxw6m\" (UID: \"67ea8d33-d11e-420e-b566-8d0c2301ce94\") " pod="openshift-marketplace/redhat-marketplace-lxw6m" Jan 26 12:38:05 crc kubenswrapper[4881]: I0126 12:38:05.932330 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lxw6m" Jan 26 12:38:05 crc kubenswrapper[4881]: I0126 12:38:05.971572 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 12:38:06 crc kubenswrapper[4881]: I0126 12:38:06.001856 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-l2k8l"] Jan 26 12:38:06 crc kubenswrapper[4881]: I0126 12:38:06.003606 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l2k8l" Jan 26 12:38:06 crc kubenswrapper[4881]: I0126 12:38:06.008090 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490510-6h9v5" Jan 26 12:38:06 crc kubenswrapper[4881]: I0126 12:38:06.019992 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-l2k8l"] Jan 26 12:38:06 crc kubenswrapper[4881]: I0126 12:38:06.034503 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dktzz\" (UniqueName: \"kubernetes.io/projected/44bbde7b-6970-4cdb-abbb-fffd1326291d-kube-api-access-dktzz\") pod \"redhat-marketplace-l2k8l\" (UID: \"44bbde7b-6970-4cdb-abbb-fffd1326291d\") " pod="openshift-marketplace/redhat-marketplace-l2k8l" Jan 26 12:38:06 crc kubenswrapper[4881]: I0126 12:38:06.034573 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44bbde7b-6970-4cdb-abbb-fffd1326291d-utilities\") pod \"redhat-marketplace-l2k8l\" (UID: \"44bbde7b-6970-4cdb-abbb-fffd1326291d\") " pod="openshift-marketplace/redhat-marketplace-l2k8l" Jan 26 12:38:06 crc kubenswrapper[4881]: I0126 12:38:06.034611 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44bbde7b-6970-4cdb-abbb-fffd1326291d-catalog-content\") pod \"redhat-marketplace-l2k8l\" (UID: \"44bbde7b-6970-4cdb-abbb-fffd1326291d\") " pod="openshift-marketplace/redhat-marketplace-l2k8l" Jan 26 12:38:06 crc kubenswrapper[4881]: I0126 12:38:06.131729 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lxw6m"] Jan 26 12:38:06 crc kubenswrapper[4881]: I0126 12:38:06.135412 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3bf4018e-9078-4293-8f8e-f6ab7567943a-config-volume\") pod \"3bf4018e-9078-4293-8f8e-f6ab7567943a\" (UID: \"3bf4018e-9078-4293-8f8e-f6ab7567943a\") " Jan 26 12:38:06 crc kubenswrapper[4881]: I0126 12:38:06.135466 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3bf4018e-9078-4293-8f8e-f6ab7567943a-secret-volume\") pod \"3bf4018e-9078-4293-8f8e-f6ab7567943a\" (UID: \"3bf4018e-9078-4293-8f8e-f6ab7567943a\") " Jan 26 12:38:06 crc kubenswrapper[4881]: I0126 12:38:06.135527 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zc866\" (UniqueName: \"kubernetes.io/projected/3bf4018e-9078-4293-8f8e-f6ab7567943a-kube-api-access-zc866\") pod \"3bf4018e-9078-4293-8f8e-f6ab7567943a\" (UID: \"3bf4018e-9078-4293-8f8e-f6ab7567943a\") " Jan 26 12:38:06 crc kubenswrapper[4881]: I0126 12:38:06.135707 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44bbde7b-6970-4cdb-abbb-fffd1326291d-catalog-content\") pod \"redhat-marketplace-l2k8l\" (UID: \"44bbde7b-6970-4cdb-abbb-fffd1326291d\") " pod="openshift-marketplace/redhat-marketplace-l2k8l" Jan 26 12:38:06 crc kubenswrapper[4881]: I0126 12:38:06.135785 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dktzz\" (UniqueName: \"kubernetes.io/projected/44bbde7b-6970-4cdb-abbb-fffd1326291d-kube-api-access-dktzz\") pod \"redhat-marketplace-l2k8l\" (UID: \"44bbde7b-6970-4cdb-abbb-fffd1326291d\") " 
pod="openshift-marketplace/redhat-marketplace-l2k8l" Jan 26 12:38:06 crc kubenswrapper[4881]: I0126 12:38:06.135834 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44bbde7b-6970-4cdb-abbb-fffd1326291d-utilities\") pod \"redhat-marketplace-l2k8l\" (UID: \"44bbde7b-6970-4cdb-abbb-fffd1326291d\") " pod="openshift-marketplace/redhat-marketplace-l2k8l" Jan 26 12:38:06 crc kubenswrapper[4881]: I0126 12:38:06.136264 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3bf4018e-9078-4293-8f8e-f6ab7567943a-config-volume" (OuterVolumeSpecName: "config-volume") pod "3bf4018e-9078-4293-8f8e-f6ab7567943a" (UID: "3bf4018e-9078-4293-8f8e-f6ab7567943a"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:38:06 crc kubenswrapper[4881]: I0126 12:38:06.137778 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44bbde7b-6970-4cdb-abbb-fffd1326291d-catalog-content\") pod \"redhat-marketplace-l2k8l\" (UID: \"44bbde7b-6970-4cdb-abbb-fffd1326291d\") " pod="openshift-marketplace/redhat-marketplace-l2k8l" Jan 26 12:38:06 crc kubenswrapper[4881]: I0126 12:38:06.137810 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44bbde7b-6970-4cdb-abbb-fffd1326291d-utilities\") pod \"redhat-marketplace-l2k8l\" (UID: \"44bbde7b-6970-4cdb-abbb-fffd1326291d\") " pod="openshift-marketplace/redhat-marketplace-l2k8l" Jan 26 12:38:06 crc kubenswrapper[4881]: I0126 12:38:06.141732 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3bf4018e-9078-4293-8f8e-f6ab7567943a-kube-api-access-zc866" (OuterVolumeSpecName: "kube-api-access-zc866") pod "3bf4018e-9078-4293-8f8e-f6ab7567943a" (UID: "3bf4018e-9078-4293-8f8e-f6ab7567943a"). InnerVolumeSpecName "kube-api-access-zc866". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:38:06 crc kubenswrapper[4881]: I0126 12:38:06.142029 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3bf4018e-9078-4293-8f8e-f6ab7567943a-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "3bf4018e-9078-4293-8f8e-f6ab7567943a" (UID: "3bf4018e-9078-4293-8f8e-f6ab7567943a"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:38:06 crc kubenswrapper[4881]: I0126 12:38:06.156418 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dktzz\" (UniqueName: \"kubernetes.io/projected/44bbde7b-6970-4cdb-abbb-fffd1326291d-kube-api-access-dktzz\") pod \"redhat-marketplace-l2k8l\" (UID: \"44bbde7b-6970-4cdb-abbb-fffd1326291d\") " pod="openshift-marketplace/redhat-marketplace-l2k8l" Jan 26 12:38:06 crc kubenswrapper[4881]: I0126 12:38:06.196245 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 26 12:38:06 crc kubenswrapper[4881]: W0126 12:38:06.208770 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod3d919a5e_c4ab_428d_a692_c6c188ed0a2c.slice/crio-28167a6a8d10af9892006355fd927dbe1ba625202dee7d86f9578f78ef9709b6 WatchSource:0}: Error finding container 28167a6a8d10af9892006355fd927dbe1ba625202dee7d86f9578f78ef9709b6: Status 404 returned error can't find the container with id 28167a6a8d10af9892006355fd927dbe1ba625202dee7d86f9578f78ef9709b6 Jan 26 12:38:06 crc kubenswrapper[4881]: I0126 12:38:06.237236 4881 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3bf4018e-9078-4293-8f8e-f6ab7567943a-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 12:38:06 crc kubenswrapper[4881]: I0126 12:38:06.237276 4881 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3bf4018e-9078-4293-8f8e-f6ab7567943a-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 26 12:38:06 crc kubenswrapper[4881]: I0126 12:38:06.237288 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zc866\" (UniqueName: \"kubernetes.io/projected/3bf4018e-9078-4293-8f8e-f6ab7567943a-kube-api-access-zc866\") on node \"crc\" DevicePath \"\"" Jan 26 12:38:06 crc kubenswrapper[4881]: I0126 12:38:06.363960 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l2k8l" Jan 26 12:38:06 crc kubenswrapper[4881]: I0126 12:38:06.375876 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 26 12:38:06 crc kubenswrapper[4881]: E0126 12:38:06.376191 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bf4018e-9078-4293-8f8e-f6ab7567943a" containerName="collect-profiles" Jan 26 12:38:06 crc kubenswrapper[4881]: I0126 12:38:06.376232 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bf4018e-9078-4293-8f8e-f6ab7567943a" containerName="collect-profiles" Jan 26 12:38:06 crc kubenswrapper[4881]: I0126 12:38:06.376607 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="3bf4018e-9078-4293-8f8e-f6ab7567943a" containerName="collect-profiles" Jan 26 12:38:06 crc kubenswrapper[4881]: I0126 12:38:06.377226 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 26 12:38:06 crc kubenswrapper[4881]: I0126 12:38:06.377315 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 12:38:06 crc kubenswrapper[4881]: I0126 12:38:06.386027 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 26 12:38:06 crc kubenswrapper[4881]: I0126 12:38:06.386507 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 26 12:38:06 crc kubenswrapper[4881]: I0126 12:38:06.439670 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ef989784-bddd-4a6a-9897-07ca8768d2d4-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"ef989784-bddd-4a6a-9897-07ca8768d2d4\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 12:38:06 crc kubenswrapper[4881]: I0126 12:38:06.441998 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ef989784-bddd-4a6a-9897-07ca8768d2d4-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"ef989784-bddd-4a6a-9897-07ca8768d2d4\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 12:38:06 crc kubenswrapper[4881]: I0126 12:38:06.543907 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ef989784-bddd-4a6a-9897-07ca8768d2d4-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"ef989784-bddd-4a6a-9897-07ca8768d2d4\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 12:38:06 crc kubenswrapper[4881]: I0126 12:38:06.543981 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ef989784-bddd-4a6a-9897-07ca8768d2d4-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"ef989784-bddd-4a6a-9897-07ca8768d2d4\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 12:38:06 crc kubenswrapper[4881]: I0126 12:38:06.544128 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ef989784-bddd-4a6a-9897-07ca8768d2d4-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"ef989784-bddd-4a6a-9897-07ca8768d2d4\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 12:38:06 crc kubenswrapper[4881]: I0126 12:38:06.568367 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ef989784-bddd-4a6a-9897-07ca8768d2d4-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"ef989784-bddd-4a6a-9897-07ca8768d2d4\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 12:38:06 crc kubenswrapper[4881]: I0126 12:38:06.599758 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-l2k8l"] Jan 26 12:38:06 crc kubenswrapper[4881]: I0126 12:38:06.687570 4881 patch_prober.go:28] interesting pod/router-default-5444994796-c4vbw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 12:38:06 crc kubenswrapper[4881]: [-]has-synced failed: reason withheld Jan 26 12:38:06 crc kubenswrapper[4881]: [+]process-running ok Jan 26 12:38:06 crc kubenswrapper[4881]: healthz check failed Jan 26 12:38:06 crc kubenswrapper[4881]: I0126 12:38:06.687638 4881 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-c4vbw" podUID="bdc32e8d-e1c8-4415-9a39-6ddbc4cbca4c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 12:38:06 crc kubenswrapper[4881]: I0126 12:38:06.744255 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 12:38:06 crc kubenswrapper[4881]: I0126 12:38:06.774871 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490510-6h9v5" event={"ID":"3bf4018e-9078-4293-8f8e-f6ab7567943a","Type":"ContainerDied","Data":"6071074a8d15a1df9ea0a1b0161e1f98baf676354d80109d36d61a4203d2f419"} Jan 26 12:38:06 crc kubenswrapper[4881]: I0126 12:38:06.775155 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6071074a8d15a1df9ea0a1b0161e1f98baf676354d80109d36d61a4203d2f419" Jan 26 12:38:06 crc kubenswrapper[4881]: I0126 12:38:06.774921 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490510-6h9v5" Jan 26 12:38:06 crc kubenswrapper[4881]: I0126 12:38:06.775595 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l2k8l" event={"ID":"44bbde7b-6970-4cdb-abbb-fffd1326291d","Type":"ContainerStarted","Data":"4c1bfa9342d1f618d26709f9472d82520ceaad1a56979b3ad56a3f63be08213b"} Jan 26 12:38:06 crc kubenswrapper[4881]: I0126 12:38:06.776194 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"3d919a5e-c4ab-428d-a692-c6c188ed0a2c","Type":"ContainerStarted","Data":"28167a6a8d10af9892006355fd927dbe1ba625202dee7d86f9578f78ef9709b6"} Jan 26 12:38:06 crc kubenswrapper[4881]: I0126 12:38:06.778006 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lxw6m" event={"ID":"67ea8d33-d11e-420e-b566-8d0c2301ce94","Type":"ContainerStarted","Data":"f94d3a2ed73a1c5ed3e1660606b5ebdae83d64d417ef051b560e400107e1c7db"} Jan 26 12:38:06 crc kubenswrapper[4881]: I0126 12:38:06.778035 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lxw6m" event={"ID":"67ea8d33-d11e-420e-b566-8d0c2301ce94","Type":"ContainerStarted","Data":"8883025e3ed785dfbdd7bfa72c310e8452ce1cd4814e9c6051ffe82e2761e988"} Jan 26 12:38:06 crc kubenswrapper[4881]: I0126 12:38:06.796959 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-v4z92"] Jan 26 12:38:06 crc kubenswrapper[4881]: I0126 12:38:06.797955 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-v4z92" Jan 26 12:38:06 crc kubenswrapper[4881]: I0126 12:38:06.801191 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 26 12:38:06 crc kubenswrapper[4881]: I0126 12:38:06.811492 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-v4z92"] Jan 26 12:38:06 crc kubenswrapper[4881]: I0126 12:38:06.979541 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cn88g\" (UniqueName: \"kubernetes.io/projected/7c91a464-e748-4f02-9aab-d89a0076cb8d-kube-api-access-cn88g\") pod \"redhat-operators-v4z92\" (UID: \"7c91a464-e748-4f02-9aab-d89a0076cb8d\") " pod="openshift-marketplace/redhat-operators-v4z92" Jan 26 12:38:06 crc kubenswrapper[4881]: I0126 12:38:06.979631 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c91a464-e748-4f02-9aab-d89a0076cb8d-utilities\") pod \"redhat-operators-v4z92\" (UID: \"7c91a464-e748-4f02-9aab-d89a0076cb8d\") " pod="openshift-marketplace/redhat-operators-v4z92" Jan 26 12:38:06 crc kubenswrapper[4881]: I0126 12:38:06.979687 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c91a464-e748-4f02-9aab-d89a0076cb8d-catalog-content\") pod \"redhat-operators-v4z92\" (UID: \"7c91a464-e748-4f02-9aab-d89a0076cb8d\") " pod="openshift-marketplace/redhat-operators-v4z92" Jan 26 12:38:07 crc kubenswrapper[4881]: I0126 12:38:07.030266 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 26 12:38:07 crc kubenswrapper[4881]: W0126 12:38:07.039639 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podef989784_bddd_4a6a_9897_07ca8768d2d4.slice/crio-9b93bd5be8ba074a20190071846a41e32b2de0f78cc017343f15bc8d2255941c WatchSource:0}: Error finding container 9b93bd5be8ba074a20190071846a41e32b2de0f78cc017343f15bc8d2255941c: Status 404 returned error can't find the container with id 9b93bd5be8ba074a20190071846a41e32b2de0f78cc017343f15bc8d2255941c Jan 26 12:38:07 crc kubenswrapper[4881]: I0126 12:38:07.080828 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c91a464-e748-4f02-9aab-d89a0076cb8d-utilities\") pod \"redhat-operators-v4z92\" (UID: \"7c91a464-e748-4f02-9aab-d89a0076cb8d\") " pod="openshift-marketplace/redhat-operators-v4z92" Jan 26 12:38:07 crc kubenswrapper[4881]: I0126 12:38:07.080886 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c91a464-e748-4f02-9aab-d89a0076cb8d-catalog-content\") pod \"redhat-operators-v4z92\" (UID: \"7c91a464-e748-4f02-9aab-d89a0076cb8d\") " pod="openshift-marketplace/redhat-operators-v4z92" Jan 26 12:38:07 crc kubenswrapper[4881]: I0126 12:38:07.080957 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cn88g\" (UniqueName: \"kubernetes.io/projected/7c91a464-e748-4f02-9aab-d89a0076cb8d-kube-api-access-cn88g\") pod \"redhat-operators-v4z92\" (UID: \"7c91a464-e748-4f02-9aab-d89a0076cb8d\") " pod="openshift-marketplace/redhat-operators-v4z92" Jan 26 12:38:07 crc kubenswrapper[4881]: 
I0126 12:38:07.081278 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c91a464-e748-4f02-9aab-d89a0076cb8d-utilities\") pod \"redhat-operators-v4z92\" (UID: \"7c91a464-e748-4f02-9aab-d89a0076cb8d\") " pod="openshift-marketplace/redhat-operators-v4z92" Jan 26 12:38:07 crc kubenswrapper[4881]: I0126 12:38:07.081509 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c91a464-e748-4f02-9aab-d89a0076cb8d-catalog-content\") pod \"redhat-operators-v4z92\" (UID: \"7c91a464-e748-4f02-9aab-d89a0076cb8d\") " pod="openshift-marketplace/redhat-operators-v4z92" Jan 26 12:38:07 crc kubenswrapper[4881]: I0126 12:38:07.099985 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cn88g\" (UniqueName: \"kubernetes.io/projected/7c91a464-e748-4f02-9aab-d89a0076cb8d-kube-api-access-cn88g\") pod \"redhat-operators-v4z92\" (UID: \"7c91a464-e748-4f02-9aab-d89a0076cb8d\") " pod="openshift-marketplace/redhat-operators-v4z92" Jan 26 12:38:07 crc kubenswrapper[4881]: I0126 12:38:07.202354 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-pvt2d"] Jan 26 12:38:07 crc kubenswrapper[4881]: I0126 12:38:07.203673 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pvt2d" Jan 26 12:38:07 crc kubenswrapper[4881]: I0126 12:38:07.215086 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pvt2d"] Jan 26 12:38:07 crc kubenswrapper[4881]: I0126 12:38:07.221209 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-v4z92" Jan 26 12:38:07 crc kubenswrapper[4881]: I0126 12:38:07.388285 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eebbd99a-494e-4431-91b4-92272880b04b-catalog-content\") pod \"redhat-operators-pvt2d\" (UID: \"eebbd99a-494e-4431-91b4-92272880b04b\") " pod="openshift-marketplace/redhat-operators-pvt2d" Jan 26 12:38:07 crc kubenswrapper[4881]: I0126 12:38:07.388702 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76dzm\" (UniqueName: \"kubernetes.io/projected/eebbd99a-494e-4431-91b4-92272880b04b-kube-api-access-76dzm\") pod \"redhat-operators-pvt2d\" (UID: \"eebbd99a-494e-4431-91b4-92272880b04b\") " pod="openshift-marketplace/redhat-operators-pvt2d" Jan 26 12:38:07 crc kubenswrapper[4881]: I0126 12:38:07.389040 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eebbd99a-494e-4431-91b4-92272880b04b-utilities\") pod \"redhat-operators-pvt2d\" (UID: \"eebbd99a-494e-4431-91b4-92272880b04b\") " pod="openshift-marketplace/redhat-operators-pvt2d" Jan 26 12:38:07 crc kubenswrapper[4881]: I0126 12:38:07.419490 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-v4z92"] Jan 26 12:38:07 crc kubenswrapper[4881]: W0126 12:38:07.436708 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7c91a464_e748_4f02_9aab_d89a0076cb8d.slice/crio-0746afa2dae4e28a1c3e0283b2a9515b0d4c16d4ca0f65ba89bce39a591cee58 WatchSource:0}: 
Error finding container 0746afa2dae4e28a1c3e0283b2a9515b0d4c16d4ca0f65ba89bce39a591cee58: Status 404 returned error can't find the container with id 0746afa2dae4e28a1c3e0283b2a9515b0d4c16d4ca0f65ba89bce39a591cee58 Jan 26 12:38:07 crc kubenswrapper[4881]: I0126 12:38:07.490491 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eebbd99a-494e-4431-91b4-92272880b04b-catalog-content\") pod \"redhat-operators-pvt2d\" (UID: \"eebbd99a-494e-4431-91b4-92272880b04b\") " pod="openshift-marketplace/redhat-operators-pvt2d" Jan 26 12:38:07 crc kubenswrapper[4881]: I0126 12:38:07.490654 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76dzm\" (UniqueName: \"kubernetes.io/projected/eebbd99a-494e-4431-91b4-92272880b04b-kube-api-access-76dzm\") pod \"redhat-operators-pvt2d\" (UID: \"eebbd99a-494e-4431-91b4-92272880b04b\") " pod="openshift-marketplace/redhat-operators-pvt2d" Jan 26 12:38:07 crc kubenswrapper[4881]: I0126 12:38:07.490700 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eebbd99a-494e-4431-91b4-92272880b04b-utilities\") pod \"redhat-operators-pvt2d\" (UID: \"eebbd99a-494e-4431-91b4-92272880b04b\") " pod="openshift-marketplace/redhat-operators-pvt2d" Jan 26 12:38:07 crc kubenswrapper[4881]: I0126 12:38:07.491283 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eebbd99a-494e-4431-91b4-92272880b04b-catalog-content\") pod \"redhat-operators-pvt2d\" (UID: \"eebbd99a-494e-4431-91b4-92272880b04b\") " pod="openshift-marketplace/redhat-operators-pvt2d" Jan 26 12:38:07 crc kubenswrapper[4881]: I0126 12:38:07.492463 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eebbd99a-494e-4431-91b4-92272880b04b-utilities\") pod \"redhat-operators-pvt2d\" (UID: \"eebbd99a-494e-4431-91b4-92272880b04b\") " pod="openshift-marketplace/redhat-operators-pvt2d" Jan 26 12:38:07 crc kubenswrapper[4881]: I0126 12:38:07.510938 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76dzm\" (UniqueName: \"kubernetes.io/projected/eebbd99a-494e-4431-91b4-92272880b04b-kube-api-access-76dzm\") pod \"redhat-operators-pvt2d\" (UID: \"eebbd99a-494e-4431-91b4-92272880b04b\") " pod="openshift-marketplace/redhat-operators-pvt2d" Jan 26 12:38:07 crc kubenswrapper[4881]: I0126 12:38:07.527465 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pvt2d" Jan 26 12:38:07 crc kubenswrapper[4881]: I0126 12:38:07.705260 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-c4vbw" Jan 26 12:38:07 crc kubenswrapper[4881]: I0126 12:38:07.712931 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-c4vbw" Jan 26 12:38:07 crc kubenswrapper[4881]: I0126 12:38:07.790755 4881 generic.go:334] "Generic (PLEG): container finished" podID="44bbde7b-6970-4cdb-abbb-fffd1326291d" containerID="f7e27a006421a109fe178e077b1fedd0cda4f6a4060976e19632733927205752" exitCode=0 Jan 26 12:38:07 crc kubenswrapper[4881]: I0126 12:38:07.790833 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l2k8l" event={"ID":"44bbde7b-6970-4cdb-abbb-fffd1326291d","Type":"ContainerDied","Data":"f7e27a006421a109fe178e077b1fedd0cda4f6a4060976e19632733927205752"} Jan 26 12:38:07 crc kubenswrapper[4881]: I0126 12:38:07.798038 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"ef989784-bddd-4a6a-9897-07ca8768d2d4","Type":"ContainerStarted","Data":"56a7743d1e02087db6aca1edf9df083f2f3f22099d81dc5882763f436f8ab2cb"} Jan 26 12:38:07 crc kubenswrapper[4881]: I0126 12:38:07.798086 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"ef989784-bddd-4a6a-9897-07ca8768d2d4","Type":"ContainerStarted","Data":"9b93bd5be8ba074a20190071846a41e32b2de0f78cc017343f15bc8d2255941c"} Jan 26 12:38:07 crc kubenswrapper[4881]: I0126 12:38:07.800191 4881 generic.go:334] "Generic (PLEG): container finished" podID="3d919a5e-c4ab-428d-a692-c6c188ed0a2c" containerID="ca48848d1eaa34196b5487ac90ae5c042b6466dceb32a1bc58d7994e3bbac39f" exitCode=0 Jan 26 12:38:07 crc kubenswrapper[4881]: I0126 12:38:07.800312 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"3d919a5e-c4ab-428d-a692-c6c188ed0a2c","Type":"ContainerDied","Data":"ca48848d1eaa34196b5487ac90ae5c042b6466dceb32a1bc58d7994e3bbac39f"} Jan 26 12:38:07 crc kubenswrapper[4881]: I0126 12:38:07.811819 4881 generic.go:334] "Generic (PLEG): container finished" podID="67ea8d33-d11e-420e-b566-8d0c2301ce94" containerID="f94d3a2ed73a1c5ed3e1660606b5ebdae83d64d417ef051b560e400107e1c7db" exitCode=0 Jan 26 12:38:07 crc kubenswrapper[4881]: I0126 12:38:07.811922 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lxw6m" event={"ID":"67ea8d33-d11e-420e-b566-8d0c2301ce94","Type":"ContainerDied","Data":"f94d3a2ed73a1c5ed3e1660606b5ebdae83d64d417ef051b560e400107e1c7db"} Jan 26 12:38:07 crc kubenswrapper[4881]: I0126 12:38:07.837839 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v4z92" event={"ID":"7c91a464-e748-4f02-9aab-d89a0076cb8d","Type":"ContainerStarted","Data":"0746afa2dae4e28a1c3e0283b2a9515b0d4c16d4ca0f65ba89bce39a591cee58"} Jan 26 12:38:07 crc kubenswrapper[4881]: I0126 12:38:07.857334 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=1.857314974 podStartE2EDuration="1.857314974s" podCreationTimestamp="2026-01-26 12:38:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:38:07.8558582 +0000 UTC m=+160.335168236" watchObservedRunningTime="2026-01-26 12:38:07.857314974 +0000 UTC m=+160.336625010" Jan 26 12:38:08 crc kubenswrapper[4881]: I0126 12:38:08.070856 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pvt2d"] Jan 26 12:38:08 crc kubenswrapper[4881]: W0126 12:38:08.089925 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeebbd99a_494e_4431_91b4_92272880b04b.slice/crio-5b03b2a2aafc6723e5cf50713636e70a3076121ae3732f671ec7a27f23de1bee WatchSource:0}: Error finding container 5b03b2a2aafc6723e5cf50713636e70a3076121ae3732f671ec7a27f23de1bee: Status 404 returned error can't find the container with id 5b03b2a2aafc6723e5cf50713636e70a3076121ae3732f671ec7a27f23de1bee Jan 26 12:38:08 crc kubenswrapper[4881]: I0126 12:38:08.852830 4881 generic.go:334] "Generic (PLEG): container finished" podID="ef989784-bddd-4a6a-9897-07ca8768d2d4" containerID="56a7743d1e02087db6aca1edf9df083f2f3f22099d81dc5882763f436f8ab2cb" exitCode=0 Jan 26 12:38:08 crc kubenswrapper[4881]: I0126 12:38:08.854566 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"ef989784-bddd-4a6a-9897-07ca8768d2d4","Type":"ContainerDied","Data":"56a7743d1e02087db6aca1edf9df083f2f3f22099d81dc5882763f436f8ab2cb"} Jan 26 12:38:08 crc kubenswrapper[4881]: I0126 12:38:08.867811 4881 generic.go:334] "Generic (PLEG): container finished" podID="7c91a464-e748-4f02-9aab-d89a0076cb8d" containerID="847c2e532ec469c50c6284d25eca2baf6ec027016fd6fb5307c6ab0a726e8148" exitCode=0 Jan 26 12:38:08 crc kubenswrapper[4881]: I0126 12:38:08.867918 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v4z92" event={"ID":"7c91a464-e748-4f02-9aab-d89a0076cb8d","Type":"ContainerDied","Data":"847c2e532ec469c50c6284d25eca2baf6ec027016fd6fb5307c6ab0a726e8148"} Jan 26 12:38:08 crc kubenswrapper[4881]: I0126 12:38:08.872498 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pvt2d" event={"ID":"eebbd99a-494e-4431-91b4-92272880b04b","Type":"ContainerStarted","Data":"5b03b2a2aafc6723e5cf50713636e70a3076121ae3732f671ec7a27f23de1bee"} Jan 26 12:38:09 crc kubenswrapper[4881]: I0126 12:38:09.236639 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 12:38:09 crc kubenswrapper[4881]: I0126 12:38:09.245435 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3d919a5e-c4ab-428d-a692-c6c188ed0a2c-kube-api-access\") pod \"3d919a5e-c4ab-428d-a692-c6c188ed0a2c\" (UID: \"3d919a5e-c4ab-428d-a692-c6c188ed0a2c\") " Jan 26 12:38:09 crc kubenswrapper[4881]: I0126 12:38:09.245512 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3d919a5e-c4ab-428d-a692-c6c188ed0a2c-kubelet-dir\") pod \"3d919a5e-c4ab-428d-a692-c6c188ed0a2c\" (UID: \"3d919a5e-c4ab-428d-a692-c6c188ed0a2c\") " Jan 26 12:38:09 crc kubenswrapper[4881]: I0126 12:38:09.245793 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3d919a5e-c4ab-428d-a692-c6c188ed0a2c-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "3d919a5e-c4ab-428d-a692-c6c188ed0a2c" (UID: "3d919a5e-c4ab-428d-a692-c6c188ed0a2c"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 12:38:09 crc kubenswrapper[4881]: I0126 12:38:09.253734 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d919a5e-c4ab-428d-a692-c6c188ed0a2c-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "3d919a5e-c4ab-428d-a692-c6c188ed0a2c" (UID: "3d919a5e-c4ab-428d-a692-c6c188ed0a2c"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:38:09 crc kubenswrapper[4881]: E0126 12:38:09.276998 4881 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeebbd99a_494e_4431_91b4_92272880b04b.slice/crio-conmon-7ce00d70e9daaf30f20444e46105a0eb34dedfa13a272b24a55d9107bfa492c7.scope\": RecentStats: unable to find data in memory cache]" Jan 26 12:38:09 crc kubenswrapper[4881]: I0126 12:38:09.347045 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3d919a5e-c4ab-428d-a692-c6c188ed0a2c-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 12:38:09 crc kubenswrapper[4881]: I0126 12:38:09.347082 4881 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3d919a5e-c4ab-428d-a692-c6c188ed0a2c-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 26 12:38:09 crc kubenswrapper[4881]: I0126 12:38:09.686940 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-b5l77" Jan 26 12:38:09 crc kubenswrapper[4881]: I0126 12:38:09.890864 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"3d919a5e-c4ab-428d-a692-c6c188ed0a2c","Type":"ContainerDied","Data":"28167a6a8d10af9892006355fd927dbe1ba625202dee7d86f9578f78ef9709b6"} Jan 26 12:38:09 crc kubenswrapper[4881]: I0126 12:38:09.890883 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 12:38:09 crc kubenswrapper[4881]: I0126 12:38:09.891205 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="28167a6a8d10af9892006355fd927dbe1ba625202dee7d86f9578f78ef9709b6" Jan 26 12:38:09 crc kubenswrapper[4881]: I0126 12:38:09.894365 4881 generic.go:334] "Generic (PLEG): container finished" podID="eebbd99a-494e-4431-91b4-92272880b04b" containerID="7ce00d70e9daaf30f20444e46105a0eb34dedfa13a272b24a55d9107bfa492c7" exitCode=0 Jan 26 12:38:09 crc kubenswrapper[4881]: I0126 12:38:09.894450 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pvt2d" event={"ID":"eebbd99a-494e-4431-91b4-92272880b04b","Type":"ContainerDied","Data":"7ce00d70e9daaf30f20444e46105a0eb34dedfa13a272b24a55d9107bfa492c7"} Jan 26 12:38:10 crc kubenswrapper[4881]: I0126 12:38:10.139774 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 12:38:10 crc kubenswrapper[4881]: I0126 12:38:10.259102 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ef989784-bddd-4a6a-9897-07ca8768d2d4-kube-api-access\") pod \"ef989784-bddd-4a6a-9897-07ca8768d2d4\" (UID: \"ef989784-bddd-4a6a-9897-07ca8768d2d4\") " Jan 26 12:38:10 crc kubenswrapper[4881]: I0126 12:38:10.259261 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ef989784-bddd-4a6a-9897-07ca8768d2d4-kubelet-dir\") pod \"ef989784-bddd-4a6a-9897-07ca8768d2d4\" (UID: \"ef989784-bddd-4a6a-9897-07ca8768d2d4\") " Jan 26 12:38:10 crc kubenswrapper[4881]: I0126 12:38:10.259347 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ef989784-bddd-4a6a-9897-07ca8768d2d4-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "ef989784-bddd-4a6a-9897-07ca8768d2d4" (UID: "ef989784-bddd-4a6a-9897-07ca8768d2d4"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 12:38:10 crc kubenswrapper[4881]: I0126 12:38:10.259740 4881 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ef989784-bddd-4a6a-9897-07ca8768d2d4-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 26 12:38:10 crc kubenswrapper[4881]: I0126 12:38:10.265087 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef989784-bddd-4a6a-9897-07ca8768d2d4-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "ef989784-bddd-4a6a-9897-07ca8768d2d4" (UID: "ef989784-bddd-4a6a-9897-07ca8768d2d4"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:38:10 crc kubenswrapper[4881]: I0126 12:38:10.367273 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ef989784-bddd-4a6a-9897-07ca8768d2d4-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 12:38:10 crc kubenswrapper[4881]: I0126 12:38:10.906291 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 12:38:10 crc kubenswrapper[4881]: I0126 12:38:10.907193 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"ef989784-bddd-4a6a-9897-07ca8768d2d4","Type":"ContainerDied","Data":"9b93bd5be8ba074a20190071846a41e32b2de0f78cc017343f15bc8d2255941c"} Jan 26 12:38:10 crc kubenswrapper[4881]: I0126 12:38:10.907229 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b93bd5be8ba074a20190071846a41e32b2de0f78cc017343f15bc8d2255941c" Jan 26 12:38:11 crc kubenswrapper[4881]: I0126 12:38:11.081877 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/640554c2-37e2-425f-b182-aa9b9d6fa4d8-metrics-certs\") pod \"network-metrics-daemon-5zct6\" (UID: \"640554c2-37e2-425f-b182-aa9b9d6fa4d8\") " pod="openshift-multus/network-metrics-daemon-5zct6" Jan 26 12:38:11 crc kubenswrapper[4881]: I0126 12:38:11.088345 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/640554c2-37e2-425f-b182-aa9b9d6fa4d8-metrics-certs\") pod \"network-metrics-daemon-5zct6\" (UID: \"640554c2-37e2-425f-b182-aa9b9d6fa4d8\") " pod="openshift-multus/network-metrics-daemon-5zct6" Jan 26 12:38:11 crc kubenswrapper[4881]: I0126 12:38:11.206482 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5zct6" Jan 26 12:38:11 crc kubenswrapper[4881]: I0126 12:38:11.688819 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-5zct6"] Jan 26 12:38:11 crc kubenswrapper[4881]: W0126 12:38:11.693558 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod640554c2_37e2_425f_b182_aa9b9d6fa4d8.slice/crio-179088b64d1486a00754629186749cd7725cbe810de68de173108ff05f07105e WatchSource:0}: Error finding container 179088b64d1486a00754629186749cd7725cbe810de68de173108ff05f07105e: Status 404 returned error can't find the container with id 179088b64d1486a00754629186749cd7725cbe810de68de173108ff05f07105e Jan 26 12:38:11 crc kubenswrapper[4881]: I0126 12:38:11.926555 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-5zct6" event={"ID":"640554c2-37e2-425f-b182-aa9b9d6fa4d8","Type":"ContainerStarted","Data":"179088b64d1486a00754629186749cd7725cbe810de68de173108ff05f07105e"} Jan 26 12:38:12 crc kubenswrapper[4881]: I0126 12:38:12.946958 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-5zct6" event={"ID":"640554c2-37e2-425f-b182-aa9b9d6fa4d8","Type":"ContainerStarted","Data":"ad727d93d2aca83d4141c9c24dcfc6975b86b88a8cdad4361b3ef26df2b91942"} Jan 26 12:38:13 crc kubenswrapper[4881]: I0126 12:38:13.955233 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-5zct6" event={"ID":"640554c2-37e2-425f-b182-aa9b9d6fa4d8","Type":"ContainerStarted","Data":"c61be53dc7ef85d0c459810992e3a73b95a862d4998b360cd768c121d9e6b2c2"} Jan 26 12:38:14 crc kubenswrapper[4881]: I0126 12:38:14.548696 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-68j9g" Jan 26 12:38:14 crc kubenswrapper[4881]: I0126 12:38:14.890218 4881 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openshift-console/console-f9d7485db-cwb4s" Jan 26 12:38:14 crc kubenswrapper[4881]: I0126 12:38:14.894941 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-cwb4s" Jan 26 12:38:14 crc kubenswrapper[4881]: I0126 12:38:14.994573 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-5zct6" podStartSLOduration=147.994551044 podStartE2EDuration="2m27.994551044s" podCreationTimestamp="2026-01-26 12:35:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:38:14.99436868 +0000 UTC m=+167.473678706" watchObservedRunningTime="2026-01-26 12:38:14.994551044 +0000 UTC m=+167.473861070" Jan 26 12:38:23 crc kubenswrapper[4881]: I0126 12:38:23.505136 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-dr6gf" Jan 26 12:38:24 crc kubenswrapper[4881]: I0126 12:38:24.789952 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fwlbz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 12:38:24 crc kubenswrapper[4881]: I0126 12:38:24.790092 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 12:38:25 crc kubenswrapper[4881]: I0126 12:38:25.726144 4881 patch_prober.go:28] interesting pod/router-default-5444994796-c4vbw container/router namespace/openshift-ingress: Readiness probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 26 12:38:25 crc kubenswrapper[4881]: I0126 12:38:25.726201 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-c4vbw" podUID="bdc32e8d-e1c8-4415-9a39-6ddbc4cbca4c" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 12:38:34 crc kubenswrapper[4881]: I0126 12:38:34.372395 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 12:38:35 crc kubenswrapper[4881]: I0126 12:38:35.363852 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dk6pz" Jan 26 12:38:41 crc kubenswrapper[4881]: I0126 12:38:41.179836 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 26 12:38:41 crc kubenswrapper[4881]: E0126 12:38:41.181924 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d919a5e-c4ab-428d-a692-c6c188ed0a2c" containerName="pruner" Jan 26 12:38:41 crc kubenswrapper[4881]: I0126 12:38:41.181956 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d919a5e-c4ab-428d-a692-c6c188ed0a2c" containerName="pruner" Jan 26 12:38:41 crc kubenswrapper[4881]: 
E0126 12:38:41.181988 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef989784-bddd-4a6a-9897-07ca8768d2d4" containerName="pruner" Jan 26 12:38:41 crc kubenswrapper[4881]: I0126 12:38:41.182001 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef989784-bddd-4a6a-9897-07ca8768d2d4" containerName="pruner" Jan 26 12:38:41 crc kubenswrapper[4881]: I0126 12:38:41.182377 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d919a5e-c4ab-428d-a692-c6c188ed0a2c" containerName="pruner" Jan 26 12:38:41 crc kubenswrapper[4881]: I0126 12:38:41.182407 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef989784-bddd-4a6a-9897-07ca8768d2d4" containerName="pruner" Jan 26 12:38:41 crc kubenswrapper[4881]: I0126 12:38:41.184631 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 12:38:41 crc kubenswrapper[4881]: I0126 12:38:41.193873 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 26 12:38:41 crc kubenswrapper[4881]: I0126 12:38:41.196887 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 26 12:38:41 crc kubenswrapper[4881]: I0126 12:38:41.205769 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 26 12:38:41 crc kubenswrapper[4881]: I0126 12:38:41.295929 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2b3276ee-7c06-4b98-8ffd-5ee9afa280f3-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"2b3276ee-7c06-4b98-8ffd-5ee9afa280f3\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 12:38:41 crc kubenswrapper[4881]: I0126 12:38:41.296046 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2b3276ee-7c06-4b98-8ffd-5ee9afa280f3-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"2b3276ee-7c06-4b98-8ffd-5ee9afa280f3\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 12:38:41 crc kubenswrapper[4881]: I0126 12:38:41.398182 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2b3276ee-7c06-4b98-8ffd-5ee9afa280f3-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"2b3276ee-7c06-4b98-8ffd-5ee9afa280f3\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 12:38:41 crc kubenswrapper[4881]: I0126 12:38:41.398303 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2b3276ee-7c06-4b98-8ffd-5ee9afa280f3-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"2b3276ee-7c06-4b98-8ffd-5ee9afa280f3\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 12:38:41 crc kubenswrapper[4881]: I0126 12:38:41.398443 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2b3276ee-7c06-4b98-8ffd-5ee9afa280f3-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"2b3276ee-7c06-4b98-8ffd-5ee9afa280f3\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 12:38:41 crc kubenswrapper[4881]: I0126 12:38:41.425446 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2b3276ee-7c06-4b98-8ffd-5ee9afa280f3-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"2b3276ee-7c06-4b98-8ffd-5ee9afa280f3\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 12:38:41 crc kubenswrapper[4881]: I0126 12:38:41.529965 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 12:38:45 crc kubenswrapper[4881]: I0126 12:38:45.356861 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 26 12:38:45 crc kubenswrapper[4881]: I0126 12:38:45.358817 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 26 12:38:45 crc kubenswrapper[4881]: I0126 12:38:45.378848 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 26 12:38:45 crc kubenswrapper[4881]: I0126 12:38:45.461811 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c88d870f-ca6d-47d4-b7f3-3ef26315e3b8-kube-api-access\") pod \"installer-9-crc\" (UID: \"c88d870f-ca6d-47d4-b7f3-3ef26315e3b8\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 12:38:45 crc kubenswrapper[4881]: I0126 12:38:45.461895 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c88d870f-ca6d-47d4-b7f3-3ef26315e3b8-kubelet-dir\") pod \"installer-9-crc\" (UID: \"c88d870f-ca6d-47d4-b7f3-3ef26315e3b8\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 12:38:45 crc kubenswrapper[4881]: I0126 12:38:45.461931 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c88d870f-ca6d-47d4-b7f3-3ef26315e3b8-var-lock\") pod \"installer-9-crc\" (UID: \"c88d870f-ca6d-47d4-b7f3-3ef26315e3b8\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 12:38:45 crc kubenswrapper[4881]: I0126 12:38:45.565586 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c88d870f-ca6d-47d4-b7f3-3ef26315e3b8-kube-api-access\") pod \"installer-9-crc\" (UID: \"c88d870f-ca6d-47d4-b7f3-3ef26315e3b8\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 12:38:45 crc kubenswrapper[4881]: I0126 12:38:45.565685 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c88d870f-ca6d-47d4-b7f3-3ef26315e3b8-kubelet-dir\") pod \"installer-9-crc\" (UID: \"c88d870f-ca6d-47d4-b7f3-3ef26315e3b8\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 12:38:45 crc kubenswrapper[4881]: I0126 12:38:45.565747 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c88d870f-ca6d-47d4-b7f3-3ef26315e3b8-var-lock\") pod \"installer-9-crc\" (UID: \"c88d870f-ca6d-47d4-b7f3-3ef26315e3b8\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 12:38:45 crc kubenswrapper[4881]: I0126 12:38:45.565886 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c88d870f-ca6d-47d4-b7f3-3ef26315e3b8-var-lock\") pod \"installer-9-crc\" (UID: 
\"c88d870f-ca6d-47d4-b7f3-3ef26315e3b8\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 12:38:45 crc kubenswrapper[4881]: I0126 12:38:45.565951 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c88d870f-ca6d-47d4-b7f3-3ef26315e3b8-kubelet-dir\") pod \"installer-9-crc\" (UID: \"c88d870f-ca6d-47d4-b7f3-3ef26315e3b8\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 12:38:45 crc kubenswrapper[4881]: I0126 12:38:45.593832 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c88d870f-ca6d-47d4-b7f3-3ef26315e3b8-kube-api-access\") pod \"installer-9-crc\" (UID: \"c88d870f-ca6d-47d4-b7f3-3ef26315e3b8\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 12:38:45 crc kubenswrapper[4881]: I0126 12:38:45.728846 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 26 12:38:54 crc kubenswrapper[4881]: I0126 12:38:54.789723 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fwlbz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 12:38:54 crc kubenswrapper[4881]: I0126 12:38:54.790475 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 12:38:54 crc kubenswrapper[4881]: I0126 12:38:54.790593 4881 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" Jan 26 12:38:54 crc kubenswrapper[4881]: I0126 12:38:54.791634 4881 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4e51c1ddc00bfd73fde58f6b2e82757a924047ef05aa76b74d8fcb2de4156ad9"} pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 12:38:54 crc kubenswrapper[4881]: I0126 12:38:54.791853 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" containerName="machine-config-daemon" containerID="cri-o://4e51c1ddc00bfd73fde58f6b2e82757a924047ef05aa76b74d8fcb2de4156ad9" gracePeriod=600 Jan 26 12:39:09 crc kubenswrapper[4881]: I0126 12:39:09.329712 4881 generic.go:334] "Generic (PLEG): container finished" podID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" containerID="4e51c1ddc00bfd73fde58f6b2e82757a924047ef05aa76b74d8fcb2de4156ad9" exitCode=0 Jan 26 12:39:09 crc kubenswrapper[4881]: I0126 12:39:09.329790 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" event={"ID":"ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19","Type":"ContainerDied","Data":"4e51c1ddc00bfd73fde58f6b2e82757a924047ef05aa76b74d8fcb2de4156ad9"} Jan 26 12:39:16 crc kubenswrapper[4881]: E0126 12:39:16.427611 4881 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying 
system image from manifest list: reading blob sha256:94d1bfc77428a945334e81bab025286e1fb0c1323b3aa1395b0c2f8e42153686: Get \"https://registry.redhat.io/v2/redhat/redhat-marketplace-index/blobs/sha256:94d1bfc77428a945334e81bab025286e1fb0c1323b3aa1395b0c2f8e42153686\": context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 26 12:39:16 crc kubenswrapper[4881]: E0126 12:39:16.428739 4881 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dktzz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-l2k8l_openshift-marketplace(44bbde7b-6970-4cdb-abbb-fffd1326291d): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: reading blob sha256:94d1bfc77428a945334e81bab025286e1fb0c1323b3aa1395b0c2f8e42153686: Get \"https://registry.redhat.io/v2/redhat/redhat-marketplace-index/blobs/sha256:94d1bfc77428a945334e81bab025286e1fb0c1323b3aa1395b0c2f8e42153686\": context canceled" logger="UnhandledError" Jan 26 12:39:16 crc kubenswrapper[4881]: E0126 12:39:16.430741 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: reading blob sha256:94d1bfc77428a945334e81bab025286e1fb0c1323b3aa1395b0c2f8e42153686: Get \\\"https://registry.redhat.io/v2/redhat/redhat-marketplace-index/blobs/sha256:94d1bfc77428a945334e81bab025286e1fb0c1323b3aa1395b0c2f8e42153686\\\": context canceled\"" pod="openshift-marketplace/redhat-marketplace-l2k8l" podUID="44bbde7b-6970-4cdb-abbb-fffd1326291d" Jan 26 12:39:21 crc kubenswrapper[4881]: E0126 12:39:21.883149 4881 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 26 12:39:21 crc kubenswrapper[4881]: E0126 12:39:21.883766 4881 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-blpfm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-clfp2_openshift-marketplace(6b1e10a4-4d08-4638-ac73-10c521806268): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 26 12:39:21 crc kubenswrapper[4881]: E0126 12:39:21.885147 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-clfp2" podUID="6b1e10a4-4d08-4638-ac73-10c521806268" Jan 26 12:39:21 crc kubenswrapper[4881]: E0126 12:39:21.902709 4881 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 26 12:39:21 crc kubenswrapper[4881]: E0126 12:39:21.902860 4881 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-njvsv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-qqnhh_openshift-marketplace(85dcb696-76f6-47f5-aaef-12b0ebc2d8c1): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 26 12:39:21 crc kubenswrapper[4881]: E0126 12:39:21.904043 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-qqnhh" podUID="85dcb696-76f6-47f5-aaef-12b0ebc2d8c1" Jan 26 12:39:25 crc kubenswrapper[4881]: E0126 12:39:25.040878 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-clfp2" podUID="6b1e10a4-4d08-4638-ac73-10c521806268" Jan 26 12:39:25 crc kubenswrapper[4881]: E0126 12:39:25.040893 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-qqnhh" podUID="85dcb696-76f6-47f5-aaef-12b0ebc2d8c1" Jan 26 12:39:25 crc kubenswrapper[4881]: E0126 12:39:25.041050 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-l2k8l" podUID="44bbde7b-6970-4cdb-abbb-fffd1326291d" Jan 26 12:39:25 crc kubenswrapper[4881]: I0126 12:39:25.278562 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 26 12:39:25 crc kubenswrapper[4881]: E0126 12:39:25.700623 4881 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" 
image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 26 12:39:25 crc kubenswrapper[4881]: E0126 12:39:25.700780 4881 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-55l5k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-xl7pw_openshift-marketplace(01f8afd2-ae02-4313-b007-d61725d9df50): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 26 12:39:25 crc kubenswrapper[4881]: E0126 12:39:25.701967 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-xl7pw" podUID="01f8afd2-ae02-4313-b007-d61725d9df50" Jan 26 12:39:26 crc kubenswrapper[4881]: E0126 12:39:26.807890 4881 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 26 12:39:26 crc kubenswrapper[4881]: E0126 12:39:26.808250 4881 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hknp5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-lxbt8_openshift-marketplace(19d4e6cf-8b9f-45ce-b93a-af4e9957b93e): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 26 12:39:26 crc kubenswrapper[4881]: E0126 12:39:26.809457 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-lxbt8" podUID="19d4e6cf-8b9f-45ce-b93a-af4e9957b93e" Jan 26 12:39:36 crc kubenswrapper[4881]: E0126 12:39:36.783370 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-lxbt8" podUID="19d4e6cf-8b9f-45ce-b93a-af4e9957b93e" Jan 26 12:39:36 crc kubenswrapper[4881]: E0126 12:39:36.806310 4881 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 26 12:39:36 crc kubenswrapper[4881]: E0126 12:39:36.806676 4881 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-76dzm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-pvt2d_openshift-marketplace(eebbd99a-494e-4431-91b4-92272880b04b): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 26 12:39:36 crc kubenswrapper[4881]: E0126 12:39:36.807972 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-pvt2d" podUID="eebbd99a-494e-4431-91b4-92272880b04b" Jan 26 12:39:38 crc kubenswrapper[4881]: E0126 12:39:38.552852 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pvt2d" podUID="eebbd99a-494e-4431-91b4-92272880b04b" Jan 26 12:39:38 crc kubenswrapper[4881]: E0126 12:39:38.558365 4881 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 26 12:39:38 crc kubenswrapper[4881]: E0126 12:39:38.558491 4881 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vbwfz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-lxw6m_openshift-marketplace(67ea8d33-d11e-420e-b566-8d0c2301ce94): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 26 12:39:38 crc kubenswrapper[4881]: E0126 12:39:38.559950 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-lxw6m" podUID="67ea8d33-d11e-420e-b566-8d0c2301ce94" Jan 26 12:39:38 crc kubenswrapper[4881]: E0126 12:39:38.592545 4881 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 26 12:39:38 crc kubenswrapper[4881]: E0126 12:39:38.592983 4881 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cn88g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-v4z92_openshift-marketplace(7c91a464-e748-4f02-9aab-d89a0076cb8d): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 26 12:39:38 crc kubenswrapper[4881]: E0126 12:39:38.597467 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-v4z92" podUID="7c91a464-e748-4f02-9aab-d89a0076cb8d" Jan 26 12:39:38 crc kubenswrapper[4881]: I0126 12:39:38.929767 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 26 12:39:38 crc kubenswrapper[4881]: W0126 12:39:38.941171 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podc88d870f_ca6d_47d4_b7f3_3ef26315e3b8.slice/crio-5521e4b139da65a31ac745c6086587eae9c0548c33227c56b5a9cea5d1785c73 WatchSource:0}: Error finding container 5521e4b139da65a31ac745c6086587eae9c0548c33227c56b5a9cea5d1785c73: Status 404 returned error can't find the container with id 5521e4b139da65a31ac745c6086587eae9c0548c33227c56b5a9cea5d1785c73 Jan 26 12:39:39 crc kubenswrapper[4881]: I0126 12:39:39.536784 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" event={"ID":"ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19","Type":"ContainerStarted","Data":"c86f563b5fe2c0bb24899b86382a702f023fc4944ac1b02721e98303870ac011"} Jan 26 12:39:39 crc kubenswrapper[4881]: I0126 12:39:39.541180 4881 generic.go:334] "Generic (PLEG): container finished" podID="01f8afd2-ae02-4313-b007-d61725d9df50" containerID="a7d0953f81a6745a589b98f67a59e1ed61d9d7f5cbb5004dcb19d062193627b1" exitCode=0 Jan 26 12:39:39 crc kubenswrapper[4881]: I0126 12:39:39.541221 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xl7pw" event={"ID":"01f8afd2-ae02-4313-b007-d61725d9df50","Type":"ContainerDied","Data":"a7d0953f81a6745a589b98f67a59e1ed61d9d7f5cbb5004dcb19d062193627b1"} 
Jan 26 12:39:39 crc kubenswrapper[4881]: I0126 12:39:39.543471 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"2b3276ee-7c06-4b98-8ffd-5ee9afa280f3","Type":"ContainerStarted","Data":"7a1cc0246804ef8feffd781f2139170fae689587ff149bdec4bdf27085597a6e"} Jan 26 12:39:39 crc kubenswrapper[4881]: I0126 12:39:39.543501 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"2b3276ee-7c06-4b98-8ffd-5ee9afa280f3","Type":"ContainerStarted","Data":"a40d541dbeb0fc42670f60537b579b9cd2d57dc3490ac09dae1ee46c966237a2"} Jan 26 12:39:39 crc kubenswrapper[4881]: I0126 12:39:39.545131 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"c88d870f-ca6d-47d4-b7f3-3ef26315e3b8","Type":"ContainerStarted","Data":"3ac3e07e02bb046dbee0d5e11cdab10a8e97e6bfa94815880c380390b2c33689"} Jan 26 12:39:39 crc kubenswrapper[4881]: I0126 12:39:39.545170 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"c88d870f-ca6d-47d4-b7f3-3ef26315e3b8","Type":"ContainerStarted","Data":"5521e4b139da65a31ac745c6086587eae9c0548c33227c56b5a9cea5d1785c73"} Jan 26 12:39:39 crc kubenswrapper[4881]: I0126 12:39:39.580833 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=58.580816401 podStartE2EDuration="58.580816401s" podCreationTimestamp="2026-01-26 12:38:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:39:39.578420954 +0000 UTC m=+252.057730980" watchObservedRunningTime="2026-01-26 12:39:39.580816401 +0000 UTC m=+252.060126427" Jan 26 12:39:39 crc kubenswrapper[4881]: I0126 12:39:39.600626 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=54.600605997 podStartE2EDuration="54.600605997s" podCreationTimestamp="2026-01-26 12:38:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:39:39.59786047 +0000 UTC m=+252.077170496" watchObservedRunningTime="2026-01-26 12:39:39.600605997 +0000 UTC m=+252.079916033" Jan 26 12:39:39 crc kubenswrapper[4881]: E0126 12:39:39.767233 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-v4z92" podUID="7c91a464-e748-4f02-9aab-d89a0076cb8d" Jan 26 12:39:39 crc kubenswrapper[4881]: E0126 12:39:39.843452 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-lxw6m" podUID="67ea8d33-d11e-420e-b566-8d0c2301ce94" Jan 26 12:39:40 crc kubenswrapper[4881]: I0126 12:39:40.555307 4881 generic.go:334] "Generic (PLEG): container finished" podID="44bbde7b-6970-4cdb-abbb-fffd1326291d" containerID="40cbbdf56f127b0cbf235b6e5ba6cbe9171002ee6e58b2ebec651e9995772fd5" exitCode=0 Jan 26 12:39:40 crc kubenswrapper[4881]: I0126 12:39:40.555550 4881 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l2k8l" event={"ID":"44bbde7b-6970-4cdb-abbb-fffd1326291d","Type":"ContainerDied","Data":"40cbbdf56f127b0cbf235b6e5ba6cbe9171002ee6e58b2ebec651e9995772fd5"} Jan 26 12:39:40 crc kubenswrapper[4881]: I0126 12:39:40.560915 4881 generic.go:334] "Generic (PLEG): container finished" podID="85dcb696-76f6-47f5-aaef-12b0ebc2d8c1" containerID="ae71a9b09c044d22cfcd9cb9e96bedf25ee5b6f4fd6d915441ad713d0a158ed2" exitCode=0 Jan 26 12:39:40 crc kubenswrapper[4881]: I0126 12:39:40.561048 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qqnhh" event={"ID":"85dcb696-76f6-47f5-aaef-12b0ebc2d8c1","Type":"ContainerDied","Data":"ae71a9b09c044d22cfcd9cb9e96bedf25ee5b6f4fd6d915441ad713d0a158ed2"} Jan 26 12:39:40 crc kubenswrapper[4881]: I0126 12:39:40.562772 4881 generic.go:334] "Generic (PLEG): container finished" podID="2b3276ee-7c06-4b98-8ffd-5ee9afa280f3" containerID="7a1cc0246804ef8feffd781f2139170fae689587ff149bdec4bdf27085597a6e" exitCode=0 Jan 26 12:39:40 crc kubenswrapper[4881]: I0126 12:39:40.562841 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"2b3276ee-7c06-4b98-8ffd-5ee9afa280f3","Type":"ContainerDied","Data":"7a1cc0246804ef8feffd781f2139170fae689587ff149bdec4bdf27085597a6e"} Jan 26 12:39:40 crc kubenswrapper[4881]: I0126 12:39:40.571009 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xl7pw" event={"ID":"01f8afd2-ae02-4313-b007-d61725d9df50","Type":"ContainerStarted","Data":"05195871b64640deca27f13c80bca0307a8134182b934aaa24397cfdbe4d9616"} Jan 26 12:39:40 crc kubenswrapper[4881]: I0126 12:39:40.605283 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-xl7pw" podStartSLOduration=2.140135958 podStartE2EDuration="1m36.605267002s" podCreationTimestamp="2026-01-26 12:38:04 +0000 UTC" firstStartedPulling="2026-01-26 12:38:05.772247642 +0000 UTC m=+158.251557668" lastFinishedPulling="2026-01-26 12:39:40.237378686 +0000 UTC m=+252.716688712" observedRunningTime="2026-01-26 12:39:40.600975467 +0000 UTC m=+253.080285493" watchObservedRunningTime="2026-01-26 12:39:40.605267002 +0000 UTC m=+253.084577028" Jan 26 12:39:41 crc kubenswrapper[4881]: I0126 12:39:41.802751 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 12:39:41 crc kubenswrapper[4881]: I0126 12:39:41.897344 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2b3276ee-7c06-4b98-8ffd-5ee9afa280f3-kubelet-dir\") pod \"2b3276ee-7c06-4b98-8ffd-5ee9afa280f3\" (UID: \"2b3276ee-7c06-4b98-8ffd-5ee9afa280f3\") " Jan 26 12:39:41 crc kubenswrapper[4881]: I0126 12:39:41.897389 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2b3276ee-7c06-4b98-8ffd-5ee9afa280f3-kube-api-access\") pod \"2b3276ee-7c06-4b98-8ffd-5ee9afa280f3\" (UID: \"2b3276ee-7c06-4b98-8ffd-5ee9afa280f3\") " Jan 26 12:39:41 crc kubenswrapper[4881]: I0126 12:39:41.897746 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2b3276ee-7c06-4b98-8ffd-5ee9afa280f3-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "2b3276ee-7c06-4b98-8ffd-5ee9afa280f3" (UID: "2b3276ee-7c06-4b98-8ffd-5ee9afa280f3"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 12:39:41 crc kubenswrapper[4881]: I0126 12:39:41.905300 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b3276ee-7c06-4b98-8ffd-5ee9afa280f3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "2b3276ee-7c06-4b98-8ffd-5ee9afa280f3" (UID: "2b3276ee-7c06-4b98-8ffd-5ee9afa280f3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:39:41 crc kubenswrapper[4881]: I0126 12:39:41.999282 4881 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2b3276ee-7c06-4b98-8ffd-5ee9afa280f3-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 26 12:39:41 crc kubenswrapper[4881]: I0126 12:39:41.999330 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2b3276ee-7c06-4b98-8ffd-5ee9afa280f3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 12:39:42 crc kubenswrapper[4881]: I0126 12:39:42.587193 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l2k8l" event={"ID":"44bbde7b-6970-4cdb-abbb-fffd1326291d","Type":"ContainerStarted","Data":"da4c8d88349f0c78b93dc311506da35701dbf754e82f602be4b7659f7376ec0e"} Jan 26 12:39:42 crc kubenswrapper[4881]: I0126 12:39:42.588670 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"2b3276ee-7c06-4b98-8ffd-5ee9afa280f3","Type":"ContainerDied","Data":"a40d541dbeb0fc42670f60537b579b9cd2d57dc3490ac09dae1ee46c966237a2"} Jan 26 12:39:42 crc kubenswrapper[4881]: I0126 12:39:42.588717 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a40d541dbeb0fc42670f60537b579b9cd2d57dc3490ac09dae1ee46c966237a2" Jan 26 12:39:42 crc kubenswrapper[4881]: I0126 12:39:42.588773 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 12:39:42 crc kubenswrapper[4881]: I0126 12:39:42.609329 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-l2k8l" podStartSLOduration=4.201742326 podStartE2EDuration="1m37.609311125s" podCreationTimestamp="2026-01-26 12:38:05 +0000 UTC" firstStartedPulling="2026-01-26 12:38:07.792673829 +0000 UTC m=+160.271983855" lastFinishedPulling="2026-01-26 12:39:41.200242628 +0000 UTC m=+253.679552654" observedRunningTime="2026-01-26 12:39:42.607856959 +0000 UTC m=+255.087166995" watchObservedRunningTime="2026-01-26 12:39:42.609311125 +0000 UTC m=+255.088621161" Jan 26 12:39:43 crc kubenswrapper[4881]: I0126 12:39:43.596558 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qqnhh" event={"ID":"85dcb696-76f6-47f5-aaef-12b0ebc2d8c1","Type":"ContainerStarted","Data":"797ceacc17d3fa9be268b289489ef1b7885988ebf2ff2af55deecc500227f030"} Jan 26 12:39:43 crc kubenswrapper[4881]: I0126 12:39:43.598154 4881 generic.go:334] "Generic (PLEG): container finished" podID="6b1e10a4-4d08-4638-ac73-10c521806268" containerID="333564f8ae0c0b64677131978cc823ddb16c8e082155f6f39ae39d58db0a8e37" exitCode=0 Jan 26 12:39:43 crc kubenswrapper[4881]: I0126 12:39:43.598205 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-clfp2" event={"ID":"6b1e10a4-4d08-4638-ac73-10c521806268","Type":"ContainerDied","Data":"333564f8ae0c0b64677131978cc823ddb16c8e082155f6f39ae39d58db0a8e37"} Jan 26 12:39:43 crc kubenswrapper[4881]: I0126 12:39:43.620603 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-qqnhh" podStartSLOduration=3.879775881 podStartE2EDuration="1m40.620582463s" podCreationTimestamp="2026-01-26 12:38:03 +0000 UTC" firstStartedPulling="2026-01-26 12:38:05.750277919 +0000 UTC m=+158.229587945" lastFinishedPulling="2026-01-26 12:39:42.491084501 +0000 UTC m=+254.970394527" observedRunningTime="2026-01-26 12:39:43.618324537 +0000 UTC m=+256.097634573" watchObservedRunningTime="2026-01-26 12:39:43.620582463 +0000 UTC m=+256.099892489" Jan 26 12:39:43 crc kubenswrapper[4881]: I0126 12:39:43.937198 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-qqnhh" Jan 26 12:39:43 crc kubenswrapper[4881]: I0126 12:39:43.937250 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-qqnhh" Jan 26 12:39:44 crc kubenswrapper[4881]: I0126 12:39:44.538709 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-xl7pw" Jan 26 12:39:44 crc kubenswrapper[4881]: I0126 12:39:44.539136 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-xl7pw" Jan 26 12:39:44 crc kubenswrapper[4881]: I0126 12:39:44.575313 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-xl7pw" Jan 26 12:39:44 crc kubenswrapper[4881]: I0126 12:39:44.604864 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-clfp2" event={"ID":"6b1e10a4-4d08-4638-ac73-10c521806268","Type":"ContainerStarted","Data":"fff2c56de6d87056d43edf21c53b5ca4882decb237c106b45811baf84fabbe26"} Jan 26 12:39:45 crc 
kubenswrapper[4881]: I0126 12:39:45.147026 4881 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-qqnhh" podUID="85dcb696-76f6-47f5-aaef-12b0ebc2d8c1" containerName="registry-server" probeResult="failure" output=< Jan 26 12:39:45 crc kubenswrapper[4881]: timeout: failed to connect service ":50051" within 1s Jan 26 12:39:45 crc kubenswrapper[4881]: > Jan 26 12:39:45 crc kubenswrapper[4881]: I0126 12:39:45.635361 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-clfp2" podStartSLOduration=4.22062498 podStartE2EDuration="1m42.635342908s" podCreationTimestamp="2026-01-26 12:38:03 +0000 UTC" firstStartedPulling="2026-01-26 12:38:05.746267325 +0000 UTC m=+158.225577351" lastFinishedPulling="2026-01-26 12:39:44.160985253 +0000 UTC m=+256.640295279" observedRunningTime="2026-01-26 12:39:45.63176978 +0000 UTC m=+258.111079806" watchObservedRunningTime="2026-01-26 12:39:45.635342908 +0000 UTC m=+258.114652944" Jan 26 12:39:46 crc kubenswrapper[4881]: I0126 12:39:46.365968 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-l2k8l" Jan 26 12:39:46 crc kubenswrapper[4881]: I0126 12:39:46.367207 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-l2k8l" Jan 26 12:39:46 crc kubenswrapper[4881]: I0126 12:39:46.407271 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-l2k8l" Jan 26 12:39:47 crc kubenswrapper[4881]: I0126 12:39:47.681770 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-l2k8l" Jan 26 12:39:48 crc kubenswrapper[4881]: I0126 12:39:48.927571 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-l2k8l"] Jan 26 12:39:49 crc kubenswrapper[4881]: I0126 12:39:49.638789 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-l2k8l" podUID="44bbde7b-6970-4cdb-abbb-fffd1326291d" containerName="registry-server" containerID="cri-o://da4c8d88349f0c78b93dc311506da35701dbf754e82f602be4b7659f7376ec0e" gracePeriod=2 Jan 26 12:39:52 crc kubenswrapper[4881]: I0126 12:39:52.653998 4881 generic.go:334] "Generic (PLEG): container finished" podID="44bbde7b-6970-4cdb-abbb-fffd1326291d" containerID="da4c8d88349f0c78b93dc311506da35701dbf754e82f602be4b7659f7376ec0e" exitCode=0 Jan 26 12:39:52 crc kubenswrapper[4881]: I0126 12:39:52.654043 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l2k8l" event={"ID":"44bbde7b-6970-4cdb-abbb-fffd1326291d","Type":"ContainerDied","Data":"da4c8d88349f0c78b93dc311506da35701dbf754e82f602be4b7659f7376ec0e"} Jan 26 12:39:53 crc kubenswrapper[4881]: I0126 12:39:53.186393 4881 util.go:48] "No ready sandbox for pod can be found. 
Jan 26 12:39:53 crc kubenswrapper[4881]: I0126 12:39:53.242971 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44bbde7b-6970-4cdb-abbb-fffd1326291d-utilities\") pod \"44bbde7b-6970-4cdb-abbb-fffd1326291d\" (UID: \"44bbde7b-6970-4cdb-abbb-fffd1326291d\") "
Jan 26 12:39:53 crc kubenswrapper[4881]: I0126 12:39:53.243051 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dktzz\" (UniqueName: \"kubernetes.io/projected/44bbde7b-6970-4cdb-abbb-fffd1326291d-kube-api-access-dktzz\") pod \"44bbde7b-6970-4cdb-abbb-fffd1326291d\" (UID: \"44bbde7b-6970-4cdb-abbb-fffd1326291d\") "
Jan 26 12:39:53 crc kubenswrapper[4881]: I0126 12:39:53.243086 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44bbde7b-6970-4cdb-abbb-fffd1326291d-catalog-content\") pod \"44bbde7b-6970-4cdb-abbb-fffd1326291d\" (UID: \"44bbde7b-6970-4cdb-abbb-fffd1326291d\") "
Jan 26 12:39:53 crc kubenswrapper[4881]: I0126 12:39:53.246129 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/44bbde7b-6970-4cdb-abbb-fffd1326291d-utilities" (OuterVolumeSpecName: "utilities") pod "44bbde7b-6970-4cdb-abbb-fffd1326291d" (UID: "44bbde7b-6970-4cdb-abbb-fffd1326291d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 12:39:53 crc kubenswrapper[4881]: I0126 12:39:53.254668 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44bbde7b-6970-4cdb-abbb-fffd1326291d-kube-api-access-dktzz" (OuterVolumeSpecName: "kube-api-access-dktzz") pod "44bbde7b-6970-4cdb-abbb-fffd1326291d" (UID: "44bbde7b-6970-4cdb-abbb-fffd1326291d"). InnerVolumeSpecName "kube-api-access-dktzz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 12:39:53 crc kubenswrapper[4881]: I0126 12:39:53.269986 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/44bbde7b-6970-4cdb-abbb-fffd1326291d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "44bbde7b-6970-4cdb-abbb-fffd1326291d" (UID: "44bbde7b-6970-4cdb-abbb-fffd1326291d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 12:39:53 crc kubenswrapper[4881]: I0126 12:39:53.344068 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44bbde7b-6970-4cdb-abbb-fffd1326291d-utilities\") on node \"crc\" DevicePath \"\""
Jan 26 12:39:53 crc kubenswrapper[4881]: I0126 12:39:53.344370 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dktzz\" (UniqueName: \"kubernetes.io/projected/44bbde7b-6970-4cdb-abbb-fffd1326291d-kube-api-access-dktzz\") on node \"crc\" DevicePath \"\""
Jan 26 12:39:53 crc kubenswrapper[4881]: I0126 12:39:53.344382 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44bbde7b-6970-4cdb-abbb-fffd1326291d-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 26 12:39:53 crc kubenswrapper[4881]: I0126 12:39:53.666380 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l2k8l" event={"ID":"44bbde7b-6970-4cdb-abbb-fffd1326291d","Type":"ContainerDied","Data":"4c1bfa9342d1f618d26709f9472d82520ceaad1a56979b3ad56a3f63be08213b"}
Jan 26 12:39:53 crc kubenswrapper[4881]: I0126 12:39:53.666458 4881 scope.go:117] "RemoveContainer" containerID="da4c8d88349f0c78b93dc311506da35701dbf754e82f602be4b7659f7376ec0e"
Jan 26 12:39:53 crc kubenswrapper[4881]: I0126 12:39:53.666553 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l2k8l"
Jan 26 12:39:53 crc kubenswrapper[4881]: I0126 12:39:53.681407 4881 scope.go:117] "RemoveContainer" containerID="40cbbdf56f127b0cbf235b6e5ba6cbe9171002ee6e58b2ebec651e9995772fd5"
Jan 26 12:39:53 crc kubenswrapper[4881]: I0126 12:39:53.703814 4881 scope.go:117] "RemoveContainer" containerID="f7e27a006421a109fe178e077b1fedd0cda4f6a4060976e19632733927205752"
Jan 26 12:39:53 crc kubenswrapper[4881]: I0126 12:39:53.716550 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-l2k8l"]
Jan 26 12:39:53 crc kubenswrapper[4881]: I0126 12:39:53.716591 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-l2k8l"]
Jan 26 12:39:53 crc kubenswrapper[4881]: I0126 12:39:53.981813 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-qqnhh"
Jan 26 12:39:54 crc kubenswrapper[4881]: I0126 12:39:54.033373 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-qqnhh"
Jan 26 12:39:54 crc kubenswrapper[4881]: I0126 12:39:54.098163 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44bbde7b-6970-4cdb-abbb-fffd1326291d" path="/var/lib/kubelet/pods/44bbde7b-6970-4cdb-abbb-fffd1326291d/volumes"
Jan 26 12:39:54 crc kubenswrapper[4881]: I0126 12:39:54.320848 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-clfp2"
Jan 26 12:39:54 crc kubenswrapper[4881]: I0126 12:39:54.320901 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-clfp2"
Jan 26 12:39:54 crc kubenswrapper[4881]: I0126 12:39:54.357332 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-clfp2"
Jan 26 12:39:54 crc kubenswrapper[4881]: I0126 12:39:54.579381 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-xl7pw"
"SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-xl7pw" Jan 26 12:39:54 crc kubenswrapper[4881]: I0126 12:39:54.674784 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lxbt8" event={"ID":"19d4e6cf-8b9f-45ce-b93a-af4e9957b93e","Type":"ContainerStarted","Data":"a2e9bce8e2fd2eca3bc0284df3a79d36a93851e5d2519d0f85cc8854783de2de"} Jan 26 12:39:54 crc kubenswrapper[4881]: I0126 12:39:54.676313 4881 generic.go:334] "Generic (PLEG): container finished" podID="67ea8d33-d11e-420e-b566-8d0c2301ce94" containerID="335d5c0c2174e12f622f9dadd2f95c6e8141ab38b7865174b63c3eacc663d8dd" exitCode=0 Jan 26 12:39:54 crc kubenswrapper[4881]: I0126 12:39:54.676690 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lxw6m" event={"ID":"67ea8d33-d11e-420e-b566-8d0c2301ce94","Type":"ContainerDied","Data":"335d5c0c2174e12f622f9dadd2f95c6e8141ab38b7865174b63c3eacc663d8dd"} Jan 26 12:39:54 crc kubenswrapper[4881]: I0126 12:39:54.713584 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-clfp2" Jan 26 12:39:55 crc kubenswrapper[4881]: I0126 12:39:55.688294 4881 generic.go:334] "Generic (PLEG): container finished" podID="19d4e6cf-8b9f-45ce-b93a-af4e9957b93e" containerID="a2e9bce8e2fd2eca3bc0284df3a79d36a93851e5d2519d0f85cc8854783de2de" exitCode=0 Jan 26 12:39:55 crc kubenswrapper[4881]: I0126 12:39:55.688376 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lxbt8" event={"ID":"19d4e6cf-8b9f-45ce-b93a-af4e9957b93e","Type":"ContainerDied","Data":"a2e9bce8e2fd2eca3bc0284df3a79d36a93851e5d2519d0f85cc8854783de2de"} Jan 26 12:39:56 crc kubenswrapper[4881]: I0126 12:39:56.724410 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-clfp2"] Jan 26 12:39:56 crc kubenswrapper[4881]: I0126 12:39:56.724711 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-clfp2" podUID="6b1e10a4-4d08-4638-ac73-10c521806268" containerName="registry-server" containerID="cri-o://fff2c56de6d87056d43edf21c53b5ca4882decb237c106b45811baf84fabbe26" gracePeriod=2 Jan 26 12:39:58 crc kubenswrapper[4881]: I0126 12:39:58.707665 4881 generic.go:334] "Generic (PLEG): container finished" podID="6b1e10a4-4d08-4638-ac73-10c521806268" containerID="fff2c56de6d87056d43edf21c53b5ca4882decb237c106b45811baf84fabbe26" exitCode=0 Jan 26 12:39:58 crc kubenswrapper[4881]: I0126 12:39:58.707748 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-clfp2" event={"ID":"6b1e10a4-4d08-4638-ac73-10c521806268","Type":"ContainerDied","Data":"fff2c56de6d87056d43edf21c53b5ca4882decb237c106b45811baf84fabbe26"} Jan 26 12:39:59 crc kubenswrapper[4881]: I0126 12:39:59.124254 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xl7pw"] Jan 26 12:39:59 crc kubenswrapper[4881]: I0126 12:39:59.124489 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-xl7pw" podUID="01f8afd2-ae02-4313-b007-d61725d9df50" containerName="registry-server" containerID="cri-o://05195871b64640deca27f13c80bca0307a8134182b934aaa24397cfdbe4d9616" gracePeriod=2 Jan 26 12:40:00 crc kubenswrapper[4881]: I0126 12:40:00.725607 4881 generic.go:334] "Generic (PLEG): container 
finished" podID="01f8afd2-ae02-4313-b007-d61725d9df50" containerID="05195871b64640deca27f13c80bca0307a8134182b934aaa24397cfdbe4d9616" exitCode=0 Jan 26 12:40:00 crc kubenswrapper[4881]: I0126 12:40:00.725679 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xl7pw" event={"ID":"01f8afd2-ae02-4313-b007-d61725d9df50","Type":"ContainerDied","Data":"05195871b64640deca27f13c80bca0307a8134182b934aaa24397cfdbe4d9616"} Jan 26 12:40:04 crc kubenswrapper[4881]: E0126 12:40:04.321061 4881 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fff2c56de6d87056d43edf21c53b5ca4882decb237c106b45811baf84fabbe26 is running failed: container process not found" containerID="fff2c56de6d87056d43edf21c53b5ca4882decb237c106b45811baf84fabbe26" cmd=["grpc_health_probe","-addr=:50051"] Jan 26 12:40:04 crc kubenswrapper[4881]: E0126 12:40:04.322358 4881 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fff2c56de6d87056d43edf21c53b5ca4882decb237c106b45811baf84fabbe26 is running failed: container process not found" containerID="fff2c56de6d87056d43edf21c53b5ca4882decb237c106b45811baf84fabbe26" cmd=["grpc_health_probe","-addr=:50051"] Jan 26 12:40:04 crc kubenswrapper[4881]: E0126 12:40:04.322848 4881 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fff2c56de6d87056d43edf21c53b5ca4882decb237c106b45811baf84fabbe26 is running failed: container process not found" containerID="fff2c56de6d87056d43edf21c53b5ca4882decb237c106b45811baf84fabbe26" cmd=["grpc_health_probe","-addr=:50051"] Jan 26 12:40:04 crc kubenswrapper[4881]: E0126 12:40:04.322903 4881 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fff2c56de6d87056d43edf21c53b5ca4882decb237c106b45811baf84fabbe26 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-clfp2" podUID="6b1e10a4-4d08-4638-ac73-10c521806268" containerName="registry-server" Jan 26 12:40:04 crc kubenswrapper[4881]: E0126 12:40:04.539379 4881 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 05195871b64640deca27f13c80bca0307a8134182b934aaa24397cfdbe4d9616 is running failed: container process not found" containerID="05195871b64640deca27f13c80bca0307a8134182b934aaa24397cfdbe4d9616" cmd=["grpc_health_probe","-addr=:50051"] Jan 26 12:40:04 crc kubenswrapper[4881]: E0126 12:40:04.539772 4881 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 05195871b64640deca27f13c80bca0307a8134182b934aaa24397cfdbe4d9616 is running failed: container process not found" containerID="05195871b64640deca27f13c80bca0307a8134182b934aaa24397cfdbe4d9616" cmd=["grpc_health_probe","-addr=:50051"] Jan 26 12:40:04 crc kubenswrapper[4881]: E0126 12:40:04.540393 4881 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 05195871b64640deca27f13c80bca0307a8134182b934aaa24397cfdbe4d9616 is running failed: container process not found" 
containerID="05195871b64640deca27f13c80bca0307a8134182b934aaa24397cfdbe4d9616" cmd=["grpc_health_probe","-addr=:50051"] Jan 26 12:40:04 crc kubenswrapper[4881]: E0126 12:40:04.540468 4881 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 05195871b64640deca27f13c80bca0307a8134182b934aaa24397cfdbe4d9616 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/community-operators-xl7pw" podUID="01f8afd2-ae02-4313-b007-d61725d9df50" containerName="registry-server" Jan 26 12:40:07 crc kubenswrapper[4881]: I0126 12:40:07.413158 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-clfp2" Jan 26 12:40:07 crc kubenswrapper[4881]: I0126 12:40:07.574691 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-blpfm\" (UniqueName: \"kubernetes.io/projected/6b1e10a4-4d08-4638-ac73-10c521806268-kube-api-access-blpfm\") pod \"6b1e10a4-4d08-4638-ac73-10c521806268\" (UID: \"6b1e10a4-4d08-4638-ac73-10c521806268\") " Jan 26 12:40:07 crc kubenswrapper[4881]: I0126 12:40:07.574792 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b1e10a4-4d08-4638-ac73-10c521806268-utilities\") pod \"6b1e10a4-4d08-4638-ac73-10c521806268\" (UID: \"6b1e10a4-4d08-4638-ac73-10c521806268\") " Jan 26 12:40:07 crc kubenswrapper[4881]: I0126 12:40:07.574911 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b1e10a4-4d08-4638-ac73-10c521806268-catalog-content\") pod \"6b1e10a4-4d08-4638-ac73-10c521806268\" (UID: \"6b1e10a4-4d08-4638-ac73-10c521806268\") " Jan 26 12:40:07 crc kubenswrapper[4881]: I0126 12:40:07.575734 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6b1e10a4-4d08-4638-ac73-10c521806268-utilities" (OuterVolumeSpecName: "utilities") pod "6b1e10a4-4d08-4638-ac73-10c521806268" (UID: "6b1e10a4-4d08-4638-ac73-10c521806268"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 12:40:07 crc kubenswrapper[4881]: I0126 12:40:07.583619 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b1e10a4-4d08-4638-ac73-10c521806268-kube-api-access-blpfm" (OuterVolumeSpecName: "kube-api-access-blpfm") pod "6b1e10a4-4d08-4638-ac73-10c521806268" (UID: "6b1e10a4-4d08-4638-ac73-10c521806268"). InnerVolumeSpecName "kube-api-access-blpfm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:40:07 crc kubenswrapper[4881]: I0126 12:40:07.640796 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6b1e10a4-4d08-4638-ac73-10c521806268-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6b1e10a4-4d08-4638-ac73-10c521806268" (UID: "6b1e10a4-4d08-4638-ac73-10c521806268"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 12:40:07 crc kubenswrapper[4881]: I0126 12:40:07.676373 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b1e10a4-4d08-4638-ac73-10c521806268-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 12:40:07 crc kubenswrapper[4881]: I0126 12:40:07.676404 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-blpfm\" (UniqueName: \"kubernetes.io/projected/6b1e10a4-4d08-4638-ac73-10c521806268-kube-api-access-blpfm\") on node \"crc\" DevicePath \"\"" Jan 26 12:40:07 crc kubenswrapper[4881]: I0126 12:40:07.676416 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b1e10a4-4d08-4638-ac73-10c521806268-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 12:40:07 crc kubenswrapper[4881]: I0126 12:40:07.753154 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xl7pw" Jan 26 12:40:07 crc kubenswrapper[4881]: I0126 12:40:07.766936 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-clfp2" Jan 26 12:40:07 crc kubenswrapper[4881]: I0126 12:40:07.766927 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-clfp2" event={"ID":"6b1e10a4-4d08-4638-ac73-10c521806268","Type":"ContainerDied","Data":"eecd6e8d21c14de27488e3ddad865424c4c78405996129be0792bab2796bbb51"} Jan 26 12:40:07 crc kubenswrapper[4881]: I0126 12:40:07.767121 4881 scope.go:117] "RemoveContainer" containerID="fff2c56de6d87056d43edf21c53b5ca4882decb237c106b45811baf84fabbe26" Jan 26 12:40:07 crc kubenswrapper[4881]: I0126 12:40:07.769116 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xl7pw" event={"ID":"01f8afd2-ae02-4313-b007-d61725d9df50","Type":"ContainerDied","Data":"6aa1526d85f25e41c62eb4361e6872d8a656d7f26ee24431cb41f6c58f019332"} Jan 26 12:40:07 crc kubenswrapper[4881]: I0126 12:40:07.769193 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xl7pw" Jan 26 12:40:07 crc kubenswrapper[4881]: I0126 12:40:07.791368 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-clfp2"] Jan 26 12:40:07 crc kubenswrapper[4881]: I0126 12:40:07.794442 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-clfp2"] Jan 26 12:40:07 crc kubenswrapper[4881]: I0126 12:40:07.878991 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01f8afd2-ae02-4313-b007-d61725d9df50-catalog-content\") pod \"01f8afd2-ae02-4313-b007-d61725d9df50\" (UID: \"01f8afd2-ae02-4313-b007-d61725d9df50\") " Jan 26 12:40:07 crc kubenswrapper[4881]: I0126 12:40:07.879131 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01f8afd2-ae02-4313-b007-d61725d9df50-utilities\") pod \"01f8afd2-ae02-4313-b007-d61725d9df50\" (UID: \"01f8afd2-ae02-4313-b007-d61725d9df50\") " Jan 26 12:40:07 crc kubenswrapper[4881]: I0126 12:40:07.879187 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-55l5k\" (UniqueName: \"kubernetes.io/projected/01f8afd2-ae02-4313-b007-d61725d9df50-kube-api-access-55l5k\") pod \"01f8afd2-ae02-4313-b007-d61725d9df50\" (UID: \"01f8afd2-ae02-4313-b007-d61725d9df50\") " Jan 26 12:40:07 crc kubenswrapper[4881]: I0126 12:40:07.880693 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/01f8afd2-ae02-4313-b007-d61725d9df50-utilities" (OuterVolumeSpecName: "utilities") pod "01f8afd2-ae02-4313-b007-d61725d9df50" (UID: "01f8afd2-ae02-4313-b007-d61725d9df50"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 12:40:07 crc kubenswrapper[4881]: I0126 12:40:07.881778 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01f8afd2-ae02-4313-b007-d61725d9df50-kube-api-access-55l5k" (OuterVolumeSpecName: "kube-api-access-55l5k") pod "01f8afd2-ae02-4313-b007-d61725d9df50" (UID: "01f8afd2-ae02-4313-b007-d61725d9df50"). InnerVolumeSpecName "kube-api-access-55l5k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:40:07 crc kubenswrapper[4881]: I0126 12:40:07.924689 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/01f8afd2-ae02-4313-b007-d61725d9df50-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "01f8afd2-ae02-4313-b007-d61725d9df50" (UID: "01f8afd2-ae02-4313-b007-d61725d9df50"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 12:40:07 crc kubenswrapper[4881]: I0126 12:40:07.981150 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01f8afd2-ae02-4313-b007-d61725d9df50-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 12:40:07 crc kubenswrapper[4881]: I0126 12:40:07.981198 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-55l5k\" (UniqueName: \"kubernetes.io/projected/01f8afd2-ae02-4313-b007-d61725d9df50-kube-api-access-55l5k\") on node \"crc\" DevicePath \"\"" Jan 26 12:40:07 crc kubenswrapper[4881]: I0126 12:40:07.981225 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01f8afd2-ae02-4313-b007-d61725d9df50-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 12:40:08 crc kubenswrapper[4881]: E0126 12:40:08.067273 4881 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 26 12:40:08 crc kubenswrapper[4881]: E0126 12:40:08.067439 4881 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-76dzm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-pvt2d_openshift-marketplace(eebbd99a-494e-4431-91b4-92272880b04b): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 26 12:40:08 crc kubenswrapper[4881]: E0126 12:40:08.068681 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-pvt2d" podUID="eebbd99a-494e-4431-91b4-92272880b04b" Jan 26 12:40:08 crc kubenswrapper[4881]: I0126 12:40:08.074961 4881 
Jan 26 12:40:08 crc kubenswrapper[4881]: I0126 12:40:08.091312 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b1e10a4-4d08-4638-ac73-10c521806268" path="/var/lib/kubelet/pods/6b1e10a4-4d08-4638-ac73-10c521806268/volumes"
Jan 26 12:40:08 crc kubenswrapper[4881]: I0126 12:40:08.101292 4881 scope.go:117] "RemoveContainer" containerID="115a83c77b8be44eb5b53df29f7c22911084b02847330d80f678606c33d2a968"
Jan 26 12:40:08 crc kubenswrapper[4881]: I0126 12:40:08.106392 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xl7pw"]
Jan 26 12:40:08 crc kubenswrapper[4881]: I0126 12:40:08.117223 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-xl7pw"]
Jan 26 12:40:08 crc kubenswrapper[4881]: I0126 12:40:08.129450 4881 scope.go:117] "RemoveContainer" containerID="05195871b64640deca27f13c80bca0307a8134182b934aaa24397cfdbe4d9616"
Jan 26 12:40:08 crc kubenswrapper[4881]: I0126 12:40:08.166062 4881 scope.go:117] "RemoveContainer" containerID="a7d0953f81a6745a589b98f67a59e1ed61d9d7f5cbb5004dcb19d062193627b1"
Jan 26 12:40:08 crc kubenswrapper[4881]: I0126 12:40:08.201496 4881 scope.go:117] "RemoveContainer" containerID="1cd52bea0c0e3ed211d48074a6459e3f03cc2277cdf61d4a316b407d8f1007a7"
Jan 26 12:40:08 crc kubenswrapper[4881]: I0126 12:40:08.777507 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lxbt8" event={"ID":"19d4e6cf-8b9f-45ce-b93a-af4e9957b93e","Type":"ContainerStarted","Data":"fba0257fb386aa86437e20a8617b6d222e147ae2445183dd162813c143ccb68d"}
Jan 26 12:40:08 crc kubenswrapper[4881]: I0126 12:40:08.781491 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lxw6m" event={"ID":"67ea8d33-d11e-420e-b566-8d0c2301ce94","Type":"ContainerStarted","Data":"443fd1e7d97536e4e7c7243442f873138001b1b918d9ea75104056d92d0f60e3"}
Jan 26 12:40:08 crc kubenswrapper[4881]: I0126 12:40:08.783820 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v4z92" event={"ID":"7c91a464-e748-4f02-9aab-d89a0076cb8d","Type":"ContainerStarted","Data":"858b910057ca7579e3306268f2d7e13317388691d41de098eafe46f7a9f42728"}
Jan 26 12:40:08 crc kubenswrapper[4881]: I0126 12:40:08.799848 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-lxbt8" podStartSLOduration=3.485258843 podStartE2EDuration="2m5.799821983s" podCreationTimestamp="2026-01-26 12:38:03 +0000 UTC" firstStartedPulling="2026-01-26 12:38:05.744048042 +0000 UTC m=+158.223358068" lastFinishedPulling="2026-01-26 12:40:08.058611182 +0000 UTC m=+280.537921208" observedRunningTime="2026-01-26 12:40:08.798253821 +0000 UTC m=+281.277563847" watchObservedRunningTime="2026-01-26 12:40:08.799821983 +0000 UTC m=+281.279145690"
Jan 26 12:40:08 crc kubenswrapper[4881]: I0126 12:40:08.848563 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-lxw6m" podStartSLOduration=3.950876643 podStartE2EDuration="2m3.848547287s" podCreationTimestamp="2026-01-26 12:38:05 +0000 UTC" firstStartedPulling="2026-01-26 12:38:07.817307294 +0000 UTC m=+160.296617320" lastFinishedPulling="2026-01-26 12:40:07.714977938 +0000 UTC m=+280.194287964" observedRunningTime="2026-01-26 12:40:08.84681594 +0000 UTC m=+281.326125966" watchObservedRunningTime="2026-01-26 12:40:08.848547287 +0000 UTC m=+281.327857313"
Jan 26 12:40:09 crc kubenswrapper[4881]: I0126 12:40:09.794018 4881 generic.go:334] "Generic (PLEG): container finished" podID="7c91a464-e748-4f02-9aab-d89a0076cb8d" containerID="858b910057ca7579e3306268f2d7e13317388691d41de098eafe46f7a9f42728" exitCode=0
Jan 26 12:40:09 crc kubenswrapper[4881]: I0126 12:40:09.794089 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v4z92" event={"ID":"7c91a464-e748-4f02-9aab-d89a0076cb8d","Type":"ContainerDied","Data":"858b910057ca7579e3306268f2d7e13317388691d41de098eafe46f7a9f42728"}
Jan 26 12:40:10 crc kubenswrapper[4881]: I0126 12:40:10.091576 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01f8afd2-ae02-4313-b007-d61725d9df50" path="/var/lib/kubelet/pods/01f8afd2-ae02-4313-b007-d61725d9df50/volumes"
Jan 26 12:40:10 crc kubenswrapper[4881]: I0126 12:40:10.803828 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v4z92" event={"ID":"7c91a464-e748-4f02-9aab-d89a0076cb8d","Type":"ContainerStarted","Data":"7f89e18b34785aaa4cd9e92439bba920a78a35150750edb3d4b191eb2f32a6db"}
Jan 26 12:40:10 crc kubenswrapper[4881]: I0126 12:40:10.827239 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-v4z92" podStartSLOduration=3.487982587 podStartE2EDuration="2m4.827215828s" podCreationTimestamp="2026-01-26 12:38:06 +0000 UTC" firstStartedPulling="2026-01-26 12:38:08.874011992 +0000 UTC m=+161.353322018" lastFinishedPulling="2026-01-26 12:40:10.213245223 +0000 UTC m=+282.692555259" observedRunningTime="2026-01-26 12:40:10.818246175 +0000 UTC m=+283.297556261" watchObservedRunningTime="2026-01-26 12:40:10.827215828 +0000 UTC m=+283.306525874"
Jan 26 12:40:14 crc kubenswrapper[4881]: I0126 12:40:14.144194 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-lxbt8"
Jan 26 12:40:14 crc kubenswrapper[4881]: I0126 12:40:14.145316 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-lxbt8"
Jan 26 12:40:14 crc kubenswrapper[4881]: I0126 12:40:14.184486 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-lxbt8"
Jan 26 12:40:14 crc kubenswrapper[4881]: I0126 12:40:14.684183 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-ncfp9"]
Jan 26 12:40:14 crc kubenswrapper[4881]: I0126 12:40:14.875755 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-lxbt8"
Jan 26 12:40:15 crc kubenswrapper[4881]: I0126 12:40:15.932877 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-lxw6m"
Jan 26 12:40:15 crc kubenswrapper[4881]: I0126 12:40:15.933003 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-lxw6m"
Jan 26 12:40:15 crc kubenswrapper[4881]: I0126 12:40:15.982065 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-lxw6m"
Jan 26 12:40:16 crc kubenswrapper[4881]: I0126 12:40:16.883855 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-lxw6m"
Jan 26 12:40:16 crc kubenswrapper[4881]: I0126 12:40:16.913754 4881 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Jan 26 12:40:16 crc kubenswrapper[4881]: I0126 12:40:16.914192 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://e461100b4aeab2a6e88c5c3aeb0802664d0d647247bc73b107be7c6bc4cfc9cf" gracePeriod=15
Jan 26 12:40:16 crc kubenswrapper[4881]: I0126 12:40:16.914409 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://47b841bd998f47d2aa05f556a43ac6f14f630207aff7305cd0440f0b0642cc49" gracePeriod=15
Jan 26 12:40:16 crc kubenswrapper[4881]: I0126 12:40:16.914492 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://3de2d8eff73741321cce91c7bfdcab04498cfdb4694e713750466bb2d8b95e5d" gracePeriod=15
Jan 26 12:40:16 crc kubenswrapper[4881]: I0126 12:40:16.914588 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://7cbb88ac26b9c279bef436acf1a1d1e431e752f2a1ee765a7541082fa8f6c99e" gracePeriod=15
Jan 26 12:40:16 crc kubenswrapper[4881]: I0126 12:40:16.914654 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://ff94d70ea9ad07ebc50c17d2a14b318e9b553bf1ded1e15429e9b318903d4d66" gracePeriod=15
Jan 26 12:40:16 crc kubenswrapper[4881]: I0126 12:40:16.918939 4881 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Jan 26 12:40:16 crc kubenswrapper[4881]: E0126 12:40:16.919249 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Jan 26 12:40:16 crc kubenswrapper[4881]: I0126 12:40:16.919278 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Jan 26 12:40:16 crc kubenswrapper[4881]: E0126 12:40:16.919294 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44bbde7b-6970-4cdb-abbb-fffd1326291d" containerName="registry-server"
Jan 26 12:40:16 crc kubenswrapper[4881]: I0126 12:40:16.919306 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="44bbde7b-6970-4cdb-abbb-fffd1326291d" containerName="registry-server"
Jan 26 12:40:16 crc kubenswrapper[4881]: E0126 12:40:16.919320 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer"
Jan 26 12:40:16 crc kubenswrapper[4881]: I0126 12:40:16.919332 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer"
Jan 26 12:40:16 crc kubenswrapper[4881]: E0126 12:40:16.919346 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz"
Jan 26 12:40:16 crc kubenswrapper[4881]: I0126 12:40:16.919357 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz"
Jan 26 12:40:16 crc kubenswrapper[4881]: E0126 12:40:16.919370 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44bbde7b-6970-4cdb-abbb-fffd1326291d" containerName="extract-content"
Jan 26 12:40:16 crc kubenswrapper[4881]: I0126 12:40:16.919380 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="44bbde7b-6970-4cdb-abbb-fffd1326291d" containerName="extract-content"
Jan 26 12:40:16 crc kubenswrapper[4881]: E0126 12:40:16.919395 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver"
Jan 26 12:40:16 crc kubenswrapper[4881]: I0126 12:40:16.919407 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver"
Jan 26 12:40:16 crc kubenswrapper[4881]: E0126 12:40:16.919421 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01f8afd2-ae02-4313-b007-d61725d9df50" containerName="extract-utilities"
Jan 26 12:40:16 crc kubenswrapper[4881]: I0126 12:40:16.919431 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="01f8afd2-ae02-4313-b007-d61725d9df50" containerName="extract-utilities"
Jan 26 12:40:16 crc kubenswrapper[4881]: E0126 12:40:16.919443 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01f8afd2-ae02-4313-b007-d61725d9df50" containerName="extract-content"
Jan 26 12:40:16 crc kubenswrapper[4881]: I0126 12:40:16.919453 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="01f8afd2-ae02-4313-b007-d61725d9df50" containerName="extract-content"
Jan 26 12:40:16 crc kubenswrapper[4881]: E0126 12:40:16.919466 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller"
Jan 26 12:40:16 crc kubenswrapper[4881]: I0126 12:40:16.919476 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller"
Jan 26 12:40:16 crc kubenswrapper[4881]: E0126 12:40:16.919493 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b3276ee-7c06-4b98-8ffd-5ee9afa280f3" containerName="pruner"
Jan 26 12:40:16 crc kubenswrapper[4881]: I0126 12:40:16.919503 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b3276ee-7c06-4b98-8ffd-5ee9afa280f3" containerName="pruner"
Jan 26 12:40:16 crc kubenswrapper[4881]: E0126 12:40:16.919544 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01f8afd2-ae02-4313-b007-d61725d9df50" containerName="registry-server"
Jan 26 12:40:16 crc kubenswrapper[4881]: I0126 12:40:16.919556 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="01f8afd2-ae02-4313-b007-d61725d9df50" containerName="registry-server"
Jan 26 12:40:16 crc kubenswrapper[4881]: E0126 12:40:16.919567 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b1e10a4-4d08-4638-ac73-10c521806268" containerName="extract-utilities"
Jan 26 12:40:16 crc kubenswrapper[4881]: I0126 12:40:16.919577 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b1e10a4-4d08-4638-ac73-10c521806268" containerName="extract-utilities"
Jan 26 12:40:16 crc kubenswrapper[4881]: E0126 12:40:16.919588 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup"
Jan 26 12:40:16 crc kubenswrapper[4881]: I0126 12:40:16.919599 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup"
Jan 26 12:40:16 crc kubenswrapper[4881]: E0126 12:40:16.919623 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44bbde7b-6970-4cdb-abbb-fffd1326291d" containerName="extract-utilities"
Jan 26 12:40:16 crc kubenswrapper[4881]: I0126 12:40:16.919634 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="44bbde7b-6970-4cdb-abbb-fffd1326291d" containerName="extract-utilities"
Jan 26 12:40:16 crc kubenswrapper[4881]: E0126 12:40:16.919646 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b1e10a4-4d08-4638-ac73-10c521806268" containerName="extract-content"
Jan 26 12:40:16 crc kubenswrapper[4881]: I0126 12:40:16.919656 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b1e10a4-4d08-4638-ac73-10c521806268" containerName="extract-content"
Jan 26 12:40:16 crc kubenswrapper[4881]: E0126 12:40:16.919671 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b1e10a4-4d08-4638-ac73-10c521806268" containerName="registry-server"
Jan 26 12:40:16 crc kubenswrapper[4881]: I0126 12:40:16.919680 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b1e10a4-4d08-4638-ac73-10c521806268" containerName="registry-server"
Jan 26 12:40:16 crc kubenswrapper[4881]: I0126 12:40:16.919858 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Jan 26 12:40:16 crc kubenswrapper[4881]: I0126 12:40:16.919876 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz"
Jan 26 12:40:16 crc kubenswrapper[4881]: I0126 12:40:16.919886 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer"
Jan 26 12:40:16 crc kubenswrapper[4881]: I0126 12:40:16.919899 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver"
Jan 26 12:40:16 crc kubenswrapper[4881]: I0126 12:40:16.919912 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b1e10a4-4d08-4638-ac73-10c521806268" containerName="registry-server"
Jan 26 12:40:16 crc kubenswrapper[4881]: I0126 12:40:16.919927 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b3276ee-7c06-4b98-8ffd-5ee9afa280f3" containerName="pruner"
Jan 26 12:40:16 crc kubenswrapper[4881]: I0126 12:40:16.919943 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Jan 26 12:40:16 crc kubenswrapper[4881]: I0126 12:40:16.919955 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="01f8afd2-ae02-4313-b007-d61725d9df50" containerName="registry-server"
Jan 26 12:40:16 crc kubenswrapper[4881]: I0126 12:40:16.919967 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller"
Jan 26 12:40:16 crc kubenswrapper[4881]: I0126 12:40:16.919981 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="44bbde7b-6970-4cdb-abbb-fffd1326291d" containerName="registry-server"
Jan 26 12:40:16 crc kubenswrapper[4881]: E0126 12:40:16.920119 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Jan 26 12:40:16 crc kubenswrapper[4881]: I0126 12:40:16.920131 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Jan 26 12:40:16 crc kubenswrapper[4881]: I0126 12:40:16.927864 4881 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Jan 26 12:40:16 crc kubenswrapper[4881]: I0126 12:40:16.929253 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 26 12:40:16 crc kubenswrapper[4881]: I0126 12:40:16.936307 4881 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13"
Jan 26 12:40:16 crc kubenswrapper[4881]: I0126 12:40:16.969028 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Jan 26 12:40:16 crc kubenswrapper[4881]: I0126 12:40:16.997959 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 12:40:16 crc kubenswrapper[4881]: I0126 12:40:16.998138 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 12:40:16 crc kubenswrapper[4881]: I0126 12:40:16.998231 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 12:40:17 crc kubenswrapper[4881]: I0126 12:40:17.099062 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 26 12:40:17 crc kubenswrapper[4881]: I0126 12:40:17.099104 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 26 12:40:17 crc kubenswrapper[4881]: I0126 12:40:17.099135 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 12:40:17 crc kubenswrapper[4881]: I0126 12:40:17.099179 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 12:40:17 crc kubenswrapper[4881]: I0126 12:40:17.099271 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 26 12:40:17 crc kubenswrapper[4881]: I0126 12:40:17.099334 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 12:40:17 crc kubenswrapper[4881]: I0126 12:40:17.099405 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 12:40:17 crc kubenswrapper[4881]: I0126 12:40:17.099444 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 26 12:40:17 crc kubenswrapper[4881]: I0126 12:40:17.099478 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 26 12:40:17 crc kubenswrapper[4881]: I0126 12:40:17.099577 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 12:40:17 crc kubenswrapper[4881]: I0126 12:40:17.099508 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 12:40:17 crc kubenswrapper[4881]: I0126 12:40:17.200827 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 26 12:40:17 crc kubenswrapper[4881]: I0126 12:40:17.200993 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 26 12:40:17 crc kubenswrapper[4881]: I0126 12:40:17.201014 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 26 12:40:17 crc kubenswrapper[4881]: I0126 12:40:17.201094 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 26 12:40:17 crc kubenswrapper[4881]: I0126 12:40:17.201261 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 26 12:40:17 crc kubenswrapper[4881]: I0126 12:40:17.201402 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 26 12:40:17 crc kubenswrapper[4881]: I0126 12:40:17.201446 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 26 12:40:17 crc kubenswrapper[4881]: I0126 12:40:17.201751 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 26 12:40:17 crc kubenswrapper[4881]: I0126 12:40:17.201762 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 26 12:40:17 crc kubenswrapper[4881]: I0126 12:40:17.201818 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 26 12:40:17 crc kubenswrapper[4881]: I0126 12:40:17.222245 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-v4z92"
Jan 26 12:40:17 crc kubenswrapper[4881]: I0126 12:40:17.222306 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-v4z92"
Jan 26 12:40:17 crc kubenswrapper[4881]: I0126 12:40:17.256983 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-v4z92"
Jan 26 12:40:17 crc kubenswrapper[4881]: I0126 12:40:17.257823 4881 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.69:6443: connect: connection refused"
Jan 26 12:40:17 crc kubenswrapper[4881]: I0126 12:40:17.258221 4881 status_manager.go:851] "Failed to get status for pod" podUID="7c91a464-e748-4f02-9aab-d89a0076cb8d" pod="openshift-marketplace/redhat-operators-v4z92" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-v4z92\": dial tcp 38.102.83.69:6443: connect: connection refused"
Jan 26 12:40:17 crc kubenswrapper[4881]: I0126 12:40:17.259431 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 26 12:40:17 crc kubenswrapper[4881]: W0126 12:40:17.276790 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-26cedfe16e84aafbe9e7d85f1337a4ccdff042ddd3d6f8a730a8db7a0c05fbc1 WatchSource:0}: Error finding container 26cedfe16e84aafbe9e7d85f1337a4ccdff042ddd3d6f8a730a8db7a0c05fbc1: Status 404 returned error can't find the container with id 26cedfe16e84aafbe9e7d85f1337a4ccdff042ddd3d6f8a730a8db7a0c05fbc1
Jan 26 12:40:17 crc kubenswrapper[4881]: E0126 12:40:17.280041 4881 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.69:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188e4851386d3ce4 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 12:40:17.279417572 +0000 UTC m=+289.758727608,LastTimestamp:2026-01-26 12:40:17.279417572 +0000 UTC m=+289.758727608,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 26 12:40:17 crc kubenswrapper[4881]: I0126 12:40:17.849542 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Jan 26 12:40:17 crc kubenswrapper[4881]: I0126 12:40:17.851132 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Jan 26 12:40:17 crc kubenswrapper[4881]: I0126 12:40:17.852122 4881 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="47b841bd998f47d2aa05f556a43ac6f14f630207aff7305cd0440f0b0642cc49" exitCode=0
Jan 26 12:40:17 crc kubenswrapper[4881]: I0126 12:40:17.852162 4881 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="3de2d8eff73741321cce91c7bfdcab04498cfdb4694e713750466bb2d8b95e5d" exitCode=0
Jan 26 12:40:17 crc kubenswrapper[4881]: I0126 12:40:17.852174 4881 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="7cbb88ac26b9c279bef436acf1a1d1e431e752f2a1ee765a7541082fa8f6c99e" exitCode=0
Jan 26 12:40:17 crc kubenswrapper[4881]: I0126 12:40:17.852187 4881 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="ff94d70ea9ad07ebc50c17d2a14b318e9b553bf1ded1e15429e9b318903d4d66" exitCode=2
Jan 26 12:40:17 crc kubenswrapper[4881]: I0126 12:40:17.852253 4881 scope.go:117] "RemoveContainer" containerID="a1728a0431ccafdf8aa83b6956348a8a6a41dd94e80f65cb469ef09a86078154"
Jan 26 12:40:17 crc kubenswrapper[4881]: I0126 12:40:17.853718 4881 generic.go:334] "Generic (PLEG): container finished" podID="c88d870f-ca6d-47d4-b7f3-3ef26315e3b8" containerID="3ac3e07e02bb046dbee0d5e11cdab10a8e97e6bfa94815880c380390b2c33689" exitCode=0
Jan 26 12:40:17 crc kubenswrapper[4881]: I0126 12:40:17.853762 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"c88d870f-ca6d-47d4-b7f3-3ef26315e3b8","Type":"ContainerDied","Data":"3ac3e07e02bb046dbee0d5e11cdab10a8e97e6bfa94815880c380390b2c33689"}
Jan 26 12:40:17 crc kubenswrapper[4881]: I0126 12:40:17.854511 4881 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.69:6443: connect: connection refused"
Jan 26 12:40:17 crc kubenswrapper[4881]: I0126 12:40:17.854878 4881 status_manager.go:851] "Failed to get status for pod" podUID="7c91a464-e748-4f02-9aab-d89a0076cb8d" pod="openshift-marketplace/redhat-operators-v4z92" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-v4z92\": dial tcp 38.102.83.69:6443: connect: connection refused"
Jan 26 12:40:17 crc kubenswrapper[4881]: I0126 12:40:17.855180 4881 status_manager.go:851] "Failed to get status for pod" podUID="c88d870f-ca6d-47d4-b7f3-3ef26315e3b8" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.69:6443: connect: connection refused"
Jan 26 12:40:17 crc kubenswrapper[4881]: I0126 12:40:17.855434 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"6b78f2df9368a52e6a52297d15b445b36cdbc65bb00dac055b19abc9ded506c7"}
Jan 26 12:40:17 crc kubenswrapper[4881]: I0126 12:40:17.855482 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"26cedfe16e84aafbe9e7d85f1337a4ccdff042ddd3d6f8a730a8db7a0c05fbc1"}
Jan 26 12:40:17 crc kubenswrapper[4881]: I0126 12:40:17.856640 4881 status_manager.go:851] "Failed to get status for pod" podUID="7c91a464-e748-4f02-9aab-d89a0076cb8d" pod="openshift-marketplace/redhat-operators-v4z92" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-v4z92\": dial tcp 38.102.83.69:6443: connect: connection refused"
Jan 26 12:40:17 crc kubenswrapper[4881]: I0126 12:40:17.857078 4881 status_manager.go:851] "Failed to get status for pod" podUID="c88d870f-ca6d-47d4-b7f3-3ef26315e3b8" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.69:6443: connect: connection refused"
Jan 26 12:40:17 crc kubenswrapper[4881]: I0126 12:40:17.857474 4881 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.69:6443: connect: connection refused"
Jan 26 12:40:17 crc kubenswrapper[4881]: I0126 12:40:17.895028 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-v4z92"
Jan 26 12:40:17 crc kubenswrapper[4881]: I0126 12:40:17.895592 4881 status_manager.go:851] "Failed to get status for pod" podUID="c88d870f-ca6d-47d4-b7f3-3ef26315e3b8" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.69:6443: connect: connection refused"
Jan 26 12:40:17 crc kubenswrapper[4881]: I0126 12:40:17.895974 4881 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.69:6443: connect: connection refused"
Jan 26 12:40:17 crc kubenswrapper[4881]: I0126 12:40:17.896249 4881 status_manager.go:851] "Failed to get status for pod" podUID="7c91a464-e748-4f02-9aab-d89a0076cb8d" pod="openshift-marketplace/redhat-operators-v4z92" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-v4z92\": dial tcp 38.102.83.69:6443: connect: connection refused"
Jan 26 12:40:18 crc kubenswrapper[4881]: I0126 12:40:18.087409 4881 status_manager.go:851] "Failed to get status for pod" podUID="7c91a464-e748-4f02-9aab-d89a0076cb8d" pod="openshift-marketplace/redhat-operators-v4z92" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-v4z92\": dial tcp 38.102.83.69:6443: connect: connection refused"
Jan 26 12:40:18 crc kubenswrapper[4881]: I0126
12:40:18.087864 4881 status_manager.go:851] "Failed to get status for pod" podUID="c88d870f-ca6d-47d4-b7f3-3ef26315e3b8" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Jan 26 12:40:18 crc kubenswrapper[4881]: I0126 12:40:18.088245 4881 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Jan 26 12:40:18 crc kubenswrapper[4881]: E0126 12:40:18.789227 4881 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.69:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188e4851386d3ce4 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 12:40:17.279417572 +0000 UTC m=+289.758727608,LastTimestamp:2026-01-26 12:40:17.279417572 +0000 UTC m=+289.758727608,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 12:40:18 crc kubenswrapper[4881]: I0126 12:40:18.864677 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 26 12:40:19 crc kubenswrapper[4881]: I0126 12:40:19.166166 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 26 12:40:19 crc kubenswrapper[4881]: I0126 12:40:19.167356 4881 status_manager.go:851] "Failed to get status for pod" podUID="c88d870f-ca6d-47d4-b7f3-3ef26315e3b8" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Jan 26 12:40:19 crc kubenswrapper[4881]: I0126 12:40:19.167819 4881 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Jan 26 12:40:19 crc kubenswrapper[4881]: I0126 12:40:19.168035 4881 status_manager.go:851] "Failed to get status for pod" podUID="7c91a464-e748-4f02-9aab-d89a0076cb8d" pod="openshift-marketplace/redhat-operators-v4z92" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-v4z92\": dial tcp 38.102.83.69:6443: connect: connection refused" Jan 26 12:40:19 crc kubenswrapper[4881]: I0126 12:40:19.228066 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c88d870f-ca6d-47d4-b7f3-3ef26315e3b8-kube-api-access\") pod \"c88d870f-ca6d-47d4-b7f3-3ef26315e3b8\" (UID: \"c88d870f-ca6d-47d4-b7f3-3ef26315e3b8\") " Jan 26 12:40:19 crc kubenswrapper[4881]: I0126 12:40:19.228332 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c88d870f-ca6d-47d4-b7f3-3ef26315e3b8-var-lock\") pod \"c88d870f-ca6d-47d4-b7f3-3ef26315e3b8\" (UID: \"c88d870f-ca6d-47d4-b7f3-3ef26315e3b8\") " Jan 26 12:40:19 crc kubenswrapper[4881]: I0126 12:40:19.228376 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c88d870f-ca6d-47d4-b7f3-3ef26315e3b8-kubelet-dir\") pod \"c88d870f-ca6d-47d4-b7f3-3ef26315e3b8\" (UID: \"c88d870f-ca6d-47d4-b7f3-3ef26315e3b8\") " Jan 26 12:40:19 crc kubenswrapper[4881]: I0126 12:40:19.228882 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c88d870f-ca6d-47d4-b7f3-3ef26315e3b8-var-lock" (OuterVolumeSpecName: "var-lock") pod "c88d870f-ca6d-47d4-b7f3-3ef26315e3b8" (UID: "c88d870f-ca6d-47d4-b7f3-3ef26315e3b8"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 12:40:19 crc kubenswrapper[4881]: I0126 12:40:19.228933 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c88d870f-ca6d-47d4-b7f3-3ef26315e3b8-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "c88d870f-ca6d-47d4-b7f3-3ef26315e3b8" (UID: "c88d870f-ca6d-47d4-b7f3-3ef26315e3b8"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 12:40:19 crc kubenswrapper[4881]: I0126 12:40:19.233373 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c88d870f-ca6d-47d4-b7f3-3ef26315e3b8-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "c88d870f-ca6d-47d4-b7f3-3ef26315e3b8" (UID: "c88d870f-ca6d-47d4-b7f3-3ef26315e3b8"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:40:19 crc kubenswrapper[4881]: I0126 12:40:19.329459 4881 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c88d870f-ca6d-47d4-b7f3-3ef26315e3b8-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 26 12:40:19 crc kubenswrapper[4881]: I0126 12:40:19.329499 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c88d870f-ca6d-47d4-b7f3-3ef26315e3b8-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 12:40:19 crc kubenswrapper[4881]: I0126 12:40:19.329513 4881 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c88d870f-ca6d-47d4-b7f3-3ef26315e3b8-var-lock\") on node \"crc\" DevicePath \"\"" Jan 26 12:40:19 crc kubenswrapper[4881]: I0126 12:40:19.873619 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"c88d870f-ca6d-47d4-b7f3-3ef26315e3b8","Type":"ContainerDied","Data":"5521e4b139da65a31ac745c6086587eae9c0548c33227c56b5a9cea5d1785c73"} Jan 26 12:40:19 crc kubenswrapper[4881]: I0126 12:40:19.873675 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5521e4b139da65a31ac745c6086587eae9c0548c33227c56b5a9cea5d1785c73" Jan 26 12:40:19 crc kubenswrapper[4881]: I0126 12:40:19.873742 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 26 12:40:19 crc kubenswrapper[4881]: I0126 12:40:19.892460 4881 status_manager.go:851] "Failed to get status for pod" podUID="c88d870f-ca6d-47d4-b7f3-3ef26315e3b8" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Jan 26 12:40:19 crc kubenswrapper[4881]: I0126 12:40:19.892887 4881 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Jan 26 12:40:19 crc kubenswrapper[4881]: I0126 12:40:19.893393 4881 status_manager.go:851] "Failed to get status for pod" podUID="7c91a464-e748-4f02-9aab-d89a0076cb8d" pod="openshift-marketplace/redhat-operators-v4z92" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-v4z92\": dial tcp 38.102.83.69:6443: connect: connection refused" Jan 26 12:40:21 crc kubenswrapper[4881]: I0126 12:40:21.892660 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 26 12:40:21 crc kubenswrapper[4881]: I0126 12:40:21.894600 4881 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="e461100b4aeab2a6e88c5c3aeb0802664d0d647247bc73b107be7c6bc4cfc9cf" exitCode=0 Jan 26 12:40:22 crc kubenswrapper[4881]: I0126 12:40:22.093856 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 26 12:40:22 crc kubenswrapper[4881]: I0126 12:40:22.095437 4881 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 12:40:22 crc kubenswrapper[4881]: I0126 12:40:22.096771 4881 status_manager.go:851] "Failed to get status for pod" podUID="c88d870f-ca6d-47d4-b7f3-3ef26315e3b8" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Jan 26 12:40:22 crc kubenswrapper[4881]: I0126 12:40:22.097655 4881 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Jan 26 12:40:22 crc kubenswrapper[4881]: I0126 12:40:22.097927 4881 status_manager.go:851] "Failed to get status for pod" podUID="7c91a464-e748-4f02-9aab-d89a0076cb8d" pod="openshift-marketplace/redhat-operators-v4z92" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-v4z92\": dial tcp 38.102.83.69:6443: connect: connection refused" Jan 26 12:40:22 crc kubenswrapper[4881]: I0126 12:40:22.098197 4881 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Jan 26 12:40:22 crc kubenswrapper[4881]: I0126 12:40:22.168692 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 26 12:40:22 crc kubenswrapper[4881]: I0126 12:40:22.168763 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 12:40:22 crc kubenswrapper[4881]: I0126 12:40:22.168795 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 26 12:40:22 crc kubenswrapper[4881]: I0126 12:40:22.168818 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 26 12:40:22 crc kubenswrapper[4881]: I0126 12:40:22.168847 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 12:40:22 crc kubenswrapper[4881]: I0126 12:40:22.168960 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 12:40:22 crc kubenswrapper[4881]: I0126 12:40:22.169058 4881 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 26 12:40:22 crc kubenswrapper[4881]: I0126 12:40:22.169068 4881 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 26 12:40:22 crc kubenswrapper[4881]: I0126 12:40:22.169076 4881 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 26 12:40:22 crc kubenswrapper[4881]: I0126 12:40:22.908649 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 26 12:40:22 crc kubenswrapper[4881]: I0126 12:40:22.910002 4881 scope.go:117] "RemoveContainer" containerID="47b841bd998f47d2aa05f556a43ac6f14f630207aff7305cd0440f0b0642cc49" Jan 26 12:40:22 crc kubenswrapper[4881]: I0126 12:40:22.910230 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 12:40:22 crc kubenswrapper[4881]: I0126 12:40:22.925293 4881 scope.go:117] "RemoveContainer" containerID="3de2d8eff73741321cce91c7bfdcab04498cfdb4694e713750466bb2d8b95e5d" Jan 26 12:40:22 crc kubenswrapper[4881]: I0126 12:40:22.941504 4881 scope.go:117] "RemoveContainer" containerID="7cbb88ac26b9c279bef436acf1a1d1e431e752f2a1ee765a7541082fa8f6c99e" Jan 26 12:40:22 crc kubenswrapper[4881]: I0126 12:40:22.944313 4881 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Jan 26 12:40:22 crc kubenswrapper[4881]: I0126 12:40:22.944840 4881 status_manager.go:851] "Failed to get status for pod" podUID="7c91a464-e748-4f02-9aab-d89a0076cb8d" pod="openshift-marketplace/redhat-operators-v4z92" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-v4z92\": dial tcp 38.102.83.69:6443: connect: connection refused" Jan 26 12:40:22 crc kubenswrapper[4881]: I0126 12:40:22.945269 4881 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Jan 26 12:40:22 crc kubenswrapper[4881]: I0126 12:40:22.945785 4881 status_manager.go:851] "Failed to get status for pod" 
podUID="c88d870f-ca6d-47d4-b7f3-3ef26315e3b8" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Jan 26 12:40:22 crc kubenswrapper[4881]: I0126 12:40:22.958716 4881 scope.go:117] "RemoveContainer" containerID="ff94d70ea9ad07ebc50c17d2a14b318e9b553bf1ded1e15429e9b318903d4d66" Jan 26 12:40:22 crc kubenswrapper[4881]: I0126 12:40:22.973646 4881 scope.go:117] "RemoveContainer" containerID="e461100b4aeab2a6e88c5c3aeb0802664d0d647247bc73b107be7c6bc4cfc9cf" Jan 26 12:40:22 crc kubenswrapper[4881]: I0126 12:40:22.993001 4881 scope.go:117] "RemoveContainer" containerID="f09815b7bc1e9f3afdee67bb736d35ea1c23567f631bc5fd4fcab013ab191647" Jan 26 12:40:23 crc kubenswrapper[4881]: I0126 12:40:23.082710 4881 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Jan 26 12:40:23 crc kubenswrapper[4881]: I0126 12:40:23.083080 4881 status_manager.go:851] "Failed to get status for pod" podUID="7c91a464-e748-4f02-9aab-d89a0076cb8d" pod="openshift-marketplace/redhat-operators-v4z92" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-v4z92\": dial tcp 38.102.83.69:6443: connect: connection refused" Jan 26 12:40:23 crc kubenswrapper[4881]: I0126 12:40:23.083330 4881 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Jan 26 12:40:23 crc kubenswrapper[4881]: I0126 12:40:23.083778 4881 status_manager.go:851] "Failed to get status for pod" podUID="c88d870f-ca6d-47d4-b7f3-3ef26315e3b8" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Jan 26 12:40:23 crc kubenswrapper[4881]: E0126 12:40:23.084131 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pvt2d" podUID="eebbd99a-494e-4431-91b4-92272880b04b" Jan 26 12:40:23 crc kubenswrapper[4881]: I0126 12:40:23.084420 4881 status_manager.go:851] "Failed to get status for pod" podUID="eebbd99a-494e-4431-91b4-92272880b04b" pod="openshift-marketplace/redhat-operators-pvt2d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pvt2d\": dial tcp 38.102.83.69:6443: connect: connection refused" Jan 26 12:40:24 crc kubenswrapper[4881]: I0126 12:40:24.088775 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 26 12:40:24 crc kubenswrapper[4881]: E0126 12:40:24.138477 4881 controller.go:195] "Failed to update lease" err="Put 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.69:6443: connect: connection refused" Jan 26 12:40:24 crc kubenswrapper[4881]: E0126 12:40:24.138989 4881 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.69:6443: connect: connection refused" Jan 26 12:40:24 crc kubenswrapper[4881]: E0126 12:40:24.139488 4881 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.69:6443: connect: connection refused" Jan 26 12:40:24 crc kubenswrapper[4881]: E0126 12:40:24.140067 4881 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.69:6443: connect: connection refused" Jan 26 12:40:24 crc kubenswrapper[4881]: E0126 12:40:24.140407 4881 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.69:6443: connect: connection refused" Jan 26 12:40:24 crc kubenswrapper[4881]: I0126 12:40:24.140551 4881 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 26 12:40:24 crc kubenswrapper[4881]: E0126 12:40:24.141118 4881 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.69:6443: connect: connection refused" interval="200ms" Jan 26 12:40:24 crc kubenswrapper[4881]: E0126 12:40:24.342060 4881 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.69:6443: connect: connection refused" interval="400ms" Jan 26 12:40:24 crc kubenswrapper[4881]: E0126 12:40:24.693432 4881 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:40:24Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:40:24Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:40:24Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:40:24Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.69:6443: connect: connection refused" Jan 26 12:40:24 crc kubenswrapper[4881]: E0126 12:40:24.694192 4881 
kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.69:6443: connect: connection refused" Jan 26 12:40:24 crc kubenswrapper[4881]: E0126 12:40:24.694900 4881 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.69:6443: connect: connection refused" Jan 26 12:40:24 crc kubenswrapper[4881]: E0126 12:40:24.695350 4881 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.69:6443: connect: connection refused" Jan 26 12:40:24 crc kubenswrapper[4881]: E0126 12:40:24.695875 4881 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.69:6443: connect: connection refused" Jan 26 12:40:24 crc kubenswrapper[4881]: E0126 12:40:24.695915 4881 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 12:40:24 crc kubenswrapper[4881]: E0126 12:40:24.743767 4881 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.69:6443: connect: connection refused" interval="800ms" Jan 26 12:40:25 crc kubenswrapper[4881]: E0126 12:40:25.544548 4881 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.69:6443: connect: connection refused" interval="1.6s" Jan 26 12:40:27 crc kubenswrapper[4881]: E0126 12:40:27.146712 4881 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.69:6443: connect: connection refused" interval="3.2s" Jan 26 12:40:27 crc kubenswrapper[4881]: I0126 12:40:27.914404 4881 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Jan 26 12:40:28 crc kubenswrapper[4881]: I0126 12:40:28.091623 4881 status_manager.go:851] "Failed to get status for pod" podUID="c88d870f-ca6d-47d4-b7f3-3ef26315e3b8" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Jan 26 12:40:28 crc kubenswrapper[4881]: I0126 12:40:28.093439 4881 status_manager.go:851] "Failed to get status for pod" podUID="eebbd99a-494e-4431-91b4-92272880b04b" pod="openshift-marketplace/redhat-operators-pvt2d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pvt2d\": dial tcp 38.102.83.69:6443: connect: connection refused" Jan 26 12:40:28 crc kubenswrapper[4881]: I0126 12:40:28.094360 4881 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Jan 26 12:40:28 crc kubenswrapper[4881]: I0126 12:40:28.095145 4881 status_manager.go:851] "Failed to get status for pod" podUID="7c91a464-e748-4f02-9aab-d89a0076cb8d" pod="openshift-marketplace/redhat-operators-v4z92" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-v4z92\": dial tcp 38.102.83.69:6443: connect: connection refused" Jan 26 12:40:28 crc kubenswrapper[4881]: E0126 12:40:28.790265 4881 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.69:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188e4851386d3ce4 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 12:40:17.279417572 +0000 UTC m=+289.758727608,LastTimestamp:2026-01-26 12:40:17.279417572 +0000 UTC m=+289.758727608,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 12:40:30 crc kubenswrapper[4881]: E0126 12:40:30.347745 4881 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.69:6443: connect: connection refused" interval="6.4s" Jan 26 12:40:30 crc kubenswrapper[4881]: I0126 12:40:30.867990 4881 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Liveness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 26 12:40:30 crc kubenswrapper[4881]: I0126 12:40:30.868079 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 26 12:40:31 crc kubenswrapper[4881]: I0126 12:40:31.081719 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 12:40:31 crc kubenswrapper[4881]: I0126 12:40:31.082805 4881 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Jan 26 12:40:31 crc kubenswrapper[4881]: I0126 12:40:31.083635 4881 status_manager.go:851] "Failed to get status for pod" podUID="7c91a464-e748-4f02-9aab-d89a0076cb8d" pod="openshift-marketplace/redhat-operators-v4z92" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-v4z92\": dial tcp 38.102.83.69:6443: connect: connection refused" Jan 26 12:40:31 crc kubenswrapper[4881]: I0126 12:40:31.084280 4881 status_manager.go:851] "Failed to get status for pod" podUID="c88d870f-ca6d-47d4-b7f3-3ef26315e3b8" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Jan 26 12:40:31 crc kubenswrapper[4881]: I0126 12:40:31.084655 4881 status_manager.go:851] "Failed to get status for pod" podUID="eebbd99a-494e-4431-91b4-92272880b04b" pod="openshift-marketplace/redhat-operators-pvt2d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pvt2d\": dial tcp 38.102.83.69:6443: connect: connection refused" Jan 26 12:40:31 crc kubenswrapper[4881]: I0126 12:40:31.109177 4881 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a9aa1877-c239-4157-938d-e5c85ff3e76a" Jan 26 12:40:31 crc kubenswrapper[4881]: I0126 12:40:31.109229 4881 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a9aa1877-c239-4157-938d-e5c85ff3e76a" Jan 26 12:40:31 crc kubenswrapper[4881]: E0126 12:40:31.109847 4881 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 12:40:31 crc kubenswrapper[4881]: I0126 12:40:31.110633 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 12:40:31 crc kubenswrapper[4881]: W0126 12:40:31.142657 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-4afd574cc362cc33469328d76ec6b2167cf8eb7de16c3f9cffcb113eabc1d4f0 WatchSource:0}: Error finding container 4afd574cc362cc33469328d76ec6b2167cf8eb7de16c3f9cffcb113eabc1d4f0: Status 404 returned error can't find the container with id 4afd574cc362cc33469328d76ec6b2167cf8eb7de16c3f9cffcb113eabc1d4f0 Jan 26 12:40:31 crc kubenswrapper[4881]: I0126 12:40:31.979436 4881 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="729c314b9c47f5d04d1464f0efa46a9c69da029d720747af1f8c17df6cff3c26" exitCode=0 Jan 26 12:40:31 crc kubenswrapper[4881]: I0126 12:40:31.979589 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"729c314b9c47f5d04d1464f0efa46a9c69da029d720747af1f8c17df6cff3c26"} Jan 26 12:40:31 crc kubenswrapper[4881]: I0126 12:40:31.979824 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"4afd574cc362cc33469328d76ec6b2167cf8eb7de16c3f9cffcb113eabc1d4f0"} Jan 26 12:40:31 crc kubenswrapper[4881]: I0126 12:40:31.980083 4881 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a9aa1877-c239-4157-938d-e5c85ff3e76a" Jan 26 12:40:31 crc kubenswrapper[4881]: I0126 12:40:31.980099 4881 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a9aa1877-c239-4157-938d-e5c85ff3e76a" Jan 26 12:40:31 crc kubenswrapper[4881]: E0126 12:40:31.980565 4881 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 12:40:31 crc kubenswrapper[4881]: I0126 12:40:31.981191 4881 status_manager.go:851] "Failed to get status for pod" podUID="c88d870f-ca6d-47d4-b7f3-3ef26315e3b8" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Jan 26 12:40:31 crc kubenswrapper[4881]: I0126 12:40:31.981999 4881 status_manager.go:851] "Failed to get status for pod" podUID="eebbd99a-494e-4431-91b4-92272880b04b" pod="openshift-marketplace/redhat-operators-pvt2d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pvt2d\": dial tcp 38.102.83.69:6443: connect: connection refused" Jan 26 12:40:31 crc kubenswrapper[4881]: I0126 12:40:31.982355 4881 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Jan 26 12:40:31 crc kubenswrapper[4881]: I0126 12:40:31.982851 4881 status_manager.go:851] "Failed to get status 
for pod" podUID="7c91a464-e748-4f02-9aab-d89a0076cb8d" pod="openshift-marketplace/redhat-operators-v4z92" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-v4z92\": dial tcp 38.102.83.69:6443: connect: connection refused" Jan 26 12:40:31 crc kubenswrapper[4881]: I0126 12:40:31.984772 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 26 12:40:31 crc kubenswrapper[4881]: I0126 12:40:31.984816 4881 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="86d50d955d4fa8b487f6900f78738321e21226e13b7b12c100f0dd029de2fde1" exitCode=1 Jan 26 12:40:31 crc kubenswrapper[4881]: I0126 12:40:31.984838 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"86d50d955d4fa8b487f6900f78738321e21226e13b7b12c100f0dd029de2fde1"} Jan 26 12:40:31 crc kubenswrapper[4881]: I0126 12:40:31.985148 4881 scope.go:117] "RemoveContainer" containerID="86d50d955d4fa8b487f6900f78738321e21226e13b7b12c100f0dd029de2fde1" Jan 26 12:40:31 crc kubenswrapper[4881]: I0126 12:40:31.985463 4881 status_manager.go:851] "Failed to get status for pod" podUID="c88d870f-ca6d-47d4-b7f3-3ef26315e3b8" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Jan 26 12:40:31 crc kubenswrapper[4881]: I0126 12:40:31.985796 4881 status_manager.go:851] "Failed to get status for pod" podUID="eebbd99a-494e-4431-91b4-92272880b04b" pod="openshift-marketplace/redhat-operators-pvt2d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pvt2d\": dial tcp 38.102.83.69:6443: connect: connection refused" Jan 26 12:40:31 crc kubenswrapper[4881]: I0126 12:40:31.986230 4881 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Jan 26 12:40:31 crc kubenswrapper[4881]: I0126 12:40:31.986529 4881 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Jan 26 12:40:31 crc kubenswrapper[4881]: I0126 12:40:31.986908 4881 status_manager.go:851] "Failed to get status for pod" podUID="7c91a464-e748-4f02-9aab-d89a0076cb8d" pod="openshift-marketplace/redhat-operators-v4z92" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-v4z92\": dial tcp 38.102.83.69:6443: connect: connection refused" Jan 26 12:40:32 crc kubenswrapper[4881]: I0126 12:40:32.125321 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 12:40:33 crc kubenswrapper[4881]: I0126 
12:40:33.030399 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"1f95f2d75c24d6fd5064d7933a9e3a535ae1c4fd2a1f0c9c518e144d63274fb3"} Jan 26 12:40:33 crc kubenswrapper[4881]: I0126 12:40:33.030778 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"3c03fbc96ab1796df8f9e64279339bda438f24b9be62e1353967b5b0e0a543ac"} Jan 26 12:40:33 crc kubenswrapper[4881]: I0126 12:40:33.030798 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"0b4432c40a062bfdea4122cbbaea9cfdd5b234d9d4d7a9b3c391bd88f7678429"} Jan 26 12:40:33 crc kubenswrapper[4881]: I0126 12:40:33.035687 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 26 12:40:33 crc kubenswrapper[4881]: I0126 12:40:33.035737 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"62393932716c4c69a75b2181f28cb8775b5606008f96bde3ba019a8fc2a5cd5c"} Jan 26 12:40:33 crc kubenswrapper[4881]: I0126 12:40:33.605879 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 12:40:33 crc kubenswrapper[4881]: I0126 12:40:33.624227 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 12:40:34 crc kubenswrapper[4881]: I0126 12:40:34.052876 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"f69a7b9baa79931198a26ac84880f8cbb67c70e6b8482417f7154d5b61949b38"} Jan 26 12:40:34 crc kubenswrapper[4881]: I0126 12:40:34.052898 4881 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a9aa1877-c239-4157-938d-e5c85ff3e76a" Jan 26 12:40:34 crc kubenswrapper[4881]: I0126 12:40:34.052919 4881 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a9aa1877-c239-4157-938d-e5c85ff3e76a" Jan 26 12:40:34 crc kubenswrapper[4881]: I0126 12:40:34.052927 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 12:40:34 crc kubenswrapper[4881]: I0126 12:40:34.052940 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"dfe1db3d4a7479db20a2e3e397587de7f247f4dc37aa1c2826cd4f0356998c2a"} Jan 26 12:40:34 crc kubenswrapper[4881]: I0126 12:40:34.053044 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 12:40:36 crc kubenswrapper[4881]: I0126 12:40:36.110913 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 12:40:36 crc kubenswrapper[4881]: 
I0126 12:40:36.111272 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 12:40:36 crc kubenswrapper[4881]: I0126 12:40:36.119439 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 12:40:38 crc kubenswrapper[4881]: I0126 12:40:38.078343 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pvt2d" event={"ID":"eebbd99a-494e-4431-91b4-92272880b04b","Type":"ContainerStarted","Data":"55b38a631cf7ccd621f9a8e723de976a5f11a7876397f57c8da6f9b4adc95c74"} Jan 26 12:40:39 crc kubenswrapper[4881]: I0126 12:40:39.061605 4881 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 12:40:39 crc kubenswrapper[4881]: I0126 12:40:39.085626 4881 generic.go:334] "Generic (PLEG): container finished" podID="eebbd99a-494e-4431-91b4-92272880b04b" containerID="55b38a631cf7ccd621f9a8e723de976a5f11a7876397f57c8da6f9b4adc95c74" exitCode=0 Jan 26 12:40:39 crc kubenswrapper[4881]: I0126 12:40:39.085702 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pvt2d" event={"ID":"eebbd99a-494e-4431-91b4-92272880b04b","Type":"ContainerDied","Data":"55b38a631cf7ccd621f9a8e723de976a5f11a7876397f57c8da6f9b4adc95c74"} Jan 26 12:40:39 crc kubenswrapper[4881]: I0126 12:40:39.086061 4881 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a9aa1877-c239-4157-938d-e5c85ff3e76a" Jan 26 12:40:39 crc kubenswrapper[4881]: I0126 12:40:39.086076 4881 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a9aa1877-c239-4157-938d-e5c85ff3e76a" Jan 26 12:40:39 crc kubenswrapper[4881]: I0126 12:40:39.092604 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 12:40:39 crc kubenswrapper[4881]: I0126 12:40:39.134675 4881 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="58a809b4-43b6-4a4b-b0ed-8ec0d1e6afa4" Jan 26 12:40:39 crc kubenswrapper[4881]: I0126 12:40:39.712940 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-ncfp9" podUID="810e7137-f09f-4050-bb0d-b15c23c57ed0" containerName="oauth-openshift" containerID="cri-o://cc08cf80492430a8bba56fe74d858c50bd26db39ba8a171a6d582ef8dfbb2a88" gracePeriod=15 Jan 26 12:40:40 crc kubenswrapper[4881]: I0126 12:40:40.094289 4881 generic.go:334] "Generic (PLEG): container finished" podID="810e7137-f09f-4050-bb0d-b15c23c57ed0" containerID="cc08cf80492430a8bba56fe74d858c50bd26db39ba8a171a6d582ef8dfbb2a88" exitCode=0 Jan 26 12:40:40 crc kubenswrapper[4881]: I0126 12:40:40.094389 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-ncfp9" event={"ID":"810e7137-f09f-4050-bb0d-b15c23c57ed0","Type":"ContainerDied","Data":"cc08cf80492430a8bba56fe74d858c50bd26db39ba8a171a6d582ef8dfbb2a88"} Jan 26 12:40:40 crc kubenswrapper[4881]: I0126 12:40:40.094661 4881 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a9aa1877-c239-4157-938d-e5c85ff3e76a" Jan 26 12:40:40 crc kubenswrapper[4881]: I0126 12:40:40.094678 4881 
mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a9aa1877-c239-4157-938d-e5c85ff3e76a" Jan 26 12:40:40 crc kubenswrapper[4881]: I0126 12:40:40.098649 4881 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="58a809b4-43b6-4a4b-b0ed-8ec0d1e6afa4" Jan 26 12:40:40 crc kubenswrapper[4881]: I0126 12:40:40.569361 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-ncfp9" Jan 26 12:40:40 crc kubenswrapper[4881]: I0126 12:40:40.660483 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/810e7137-f09f-4050-bb0d-b15c23c57ed0-v4-0-config-user-template-provider-selection\") pod \"810e7137-f09f-4050-bb0d-b15c23c57ed0\" (UID: \"810e7137-f09f-4050-bb0d-b15c23c57ed0\") " Jan 26 12:40:40 crc kubenswrapper[4881]: I0126 12:40:40.660563 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhnx4\" (UniqueName: \"kubernetes.io/projected/810e7137-f09f-4050-bb0d-b15c23c57ed0-kube-api-access-jhnx4\") pod \"810e7137-f09f-4050-bb0d-b15c23c57ed0\" (UID: \"810e7137-f09f-4050-bb0d-b15c23c57ed0\") " Jan 26 12:40:40 crc kubenswrapper[4881]: I0126 12:40:40.660619 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/810e7137-f09f-4050-bb0d-b15c23c57ed0-v4-0-config-user-template-login\") pod \"810e7137-f09f-4050-bb0d-b15c23c57ed0\" (UID: \"810e7137-f09f-4050-bb0d-b15c23c57ed0\") " Jan 26 12:40:40 crc kubenswrapper[4881]: I0126 12:40:40.660644 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/810e7137-f09f-4050-bb0d-b15c23c57ed0-v4-0-config-system-session\") pod \"810e7137-f09f-4050-bb0d-b15c23c57ed0\" (UID: \"810e7137-f09f-4050-bb0d-b15c23c57ed0\") " Jan 26 12:40:40 crc kubenswrapper[4881]: I0126 12:40:40.660669 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/810e7137-f09f-4050-bb0d-b15c23c57ed0-v4-0-config-system-router-certs\") pod \"810e7137-f09f-4050-bb0d-b15c23c57ed0\" (UID: \"810e7137-f09f-4050-bb0d-b15c23c57ed0\") " Jan 26 12:40:40 crc kubenswrapper[4881]: I0126 12:40:40.660691 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/810e7137-f09f-4050-bb0d-b15c23c57ed0-v4-0-config-system-serving-cert\") pod \"810e7137-f09f-4050-bb0d-b15c23c57ed0\" (UID: \"810e7137-f09f-4050-bb0d-b15c23c57ed0\") " Jan 26 12:40:40 crc kubenswrapper[4881]: I0126 12:40:40.660741 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/810e7137-f09f-4050-bb0d-b15c23c57ed0-v4-0-config-system-service-ca\") pod \"810e7137-f09f-4050-bb0d-b15c23c57ed0\" (UID: \"810e7137-f09f-4050-bb0d-b15c23c57ed0\") " Jan 26 12:40:40 crc kubenswrapper[4881]: I0126 12:40:40.660765 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/810e7137-f09f-4050-bb0d-b15c23c57ed0-v4-0-config-user-template-error\") pod \"810e7137-f09f-4050-bb0d-b15c23c57ed0\" (UID: \"810e7137-f09f-4050-bb0d-b15c23c57ed0\") " Jan 26 12:40:40 crc kubenswrapper[4881]: I0126 12:40:40.660824 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/810e7137-f09f-4050-bb0d-b15c23c57ed0-audit-policies\") pod \"810e7137-f09f-4050-bb0d-b15c23c57ed0\" (UID: \"810e7137-f09f-4050-bb0d-b15c23c57ed0\") " Jan 26 12:40:40 crc kubenswrapper[4881]: I0126 12:40:40.660846 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/810e7137-f09f-4050-bb0d-b15c23c57ed0-v4-0-config-system-cliconfig\") pod \"810e7137-f09f-4050-bb0d-b15c23c57ed0\" (UID: \"810e7137-f09f-4050-bb0d-b15c23c57ed0\") " Jan 26 12:40:40 crc kubenswrapper[4881]: I0126 12:40:40.660891 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/810e7137-f09f-4050-bb0d-b15c23c57ed0-v4-0-config-user-idp-0-file-data\") pod \"810e7137-f09f-4050-bb0d-b15c23c57ed0\" (UID: \"810e7137-f09f-4050-bb0d-b15c23c57ed0\") " Jan 26 12:40:40 crc kubenswrapper[4881]: I0126 12:40:40.660911 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/810e7137-f09f-4050-bb0d-b15c23c57ed0-audit-dir\") pod \"810e7137-f09f-4050-bb0d-b15c23c57ed0\" (UID: \"810e7137-f09f-4050-bb0d-b15c23c57ed0\") " Jan 26 12:40:40 crc kubenswrapper[4881]: I0126 12:40:40.660936 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/810e7137-f09f-4050-bb0d-b15c23c57ed0-v4-0-config-system-trusted-ca-bundle\") pod \"810e7137-f09f-4050-bb0d-b15c23c57ed0\" (UID: \"810e7137-f09f-4050-bb0d-b15c23c57ed0\") " Jan 26 12:40:40 crc kubenswrapper[4881]: I0126 12:40:40.660966 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/810e7137-f09f-4050-bb0d-b15c23c57ed0-v4-0-config-system-ocp-branding-template\") pod \"810e7137-f09f-4050-bb0d-b15c23c57ed0\" (UID: \"810e7137-f09f-4050-bb0d-b15c23c57ed0\") " Jan 26 12:40:40 crc kubenswrapper[4881]: I0126 12:40:40.662765 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/810e7137-f09f-4050-bb0d-b15c23c57ed0-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "810e7137-f09f-4050-bb0d-b15c23c57ed0" (UID: "810e7137-f09f-4050-bb0d-b15c23c57ed0"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:40:40 crc kubenswrapper[4881]: I0126 12:40:40.663759 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/810e7137-f09f-4050-bb0d-b15c23c57ed0-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "810e7137-f09f-4050-bb0d-b15c23c57ed0" (UID: "810e7137-f09f-4050-bb0d-b15c23c57ed0"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:40:40 crc kubenswrapper[4881]: I0126 12:40:40.666248 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/810e7137-f09f-4050-bb0d-b15c23c57ed0-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "810e7137-f09f-4050-bb0d-b15c23c57ed0" (UID: "810e7137-f09f-4050-bb0d-b15c23c57ed0"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 12:40:40 crc kubenswrapper[4881]: I0126 12:40:40.666760 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/810e7137-f09f-4050-bb0d-b15c23c57ed0-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "810e7137-f09f-4050-bb0d-b15c23c57ed0" (UID: "810e7137-f09f-4050-bb0d-b15c23c57ed0"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:40:40 crc kubenswrapper[4881]: I0126 12:40:40.666905 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/810e7137-f09f-4050-bb0d-b15c23c57ed0-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "810e7137-f09f-4050-bb0d-b15c23c57ed0" (UID: "810e7137-f09f-4050-bb0d-b15c23c57ed0"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:40:40 crc kubenswrapper[4881]: I0126 12:40:40.667173 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/810e7137-f09f-4050-bb0d-b15c23c57ed0-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "810e7137-f09f-4050-bb0d-b15c23c57ed0" (UID: "810e7137-f09f-4050-bb0d-b15c23c57ed0"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:40:40 crc kubenswrapper[4881]: I0126 12:40:40.668537 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/810e7137-f09f-4050-bb0d-b15c23c57ed0-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "810e7137-f09f-4050-bb0d-b15c23c57ed0" (UID: "810e7137-f09f-4050-bb0d-b15c23c57ed0"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:40:40 crc kubenswrapper[4881]: I0126 12:40:40.668786 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/810e7137-f09f-4050-bb0d-b15c23c57ed0-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "810e7137-f09f-4050-bb0d-b15c23c57ed0" (UID: "810e7137-f09f-4050-bb0d-b15c23c57ed0"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:40:40 crc kubenswrapper[4881]: I0126 12:40:40.669298 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/810e7137-f09f-4050-bb0d-b15c23c57ed0-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "810e7137-f09f-4050-bb0d-b15c23c57ed0" (UID: "810e7137-f09f-4050-bb0d-b15c23c57ed0"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:40:40 crc kubenswrapper[4881]: I0126 12:40:40.669784 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/810e7137-f09f-4050-bb0d-b15c23c57ed0-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "810e7137-f09f-4050-bb0d-b15c23c57ed0" (UID: "810e7137-f09f-4050-bb0d-b15c23c57ed0"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:40:40 crc kubenswrapper[4881]: I0126 12:40:40.670024 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/810e7137-f09f-4050-bb0d-b15c23c57ed0-kube-api-access-jhnx4" (OuterVolumeSpecName: "kube-api-access-jhnx4") pod "810e7137-f09f-4050-bb0d-b15c23c57ed0" (UID: "810e7137-f09f-4050-bb0d-b15c23c57ed0"). InnerVolumeSpecName "kube-api-access-jhnx4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:40:40 crc kubenswrapper[4881]: I0126 12:40:40.670417 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/810e7137-f09f-4050-bb0d-b15c23c57ed0-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "810e7137-f09f-4050-bb0d-b15c23c57ed0" (UID: "810e7137-f09f-4050-bb0d-b15c23c57ed0"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:40:40 crc kubenswrapper[4881]: I0126 12:40:40.675673 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/810e7137-f09f-4050-bb0d-b15c23c57ed0-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "810e7137-f09f-4050-bb0d-b15c23c57ed0" (UID: "810e7137-f09f-4050-bb0d-b15c23c57ed0"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:40:40 crc kubenswrapper[4881]: I0126 12:40:40.675982 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/810e7137-f09f-4050-bb0d-b15c23c57ed0-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "810e7137-f09f-4050-bb0d-b15c23c57ed0" (UID: "810e7137-f09f-4050-bb0d-b15c23c57ed0"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:40:40 crc kubenswrapper[4881]: I0126 12:40:40.762783 4881 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/810e7137-f09f-4050-bb0d-b15c23c57ed0-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 26 12:40:40 crc kubenswrapper[4881]: I0126 12:40:40.762843 4881 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/810e7137-f09f-4050-bb0d-b15c23c57ed0-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 26 12:40:40 crc kubenswrapper[4881]: I0126 12:40:40.762862 4881 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/810e7137-f09f-4050-bb0d-b15c23c57ed0-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 26 12:40:40 crc kubenswrapper[4881]: I0126 12:40:40.762882 4881 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/810e7137-f09f-4050-bb0d-b15c23c57ed0-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 26 12:40:40 crc kubenswrapper[4881]: I0126 12:40:40.762905 4881 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/810e7137-f09f-4050-bb0d-b15c23c57ed0-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 26 12:40:40 crc kubenswrapper[4881]: I0126 12:40:40.762921 4881 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/810e7137-f09f-4050-bb0d-b15c23c57ed0-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 26 12:40:40 crc kubenswrapper[4881]: I0126 12:40:40.762937 4881 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/810e7137-f09f-4050-bb0d-b15c23c57ed0-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 12:40:40 crc kubenswrapper[4881]: I0126 12:40:40.762959 4881 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/810e7137-f09f-4050-bb0d-b15c23c57ed0-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 26 12:40:40 crc kubenswrapper[4881]: I0126 12:40:40.762977 4881 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/810e7137-f09f-4050-bb0d-b15c23c57ed0-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 26 12:40:40 crc kubenswrapper[4881]: I0126 12:40:40.762997 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhnx4\" (UniqueName: \"kubernetes.io/projected/810e7137-f09f-4050-bb0d-b15c23c57ed0-kube-api-access-jhnx4\") on node \"crc\" DevicePath \"\"" Jan 26 12:40:40 crc kubenswrapper[4881]: I0126 12:40:40.763017 4881 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/810e7137-f09f-4050-bb0d-b15c23c57ed0-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 26 12:40:40 crc kubenswrapper[4881]: I0126 12:40:40.763034 4881 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/810e7137-f09f-4050-bb0d-b15c23c57ed0-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 26 12:40:40 crc kubenswrapper[4881]: I0126 12:40:40.763049 4881 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/810e7137-f09f-4050-bb0d-b15c23c57ed0-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 26 12:40:40 crc kubenswrapper[4881]: I0126 12:40:40.763069 4881 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/810e7137-f09f-4050-bb0d-b15c23c57ed0-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 12:40:41 crc kubenswrapper[4881]: I0126 12:40:41.101289 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pvt2d" event={"ID":"eebbd99a-494e-4431-91b4-92272880b04b","Type":"ContainerStarted","Data":"70e31ac7b472c258d5ebb82fe4a2b7f3b99440009aca877d6e9ae94a14c92ca3"} Jan 26 12:40:41 crc kubenswrapper[4881]: I0126 12:40:41.102938 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-ncfp9" event={"ID":"810e7137-f09f-4050-bb0d-b15c23c57ed0","Type":"ContainerDied","Data":"6b7577ebd1770f16fb303ca492e97aeb2307c3ce11d333fa51d1a3f27f2a3604"} Jan 26 12:40:41 crc kubenswrapper[4881]: I0126 12:40:41.102978 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-ncfp9" Jan 26 12:40:41 crc kubenswrapper[4881]: I0126 12:40:41.103012 4881 scope.go:117] "RemoveContainer" containerID="cc08cf80492430a8bba56fe74d858c50bd26db39ba8a171a6d582ef8dfbb2a88" Jan 26 12:40:42 crc kubenswrapper[4881]: I0126 12:40:42.088805 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 12:40:47 crc kubenswrapper[4881]: I0126 12:40:47.528541 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-pvt2d" Jan 26 12:40:47 crc kubenswrapper[4881]: I0126 12:40:47.528812 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-pvt2d" Jan 26 12:40:47 crc kubenswrapper[4881]: I0126 12:40:47.585599 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-pvt2d" Jan 26 12:40:48 crc kubenswrapper[4881]: I0126 12:40:48.250748 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-pvt2d" Jan 26 12:40:48 crc kubenswrapper[4881]: I0126 12:40:48.871429 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 26 12:40:48 crc kubenswrapper[4881]: I0126 12:40:48.964010 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 26 12:40:49 crc kubenswrapper[4881]: I0126 12:40:49.274539 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 26 12:40:49 crc kubenswrapper[4881]: I0126 12:40:49.438173 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 26 12:40:49 crc kubenswrapper[4881]: I0126 12:40:49.557797 4881 
Jan 26 12:40:49 crc kubenswrapper[4881]: I0126 12:40:49.823349 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Jan 26 12:40:49 crc kubenswrapper[4881]: I0126 12:40:49.994660 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Jan 26 12:40:50 crc kubenswrapper[4881]: I0126 12:40:50.011142 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Jan 26 12:40:50 crc kubenswrapper[4881]: I0126 12:40:50.315931 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Jan 26 12:40:50 crc kubenswrapper[4881]: I0126 12:40:50.469844 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Jan 26 12:40:50 crc kubenswrapper[4881]: I0126 12:40:50.512724 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Jan 26 12:40:50 crc kubenswrapper[4881]: I0126 12:40:50.637196 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Jan 26 12:40:50 crc kubenswrapper[4881]: I0126 12:40:50.942049 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Jan 26 12:40:50 crc kubenswrapper[4881]: I0126 12:40:50.963402 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Jan 26 12:40:51 crc kubenswrapper[4881]: I0126 12:40:51.014028 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Jan 26 12:40:51 crc kubenswrapper[4881]: I0126 12:40:51.032663 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Jan 26 12:40:51 crc kubenswrapper[4881]: I0126 12:40:51.251345 4881 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Jan 26 12:40:51 crc kubenswrapper[4881]: I0126 12:40:51.313408 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Jan 26 12:40:51 crc kubenswrapper[4881]: I0126 12:40:51.387015 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Jan 26 12:40:51 crc kubenswrapper[4881]: I0126 12:40:51.394456 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Jan 26 12:40:51 crc kubenswrapper[4881]: I0126 12:40:51.480916 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Jan 26 12:40:51 crc kubenswrapper[4881]: I0126 12:40:51.562227 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Jan 26 12:40:51 crc kubenswrapper[4881]: I0126 12:40:51.606688 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Jan 26 12:40:51 crc kubenswrapper[4881]: I0126 12:40:51.628856 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p"
*v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 26 12:40:51 crc kubenswrapper[4881]: I0126 12:40:51.822356 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 26 12:40:51 crc kubenswrapper[4881]: I0126 12:40:51.845630 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 26 12:40:52 crc kubenswrapper[4881]: I0126 12:40:52.306296 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 26 12:40:52 crc kubenswrapper[4881]: I0126 12:40:52.329324 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 26 12:40:52 crc kubenswrapper[4881]: I0126 12:40:52.393207 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 26 12:40:52 crc kubenswrapper[4881]: I0126 12:40:52.408563 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 26 12:40:52 crc kubenswrapper[4881]: I0126 12:40:52.514736 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 26 12:40:52 crc kubenswrapper[4881]: I0126 12:40:52.518703 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 26 12:40:52 crc kubenswrapper[4881]: I0126 12:40:52.568664 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 26 12:40:52 crc kubenswrapper[4881]: I0126 12:40:52.590226 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 26 12:40:52 crc kubenswrapper[4881]: I0126 12:40:52.669688 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 26 12:40:52 crc kubenswrapper[4881]: I0126 12:40:52.737674 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 26 12:40:52 crc kubenswrapper[4881]: I0126 12:40:52.819205 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 26 12:40:52 crc kubenswrapper[4881]: I0126 12:40:52.890009 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 26 12:40:52 crc kubenswrapper[4881]: I0126 12:40:52.905719 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 26 12:40:52 crc kubenswrapper[4881]: I0126 12:40:52.922264 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 26 12:40:52 crc kubenswrapper[4881]: I0126 12:40:52.957679 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 26 12:40:53 crc kubenswrapper[4881]: I0126 12:40:53.025613 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 26 12:40:53 crc kubenswrapper[4881]: I0126 12:40:53.082456 4881 reflector.go:368] Caches populated for *v1.Secret 
from object-"openshift-ingress-operator"/"metrics-tls" Jan 26 12:40:53 crc kubenswrapper[4881]: I0126 12:40:53.387297 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 26 12:40:53 crc kubenswrapper[4881]: I0126 12:40:53.483258 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 26 12:40:53 crc kubenswrapper[4881]: I0126 12:40:53.510954 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 26 12:40:53 crc kubenswrapper[4881]: I0126 12:40:53.516428 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 26 12:40:53 crc kubenswrapper[4881]: I0126 12:40:53.539916 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 26 12:40:53 crc kubenswrapper[4881]: I0126 12:40:53.552953 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 26 12:40:53 crc kubenswrapper[4881]: I0126 12:40:53.586999 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 26 12:40:53 crc kubenswrapper[4881]: I0126 12:40:53.681871 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 26 12:40:53 crc kubenswrapper[4881]: I0126 12:40:53.703497 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 26 12:40:53 crc kubenswrapper[4881]: I0126 12:40:53.720864 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 26 12:40:53 crc kubenswrapper[4881]: I0126 12:40:53.799272 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 26 12:40:53 crc kubenswrapper[4881]: I0126 12:40:53.829818 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 26 12:40:53 crc kubenswrapper[4881]: I0126 12:40:53.843645 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 26 12:40:53 crc kubenswrapper[4881]: I0126 12:40:53.917013 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 26 12:40:53 crc kubenswrapper[4881]: I0126 12:40:53.939695 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 26 12:40:54 crc kubenswrapper[4881]: I0126 12:40:54.067021 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 26 12:40:54 crc kubenswrapper[4881]: I0126 12:40:54.200820 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 26 12:40:54 crc kubenswrapper[4881]: I0126 12:40:54.239915 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 26 12:40:54 crc kubenswrapper[4881]: I0126 12:40:54.256813 4881 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 26 12:40:54 crc kubenswrapper[4881]: I0126 12:40:54.358771 4881 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 26 12:40:54 crc kubenswrapper[4881]: I0126 12:40:54.359884 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 26 12:40:54 crc kubenswrapper[4881]: I0126 12:40:54.439406 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 26 12:40:54 crc kubenswrapper[4881]: I0126 12:40:54.456623 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 26 12:40:54 crc kubenswrapper[4881]: I0126 12:40:54.540283 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 26 12:40:54 crc kubenswrapper[4881]: I0126 12:40:54.571312 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 26 12:40:54 crc kubenswrapper[4881]: I0126 12:40:54.572806 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 26 12:40:54 crc kubenswrapper[4881]: I0126 12:40:54.580285 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 26 12:40:54 crc kubenswrapper[4881]: I0126 12:40:54.599054 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 26 12:40:54 crc kubenswrapper[4881]: I0126 12:40:54.662357 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 26 12:40:54 crc kubenswrapper[4881]: I0126 12:40:54.780328 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 26 12:40:54 crc kubenswrapper[4881]: I0126 12:40:54.800362 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 26 12:40:54 crc kubenswrapper[4881]: I0126 12:40:54.838430 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 26 12:40:54 crc kubenswrapper[4881]: I0126 12:40:54.871711 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 26 12:40:54 crc kubenswrapper[4881]: I0126 12:40:54.888080 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 26 12:40:54 crc kubenswrapper[4881]: I0126 12:40:54.906610 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 26 12:40:54 crc kubenswrapper[4881]: I0126 12:40:54.926011 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 26 12:40:54 crc kubenswrapper[4881]: I0126 12:40:54.926816 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 26 12:40:54 crc kubenswrapper[4881]: I0126 12:40:54.944921 4881 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 26 12:40:54 crc kubenswrapper[4881]: I0126 12:40:54.982626 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 26 12:40:55 crc kubenswrapper[4881]: I0126 12:40:55.016402 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 26 12:40:55 crc kubenswrapper[4881]: I0126 12:40:55.184125 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 26 12:40:55 crc kubenswrapper[4881]: I0126 12:40:55.218884 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 26 12:40:55 crc kubenswrapper[4881]: I0126 12:40:55.511143 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 26 12:40:55 crc kubenswrapper[4881]: I0126 12:40:55.549922 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 26 12:40:55 crc kubenswrapper[4881]: I0126 12:40:55.657194 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 26 12:40:55 crc kubenswrapper[4881]: I0126 12:40:55.710000 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 26 12:40:55 crc kubenswrapper[4881]: I0126 12:40:55.776006 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 26 12:40:55 crc kubenswrapper[4881]: I0126 12:40:55.782381 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 26 12:40:55 crc kubenswrapper[4881]: I0126 12:40:55.792367 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 26 12:40:55 crc kubenswrapper[4881]: I0126 12:40:55.825188 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 26 12:40:56 crc kubenswrapper[4881]: I0126 12:40:56.182614 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 26 12:40:56 crc kubenswrapper[4881]: I0126 12:40:56.282216 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 26 12:40:56 crc kubenswrapper[4881]: I0126 12:40:56.342345 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 26 12:40:56 crc kubenswrapper[4881]: I0126 12:40:56.521495 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 26 12:40:56 crc kubenswrapper[4881]: I0126 12:40:56.655261 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 26 12:40:56 crc kubenswrapper[4881]: I0126 12:40:56.669218 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 26 12:40:56 crc kubenswrapper[4881]: I0126 12:40:56.720830 4881 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 26 12:40:56 crc kubenswrapper[4881]: I0126 12:40:56.822300 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 26 12:40:56 crc kubenswrapper[4881]: I0126 12:40:56.964060 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 26 12:40:56 crc kubenswrapper[4881]: I0126 12:40:56.999821 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 26 12:40:57 crc kubenswrapper[4881]: I0126 12:40:57.010509 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 26 12:40:57 crc kubenswrapper[4881]: I0126 12:40:57.019259 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 26 12:40:57 crc kubenswrapper[4881]: I0126 12:40:57.105905 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 26 12:40:57 crc kubenswrapper[4881]: I0126 12:40:57.167953 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 26 12:40:57 crc kubenswrapper[4881]: I0126 12:40:57.171376 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 26 12:40:57 crc kubenswrapper[4881]: I0126 12:40:57.223380 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 26 12:40:57 crc kubenswrapper[4881]: I0126 12:40:57.256202 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 26 12:40:57 crc kubenswrapper[4881]: I0126 12:40:57.348124 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 26 12:40:57 crc kubenswrapper[4881]: I0126 12:40:57.384476 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 26 12:40:57 crc kubenswrapper[4881]: I0126 12:40:57.433610 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 26 12:40:57 crc kubenswrapper[4881]: I0126 12:40:57.445384 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 26 12:40:57 crc kubenswrapper[4881]: I0126 12:40:57.452755 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 26 12:40:57 crc kubenswrapper[4881]: I0126 12:40:57.460578 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 26 12:40:57 crc kubenswrapper[4881]: I0126 12:40:57.466682 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 26 12:40:57 crc kubenswrapper[4881]: I0126 12:40:57.495314 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 26 
Jan 26 12:40:57 crc kubenswrapper[4881]: I0126 12:40:57.495384 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Jan 26 12:40:57 crc kubenswrapper[4881]: I0126 12:40:57.520609 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Jan 26 12:40:57 crc kubenswrapper[4881]: I0126 12:40:57.581388 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Jan 26 12:40:57 crc kubenswrapper[4881]: I0126 12:40:57.595324 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Jan 26 12:40:57 crc kubenswrapper[4881]: I0126 12:40:57.627317 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Jan 26 12:40:57 crc kubenswrapper[4881]: I0126 12:40:57.707157 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Jan 26 12:40:57 crc kubenswrapper[4881]: I0126 12:40:57.838379 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Jan 26 12:40:57 crc kubenswrapper[4881]: I0126 12:40:57.855678 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Jan 26 12:40:57 crc kubenswrapper[4881]: I0126 12:40:57.882463 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Jan 26 12:40:57 crc kubenswrapper[4881]: I0126 12:40:57.959414 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Jan 26 12:40:58 crc kubenswrapper[4881]: I0126 12:40:58.025960 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Jan 26 12:40:58 crc kubenswrapper[4881]: I0126 12:40:58.031036 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Jan 26 12:40:58 crc kubenswrapper[4881]: I0126 12:40:58.183671 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Jan 26 12:40:58 crc kubenswrapper[4881]: I0126 12:40:58.485531 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Jan 26 12:40:58 crc kubenswrapper[4881]: I0126 12:40:58.503102 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Jan 26 12:40:58 crc kubenswrapper[4881]: I0126 12:40:58.503612 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Jan 26 12:40:58 crc kubenswrapper[4881]: I0126 12:40:58.511086 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Jan 26 12:40:58 crc kubenswrapper[4881]: I0126 12:40:58.561541 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Jan 26 12:40:58 crc kubenswrapper[4881]: I0126 12:40:58.588678 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 26 12:40:58 crc kubenswrapper[4881]: I0126 12:40:58.612577 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 26 12:40:58 crc kubenswrapper[4881]: I0126 12:40:58.637112 4881 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 26 12:40:58 crc kubenswrapper[4881]: I0126 12:40:58.640375 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 26 12:40:58 crc kubenswrapper[4881]: I0126 12:40:58.641933 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 26 12:40:58 crc kubenswrapper[4881]: I0126 12:40:58.677008 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 26 12:40:58 crc kubenswrapper[4881]: I0126 12:40:58.786481 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 26 12:40:58 crc kubenswrapper[4881]: I0126 12:40:58.805568 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 26 12:40:58 crc kubenswrapper[4881]: I0126 12:40:58.866048 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 26 12:40:58 crc kubenswrapper[4881]: I0126 12:40:58.897881 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 26 12:40:59 crc kubenswrapper[4881]: I0126 12:40:59.062165 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 26 12:40:59 crc kubenswrapper[4881]: I0126 12:40:59.130195 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 26 12:40:59 crc kubenswrapper[4881]: I0126 12:40:59.164279 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 26 12:40:59 crc kubenswrapper[4881]: I0126 12:40:59.228534 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 26 12:40:59 crc kubenswrapper[4881]: I0126 12:40:59.233386 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 26 12:40:59 crc kubenswrapper[4881]: I0126 12:40:59.257401 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 26 12:40:59 crc kubenswrapper[4881]: I0126 12:40:59.336209 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 26 12:40:59 crc kubenswrapper[4881]: I0126 12:40:59.347057 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 26 12:40:59 crc kubenswrapper[4881]: I0126 12:40:59.435382 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 26 12:40:59 crc kubenswrapper[4881]: I0126 12:40:59.458553 4881 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 26 12:40:59 crc kubenswrapper[4881]: I0126 12:40:59.586222 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 26 12:40:59 crc kubenswrapper[4881]: I0126 12:40:59.598307 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 26 12:40:59 crc kubenswrapper[4881]: I0126 12:40:59.671358 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 26 12:40:59 crc kubenswrapper[4881]: I0126 12:40:59.720967 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 26 12:40:59 crc kubenswrapper[4881]: I0126 12:40:59.759618 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 26 12:40:59 crc kubenswrapper[4881]: I0126 12:40:59.799478 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 26 12:40:59 crc kubenswrapper[4881]: I0126 12:40:59.880748 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 26 12:40:59 crc kubenswrapper[4881]: I0126 12:40:59.943328 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 26 12:41:00 crc kubenswrapper[4881]: I0126 12:41:00.011506 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 26 12:41:00 crc kubenswrapper[4881]: I0126 12:41:00.067215 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 26 12:41:00 crc kubenswrapper[4881]: I0126 12:41:00.188581 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 26 12:41:00 crc kubenswrapper[4881]: I0126 12:41:00.297064 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 26 12:41:00 crc kubenswrapper[4881]: I0126 12:41:00.312413 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 26 12:41:00 crc kubenswrapper[4881]: I0126 12:41:00.373142 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 26 12:41:00 crc kubenswrapper[4881]: I0126 12:41:00.394709 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 26 12:41:00 crc kubenswrapper[4881]: I0126 12:41:00.487135 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 26 12:41:00 crc kubenswrapper[4881]: I0126 12:41:00.550205 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 26 12:41:00 crc kubenswrapper[4881]: I0126 12:41:00.566540 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 26 12:41:00 crc kubenswrapper[4881]: I0126 12:41:00.638033 4881 
Jan 26 12:41:00 crc kubenswrapper[4881]: I0126 12:41:00.692962 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Jan 26 12:41:00 crc kubenswrapper[4881]: I0126 12:41:00.693285 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Jan 26 12:41:00 crc kubenswrapper[4881]: I0126 12:41:00.764699 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Jan 26 12:41:00 crc kubenswrapper[4881]: I0126 12:41:00.781936 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Jan 26 12:41:00 crc kubenswrapper[4881]: I0126 12:41:00.835170 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Jan 26 12:41:00 crc kubenswrapper[4881]: I0126 12:41:00.861038 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Jan 26 12:41:00 crc kubenswrapper[4881]: I0126 12:41:00.876080 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Jan 26 12:41:00 crc kubenswrapper[4881]: I0126 12:41:00.897055 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Jan 26 12:41:00 crc kubenswrapper[4881]: I0126 12:41:00.939628 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 26 12:41:00 crc kubenswrapper[4881]: I0126 12:41:00.999192 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Jan 26 12:41:01 crc kubenswrapper[4881]: I0126 12:41:01.024476 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Jan 26 12:41:01 crc kubenswrapper[4881]: I0126 12:41:01.137104 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Jan 26 12:41:01 crc kubenswrapper[4881]: I0126 12:41:01.286605 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Jan 26 12:41:01 crc kubenswrapper[4881]: I0126 12:41:01.312030 4881 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Jan 26 12:41:01 crc kubenswrapper[4881]: I0126 12:41:01.314379 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=45.314358637 podStartE2EDuration="45.314358637s" podCreationTimestamp="2026-01-26 12:40:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:40:39.055224003 +0000 UTC m=+311.534534029" watchObservedRunningTime="2026-01-26 12:41:01.314358637 +0000 UTC m=+333.793668683"
Jan 26 12:41:01 crc kubenswrapper[4881]: I0126 12:41:01.316198 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-pvt2d" podStartSLOduration=25.19012908 podStartE2EDuration="2m54.316186841s" podCreationTimestamp="2026-01-26 12:38:07 +0000 UTC" firstStartedPulling="2026-01-26 12:38:10.90743297 +0000 UTC m=+163.386743006" lastFinishedPulling="2026-01-26 12:40:40.033490731 +0000 UTC m=+312.512800767" observedRunningTime="2026-01-26 12:40:41.128472624 +0000 UTC m=+313.607782660" watchObservedRunningTime="2026-01-26 12:41:01.316186841 +0000 UTC m=+333.795496867"
podStartE2EDuration="2m54.316186841s" podCreationTimestamp="2026-01-26 12:38:07 +0000 UTC" firstStartedPulling="2026-01-26 12:38:10.90743297 +0000 UTC m=+163.386743006" lastFinishedPulling="2026-01-26 12:40:40.033490731 +0000 UTC m=+312.512800767" observedRunningTime="2026-01-26 12:40:41.128472624 +0000 UTC m=+313.607782660" watchObservedRunningTime="2026-01-26 12:41:01.316186841 +0000 UTC m=+333.795496867" Jan 26 12:41:01 crc kubenswrapper[4881]: I0126 12:41:01.317583 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-authentication/oauth-openshift-558db77b4-ncfp9"] Jan 26 12:41:01 crc kubenswrapper[4881]: I0126 12:41:01.317643 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-authentication/oauth-openshift-5db757fd5b-8bj6j"] Jan 26 12:41:01 crc kubenswrapper[4881]: E0126 12:41:01.317876 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="810e7137-f09f-4050-bb0d-b15c23c57ed0" containerName="oauth-openshift" Jan 26 12:41:01 crc kubenswrapper[4881]: I0126 12:41:01.317895 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="810e7137-f09f-4050-bb0d-b15c23c57ed0" containerName="oauth-openshift" Jan 26 12:41:01 crc kubenswrapper[4881]: E0126 12:41:01.317917 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c88d870f-ca6d-47d4-b7f3-3ef26315e3b8" containerName="installer" Jan 26 12:41:01 crc kubenswrapper[4881]: I0126 12:41:01.317926 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="c88d870f-ca6d-47d4-b7f3-3ef26315e3b8" containerName="installer" Jan 26 12:41:01 crc kubenswrapper[4881]: I0126 12:41:01.318121 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="810e7137-f09f-4050-bb0d-b15c23c57ed0" containerName="oauth-openshift" Jan 26 12:41:01 crc kubenswrapper[4881]: I0126 12:41:01.318152 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="c88d870f-ca6d-47d4-b7f3-3ef26315e3b8" containerName="installer" Jan 26 12:41:01 crc kubenswrapper[4881]: I0126 12:41:01.318111 4881 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a9aa1877-c239-4157-938d-e5c85ff3e76a" Jan 26 12:41:01 crc kubenswrapper[4881]: I0126 12:41:01.318338 4881 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a9aa1877-c239-4157-938d-e5c85ff3e76a" Jan 26 12:41:01 crc kubenswrapper[4881]: I0126 12:41:01.319017 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-5db757fd5b-8bj6j" Jan 26 12:41:01 crc kubenswrapper[4881]: I0126 12:41:01.322744 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 12:41:01 crc kubenswrapper[4881]: I0126 12:41:01.324553 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 26 12:41:01 crc kubenswrapper[4881]: I0126 12:41:01.325469 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 26 12:41:01 crc kubenswrapper[4881]: I0126 12:41:01.325802 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 26 12:41:01 crc kubenswrapper[4881]: I0126 12:41:01.325815 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 26 12:41:01 crc kubenswrapper[4881]: I0126 12:41:01.325948 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 26 12:41:01 crc kubenswrapper[4881]: I0126 12:41:01.326109 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 26 12:41:01 crc kubenswrapper[4881]: I0126 12:41:01.325998 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 26 12:41:01 crc kubenswrapper[4881]: I0126 12:41:01.326415 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 26 12:41:01 crc kubenswrapper[4881]: I0126 12:41:01.326579 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 26 12:41:01 crc kubenswrapper[4881]: I0126 12:41:01.327631 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 26 12:41:01 crc kubenswrapper[4881]: I0126 12:41:01.327830 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 26 12:41:01 crc kubenswrapper[4881]: I0126 12:41:01.328628 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 26 12:41:01 crc kubenswrapper[4881]: I0126 12:41:01.335772 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 26 12:41:01 crc kubenswrapper[4881]: I0126 12:41:01.339347 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 26 12:41:01 crc kubenswrapper[4881]: I0126 12:41:01.344729 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 26 12:41:01 crc kubenswrapper[4881]: I0126 12:41:01.346909 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 26 12:41:01 crc kubenswrapper[4881]: I0126 12:41:01.347499 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 26 12:41:01 crc kubenswrapper[4881]: I0126 12:41:01.377481 4881 
Jan 26 12:41:01 crc kubenswrapper[4881]: I0126 12:41:01.380593 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Jan 26 12:41:01 crc kubenswrapper[4881]: I0126 12:41:01.465491 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/58f4dede-962e-4db4-9548-05c36728f2f4-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5db757fd5b-8bj6j\" (UID: \"58f4dede-962e-4db4-9548-05c36728f2f4\") " pod="openshift-authentication/oauth-openshift-5db757fd5b-8bj6j"
Jan 26 12:41:01 crc kubenswrapper[4881]: I0126 12:41:01.465775 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/58f4dede-962e-4db4-9548-05c36728f2f4-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5db757fd5b-8bj6j\" (UID: \"58f4dede-962e-4db4-9548-05c36728f2f4\") " pod="openshift-authentication/oauth-openshift-5db757fd5b-8bj6j"
Jan 26 12:41:01 crc kubenswrapper[4881]: I0126 12:41:01.465806 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/58f4dede-962e-4db4-9548-05c36728f2f4-v4-0-config-system-router-certs\") pod \"oauth-openshift-5db757fd5b-8bj6j\" (UID: \"58f4dede-962e-4db4-9548-05c36728f2f4\") " pod="openshift-authentication/oauth-openshift-5db757fd5b-8bj6j"
Jan 26 12:41:01 crc kubenswrapper[4881]: I0126 12:41:01.465828 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/58f4dede-962e-4db4-9548-05c36728f2f4-audit-policies\") pod \"oauth-openshift-5db757fd5b-8bj6j\" (UID: \"58f4dede-962e-4db4-9548-05c36728f2f4\") " pod="openshift-authentication/oauth-openshift-5db757fd5b-8bj6j"
Jan 26 12:41:01 crc kubenswrapper[4881]: I0126 12:41:01.465967 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/58f4dede-962e-4db4-9548-05c36728f2f4-v4-0-config-user-template-error\") pod \"oauth-openshift-5db757fd5b-8bj6j\" (UID: \"58f4dede-962e-4db4-9548-05c36728f2f4\") " pod="openshift-authentication/oauth-openshift-5db757fd5b-8bj6j"
Jan 26 12:41:01 crc kubenswrapper[4881]: I0126 12:41:01.466075 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/58f4dede-962e-4db4-9548-05c36728f2f4-v4-0-config-user-template-login\") pod \"oauth-openshift-5db757fd5b-8bj6j\" (UID: \"58f4dede-962e-4db4-9548-05c36728f2f4\") " pod="openshift-authentication/oauth-openshift-5db757fd5b-8bj6j"
Jan 26 12:41:01 crc kubenswrapper[4881]: I0126 12:41:01.466146 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/58f4dede-962e-4db4-9548-05c36728f2f4-v4-0-config-system-service-ca\") pod \"oauth-openshift-5db757fd5b-8bj6j\" (UID: \"58f4dede-962e-4db4-9548-05c36728f2f4\") " pod="openshift-authentication/oauth-openshift-5db757fd5b-8bj6j"
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/58f4dede-962e-4db4-9548-05c36728f2f4-v4-0-config-system-service-ca\") pod \"oauth-openshift-5db757fd5b-8bj6j\" (UID: \"58f4dede-962e-4db4-9548-05c36728f2f4\") " pod="openshift-authentication/oauth-openshift-5db757fd5b-8bj6j" Jan 26 12:41:01 crc kubenswrapper[4881]: I0126 12:41:01.466218 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/58f4dede-962e-4db4-9548-05c36728f2f4-v4-0-config-system-session\") pod \"oauth-openshift-5db757fd5b-8bj6j\" (UID: \"58f4dede-962e-4db4-9548-05c36728f2f4\") " pod="openshift-authentication/oauth-openshift-5db757fd5b-8bj6j" Jan 26 12:41:01 crc kubenswrapper[4881]: I0126 12:41:01.466301 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/58f4dede-962e-4db4-9548-05c36728f2f4-audit-dir\") pod \"oauth-openshift-5db757fd5b-8bj6j\" (UID: \"58f4dede-962e-4db4-9548-05c36728f2f4\") " pod="openshift-authentication/oauth-openshift-5db757fd5b-8bj6j" Jan 26 12:41:01 crc kubenswrapper[4881]: I0126 12:41:01.466342 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/58f4dede-962e-4db4-9548-05c36728f2f4-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5db757fd5b-8bj6j\" (UID: \"58f4dede-962e-4db4-9548-05c36728f2f4\") " pod="openshift-authentication/oauth-openshift-5db757fd5b-8bj6j" Jan 26 12:41:01 crc kubenswrapper[4881]: I0126 12:41:01.466371 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/58f4dede-962e-4db4-9548-05c36728f2f4-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5db757fd5b-8bj6j\" (UID: \"58f4dede-962e-4db4-9548-05c36728f2f4\") " pod="openshift-authentication/oauth-openshift-5db757fd5b-8bj6j" Jan 26 12:41:01 crc kubenswrapper[4881]: I0126 12:41:01.466403 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/58f4dede-962e-4db4-9548-05c36728f2f4-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5db757fd5b-8bj6j\" (UID: \"58f4dede-962e-4db4-9548-05c36728f2f4\") " pod="openshift-authentication/oauth-openshift-5db757fd5b-8bj6j" Jan 26 12:41:01 crc kubenswrapper[4881]: I0126 12:41:01.466443 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lcf7x\" (UniqueName: \"kubernetes.io/projected/58f4dede-962e-4db4-9548-05c36728f2f4-kube-api-access-lcf7x\") pod \"oauth-openshift-5db757fd5b-8bj6j\" (UID: \"58f4dede-962e-4db4-9548-05c36728f2f4\") " pod="openshift-authentication/oauth-openshift-5db757fd5b-8bj6j" Jan 26 12:41:01 crc kubenswrapper[4881]: I0126 12:41:01.466478 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/58f4dede-962e-4db4-9548-05c36728f2f4-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5db757fd5b-8bj6j\" (UID: \"58f4dede-962e-4db4-9548-05c36728f2f4\") " 
pod="openshift-authentication/oauth-openshift-5db757fd5b-8bj6j" Jan 26 12:41:01 crc kubenswrapper[4881]: I0126 12:41:01.490473 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 26 12:41:01 crc kubenswrapper[4881]: I0126 12:41:01.567183 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lcf7x\" (UniqueName: \"kubernetes.io/projected/58f4dede-962e-4db4-9548-05c36728f2f4-kube-api-access-lcf7x\") pod \"oauth-openshift-5db757fd5b-8bj6j\" (UID: \"58f4dede-962e-4db4-9548-05c36728f2f4\") " pod="openshift-authentication/oauth-openshift-5db757fd5b-8bj6j" Jan 26 12:41:01 crc kubenswrapper[4881]: I0126 12:41:01.567241 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/58f4dede-962e-4db4-9548-05c36728f2f4-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5db757fd5b-8bj6j\" (UID: \"58f4dede-962e-4db4-9548-05c36728f2f4\") " pod="openshift-authentication/oauth-openshift-5db757fd5b-8bj6j" Jan 26 12:41:01 crc kubenswrapper[4881]: I0126 12:41:01.567283 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/58f4dede-962e-4db4-9548-05c36728f2f4-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5db757fd5b-8bj6j\" (UID: \"58f4dede-962e-4db4-9548-05c36728f2f4\") " pod="openshift-authentication/oauth-openshift-5db757fd5b-8bj6j" Jan 26 12:41:01 crc kubenswrapper[4881]: I0126 12:41:01.567314 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/58f4dede-962e-4db4-9548-05c36728f2f4-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5db757fd5b-8bj6j\" (UID: \"58f4dede-962e-4db4-9548-05c36728f2f4\") " pod="openshift-authentication/oauth-openshift-5db757fd5b-8bj6j" Jan 26 12:41:01 crc kubenswrapper[4881]: I0126 12:41:01.567341 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/58f4dede-962e-4db4-9548-05c36728f2f4-v4-0-config-system-router-certs\") pod \"oauth-openshift-5db757fd5b-8bj6j\" (UID: \"58f4dede-962e-4db4-9548-05c36728f2f4\") " pod="openshift-authentication/oauth-openshift-5db757fd5b-8bj6j" Jan 26 12:41:01 crc kubenswrapper[4881]: I0126 12:41:01.567369 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/58f4dede-962e-4db4-9548-05c36728f2f4-audit-policies\") pod \"oauth-openshift-5db757fd5b-8bj6j\" (UID: \"58f4dede-962e-4db4-9548-05c36728f2f4\") " pod="openshift-authentication/oauth-openshift-5db757fd5b-8bj6j" Jan 26 12:41:01 crc kubenswrapper[4881]: I0126 12:41:01.567392 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/58f4dede-962e-4db4-9548-05c36728f2f4-v4-0-config-user-template-error\") pod \"oauth-openshift-5db757fd5b-8bj6j\" (UID: \"58f4dede-962e-4db4-9548-05c36728f2f4\") " pod="openshift-authentication/oauth-openshift-5db757fd5b-8bj6j" Jan 26 12:41:01 crc kubenswrapper[4881]: I0126 12:41:01.567442 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/58f4dede-962e-4db4-9548-05c36728f2f4-v4-0-config-user-template-login\") pod \"oauth-openshift-5db757fd5b-8bj6j\" (UID: \"58f4dede-962e-4db4-9548-05c36728f2f4\") " pod="openshift-authentication/oauth-openshift-5db757fd5b-8bj6j" Jan 26 12:41:01 crc kubenswrapper[4881]: I0126 12:41:01.567473 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/58f4dede-962e-4db4-9548-05c36728f2f4-v4-0-config-system-service-ca\") pod \"oauth-openshift-5db757fd5b-8bj6j\" (UID: \"58f4dede-962e-4db4-9548-05c36728f2f4\") " pod="openshift-authentication/oauth-openshift-5db757fd5b-8bj6j" Jan 26 12:41:01 crc kubenswrapper[4881]: I0126 12:41:01.567504 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/58f4dede-962e-4db4-9548-05c36728f2f4-v4-0-config-system-session\") pod \"oauth-openshift-5db757fd5b-8bj6j\" (UID: \"58f4dede-962e-4db4-9548-05c36728f2f4\") " pod="openshift-authentication/oauth-openshift-5db757fd5b-8bj6j" Jan 26 12:41:01 crc kubenswrapper[4881]: I0126 12:41:01.567550 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/58f4dede-962e-4db4-9548-05c36728f2f4-audit-dir\") pod \"oauth-openshift-5db757fd5b-8bj6j\" (UID: \"58f4dede-962e-4db4-9548-05c36728f2f4\") " pod="openshift-authentication/oauth-openshift-5db757fd5b-8bj6j" Jan 26 12:41:01 crc kubenswrapper[4881]: I0126 12:41:01.567581 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/58f4dede-962e-4db4-9548-05c36728f2f4-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5db757fd5b-8bj6j\" (UID: \"58f4dede-962e-4db4-9548-05c36728f2f4\") " pod="openshift-authentication/oauth-openshift-5db757fd5b-8bj6j" Jan 26 12:41:01 crc kubenswrapper[4881]: I0126 12:41:01.567605 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/58f4dede-962e-4db4-9548-05c36728f2f4-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5db757fd5b-8bj6j\" (UID: \"58f4dede-962e-4db4-9548-05c36728f2f4\") " pod="openshift-authentication/oauth-openshift-5db757fd5b-8bj6j" Jan 26 12:41:01 crc kubenswrapper[4881]: I0126 12:41:01.567628 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/58f4dede-962e-4db4-9548-05c36728f2f4-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5db757fd5b-8bj6j\" (UID: \"58f4dede-962e-4db4-9548-05c36728f2f4\") " pod="openshift-authentication/oauth-openshift-5db757fd5b-8bj6j" Jan 26 12:41:01 crc kubenswrapper[4881]: I0126 12:41:01.569381 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/58f4dede-962e-4db4-9548-05c36728f2f4-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5db757fd5b-8bj6j\" (UID: \"58f4dede-962e-4db4-9548-05c36728f2f4\") " pod="openshift-authentication/oauth-openshift-5db757fd5b-8bj6j" Jan 26 12:41:01 crc kubenswrapper[4881]: I0126 12:41:01.569511 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/58f4dede-962e-4db4-9548-05c36728f2f4-audit-dir\") pod \"oauth-openshift-5db757fd5b-8bj6j\" (UID: \"58f4dede-962e-4db4-9548-05c36728f2f4\") " pod="openshift-authentication/oauth-openshift-5db757fd5b-8bj6j" Jan 26 12:41:01 crc kubenswrapper[4881]: I0126 12:41:01.570166 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/58f4dede-962e-4db4-9548-05c36728f2f4-v4-0-config-system-service-ca\") pod \"oauth-openshift-5db757fd5b-8bj6j\" (UID: \"58f4dede-962e-4db4-9548-05c36728f2f4\") " pod="openshift-authentication/oauth-openshift-5db757fd5b-8bj6j" Jan 26 12:41:01 crc kubenswrapper[4881]: I0126 12:41:01.570555 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/58f4dede-962e-4db4-9548-05c36728f2f4-audit-policies\") pod \"oauth-openshift-5db757fd5b-8bj6j\" (UID: \"58f4dede-962e-4db4-9548-05c36728f2f4\") " pod="openshift-authentication/oauth-openshift-5db757fd5b-8bj6j" Jan 26 12:41:01 crc kubenswrapper[4881]: I0126 12:41:01.571153 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/58f4dede-962e-4db4-9548-05c36728f2f4-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5db757fd5b-8bj6j\" (UID: \"58f4dede-962e-4db4-9548-05c36728f2f4\") " pod="openshift-authentication/oauth-openshift-5db757fd5b-8bj6j" Jan 26 12:41:01 crc kubenswrapper[4881]: I0126 12:41:01.573892 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/58f4dede-962e-4db4-9548-05c36728f2f4-v4-0-config-user-template-login\") pod \"oauth-openshift-5db757fd5b-8bj6j\" (UID: \"58f4dede-962e-4db4-9548-05c36728f2f4\") " pod="openshift-authentication/oauth-openshift-5db757fd5b-8bj6j" Jan 26 12:41:01 crc kubenswrapper[4881]: I0126 12:41:01.574164 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/58f4dede-962e-4db4-9548-05c36728f2f4-v4-0-config-system-router-certs\") pod \"oauth-openshift-5db757fd5b-8bj6j\" (UID: \"58f4dede-962e-4db4-9548-05c36728f2f4\") " pod="openshift-authentication/oauth-openshift-5db757fd5b-8bj6j" Jan 26 12:41:01 crc kubenswrapper[4881]: I0126 12:41:01.574206 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/58f4dede-962e-4db4-9548-05c36728f2f4-v4-0-config-user-template-error\") pod \"oauth-openshift-5db757fd5b-8bj6j\" (UID: \"58f4dede-962e-4db4-9548-05c36728f2f4\") " pod="openshift-authentication/oauth-openshift-5db757fd5b-8bj6j" Jan 26 12:41:01 crc kubenswrapper[4881]: I0126 12:41:01.574624 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/58f4dede-962e-4db4-9548-05c36728f2f4-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5db757fd5b-8bj6j\" (UID: \"58f4dede-962e-4db4-9548-05c36728f2f4\") " pod="openshift-authentication/oauth-openshift-5db757fd5b-8bj6j" Jan 26 12:41:01 crc kubenswrapper[4881]: I0126 12:41:01.575550 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/58f4dede-962e-4db4-9548-05c36728f2f4-v4-0-config-user-idp-0-file-data\") pod 
\"oauth-openshift-5db757fd5b-8bj6j\" (UID: \"58f4dede-962e-4db4-9548-05c36728f2f4\") " pod="openshift-authentication/oauth-openshift-5db757fd5b-8bj6j" Jan 26 12:41:01 crc kubenswrapper[4881]: I0126 12:41:01.576041 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/58f4dede-962e-4db4-9548-05c36728f2f4-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5db757fd5b-8bj6j\" (UID: \"58f4dede-962e-4db4-9548-05c36728f2f4\") " pod="openshift-authentication/oauth-openshift-5db757fd5b-8bj6j" Jan 26 12:41:01 crc kubenswrapper[4881]: I0126 12:41:01.576355 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/58f4dede-962e-4db4-9548-05c36728f2f4-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5db757fd5b-8bj6j\" (UID: \"58f4dede-962e-4db4-9548-05c36728f2f4\") " pod="openshift-authentication/oauth-openshift-5db757fd5b-8bj6j" Jan 26 12:41:01 crc kubenswrapper[4881]: I0126 12:41:01.577348 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/58f4dede-962e-4db4-9548-05c36728f2f4-v4-0-config-system-session\") pod \"oauth-openshift-5db757fd5b-8bj6j\" (UID: \"58f4dede-962e-4db4-9548-05c36728f2f4\") " pod="openshift-authentication/oauth-openshift-5db757fd5b-8bj6j" Jan 26 12:41:01 crc kubenswrapper[4881]: I0126 12:41:01.597901 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lcf7x\" (UniqueName: \"kubernetes.io/projected/58f4dede-962e-4db4-9548-05c36728f2f4-kube-api-access-lcf7x\") pod \"oauth-openshift-5db757fd5b-8bj6j\" (UID: \"58f4dede-962e-4db4-9548-05c36728f2f4\") " pod="openshift-authentication/oauth-openshift-5db757fd5b-8bj6j" Jan 26 12:41:01 crc kubenswrapper[4881]: I0126 12:41:01.629905 4881 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 26 12:41:01 crc kubenswrapper[4881]: I0126 12:41:01.630409 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://6b78f2df9368a52e6a52297d15b445b36cdbc65bb00dac055b19abc9ded506c7" gracePeriod=5 Jan 26 12:41:01 crc kubenswrapper[4881]: I0126 12:41:01.644761 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-5db757fd5b-8bj6j" Jan 26 12:41:02 crc kubenswrapper[4881]: I0126 12:41:02.053306 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 26 12:41:02 crc kubenswrapper[4881]: I0126 12:41:02.090142 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="810e7137-f09f-4050-bb0d-b15c23c57ed0" path="/var/lib/kubelet/pods/810e7137-f09f-4050-bb0d-b15c23c57ed0/volumes" Jan 26 12:41:02 crc kubenswrapper[4881]: I0126 12:41:02.108987 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-5db757fd5b-8bj6j"] Jan 26 12:41:02 crc kubenswrapper[4881]: W0126 12:41:02.113726 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod58f4dede_962e_4db4_9548_05c36728f2f4.slice/crio-c4adab774eb529e54a6784bd5700ef472270a43229da309bd4c5a73419de4c9f WatchSource:0}: Error finding container c4adab774eb529e54a6784bd5700ef472270a43229da309bd4c5a73419de4c9f: Status 404 returned error can't find the container with id c4adab774eb529e54a6784bd5700ef472270a43229da309bd4c5a73419de4c9f Jan 26 12:41:02 crc kubenswrapper[4881]: I0126 12:41:02.148314 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 26 12:41:02 crc kubenswrapper[4881]: I0126 12:41:02.182639 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 26 12:41:02 crc kubenswrapper[4881]: I0126 12:41:02.229396 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 26 12:41:02 crc kubenswrapper[4881]: I0126 12:41:02.281342 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5db757fd5b-8bj6j" event={"ID":"58f4dede-962e-4db4-9548-05c36728f2f4","Type":"ContainerStarted","Data":"c4adab774eb529e54a6784bd5700ef472270a43229da309bd4c5a73419de4c9f"} Jan 26 12:41:02 crc kubenswrapper[4881]: I0126 12:41:02.459832 4881 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 26 12:41:02 crc kubenswrapper[4881]: I0126 12:41:02.496050 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 26 12:41:02 crc kubenswrapper[4881]: I0126 12:41:02.508509 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 26 12:41:02 crc kubenswrapper[4881]: I0126 12:41:02.523499 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 26 12:41:02 crc kubenswrapper[4881]: I0126 12:41:02.546143 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 26 12:41:02 crc kubenswrapper[4881]: I0126 12:41:02.576334 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 26 12:41:02 crc kubenswrapper[4881]: I0126 12:41:02.600667 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 26 12:41:02 crc kubenswrapper[4881]: I0126 
12:41:02.704442 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 26 12:41:02 crc kubenswrapper[4881]: I0126 12:41:02.735884 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 26 12:41:02 crc kubenswrapper[4881]: I0126 12:41:02.750642 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 26 12:41:02 crc kubenswrapper[4881]: I0126 12:41:02.761896 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 26 12:41:03 crc kubenswrapper[4881]: I0126 12:41:03.008167 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 26 12:41:03 crc kubenswrapper[4881]: I0126 12:41:03.008221 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 26 12:41:03 crc kubenswrapper[4881]: I0126 12:41:03.012319 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 26 12:41:03 crc kubenswrapper[4881]: I0126 12:41:03.193502 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 26 12:41:03 crc kubenswrapper[4881]: I0126 12:41:03.264910 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 26 12:41:03 crc kubenswrapper[4881]: I0126 12:41:03.361431 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 26 12:41:03 crc kubenswrapper[4881]: I0126 12:41:03.546922 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 26 12:41:03 crc kubenswrapper[4881]: I0126 12:41:03.547094 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 26 12:41:03 crc kubenswrapper[4881]: I0126 12:41:03.548020 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 26 12:41:03 crc kubenswrapper[4881]: I0126 12:41:03.589577 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 26 12:41:03 crc kubenswrapper[4881]: I0126 12:41:03.605895 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 26 12:41:03 crc kubenswrapper[4881]: I0126 12:41:03.624633 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 26 12:41:03 crc kubenswrapper[4881]: I0126 12:41:03.668912 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 26 12:41:03 crc kubenswrapper[4881]: I0126 12:41:03.705908 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 26 12:41:03 crc kubenswrapper[4881]: I0126 12:41:03.859775 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 26 12:41:03 crc kubenswrapper[4881]: I0126 12:41:03.860462 4881 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 26 12:41:03 crc kubenswrapper[4881]: I0126 12:41:03.975779 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 26 12:41:04 crc kubenswrapper[4881]: I0126 12:41:04.105556 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 26 12:41:04 crc kubenswrapper[4881]: I0126 12:41:04.139138 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 26 12:41:04 crc kubenswrapper[4881]: I0126 12:41:04.186067 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 26 12:41:04 crc kubenswrapper[4881]: I0126 12:41:04.195710 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 26 12:41:04 crc kubenswrapper[4881]: I0126 12:41:04.292583 4881 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 26 12:41:04 crc kubenswrapper[4881]: I0126 12:41:04.386001 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 26 12:41:04 crc kubenswrapper[4881]: I0126 12:41:04.746982 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 26 12:41:04 crc kubenswrapper[4881]: I0126 12:41:04.824788 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 26 12:41:05 crc kubenswrapper[4881]: I0126 12:41:05.300364 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5db757fd5b-8bj6j" event={"ID":"58f4dede-962e-4db4-9548-05c36728f2f4","Type":"ContainerStarted","Data":"69c029003097f1f463d69b4b3d2e2401d5f4f195fd420e0f3143f26799c81eb2"} Jan 26 12:41:05 crc kubenswrapper[4881]: I0126 12:41:05.478216 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 26 12:41:05 crc kubenswrapper[4881]: I0126 12:41:05.775801 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 26 12:41:06 crc kubenswrapper[4881]: I0126 12:41:06.307220 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-5db757fd5b-8bj6j" Jan 26 12:41:06 crc kubenswrapper[4881]: I0126 12:41:06.316067 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-5db757fd5b-8bj6j" Jan 26 12:41:06 crc kubenswrapper[4881]: I0126 12:41:06.368741 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-5db757fd5b-8bj6j" podStartSLOduration=52.368722049 podStartE2EDuration="52.368722049s" podCreationTimestamp="2026-01-26 12:40:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:41:06.340987174 +0000 UTC m=+338.820297220" watchObservedRunningTime="2026-01-26 12:41:06.368722049 +0000 UTC m=+338.848032085" Jan 26 12:41:06 crc kubenswrapper[4881]: I0126 12:41:06.441442 4881 reflector.go:368] Caches populated 
for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 26 12:41:07 crc kubenswrapper[4881]: I0126 12:41:07.314001 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 26 12:41:07 crc kubenswrapper[4881]: I0126 12:41:07.314609 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 12:41:07 crc kubenswrapper[4881]: I0126 12:41:07.314317 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 26 12:41:07 crc kubenswrapper[4881]: I0126 12:41:07.314793 4881 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="6b78f2df9368a52e6a52297d15b445b36cdbc65bb00dac055b19abc9ded506c7" exitCode=137 Jan 26 12:41:07 crc kubenswrapper[4881]: I0126 12:41:07.315118 4881 scope.go:117] "RemoveContainer" containerID="6b78f2df9368a52e6a52297d15b445b36cdbc65bb00dac055b19abc9ded506c7" Jan 26 12:41:07 crc kubenswrapper[4881]: I0126 12:41:07.331854 4881 scope.go:117] "RemoveContainer" containerID="6b78f2df9368a52e6a52297d15b445b36cdbc65bb00dac055b19abc9ded506c7" Jan 26 12:41:07 crc kubenswrapper[4881]: E0126 12:41:07.332404 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b78f2df9368a52e6a52297d15b445b36cdbc65bb00dac055b19abc9ded506c7\": container with ID starting with 6b78f2df9368a52e6a52297d15b445b36cdbc65bb00dac055b19abc9ded506c7 not found: ID does not exist" containerID="6b78f2df9368a52e6a52297d15b445b36cdbc65bb00dac055b19abc9ded506c7" Jan 26 12:41:07 crc kubenswrapper[4881]: I0126 12:41:07.332440 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b78f2df9368a52e6a52297d15b445b36cdbc65bb00dac055b19abc9ded506c7"} err="failed to get container status \"6b78f2df9368a52e6a52297d15b445b36cdbc65bb00dac055b19abc9ded506c7\": rpc error: code = NotFound desc = could not find container \"6b78f2df9368a52e6a52297d15b445b36cdbc65bb00dac055b19abc9ded506c7\": container with ID starting with 6b78f2df9368a52e6a52297d15b445b36cdbc65bb00dac055b19abc9ded506c7 not found: ID does not exist" Jan 26 12:41:07 crc kubenswrapper[4881]: I0126 12:41:07.476176 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 26 12:41:07 crc kubenswrapper[4881]: I0126 12:41:07.476731 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 26 12:41:07 crc kubenswrapper[4881]: I0126 12:41:07.476853 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 26 12:41:07 crc kubenswrapper[4881]: I0126 12:41:07.476919 4881 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 12:41:07 crc kubenswrapper[4881]: I0126 12:41:07.476981 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 12:41:07 crc kubenswrapper[4881]: I0126 12:41:07.477180 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 26 12:41:07 crc kubenswrapper[4881]: I0126 12:41:07.477241 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 12:41:07 crc kubenswrapper[4881]: I0126 12:41:07.477495 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 12:41:07 crc kubenswrapper[4881]: I0126 12:41:07.477437 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 26 12:41:07 crc kubenswrapper[4881]: I0126 12:41:07.478561 4881 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 26 12:41:07 crc kubenswrapper[4881]: I0126 12:41:07.479577 4881 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 26 12:41:07 crc kubenswrapper[4881]: I0126 12:41:07.480664 4881 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 26 12:41:07 crc kubenswrapper[4881]: I0126 12:41:07.480762 4881 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 26 12:41:07 crc kubenswrapper[4881]: I0126 12:41:07.488328 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 12:41:07 crc kubenswrapper[4881]: I0126 12:41:07.582595 4881 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 26 12:41:08 crc kubenswrapper[4881]: I0126 12:41:08.091035 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Jan 26 12:41:08 crc kubenswrapper[4881]: I0126 12:41:08.091328 4881 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Jan 26 12:41:08 crc kubenswrapper[4881]: I0126 12:41:08.101840 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 26 12:41:08 crc kubenswrapper[4881]: I0126 12:41:08.101897 4881 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="1bf8d706-37a1-4aba-8753-216f5817eacb" Jan 26 12:41:08 crc kubenswrapper[4881]: I0126 12:41:08.105857 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 26 12:41:08 crc kubenswrapper[4881]: I0126 12:41:08.105878 4881 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="1bf8d706-37a1-4aba-8753-216f5817eacb" Jan 26 12:41:08 crc kubenswrapper[4881]: I0126 12:41:08.334492 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 12:41:14 crc kubenswrapper[4881]: I0126 12:41:14.832606 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 26 12:41:25 crc kubenswrapper[4881]: I0126 12:41:25.275887 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pvt2d"] Jan 26 12:41:25 crc kubenswrapper[4881]: I0126 12:41:25.276772 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-pvt2d" podUID="eebbd99a-494e-4431-91b4-92272880b04b" containerName="registry-server" containerID="cri-o://70e31ac7b472c258d5ebb82fe4a2b7f3b99440009aca877d6e9ae94a14c92ca3" gracePeriod=2 Jan 26 12:41:26 crc kubenswrapper[4881]: I0126 12:41:26.449934 4881 generic.go:334] "Generic (PLEG): container finished" podID="eebbd99a-494e-4431-91b4-92272880b04b" containerID="70e31ac7b472c258d5ebb82fe4a2b7f3b99440009aca877d6e9ae94a14c92ca3" exitCode=0 Jan 26 12:41:26 crc kubenswrapper[4881]: I0126 12:41:26.450149 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pvt2d" event={"ID":"eebbd99a-494e-4431-91b4-92272880b04b","Type":"ContainerDied","Data":"70e31ac7b472c258d5ebb82fe4a2b7f3b99440009aca877d6e9ae94a14c92ca3"} Jan 26 12:41:27 crc kubenswrapper[4881]: I0126 12:41:27.173419 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pvt2d" Jan 26 12:41:27 crc kubenswrapper[4881]: I0126 12:41:27.243029 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eebbd99a-494e-4431-91b4-92272880b04b-catalog-content\") pod \"eebbd99a-494e-4431-91b4-92272880b04b\" (UID: \"eebbd99a-494e-4431-91b4-92272880b04b\") " Jan 26 12:41:27 crc kubenswrapper[4881]: I0126 12:41:27.243102 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eebbd99a-494e-4431-91b4-92272880b04b-utilities\") pod \"eebbd99a-494e-4431-91b4-92272880b04b\" (UID: \"eebbd99a-494e-4431-91b4-92272880b04b\") " Jan 26 12:41:27 crc kubenswrapper[4881]: I0126 12:41:27.243126 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-76dzm\" (UniqueName: \"kubernetes.io/projected/eebbd99a-494e-4431-91b4-92272880b04b-kube-api-access-76dzm\") pod \"eebbd99a-494e-4431-91b4-92272880b04b\" (UID: \"eebbd99a-494e-4431-91b4-92272880b04b\") " Jan 26 12:41:27 crc kubenswrapper[4881]: I0126 12:41:27.244774 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eebbd99a-494e-4431-91b4-92272880b04b-utilities" (OuterVolumeSpecName: "utilities") pod "eebbd99a-494e-4431-91b4-92272880b04b" (UID: "eebbd99a-494e-4431-91b4-92272880b04b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 12:41:27 crc kubenswrapper[4881]: I0126 12:41:27.251536 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eebbd99a-494e-4431-91b4-92272880b04b-kube-api-access-76dzm" (OuterVolumeSpecName: "kube-api-access-76dzm") pod "eebbd99a-494e-4431-91b4-92272880b04b" (UID: "eebbd99a-494e-4431-91b4-92272880b04b"). InnerVolumeSpecName "kube-api-access-76dzm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:41:27 crc kubenswrapper[4881]: I0126 12:41:27.344484 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eebbd99a-494e-4431-91b4-92272880b04b-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 12:41:27 crc kubenswrapper[4881]: I0126 12:41:27.344533 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-76dzm\" (UniqueName: \"kubernetes.io/projected/eebbd99a-494e-4431-91b4-92272880b04b-kube-api-access-76dzm\") on node \"crc\" DevicePath \"\"" Jan 26 12:41:27 crc kubenswrapper[4881]: I0126 12:41:27.362559 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eebbd99a-494e-4431-91b4-92272880b04b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "eebbd99a-494e-4431-91b4-92272880b04b" (UID: "eebbd99a-494e-4431-91b4-92272880b04b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 12:41:27 crc kubenswrapper[4881]: I0126 12:41:27.446120 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eebbd99a-494e-4431-91b4-92272880b04b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 12:41:27 crc kubenswrapper[4881]: I0126 12:41:27.458578 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pvt2d" event={"ID":"eebbd99a-494e-4431-91b4-92272880b04b","Type":"ContainerDied","Data":"5b03b2a2aafc6723e5cf50713636e70a3076121ae3732f671ec7a27f23de1bee"} Jan 26 12:41:27 crc kubenswrapper[4881]: I0126 12:41:27.458665 4881 scope.go:117] "RemoveContainer" containerID="70e31ac7b472c258d5ebb82fe4a2b7f3b99440009aca877d6e9ae94a14c92ca3" Jan 26 12:41:27 crc kubenswrapper[4881]: I0126 12:41:27.458686 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pvt2d" Jan 26 12:41:27 crc kubenswrapper[4881]: I0126 12:41:27.481115 4881 scope.go:117] "RemoveContainer" containerID="55b38a631cf7ccd621f9a8e723de976a5f11a7876397f57c8da6f9b4adc95c74" Jan 26 12:41:27 crc kubenswrapper[4881]: I0126 12:41:27.495951 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pvt2d"] Jan 26 12:41:27 crc kubenswrapper[4881]: I0126 12:41:27.510434 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-pvt2d"] Jan 26 12:41:27 crc kubenswrapper[4881]: I0126 12:41:27.513353 4881 scope.go:117] "RemoveContainer" containerID="7ce00d70e9daaf30f20444e46105a0eb34dedfa13a272b24a55d9107bfa492c7" Jan 26 12:41:28 crc kubenswrapper[4881]: I0126 12:41:28.090043 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eebbd99a-494e-4431-91b4-92272880b04b" path="/var/lib/kubelet/pods/eebbd99a-494e-4431-91b4-92272880b04b/volumes" Jan 26 12:41:29 crc kubenswrapper[4881]: I0126 12:41:29.355283 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-lnbpt"] Jan 26 12:41:29 crc kubenswrapper[4881]: I0126 12:41:29.355574 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-lnbpt" podUID="7c6a2377-64ec-4bf1-96a5-89faa8ce01f8" containerName="controller-manager" containerID="cri-o://9d22555f261365c3254aaaee7b65c47880485fc7a5368e1c8a3ebb019845e205" gracePeriod=30 Jan 26 12:41:29 crc kubenswrapper[4881]: I0126 12:41:29.449635 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-xvwh6"] Jan 26 12:41:29 crc kubenswrapper[4881]: I0126 12:41:29.449865 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xvwh6" podUID="6cbdefcc-18eb-4de2-a642-466fb488712f" containerName="route-controller-manager" containerID="cri-o://0bd7b41902cef1ceb7762cc5b5047bafde653c4ebb721990f204dc39579d16a0" gracePeriod=30 Jan 26 12:41:31 crc kubenswrapper[4881]: I0126 12:41:31.488359 4881 generic.go:334] "Generic (PLEG): container finished" podID="7c6a2377-64ec-4bf1-96a5-89faa8ce01f8" containerID="9d22555f261365c3254aaaee7b65c47880485fc7a5368e1c8a3ebb019845e205" exitCode=0 Jan 26 12:41:31 crc kubenswrapper[4881]: I0126 12:41:31.488404 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-lnbpt" event={"ID":"7c6a2377-64ec-4bf1-96a5-89faa8ce01f8","Type":"ContainerDied","Data":"9d22555f261365c3254aaaee7b65c47880485fc7a5368e1c8a3ebb019845e205"} Jan 26 12:41:31 crc kubenswrapper[4881]: I0126 12:41:31.992672 4881 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-lnbpt container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Jan 26 12:41:31 crc kubenswrapper[4881]: I0126 12:41:31.992755 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-lnbpt" podUID="7c6a2377-64ec-4bf1-96a5-89faa8ce01f8" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection 
refused" Jan 26 12:41:32 crc kubenswrapper[4881]: I0126 12:41:32.027792 4881 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-xvwh6 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Jan 26 12:41:32 crc kubenswrapper[4881]: I0126 12:41:32.027896 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xvwh6" podUID="6cbdefcc-18eb-4de2-a642-466fb488712f" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" Jan 26 12:41:32 crc kubenswrapper[4881]: I0126 12:41:32.495193 4881 generic.go:334] "Generic (PLEG): container finished" podID="6cbdefcc-18eb-4de2-a642-466fb488712f" containerID="0bd7b41902cef1ceb7762cc5b5047bafde653c4ebb721990f204dc39579d16a0" exitCode=0 Jan 26 12:41:32 crc kubenswrapper[4881]: I0126 12:41:32.495247 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xvwh6" event={"ID":"6cbdefcc-18eb-4de2-a642-466fb488712f","Type":"ContainerDied","Data":"0bd7b41902cef1ceb7762cc5b5047bafde653c4ebb721990f204dc39579d16a0"} Jan 26 12:41:33 crc kubenswrapper[4881]: I0126 12:41:33.692847 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-lnbpt" Jan 26 12:41:33 crc kubenswrapper[4881]: I0126 12:41:33.701626 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xvwh6" Jan 26 12:41:33 crc kubenswrapper[4881]: I0126 12:41:33.716876 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-c9b497d69-74tvm"] Jan 26 12:41:33 crc kubenswrapper[4881]: E0126 12:41:33.717075 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 26 12:41:33 crc kubenswrapper[4881]: I0126 12:41:33.717085 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 26 12:41:33 crc kubenswrapper[4881]: E0126 12:41:33.717100 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eebbd99a-494e-4431-91b4-92272880b04b" containerName="registry-server" Jan 26 12:41:33 crc kubenswrapper[4881]: I0126 12:41:33.717106 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="eebbd99a-494e-4431-91b4-92272880b04b" containerName="registry-server" Jan 26 12:41:33 crc kubenswrapper[4881]: E0126 12:41:33.717116 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eebbd99a-494e-4431-91b4-92272880b04b" containerName="extract-utilities" Jan 26 12:41:33 crc kubenswrapper[4881]: I0126 12:41:33.717122 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="eebbd99a-494e-4431-91b4-92272880b04b" containerName="extract-utilities" Jan 26 12:41:33 crc kubenswrapper[4881]: E0126 12:41:33.717133 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cbdefcc-18eb-4de2-a642-466fb488712f" containerName="route-controller-manager" Jan 26 12:41:33 crc kubenswrapper[4881]: I0126 12:41:33.717139 4881 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="6cbdefcc-18eb-4de2-a642-466fb488712f" containerName="route-controller-manager" Jan 26 12:41:33 crc kubenswrapper[4881]: E0126 12:41:33.717152 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eebbd99a-494e-4431-91b4-92272880b04b" containerName="extract-content" Jan 26 12:41:33 crc kubenswrapper[4881]: I0126 12:41:33.717158 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="eebbd99a-494e-4431-91b4-92272880b04b" containerName="extract-content" Jan 26 12:41:33 crc kubenswrapper[4881]: E0126 12:41:33.717164 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c6a2377-64ec-4bf1-96a5-89faa8ce01f8" containerName="controller-manager" Jan 26 12:41:33 crc kubenswrapper[4881]: I0126 12:41:33.717170 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c6a2377-64ec-4bf1-96a5-89faa8ce01f8" containerName="controller-manager" Jan 26 12:41:33 crc kubenswrapper[4881]: I0126 12:41:33.717262 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="eebbd99a-494e-4431-91b4-92272880b04b" containerName="registry-server" Jan 26 12:41:33 crc kubenswrapper[4881]: I0126 12:41:33.717273 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 26 12:41:33 crc kubenswrapper[4881]: I0126 12:41:33.717287 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c6a2377-64ec-4bf1-96a5-89faa8ce01f8" containerName="controller-manager" Jan 26 12:41:33 crc kubenswrapper[4881]: I0126 12:41:33.717297 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="6cbdefcc-18eb-4de2-a642-466fb488712f" containerName="route-controller-manager" Jan 26 12:41:33 crc kubenswrapper[4881]: I0126 12:41:33.717666 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-c9b497d69-74tvm" Jan 26 12:41:33 crc kubenswrapper[4881]: I0126 12:41:33.733363 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6cbdefcc-18eb-4de2-a642-466fb488712f-serving-cert\") pod \"6cbdefcc-18eb-4de2-a642-466fb488712f\" (UID: \"6cbdefcc-18eb-4de2-a642-466fb488712f\") " Jan 26 12:41:33 crc kubenswrapper[4881]: I0126 12:41:33.733405 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7c6a2377-64ec-4bf1-96a5-89faa8ce01f8-serving-cert\") pod \"7c6a2377-64ec-4bf1-96a5-89faa8ce01f8\" (UID: \"7c6a2377-64ec-4bf1-96a5-89faa8ce01f8\") " Jan 26 12:41:33 crc kubenswrapper[4881]: I0126 12:41:33.733440 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6cbdefcc-18eb-4de2-a642-466fb488712f-client-ca\") pod \"6cbdefcc-18eb-4de2-a642-466fb488712f\" (UID: \"6cbdefcc-18eb-4de2-a642-466fb488712f\") " Jan 26 12:41:33 crc kubenswrapper[4881]: I0126 12:41:33.733462 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dpwps\" (UniqueName: \"kubernetes.io/projected/6cbdefcc-18eb-4de2-a642-466fb488712f-kube-api-access-dpwps\") pod \"6cbdefcc-18eb-4de2-a642-466fb488712f\" (UID: \"6cbdefcc-18eb-4de2-a642-466fb488712f\") " Jan 26 12:41:33 crc kubenswrapper[4881]: I0126 12:41:33.733483 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6cbdefcc-18eb-4de2-a642-466fb488712f-config\") pod \"6cbdefcc-18eb-4de2-a642-466fb488712f\" (UID: \"6cbdefcc-18eb-4de2-a642-466fb488712f\") " Jan 26 12:41:33 crc kubenswrapper[4881]: I0126 12:41:33.733531 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7c6a2377-64ec-4bf1-96a5-89faa8ce01f8-client-ca\") pod \"7c6a2377-64ec-4bf1-96a5-89faa8ce01f8\" (UID: \"7c6a2377-64ec-4bf1-96a5-89faa8ce01f8\") " Jan 26 12:41:33 crc kubenswrapper[4881]: I0126 12:41:33.733558 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7c6a2377-64ec-4bf1-96a5-89faa8ce01f8-proxy-ca-bundles\") pod \"7c6a2377-64ec-4bf1-96a5-89faa8ce01f8\" (UID: \"7c6a2377-64ec-4bf1-96a5-89faa8ce01f8\") " Jan 26 12:41:33 crc kubenswrapper[4881]: I0126 12:41:33.733579 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mj47q\" (UniqueName: \"kubernetes.io/projected/7c6a2377-64ec-4bf1-96a5-89faa8ce01f8-kube-api-access-mj47q\") pod \"7c6a2377-64ec-4bf1-96a5-89faa8ce01f8\" (UID: \"7c6a2377-64ec-4bf1-96a5-89faa8ce01f8\") " Jan 26 12:41:33 crc kubenswrapper[4881]: I0126 12:41:33.733603 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c6a2377-64ec-4bf1-96a5-89faa8ce01f8-config\") pod \"7c6a2377-64ec-4bf1-96a5-89faa8ce01f8\" (UID: \"7c6a2377-64ec-4bf1-96a5-89faa8ce01f8\") " Jan 26 12:41:33 crc kubenswrapper[4881]: I0126 12:41:33.736481 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c6a2377-64ec-4bf1-96a5-89faa8ce01f8-config" (OuterVolumeSpecName: "config") pod "7c6a2377-64ec-4bf1-96a5-89faa8ce01f8" (UID: 
"7c6a2377-64ec-4bf1-96a5-89faa8ce01f8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:41:33 crc kubenswrapper[4881]: I0126 12:41:33.738508 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6cbdefcc-18eb-4de2-a642-466fb488712f-config" (OuterVolumeSpecName: "config") pod "6cbdefcc-18eb-4de2-a642-466fb488712f" (UID: "6cbdefcc-18eb-4de2-a642-466fb488712f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:41:33 crc kubenswrapper[4881]: I0126 12:41:33.739031 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c6a2377-64ec-4bf1-96a5-89faa8ce01f8-client-ca" (OuterVolumeSpecName: "client-ca") pod "7c6a2377-64ec-4bf1-96a5-89faa8ce01f8" (UID: "7c6a2377-64ec-4bf1-96a5-89faa8ce01f8"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:41:33 crc kubenswrapper[4881]: I0126 12:41:33.739567 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c6a2377-64ec-4bf1-96a5-89faa8ce01f8-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7c6a2377-64ec-4bf1-96a5-89faa8ce01f8" (UID: "7c6a2377-64ec-4bf1-96a5-89faa8ce01f8"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:41:33 crc kubenswrapper[4881]: I0126 12:41:33.741588 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6cbdefcc-18eb-4de2-a642-466fb488712f-client-ca" (OuterVolumeSpecName: "client-ca") pod "6cbdefcc-18eb-4de2-a642-466fb488712f" (UID: "6cbdefcc-18eb-4de2-a642-466fb488712f"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:41:33 crc kubenswrapper[4881]: I0126 12:41:33.745819 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c6a2377-64ec-4bf1-96a5-89faa8ce01f8-kube-api-access-mj47q" (OuterVolumeSpecName: "kube-api-access-mj47q") pod "7c6a2377-64ec-4bf1-96a5-89faa8ce01f8" (UID: "7c6a2377-64ec-4bf1-96a5-89faa8ce01f8"). InnerVolumeSpecName "kube-api-access-mj47q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:41:33 crc kubenswrapper[4881]: I0126 12:41:33.746653 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c6a2377-64ec-4bf1-96a5-89faa8ce01f8-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7c6a2377-64ec-4bf1-96a5-89faa8ce01f8" (UID: "7c6a2377-64ec-4bf1-96a5-89faa8ce01f8"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:41:33 crc kubenswrapper[4881]: I0126 12:41:33.746748 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6cbdefcc-18eb-4de2-a642-466fb488712f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6cbdefcc-18eb-4de2-a642-466fb488712f" (UID: "6cbdefcc-18eb-4de2-a642-466fb488712f"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:41:33 crc kubenswrapper[4881]: I0126 12:41:33.747209 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-c9b497d69-74tvm"] Jan 26 12:41:33 crc kubenswrapper[4881]: I0126 12:41:33.753809 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6cbdefcc-18eb-4de2-a642-466fb488712f-kube-api-access-dpwps" (OuterVolumeSpecName: "kube-api-access-dpwps") pod "6cbdefcc-18eb-4de2-a642-466fb488712f" (UID: "6cbdefcc-18eb-4de2-a642-466fb488712f"). InnerVolumeSpecName "kube-api-access-dpwps". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:41:33 crc kubenswrapper[4881]: I0126 12:41:33.835206 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c074245a-8ee8-43e5-a6ac-42865c01f8da-serving-cert\") pod \"controller-manager-c9b497d69-74tvm\" (UID: \"c074245a-8ee8-43e5-a6ac-42865c01f8da\") " pod="openshift-controller-manager/controller-manager-c9b497d69-74tvm" Jan 26 12:41:33 crc kubenswrapper[4881]: I0126 12:41:33.835271 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c074245a-8ee8-43e5-a6ac-42865c01f8da-proxy-ca-bundles\") pod \"controller-manager-c9b497d69-74tvm\" (UID: \"c074245a-8ee8-43e5-a6ac-42865c01f8da\") " pod="openshift-controller-manager/controller-manager-c9b497d69-74tvm" Jan 26 12:41:33 crc kubenswrapper[4881]: I0126 12:41:33.835308 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c074245a-8ee8-43e5-a6ac-42865c01f8da-client-ca\") pod \"controller-manager-c9b497d69-74tvm\" (UID: \"c074245a-8ee8-43e5-a6ac-42865c01f8da\") " pod="openshift-controller-manager/controller-manager-c9b497d69-74tvm" Jan 26 12:41:33 crc kubenswrapper[4881]: I0126 12:41:33.835335 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c074245a-8ee8-43e5-a6ac-42865c01f8da-config\") pod \"controller-manager-c9b497d69-74tvm\" (UID: \"c074245a-8ee8-43e5-a6ac-42865c01f8da\") " pod="openshift-controller-manager/controller-manager-c9b497d69-74tvm" Jan 26 12:41:33 crc kubenswrapper[4881]: I0126 12:41:33.835367 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gf9jc\" (UniqueName: \"kubernetes.io/projected/c074245a-8ee8-43e5-a6ac-42865c01f8da-kube-api-access-gf9jc\") pod \"controller-manager-c9b497d69-74tvm\" (UID: \"c074245a-8ee8-43e5-a6ac-42865c01f8da\") " pod="openshift-controller-manager/controller-manager-c9b497d69-74tvm" Jan 26 12:41:33 crc kubenswrapper[4881]: I0126 12:41:33.835415 4881 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c6a2377-64ec-4bf1-96a5-89faa8ce01f8-config\") on node \"crc\" DevicePath \"\"" Jan 26 12:41:33 crc kubenswrapper[4881]: I0126 12:41:33.835427 4881 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6cbdefcc-18eb-4de2-a642-466fb488712f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 12:41:33 crc kubenswrapper[4881]: I0126 12:41:33.835438 4881 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/7c6a2377-64ec-4bf1-96a5-89faa8ce01f8-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 12:41:33 crc kubenswrapper[4881]: I0126 12:41:33.835446 4881 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6cbdefcc-18eb-4de2-a642-466fb488712f-client-ca\") on node \"crc\" DevicePath \"\"" Jan 26 12:41:33 crc kubenswrapper[4881]: I0126 12:41:33.835456 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dpwps\" (UniqueName: \"kubernetes.io/projected/6cbdefcc-18eb-4de2-a642-466fb488712f-kube-api-access-dpwps\") on node \"crc\" DevicePath \"\"" Jan 26 12:41:33 crc kubenswrapper[4881]: I0126 12:41:33.835466 4881 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6cbdefcc-18eb-4de2-a642-466fb488712f-config\") on node \"crc\" DevicePath \"\"" Jan 26 12:41:33 crc kubenswrapper[4881]: I0126 12:41:33.835474 4881 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7c6a2377-64ec-4bf1-96a5-89faa8ce01f8-client-ca\") on node \"crc\" DevicePath \"\"" Jan 26 12:41:33 crc kubenswrapper[4881]: I0126 12:41:33.835483 4881 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7c6a2377-64ec-4bf1-96a5-89faa8ce01f8-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 26 12:41:33 crc kubenswrapper[4881]: I0126 12:41:33.835492 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mj47q\" (UniqueName: \"kubernetes.io/projected/7c6a2377-64ec-4bf1-96a5-89faa8ce01f8-kube-api-access-mj47q\") on node \"crc\" DevicePath \"\"" Jan 26 12:41:33 crc kubenswrapper[4881]: I0126 12:41:33.937050 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c074245a-8ee8-43e5-a6ac-42865c01f8da-config\") pod \"controller-manager-c9b497d69-74tvm\" (UID: \"c074245a-8ee8-43e5-a6ac-42865c01f8da\") " pod="openshift-controller-manager/controller-manager-c9b497d69-74tvm" Jan 26 12:41:33 crc kubenswrapper[4881]: I0126 12:41:33.937568 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gf9jc\" (UniqueName: \"kubernetes.io/projected/c074245a-8ee8-43e5-a6ac-42865c01f8da-kube-api-access-gf9jc\") pod \"controller-manager-c9b497d69-74tvm\" (UID: \"c074245a-8ee8-43e5-a6ac-42865c01f8da\") " pod="openshift-controller-manager/controller-manager-c9b497d69-74tvm" Jan 26 12:41:33 crc kubenswrapper[4881]: I0126 12:41:33.937809 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c074245a-8ee8-43e5-a6ac-42865c01f8da-serving-cert\") pod \"controller-manager-c9b497d69-74tvm\" (UID: \"c074245a-8ee8-43e5-a6ac-42865c01f8da\") " pod="openshift-controller-manager/controller-manager-c9b497d69-74tvm" Jan 26 12:41:33 crc kubenswrapper[4881]: I0126 12:41:33.938125 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c074245a-8ee8-43e5-a6ac-42865c01f8da-proxy-ca-bundles\") pod \"controller-manager-c9b497d69-74tvm\" (UID: \"c074245a-8ee8-43e5-a6ac-42865c01f8da\") " pod="openshift-controller-manager/controller-manager-c9b497d69-74tvm" Jan 26 12:41:33 crc kubenswrapper[4881]: I0126 12:41:33.938342 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" 
(UniqueName: \"kubernetes.io/configmap/c074245a-8ee8-43e5-a6ac-42865c01f8da-client-ca\") pod \"controller-manager-c9b497d69-74tvm\" (UID: \"c074245a-8ee8-43e5-a6ac-42865c01f8da\") " pod="openshift-controller-manager/controller-manager-c9b497d69-74tvm" Jan 26 12:41:33 crc kubenswrapper[4881]: I0126 12:41:33.939406 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c074245a-8ee8-43e5-a6ac-42865c01f8da-client-ca\") pod \"controller-manager-c9b497d69-74tvm\" (UID: \"c074245a-8ee8-43e5-a6ac-42865c01f8da\") " pod="openshift-controller-manager/controller-manager-c9b497d69-74tvm" Jan 26 12:41:33 crc kubenswrapper[4881]: I0126 12:41:33.939686 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c074245a-8ee8-43e5-a6ac-42865c01f8da-proxy-ca-bundles\") pod \"controller-manager-c9b497d69-74tvm\" (UID: \"c074245a-8ee8-43e5-a6ac-42865c01f8da\") " pod="openshift-controller-manager/controller-manager-c9b497d69-74tvm" Jan 26 12:41:33 crc kubenswrapper[4881]: I0126 12:41:33.941193 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c074245a-8ee8-43e5-a6ac-42865c01f8da-serving-cert\") pod \"controller-manager-c9b497d69-74tvm\" (UID: \"c074245a-8ee8-43e5-a6ac-42865c01f8da\") " pod="openshift-controller-manager/controller-manager-c9b497d69-74tvm" Jan 26 12:41:33 crc kubenswrapper[4881]: I0126 12:41:33.941531 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c074245a-8ee8-43e5-a6ac-42865c01f8da-config\") pod \"controller-manager-c9b497d69-74tvm\" (UID: \"c074245a-8ee8-43e5-a6ac-42865c01f8da\") " pod="openshift-controller-manager/controller-manager-c9b497d69-74tvm" Jan 26 12:41:33 crc kubenswrapper[4881]: I0126 12:41:33.957547 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gf9jc\" (UniqueName: \"kubernetes.io/projected/c074245a-8ee8-43e5-a6ac-42865c01f8da-kube-api-access-gf9jc\") pod \"controller-manager-c9b497d69-74tvm\" (UID: \"c074245a-8ee8-43e5-a6ac-42865c01f8da\") " pod="openshift-controller-manager/controller-manager-c9b497d69-74tvm" Jan 26 12:41:34 crc kubenswrapper[4881]: I0126 12:41:34.046836 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-c9b497d69-74tvm" Jan 26 12:41:34 crc kubenswrapper[4881]: I0126 12:41:34.295676 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-c9b497d69-74tvm"] Jan 26 12:41:34 crc kubenswrapper[4881]: I0126 12:41:34.517744 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xvwh6" Jan 26 12:41:34 crc kubenswrapper[4881]: I0126 12:41:34.517922 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xvwh6" event={"ID":"6cbdefcc-18eb-4de2-a642-466fb488712f","Type":"ContainerDied","Data":"bace95764aff918136174e2489cd0cb0073eed5f80c23b9611c632f34b0ceb9e"} Jan 26 12:41:34 crc kubenswrapper[4881]: I0126 12:41:34.518630 4881 scope.go:117] "RemoveContainer" containerID="0bd7b41902cef1ceb7762cc5b5047bafde653c4ebb721990f204dc39579d16a0" Jan 26 12:41:34 crc kubenswrapper[4881]: I0126 12:41:34.522404 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-c9b497d69-74tvm" event={"ID":"c074245a-8ee8-43e5-a6ac-42865c01f8da","Type":"ContainerStarted","Data":"f988629a2904d68eeabb41daa415831ba39cdfc97ac4937452b0a95d5eb82ecf"} Jan 26 12:41:34 crc kubenswrapper[4881]: I0126 12:41:34.527371 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-lnbpt" event={"ID":"7c6a2377-64ec-4bf1-96a5-89faa8ce01f8","Type":"ContainerDied","Data":"5081f145beb9008d7e2131ae87b735fca910ab7dc2aba702dfcac53aa448d919"} Jan 26 12:41:34 crc kubenswrapper[4881]: I0126 12:41:34.527485 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-lnbpt" Jan 26 12:41:34 crc kubenswrapper[4881]: I0126 12:41:34.544348 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-xvwh6"] Jan 26 12:41:34 crc kubenswrapper[4881]: I0126 12:41:34.551377 4881 scope.go:117] "RemoveContainer" containerID="9d22555f261365c3254aaaee7b65c47880485fc7a5368e1c8a3ebb019845e205" Jan 26 12:41:34 crc kubenswrapper[4881]: I0126 12:41:34.555502 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-xvwh6"] Jan 26 12:41:34 crc kubenswrapper[4881]: I0126 12:41:34.564209 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-lnbpt"] Jan 26 12:41:34 crc kubenswrapper[4881]: I0126 12:41:34.569605 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-lnbpt"] Jan 26 12:41:35 crc kubenswrapper[4881]: I0126 12:41:35.536174 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-c9b497d69-74tvm" event={"ID":"c074245a-8ee8-43e5-a6ac-42865c01f8da","Type":"ContainerStarted","Data":"6b92413138e562292e90616074cdb806eec835bd108cd92c5c0030516d9d6c7f"} Jan 26 12:41:35 crc kubenswrapper[4881]: I0126 12:41:35.536450 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-c9b497d69-74tvm" Jan 26 12:41:35 crc kubenswrapper[4881]: I0126 12:41:35.540474 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-c9b497d69-74tvm" Jan 26 12:41:35 crc kubenswrapper[4881]: I0126 12:41:35.553559 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-c9b497d69-74tvm" podStartSLOduration=5.553541825 podStartE2EDuration="5.553541825s" podCreationTimestamp="2026-01-26 12:41:30 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:41:35.552362106 +0000 UTC m=+368.031672132" watchObservedRunningTime="2026-01-26 12:41:35.553541825 +0000 UTC m=+368.032851851" Jan 26 12:41:36 crc kubenswrapper[4881]: I0126 12:41:36.093788 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6cbdefcc-18eb-4de2-a642-466fb488712f" path="/var/lib/kubelet/pods/6cbdefcc-18eb-4de2-a642-466fb488712f/volumes" Jan 26 12:41:36 crc kubenswrapper[4881]: I0126 12:41:36.095582 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c6a2377-64ec-4bf1-96a5-89faa8ce01f8" path="/var/lib/kubelet/pods/7c6a2377-64ec-4bf1-96a5-89faa8ce01f8/volumes" Jan 26 12:41:36 crc kubenswrapper[4881]: I0126 12:41:36.107167 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-dfb766cdf-7n5cs"] Jan 26 12:41:36 crc kubenswrapper[4881]: I0126 12:41:36.108831 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-dfb766cdf-7n5cs" Jan 26 12:41:36 crc kubenswrapper[4881]: I0126 12:41:36.113580 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 26 12:41:36 crc kubenswrapper[4881]: I0126 12:41:36.113823 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 26 12:41:36 crc kubenswrapper[4881]: I0126 12:41:36.114055 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 26 12:41:36 crc kubenswrapper[4881]: I0126 12:41:36.114068 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 26 12:41:36 crc kubenswrapper[4881]: I0126 12:41:36.123887 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 26 12:41:36 crc kubenswrapper[4881]: I0126 12:41:36.129409 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 26 12:41:36 crc kubenswrapper[4881]: I0126 12:41:36.145469 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-dfb766cdf-7n5cs"] Jan 26 12:41:36 crc kubenswrapper[4881]: I0126 12:41:36.170745 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f03ad41a-5524-406a-938b-a54266ebd7a0-client-ca\") pod \"route-controller-manager-dfb766cdf-7n5cs\" (UID: \"f03ad41a-5524-406a-938b-a54266ebd7a0\") " pod="openshift-route-controller-manager/route-controller-manager-dfb766cdf-7n5cs" Jan 26 12:41:36 crc kubenswrapper[4881]: I0126 12:41:36.170815 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtg25\" (UniqueName: \"kubernetes.io/projected/f03ad41a-5524-406a-938b-a54266ebd7a0-kube-api-access-xtg25\") pod \"route-controller-manager-dfb766cdf-7n5cs\" (UID: \"f03ad41a-5524-406a-938b-a54266ebd7a0\") " pod="openshift-route-controller-manager/route-controller-manager-dfb766cdf-7n5cs" Jan 26 12:41:36 crc kubenswrapper[4881]: I0126 12:41:36.170848 4881 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f03ad41a-5524-406a-938b-a54266ebd7a0-serving-cert\") pod \"route-controller-manager-dfb766cdf-7n5cs\" (UID: \"f03ad41a-5524-406a-938b-a54266ebd7a0\") " pod="openshift-route-controller-manager/route-controller-manager-dfb766cdf-7n5cs" Jan 26 12:41:36 crc kubenswrapper[4881]: I0126 12:41:36.171042 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f03ad41a-5524-406a-938b-a54266ebd7a0-config\") pod \"route-controller-manager-dfb766cdf-7n5cs\" (UID: \"f03ad41a-5524-406a-938b-a54266ebd7a0\") " pod="openshift-route-controller-manager/route-controller-manager-dfb766cdf-7n5cs" Jan 26 12:41:36 crc kubenswrapper[4881]: I0126 12:41:36.273643 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtg25\" (UniqueName: \"kubernetes.io/projected/f03ad41a-5524-406a-938b-a54266ebd7a0-kube-api-access-xtg25\") pod \"route-controller-manager-dfb766cdf-7n5cs\" (UID: \"f03ad41a-5524-406a-938b-a54266ebd7a0\") " pod="openshift-route-controller-manager/route-controller-manager-dfb766cdf-7n5cs" Jan 26 12:41:36 crc kubenswrapper[4881]: I0126 12:41:36.273941 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f03ad41a-5524-406a-938b-a54266ebd7a0-serving-cert\") pod \"route-controller-manager-dfb766cdf-7n5cs\" (UID: \"f03ad41a-5524-406a-938b-a54266ebd7a0\") " pod="openshift-route-controller-manager/route-controller-manager-dfb766cdf-7n5cs" Jan 26 12:41:36 crc kubenswrapper[4881]: I0126 12:41:36.274067 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f03ad41a-5524-406a-938b-a54266ebd7a0-config\") pod \"route-controller-manager-dfb766cdf-7n5cs\" (UID: \"f03ad41a-5524-406a-938b-a54266ebd7a0\") " pod="openshift-route-controller-manager/route-controller-manager-dfb766cdf-7n5cs" Jan 26 12:41:36 crc kubenswrapper[4881]: I0126 12:41:36.274149 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f03ad41a-5524-406a-938b-a54266ebd7a0-client-ca\") pod \"route-controller-manager-dfb766cdf-7n5cs\" (UID: \"f03ad41a-5524-406a-938b-a54266ebd7a0\") " pod="openshift-route-controller-manager/route-controller-manager-dfb766cdf-7n5cs" Jan 26 12:41:36 crc kubenswrapper[4881]: I0126 12:41:36.275229 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f03ad41a-5524-406a-938b-a54266ebd7a0-client-ca\") pod \"route-controller-manager-dfb766cdf-7n5cs\" (UID: \"f03ad41a-5524-406a-938b-a54266ebd7a0\") " pod="openshift-route-controller-manager/route-controller-manager-dfb766cdf-7n5cs" Jan 26 12:41:36 crc kubenswrapper[4881]: I0126 12:41:36.275482 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f03ad41a-5524-406a-938b-a54266ebd7a0-config\") pod \"route-controller-manager-dfb766cdf-7n5cs\" (UID: \"f03ad41a-5524-406a-938b-a54266ebd7a0\") " pod="openshift-route-controller-manager/route-controller-manager-dfb766cdf-7n5cs" Jan 26 12:41:36 crc kubenswrapper[4881]: I0126 12:41:36.281377 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/f03ad41a-5524-406a-938b-a54266ebd7a0-serving-cert\") pod \"route-controller-manager-dfb766cdf-7n5cs\" (UID: \"f03ad41a-5524-406a-938b-a54266ebd7a0\") " pod="openshift-route-controller-manager/route-controller-manager-dfb766cdf-7n5cs" Jan 26 12:41:36 crc kubenswrapper[4881]: I0126 12:41:36.295114 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xtg25\" (UniqueName: \"kubernetes.io/projected/f03ad41a-5524-406a-938b-a54266ebd7a0-kube-api-access-xtg25\") pod \"route-controller-manager-dfb766cdf-7n5cs\" (UID: \"f03ad41a-5524-406a-938b-a54266ebd7a0\") " pod="openshift-route-controller-manager/route-controller-manager-dfb766cdf-7n5cs" Jan 26 12:41:36 crc kubenswrapper[4881]: I0126 12:41:36.460937 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-dfb766cdf-7n5cs" Jan 26 12:41:36 crc kubenswrapper[4881]: I0126 12:41:36.953608 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-dfb766cdf-7n5cs"] Jan 26 12:41:36 crc kubenswrapper[4881]: W0126 12:41:36.956769 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf03ad41a_5524_406a_938b_a54266ebd7a0.slice/crio-35c025e1d5decabf020eb331484420a3f9a0bb0c1c249a05a7528a26821218c9 WatchSource:0}: Error finding container 35c025e1d5decabf020eb331484420a3f9a0bb0c1c249a05a7528a26821218c9: Status 404 returned error can't find the container with id 35c025e1d5decabf020eb331484420a3f9a0bb0c1c249a05a7528a26821218c9 Jan 26 12:41:37 crc kubenswrapper[4881]: I0126 12:41:37.561938 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-dfb766cdf-7n5cs" event={"ID":"f03ad41a-5524-406a-938b-a54266ebd7a0","Type":"ContainerStarted","Data":"35c025e1d5decabf020eb331484420a3f9a0bb0c1c249a05a7528a26821218c9"} Jan 26 12:41:38 crc kubenswrapper[4881]: I0126 12:41:38.568028 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-dfb766cdf-7n5cs" event={"ID":"f03ad41a-5524-406a-938b-a54266ebd7a0","Type":"ContainerStarted","Data":"f204cf1c0debf2cfd5e70bef92895c772d77ad445998aaefe712318998dec6bf"} Jan 26 12:41:38 crc kubenswrapper[4881]: I0126 12:41:38.568244 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-dfb766cdf-7n5cs" Jan 26 12:41:38 crc kubenswrapper[4881]: I0126 12:41:38.574274 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-dfb766cdf-7n5cs" Jan 26 12:41:38 crc kubenswrapper[4881]: I0126 12:41:38.594042 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-dfb766cdf-7n5cs" podStartSLOduration=8.594025033 podStartE2EDuration="8.594025033s" podCreationTimestamp="2026-01-26 12:41:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:41:38.59267545 +0000 UTC m=+371.071985476" watchObservedRunningTime="2026-01-26 12:41:38.594025033 +0000 UTC m=+371.073335059" Jan 26 12:41:51 crc kubenswrapper[4881]: I0126 12:41:51.065115 4881 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-image-registry/image-registry-66df7c8f76-jqg2h"] Jan 26 12:41:51 crc kubenswrapper[4881]: I0126 12:41:51.066505 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-jqg2h" Jan 26 12:41:51 crc kubenswrapper[4881]: I0126 12:41:51.089714 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-jqg2h"] Jan 26 12:41:51 crc kubenswrapper[4881]: I0126 12:41:51.175924 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/99727628-c04d-4a7c-ad21-6797c746254e-registry-tls\") pod \"image-registry-66df7c8f76-jqg2h\" (UID: \"99727628-c04d-4a7c-ad21-6797c746254e\") " pod="openshift-image-registry/image-registry-66df7c8f76-jqg2h" Jan 26 12:41:51 crc kubenswrapper[4881]: I0126 12:41:51.175976 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/99727628-c04d-4a7c-ad21-6797c746254e-trusted-ca\") pod \"image-registry-66df7c8f76-jqg2h\" (UID: \"99727628-c04d-4a7c-ad21-6797c746254e\") " pod="openshift-image-registry/image-registry-66df7c8f76-jqg2h" Jan 26 12:41:51 crc kubenswrapper[4881]: I0126 12:41:51.176013 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slgxt\" (UniqueName: \"kubernetes.io/projected/99727628-c04d-4a7c-ad21-6797c746254e-kube-api-access-slgxt\") pod \"image-registry-66df7c8f76-jqg2h\" (UID: \"99727628-c04d-4a7c-ad21-6797c746254e\") " pod="openshift-image-registry/image-registry-66df7c8f76-jqg2h" Jan 26 12:41:51 crc kubenswrapper[4881]: I0126 12:41:51.176149 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/99727628-c04d-4a7c-ad21-6797c746254e-registry-certificates\") pod \"image-registry-66df7c8f76-jqg2h\" (UID: \"99727628-c04d-4a7c-ad21-6797c746254e\") " pod="openshift-image-registry/image-registry-66df7c8f76-jqg2h" Jan 26 12:41:51 crc kubenswrapper[4881]: I0126 12:41:51.176214 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/99727628-c04d-4a7c-ad21-6797c746254e-installation-pull-secrets\") pod \"image-registry-66df7c8f76-jqg2h\" (UID: \"99727628-c04d-4a7c-ad21-6797c746254e\") " pod="openshift-image-registry/image-registry-66df7c8f76-jqg2h" Jan 26 12:41:51 crc kubenswrapper[4881]: I0126 12:41:51.176251 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/99727628-c04d-4a7c-ad21-6797c746254e-bound-sa-token\") pod \"image-registry-66df7c8f76-jqg2h\" (UID: \"99727628-c04d-4a7c-ad21-6797c746254e\") " pod="openshift-image-registry/image-registry-66df7c8f76-jqg2h" Jan 26 12:41:51 crc kubenswrapper[4881]: I0126 12:41:51.176311 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/99727628-c04d-4a7c-ad21-6797c746254e-ca-trust-extracted\") pod \"image-registry-66df7c8f76-jqg2h\" (UID: \"99727628-c04d-4a7c-ad21-6797c746254e\") " pod="openshift-image-registry/image-registry-66df7c8f76-jqg2h" Jan 26 12:41:51 crc kubenswrapper[4881]: I0126 
12:41:51.176344 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-jqg2h\" (UID: \"99727628-c04d-4a7c-ad21-6797c746254e\") " pod="openshift-image-registry/image-registry-66df7c8f76-jqg2h" Jan 26 12:41:51 crc kubenswrapper[4881]: I0126 12:41:51.219769 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-jqg2h\" (UID: \"99727628-c04d-4a7c-ad21-6797c746254e\") " pod="openshift-image-registry/image-registry-66df7c8f76-jqg2h" Jan 26 12:41:51 crc kubenswrapper[4881]: I0126 12:41:51.277636 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/99727628-c04d-4a7c-ad21-6797c746254e-installation-pull-secrets\") pod \"image-registry-66df7c8f76-jqg2h\" (UID: \"99727628-c04d-4a7c-ad21-6797c746254e\") " pod="openshift-image-registry/image-registry-66df7c8f76-jqg2h" Jan 26 12:41:51 crc kubenswrapper[4881]: I0126 12:41:51.277720 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/99727628-c04d-4a7c-ad21-6797c746254e-bound-sa-token\") pod \"image-registry-66df7c8f76-jqg2h\" (UID: \"99727628-c04d-4a7c-ad21-6797c746254e\") " pod="openshift-image-registry/image-registry-66df7c8f76-jqg2h" Jan 26 12:41:51 crc kubenswrapper[4881]: I0126 12:41:51.277781 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/99727628-c04d-4a7c-ad21-6797c746254e-ca-trust-extracted\") pod \"image-registry-66df7c8f76-jqg2h\" (UID: \"99727628-c04d-4a7c-ad21-6797c746254e\") " pod="openshift-image-registry/image-registry-66df7c8f76-jqg2h" Jan 26 12:41:51 crc kubenswrapper[4881]: I0126 12:41:51.277844 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/99727628-c04d-4a7c-ad21-6797c746254e-registry-tls\") pod \"image-registry-66df7c8f76-jqg2h\" (UID: \"99727628-c04d-4a7c-ad21-6797c746254e\") " pod="openshift-image-registry/image-registry-66df7c8f76-jqg2h" Jan 26 12:41:51 crc kubenswrapper[4881]: I0126 12:41:51.277872 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/99727628-c04d-4a7c-ad21-6797c746254e-trusted-ca\") pod \"image-registry-66df7c8f76-jqg2h\" (UID: \"99727628-c04d-4a7c-ad21-6797c746254e\") " pod="openshift-image-registry/image-registry-66df7c8f76-jqg2h" Jan 26 12:41:51 crc kubenswrapper[4881]: I0126 12:41:51.277924 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-slgxt\" (UniqueName: \"kubernetes.io/projected/99727628-c04d-4a7c-ad21-6797c746254e-kube-api-access-slgxt\") pod \"image-registry-66df7c8f76-jqg2h\" (UID: \"99727628-c04d-4a7c-ad21-6797c746254e\") " pod="openshift-image-registry/image-registry-66df7c8f76-jqg2h" Jan 26 12:41:51 crc kubenswrapper[4881]: I0126 12:41:51.277989 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: 
\"kubernetes.io/configmap/99727628-c04d-4a7c-ad21-6797c746254e-registry-certificates\") pod \"image-registry-66df7c8f76-jqg2h\" (UID: \"99727628-c04d-4a7c-ad21-6797c746254e\") " pod="openshift-image-registry/image-registry-66df7c8f76-jqg2h" Jan 26 12:41:51 crc kubenswrapper[4881]: I0126 12:41:51.279855 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/99727628-c04d-4a7c-ad21-6797c746254e-registry-certificates\") pod \"image-registry-66df7c8f76-jqg2h\" (UID: \"99727628-c04d-4a7c-ad21-6797c746254e\") " pod="openshift-image-registry/image-registry-66df7c8f76-jqg2h" Jan 26 12:41:51 crc kubenswrapper[4881]: I0126 12:41:51.280211 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/99727628-c04d-4a7c-ad21-6797c746254e-ca-trust-extracted\") pod \"image-registry-66df7c8f76-jqg2h\" (UID: \"99727628-c04d-4a7c-ad21-6797c746254e\") " pod="openshift-image-registry/image-registry-66df7c8f76-jqg2h" Jan 26 12:41:51 crc kubenswrapper[4881]: I0126 12:41:51.281231 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/99727628-c04d-4a7c-ad21-6797c746254e-trusted-ca\") pod \"image-registry-66df7c8f76-jqg2h\" (UID: \"99727628-c04d-4a7c-ad21-6797c746254e\") " pod="openshift-image-registry/image-registry-66df7c8f76-jqg2h" Jan 26 12:41:51 crc kubenswrapper[4881]: I0126 12:41:51.287175 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/99727628-c04d-4a7c-ad21-6797c746254e-registry-tls\") pod \"image-registry-66df7c8f76-jqg2h\" (UID: \"99727628-c04d-4a7c-ad21-6797c746254e\") " pod="openshift-image-registry/image-registry-66df7c8f76-jqg2h" Jan 26 12:41:51 crc kubenswrapper[4881]: I0126 12:41:51.295261 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/99727628-c04d-4a7c-ad21-6797c746254e-installation-pull-secrets\") pod \"image-registry-66df7c8f76-jqg2h\" (UID: \"99727628-c04d-4a7c-ad21-6797c746254e\") " pod="openshift-image-registry/image-registry-66df7c8f76-jqg2h" Jan 26 12:41:51 crc kubenswrapper[4881]: I0126 12:41:51.304300 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-slgxt\" (UniqueName: \"kubernetes.io/projected/99727628-c04d-4a7c-ad21-6797c746254e-kube-api-access-slgxt\") pod \"image-registry-66df7c8f76-jqg2h\" (UID: \"99727628-c04d-4a7c-ad21-6797c746254e\") " pod="openshift-image-registry/image-registry-66df7c8f76-jqg2h" Jan 26 12:41:51 crc kubenswrapper[4881]: I0126 12:41:51.313958 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/99727628-c04d-4a7c-ad21-6797c746254e-bound-sa-token\") pod \"image-registry-66df7c8f76-jqg2h\" (UID: \"99727628-c04d-4a7c-ad21-6797c746254e\") " pod="openshift-image-registry/image-registry-66df7c8f76-jqg2h" Jan 26 12:41:51 crc kubenswrapper[4881]: I0126 12:41:51.393112 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-jqg2h" Jan 26 12:41:51 crc kubenswrapper[4881]: I0126 12:41:51.825649 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-jqg2h"] Jan 26 12:41:52 crc kubenswrapper[4881]: I0126 12:41:52.449455 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qqnhh"] Jan 26 12:41:52 crc kubenswrapper[4881]: I0126 12:41:52.450180 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-qqnhh" podUID="85dcb696-76f6-47f5-aaef-12b0ebc2d8c1" containerName="registry-server" containerID="cri-o://797ceacc17d3fa9be268b289489ef1b7885988ebf2ff2af55deecc500227f030" gracePeriod=30 Jan 26 12:41:52 crc kubenswrapper[4881]: I0126 12:41:52.465655 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lxbt8"] Jan 26 12:41:52 crc kubenswrapper[4881]: I0126 12:41:52.466128 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-lxbt8" podUID="19d4e6cf-8b9f-45ce-b93a-af4e9957b93e" containerName="registry-server" containerID="cri-o://fba0257fb386aa86437e20a8617b6d222e147ae2445183dd162813c143ccb68d" gracePeriod=30 Jan 26 12:41:52 crc kubenswrapper[4881]: I0126 12:41:52.473612 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-drv9q"] Jan 26 12:41:52 crc kubenswrapper[4881]: I0126 12:41:52.473803 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-drv9q" podUID="7157505d-d18a-42a4-8037-96ad9a7825ce" containerName="marketplace-operator" containerID="cri-o://02ecd3da27e858bf8b74970c57d8876aca3a0ab2dd7be59babeaeafbe580b60e" gracePeriod=30 Jan 26 12:41:52 crc kubenswrapper[4881]: I0126 12:41:52.484925 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lxw6m"] Jan 26 12:41:52 crc kubenswrapper[4881]: I0126 12:41:52.485286 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-lxw6m" podUID="67ea8d33-d11e-420e-b566-8d0c2301ce94" containerName="registry-server" containerID="cri-o://443fd1e7d97536e4e7c7243442f873138001b1b918d9ea75104056d92d0f60e3" gracePeriod=30 Jan 26 12:41:52 crc kubenswrapper[4881]: I0126 12:41:52.492101 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-v4z92"] Jan 26 12:41:52 crc kubenswrapper[4881]: I0126 12:41:52.492587 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-v4z92" podUID="7c91a464-e748-4f02-9aab-d89a0076cb8d" containerName="registry-server" containerID="cri-o://7f89e18b34785aaa4cd9e92439bba920a78a35150750edb3d4b191eb2f32a6db" gracePeriod=30 Jan 26 12:41:52 crc kubenswrapper[4881]: I0126 12:41:52.498670 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-ghn75"] Jan 26 12:41:52 crc kubenswrapper[4881]: I0126 12:41:52.499827 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-ghn75" Jan 26 12:41:52 crc kubenswrapper[4881]: I0126 12:41:52.506693 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-ghn75"] Jan 26 12:41:52 crc kubenswrapper[4881]: I0126 12:41:52.602202 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c3ce8c88-e7f5-461d-ad61-e035c0ca7631-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-ghn75\" (UID: \"c3ce8c88-e7f5-461d-ad61-e035c0ca7631\") " pod="openshift-marketplace/marketplace-operator-79b997595-ghn75" Jan 26 12:41:52 crc kubenswrapper[4881]: I0126 12:41:52.602247 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c3ce8c88-e7f5-461d-ad61-e035c0ca7631-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-ghn75\" (UID: \"c3ce8c88-e7f5-461d-ad61-e035c0ca7631\") " pod="openshift-marketplace/marketplace-operator-79b997595-ghn75" Jan 26 12:41:52 crc kubenswrapper[4881]: I0126 12:41:52.602275 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbptf\" (UniqueName: \"kubernetes.io/projected/c3ce8c88-e7f5-461d-ad61-e035c0ca7631-kube-api-access-wbptf\") pod \"marketplace-operator-79b997595-ghn75\" (UID: \"c3ce8c88-e7f5-461d-ad61-e035c0ca7631\") " pod="openshift-marketplace/marketplace-operator-79b997595-ghn75" Jan 26 12:41:52 crc kubenswrapper[4881]: I0126 12:41:52.653967 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-jqg2h" event={"ID":"99727628-c04d-4a7c-ad21-6797c746254e","Type":"ContainerStarted","Data":"b44d776454e8f2cc23865fb0d02bb858d211a5fac4b6f9c0134b6e74923cb029"} Jan 26 12:41:52 crc kubenswrapper[4881]: I0126 12:41:52.654013 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-jqg2h" event={"ID":"99727628-c04d-4a7c-ad21-6797c746254e","Type":"ContainerStarted","Data":"7c3a71ce564784cd497a0e1900a37e726b6f29933e17f080b26b08b408eab4f4"} Jan 26 12:41:52 crc kubenswrapper[4881]: I0126 12:41:52.703987 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c3ce8c88-e7f5-461d-ad61-e035c0ca7631-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-ghn75\" (UID: \"c3ce8c88-e7f5-461d-ad61-e035c0ca7631\") " pod="openshift-marketplace/marketplace-operator-79b997595-ghn75" Jan 26 12:41:52 crc kubenswrapper[4881]: I0126 12:41:52.704041 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c3ce8c88-e7f5-461d-ad61-e035c0ca7631-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-ghn75\" (UID: \"c3ce8c88-e7f5-461d-ad61-e035c0ca7631\") " pod="openshift-marketplace/marketplace-operator-79b997595-ghn75" Jan 26 12:41:52 crc kubenswrapper[4881]: I0126 12:41:52.704064 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wbptf\" (UniqueName: \"kubernetes.io/projected/c3ce8c88-e7f5-461d-ad61-e035c0ca7631-kube-api-access-wbptf\") pod \"marketplace-operator-79b997595-ghn75\" (UID: \"c3ce8c88-e7f5-461d-ad61-e035c0ca7631\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-ghn75" Jan 26 12:41:52 crc kubenswrapper[4881]: I0126 12:41:52.706063 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c3ce8c88-e7f5-461d-ad61-e035c0ca7631-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-ghn75\" (UID: \"c3ce8c88-e7f5-461d-ad61-e035c0ca7631\") " pod="openshift-marketplace/marketplace-operator-79b997595-ghn75" Jan 26 12:41:52 crc kubenswrapper[4881]: I0126 12:41:52.715106 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c3ce8c88-e7f5-461d-ad61-e035c0ca7631-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-ghn75\" (UID: \"c3ce8c88-e7f5-461d-ad61-e035c0ca7631\") " pod="openshift-marketplace/marketplace-operator-79b997595-ghn75" Jan 26 12:41:52 crc kubenswrapper[4881]: I0126 12:41:52.720948 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wbptf\" (UniqueName: \"kubernetes.io/projected/c3ce8c88-e7f5-461d-ad61-e035c0ca7631-kube-api-access-wbptf\") pod \"marketplace-operator-79b997595-ghn75\" (UID: \"c3ce8c88-e7f5-461d-ad61-e035c0ca7631\") " pod="openshift-marketplace/marketplace-operator-79b997595-ghn75" Jan 26 12:41:52 crc kubenswrapper[4881]: I0126 12:41:52.821791 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-ghn75" Jan 26 12:41:53 crc kubenswrapper[4881]: I0126 12:41:53.261016 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-ghn75"] Jan 26 12:41:53 crc kubenswrapper[4881]: W0126 12:41:53.304919 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc3ce8c88_e7f5_461d_ad61_e035c0ca7631.slice/crio-e2fca0465b9418b9353fa5f791a80992169cc0c21e74dd162c2692fa5303d7fc WatchSource:0}: Error finding container e2fca0465b9418b9353fa5f791a80992169cc0c21e74dd162c2692fa5303d7fc: Status 404 returned error can't find the container with id e2fca0465b9418b9353fa5f791a80992169cc0c21e74dd162c2692fa5303d7fc Jan 26 12:41:53 crc kubenswrapper[4881]: I0126 12:41:53.443708 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qqnhh" Jan 26 12:41:53 crc kubenswrapper[4881]: I0126 12:41:53.512553 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/85dcb696-76f6-47f5-aaef-12b0ebc2d8c1-utilities\") pod \"85dcb696-76f6-47f5-aaef-12b0ebc2d8c1\" (UID: \"85dcb696-76f6-47f5-aaef-12b0ebc2d8c1\") " Jan 26 12:41:53 crc kubenswrapper[4881]: I0126 12:41:53.512603 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-njvsv\" (UniqueName: \"kubernetes.io/projected/85dcb696-76f6-47f5-aaef-12b0ebc2d8c1-kube-api-access-njvsv\") pod \"85dcb696-76f6-47f5-aaef-12b0ebc2d8c1\" (UID: \"85dcb696-76f6-47f5-aaef-12b0ebc2d8c1\") " Jan 26 12:41:53 crc kubenswrapper[4881]: I0126 12:41:53.512647 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/85dcb696-76f6-47f5-aaef-12b0ebc2d8c1-catalog-content\") pod \"85dcb696-76f6-47f5-aaef-12b0ebc2d8c1\" (UID: \"85dcb696-76f6-47f5-aaef-12b0ebc2d8c1\") " Jan 26 12:41:53 crc kubenswrapper[4881]: I0126 12:41:53.514583 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/85dcb696-76f6-47f5-aaef-12b0ebc2d8c1-utilities" (OuterVolumeSpecName: "utilities") pod "85dcb696-76f6-47f5-aaef-12b0ebc2d8c1" (UID: "85dcb696-76f6-47f5-aaef-12b0ebc2d8c1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 12:41:53 crc kubenswrapper[4881]: I0126 12:41:53.516295 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/85dcb696-76f6-47f5-aaef-12b0ebc2d8c1-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 12:41:53 crc kubenswrapper[4881]: I0126 12:41:53.525792 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85dcb696-76f6-47f5-aaef-12b0ebc2d8c1-kube-api-access-njvsv" (OuterVolumeSpecName: "kube-api-access-njvsv") pod "85dcb696-76f6-47f5-aaef-12b0ebc2d8c1" (UID: "85dcb696-76f6-47f5-aaef-12b0ebc2d8c1"). InnerVolumeSpecName "kube-api-access-njvsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:41:53 crc kubenswrapper[4881]: I0126 12:41:53.590650 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/85dcb696-76f6-47f5-aaef-12b0ebc2d8c1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "85dcb696-76f6-47f5-aaef-12b0ebc2d8c1" (UID: "85dcb696-76f6-47f5-aaef-12b0ebc2d8c1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 12:41:53 crc kubenswrapper[4881]: I0126 12:41:53.594186 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-v4z92" Jan 26 12:41:53 crc kubenswrapper[4881]: I0126 12:41:53.617031 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cn88g\" (UniqueName: \"kubernetes.io/projected/7c91a464-e748-4f02-9aab-d89a0076cb8d-kube-api-access-cn88g\") pod \"7c91a464-e748-4f02-9aab-d89a0076cb8d\" (UID: \"7c91a464-e748-4f02-9aab-d89a0076cb8d\") " Jan 26 12:41:53 crc kubenswrapper[4881]: I0126 12:41:53.617089 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c91a464-e748-4f02-9aab-d89a0076cb8d-utilities\") pod \"7c91a464-e748-4f02-9aab-d89a0076cb8d\" (UID: \"7c91a464-e748-4f02-9aab-d89a0076cb8d\") " Jan 26 12:41:53 crc kubenswrapper[4881]: I0126 12:41:53.617122 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c91a464-e748-4f02-9aab-d89a0076cb8d-catalog-content\") pod \"7c91a464-e748-4f02-9aab-d89a0076cb8d\" (UID: \"7c91a464-e748-4f02-9aab-d89a0076cb8d\") " Jan 26 12:41:53 crc kubenswrapper[4881]: I0126 12:41:53.617325 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-njvsv\" (UniqueName: \"kubernetes.io/projected/85dcb696-76f6-47f5-aaef-12b0ebc2d8c1-kube-api-access-njvsv\") on node \"crc\" DevicePath \"\"" Jan 26 12:41:53 crc kubenswrapper[4881]: I0126 12:41:53.617341 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/85dcb696-76f6-47f5-aaef-12b0ebc2d8c1-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 12:41:53 crc kubenswrapper[4881]: I0126 12:41:53.622212 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c91a464-e748-4f02-9aab-d89a0076cb8d-kube-api-access-cn88g" (OuterVolumeSpecName: "kube-api-access-cn88g") pod "7c91a464-e748-4f02-9aab-d89a0076cb8d" (UID: "7c91a464-e748-4f02-9aab-d89a0076cb8d"). InnerVolumeSpecName "kube-api-access-cn88g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:41:53 crc kubenswrapper[4881]: I0126 12:41:53.623057 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c91a464-e748-4f02-9aab-d89a0076cb8d-utilities" (OuterVolumeSpecName: "utilities") pod "7c91a464-e748-4f02-9aab-d89a0076cb8d" (UID: "7c91a464-e748-4f02-9aab-d89a0076cb8d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 12:41:53 crc kubenswrapper[4881]: I0126 12:41:53.666022 4881 generic.go:334] "Generic (PLEG): container finished" podID="67ea8d33-d11e-420e-b566-8d0c2301ce94" containerID="443fd1e7d97536e4e7c7243442f873138001b1b918d9ea75104056d92d0f60e3" exitCode=0 Jan 26 12:41:53 crc kubenswrapper[4881]: I0126 12:41:53.666082 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lxw6m" event={"ID":"67ea8d33-d11e-420e-b566-8d0c2301ce94","Type":"ContainerDied","Data":"443fd1e7d97536e4e7c7243442f873138001b1b918d9ea75104056d92d0f60e3"} Jan 26 12:41:53 crc kubenswrapper[4881]: I0126 12:41:53.668042 4881 generic.go:334] "Generic (PLEG): container finished" podID="7c91a464-e748-4f02-9aab-d89a0076cb8d" containerID="7f89e18b34785aaa4cd9e92439bba920a78a35150750edb3d4b191eb2f32a6db" exitCode=0 Jan 26 12:41:53 crc kubenswrapper[4881]: I0126 12:41:53.668081 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v4z92" event={"ID":"7c91a464-e748-4f02-9aab-d89a0076cb8d","Type":"ContainerDied","Data":"7f89e18b34785aaa4cd9e92439bba920a78a35150750edb3d4b191eb2f32a6db"} Jan 26 12:41:53 crc kubenswrapper[4881]: I0126 12:41:53.668097 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v4z92" event={"ID":"7c91a464-e748-4f02-9aab-d89a0076cb8d","Type":"ContainerDied","Data":"0746afa2dae4e28a1c3e0283b2a9515b0d4c16d4ca0f65ba89bce39a591cee58"} Jan 26 12:41:53 crc kubenswrapper[4881]: I0126 12:41:53.668113 4881 scope.go:117] "RemoveContainer" containerID="7f89e18b34785aaa4cd9e92439bba920a78a35150750edb3d4b191eb2f32a6db" Jan 26 12:41:53 crc kubenswrapper[4881]: I0126 12:41:53.668252 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-v4z92" Jan 26 12:41:53 crc kubenswrapper[4881]: I0126 12:41:53.680592 4881 generic.go:334] "Generic (PLEG): container finished" podID="7157505d-d18a-42a4-8037-96ad9a7825ce" containerID="02ecd3da27e858bf8b74970c57d8876aca3a0ab2dd7be59babeaeafbe580b60e" exitCode=0 Jan 26 12:41:53 crc kubenswrapper[4881]: I0126 12:41:53.680836 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-drv9q" event={"ID":"7157505d-d18a-42a4-8037-96ad9a7825ce","Type":"ContainerDied","Data":"02ecd3da27e858bf8b74970c57d8876aca3a0ab2dd7be59babeaeafbe580b60e"} Jan 26 12:41:53 crc kubenswrapper[4881]: I0126 12:41:53.685409 4881 generic.go:334] "Generic (PLEG): container finished" podID="85dcb696-76f6-47f5-aaef-12b0ebc2d8c1" containerID="797ceacc17d3fa9be268b289489ef1b7885988ebf2ff2af55deecc500227f030" exitCode=0 Jan 26 12:41:53 crc kubenswrapper[4881]: I0126 12:41:53.685475 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qqnhh" event={"ID":"85dcb696-76f6-47f5-aaef-12b0ebc2d8c1","Type":"ContainerDied","Data":"797ceacc17d3fa9be268b289489ef1b7885988ebf2ff2af55deecc500227f030"} Jan 26 12:41:53 crc kubenswrapper[4881]: I0126 12:41:53.685500 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qqnhh" event={"ID":"85dcb696-76f6-47f5-aaef-12b0ebc2d8c1","Type":"ContainerDied","Data":"562c117ab29d18dc2139fb6a236d0ccd26b40a690b1e89e5c4be41b1489c8ffe"} Jan 26 12:41:53 crc kubenswrapper[4881]: I0126 12:41:53.685616 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qqnhh" Jan 26 12:41:53 crc kubenswrapper[4881]: I0126 12:41:53.690140 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-ghn75" event={"ID":"c3ce8c88-e7f5-461d-ad61-e035c0ca7631","Type":"ContainerStarted","Data":"e2fca0465b9418b9353fa5f791a80992169cc0c21e74dd162c2692fa5303d7fc"} Jan 26 12:41:53 crc kubenswrapper[4881]: I0126 12:41:53.693898 4881 generic.go:334] "Generic (PLEG): container finished" podID="19d4e6cf-8b9f-45ce-b93a-af4e9957b93e" containerID="fba0257fb386aa86437e20a8617b6d222e147ae2445183dd162813c143ccb68d" exitCode=0 Jan 26 12:41:53 crc kubenswrapper[4881]: I0126 12:41:53.694406 4881 scope.go:117] "RemoveContainer" containerID="858b910057ca7579e3306268f2d7e13317388691d41de098eafe46f7a9f42728" Jan 26 12:41:53 crc kubenswrapper[4881]: I0126 12:41:53.694543 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lxbt8" event={"ID":"19d4e6cf-8b9f-45ce-b93a-af4e9957b93e","Type":"ContainerDied","Data":"fba0257fb386aa86437e20a8617b6d222e147ae2445183dd162813c143ccb68d"} Jan 26 12:41:53 crc kubenswrapper[4881]: I0126 12:41:53.694602 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-jqg2h" Jan 26 12:41:53 crc kubenswrapper[4881]: I0126 12:41:53.718041 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cn88g\" (UniqueName: \"kubernetes.io/projected/7c91a464-e748-4f02-9aab-d89a0076cb8d-kube-api-access-cn88g\") on node \"crc\" DevicePath \"\"" Jan 26 12:41:53 crc kubenswrapper[4881]: I0126 12:41:53.718076 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c91a464-e748-4f02-9aab-d89a0076cb8d-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 12:41:53 crc kubenswrapper[4881]: I0126 12:41:53.725594 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-jqg2h" podStartSLOduration=2.725578484 podStartE2EDuration="2.725578484s" podCreationTimestamp="2026-01-26 12:41:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:41:53.724896648 +0000 UTC m=+386.204206684" watchObservedRunningTime="2026-01-26 12:41:53.725578484 +0000 UTC m=+386.204888510" Jan 26 12:41:53 crc kubenswrapper[4881]: I0126 12:41:53.736101 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qqnhh"] Jan 26 12:41:53 crc kubenswrapper[4881]: I0126 12:41:53.741259 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-qqnhh"] Jan 26 12:41:53 crc kubenswrapper[4881]: I0126 12:41:53.746805 4881 scope.go:117] "RemoveContainer" containerID="847c2e532ec469c50c6284d25eca2baf6ec027016fd6fb5307c6ab0a726e8148" Jan 26 12:41:53 crc kubenswrapper[4881]: I0126 12:41:53.765617 4881 scope.go:117] "RemoveContainer" containerID="7f89e18b34785aaa4cd9e92439bba920a78a35150750edb3d4b191eb2f32a6db" Jan 26 12:41:53 crc kubenswrapper[4881]: E0126 12:41:53.766146 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f89e18b34785aaa4cd9e92439bba920a78a35150750edb3d4b191eb2f32a6db\": container with ID starting with 7f89e18b34785aaa4cd9e92439bba920a78a35150750edb3d4b191eb2f32a6db not 
found: ID does not exist" containerID="7f89e18b34785aaa4cd9e92439bba920a78a35150750edb3d4b191eb2f32a6db" Jan 26 12:41:53 crc kubenswrapper[4881]: I0126 12:41:53.766198 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f89e18b34785aaa4cd9e92439bba920a78a35150750edb3d4b191eb2f32a6db"} err="failed to get container status \"7f89e18b34785aaa4cd9e92439bba920a78a35150750edb3d4b191eb2f32a6db\": rpc error: code = NotFound desc = could not find container \"7f89e18b34785aaa4cd9e92439bba920a78a35150750edb3d4b191eb2f32a6db\": container with ID starting with 7f89e18b34785aaa4cd9e92439bba920a78a35150750edb3d4b191eb2f32a6db not found: ID does not exist" Jan 26 12:41:53 crc kubenswrapper[4881]: I0126 12:41:53.766227 4881 scope.go:117] "RemoveContainer" containerID="858b910057ca7579e3306268f2d7e13317388691d41de098eafe46f7a9f42728" Jan 26 12:41:53 crc kubenswrapper[4881]: E0126 12:41:53.766704 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"858b910057ca7579e3306268f2d7e13317388691d41de098eafe46f7a9f42728\": container with ID starting with 858b910057ca7579e3306268f2d7e13317388691d41de098eafe46f7a9f42728 not found: ID does not exist" containerID="858b910057ca7579e3306268f2d7e13317388691d41de098eafe46f7a9f42728" Jan 26 12:41:53 crc kubenswrapper[4881]: I0126 12:41:53.766736 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"858b910057ca7579e3306268f2d7e13317388691d41de098eafe46f7a9f42728"} err="failed to get container status \"858b910057ca7579e3306268f2d7e13317388691d41de098eafe46f7a9f42728\": rpc error: code = NotFound desc = could not find container \"858b910057ca7579e3306268f2d7e13317388691d41de098eafe46f7a9f42728\": container with ID starting with 858b910057ca7579e3306268f2d7e13317388691d41de098eafe46f7a9f42728 not found: ID does not exist" Jan 26 12:41:53 crc kubenswrapper[4881]: I0126 12:41:53.766760 4881 scope.go:117] "RemoveContainer" containerID="847c2e532ec469c50c6284d25eca2baf6ec027016fd6fb5307c6ab0a726e8148" Jan 26 12:41:53 crc kubenswrapper[4881]: E0126 12:41:53.766977 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"847c2e532ec469c50c6284d25eca2baf6ec027016fd6fb5307c6ab0a726e8148\": container with ID starting with 847c2e532ec469c50c6284d25eca2baf6ec027016fd6fb5307c6ab0a726e8148 not found: ID does not exist" containerID="847c2e532ec469c50c6284d25eca2baf6ec027016fd6fb5307c6ab0a726e8148" Jan 26 12:41:53 crc kubenswrapper[4881]: I0126 12:41:53.767014 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"847c2e532ec469c50c6284d25eca2baf6ec027016fd6fb5307c6ab0a726e8148"} err="failed to get container status \"847c2e532ec469c50c6284d25eca2baf6ec027016fd6fb5307c6ab0a726e8148\": rpc error: code = NotFound desc = could not find container \"847c2e532ec469c50c6284d25eca2baf6ec027016fd6fb5307c6ab0a726e8148\": container with ID starting with 847c2e532ec469c50c6284d25eca2baf6ec027016fd6fb5307c6ab0a726e8148 not found: ID does not exist" Jan 26 12:41:53 crc kubenswrapper[4881]: I0126 12:41:53.767034 4881 scope.go:117] "RemoveContainer" containerID="797ceacc17d3fa9be268b289489ef1b7885988ebf2ff2af55deecc500227f030" Jan 26 12:41:53 crc kubenswrapper[4881]: I0126 12:41:53.767298 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/7c91a464-e748-4f02-9aab-d89a0076cb8d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7c91a464-e748-4f02-9aab-d89a0076cb8d" (UID: "7c91a464-e748-4f02-9aab-d89a0076cb8d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 12:41:53 crc kubenswrapper[4881]: I0126 12:41:53.778861 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lxw6m" Jan 26 12:41:53 crc kubenswrapper[4881]: I0126 12:41:53.781381 4881 scope.go:117] "RemoveContainer" containerID="ae71a9b09c044d22cfcd9cb9e96bedf25ee5b6f4fd6d915441ad713d0a158ed2" Jan 26 12:41:53 crc kubenswrapper[4881]: I0126 12:41:53.784577 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-drv9q" Jan 26 12:41:53 crc kubenswrapper[4881]: I0126 12:41:53.794291 4881 scope.go:117] "RemoveContainer" containerID="8a876ee18c128f1b9135115f0183a594bdacb4a6aefd9c3c441c54ce91b72daf" Jan 26 12:41:53 crc kubenswrapper[4881]: I0126 12:41:53.818668 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67ea8d33-d11e-420e-b566-8d0c2301ce94-utilities\") pod \"67ea8d33-d11e-420e-b566-8d0c2301ce94\" (UID: \"67ea8d33-d11e-420e-b566-8d0c2301ce94\") " Jan 26 12:41:53 crc kubenswrapper[4881]: I0126 12:41:53.818726 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7157505d-d18a-42a4-8037-96ad9a7825ce-marketplace-operator-metrics\") pod \"7157505d-d18a-42a4-8037-96ad9a7825ce\" (UID: \"7157505d-d18a-42a4-8037-96ad9a7825ce\") " Jan 26 12:41:53 crc kubenswrapper[4881]: I0126 12:41:53.818766 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67ea8d33-d11e-420e-b566-8d0c2301ce94-catalog-content\") pod \"67ea8d33-d11e-420e-b566-8d0c2301ce94\" (UID: \"67ea8d33-d11e-420e-b566-8d0c2301ce94\") " Jan 26 12:41:53 crc kubenswrapper[4881]: I0126 12:41:53.818812 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7157505d-d18a-42a4-8037-96ad9a7825ce-marketplace-trusted-ca\") pod \"7157505d-d18a-42a4-8037-96ad9a7825ce\" (UID: \"7157505d-d18a-42a4-8037-96ad9a7825ce\") " Jan 26 12:41:53 crc kubenswrapper[4881]: I0126 12:41:53.818839 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vbwfz\" (UniqueName: \"kubernetes.io/projected/67ea8d33-d11e-420e-b566-8d0c2301ce94-kube-api-access-vbwfz\") pod \"67ea8d33-d11e-420e-b566-8d0c2301ce94\" (UID: \"67ea8d33-d11e-420e-b566-8d0c2301ce94\") " Jan 26 12:41:53 crc kubenswrapper[4881]: I0126 12:41:53.818864 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sz2dn\" (UniqueName: \"kubernetes.io/projected/7157505d-d18a-42a4-8037-96ad9a7825ce-kube-api-access-sz2dn\") pod \"7157505d-d18a-42a4-8037-96ad9a7825ce\" (UID: \"7157505d-d18a-42a4-8037-96ad9a7825ce\") " Jan 26 12:41:53 crc kubenswrapper[4881]: I0126 12:41:53.819050 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c91a464-e748-4f02-9aab-d89a0076cb8d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 12:41:53 crc 
kubenswrapper[4881]: I0126 12:41:53.819631 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lxbt8" Jan 26 12:41:53 crc kubenswrapper[4881]: I0126 12:41:53.821069 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7157505d-d18a-42a4-8037-96ad9a7825ce-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "7157505d-d18a-42a4-8037-96ad9a7825ce" (UID: "7157505d-d18a-42a4-8037-96ad9a7825ce"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:41:53 crc kubenswrapper[4881]: I0126 12:41:53.821904 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7157505d-d18a-42a4-8037-96ad9a7825ce-kube-api-access-sz2dn" (OuterVolumeSpecName: "kube-api-access-sz2dn") pod "7157505d-d18a-42a4-8037-96ad9a7825ce" (UID: "7157505d-d18a-42a4-8037-96ad9a7825ce"). InnerVolumeSpecName "kube-api-access-sz2dn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:41:53 crc kubenswrapper[4881]: I0126 12:41:53.823325 4881 scope.go:117] "RemoveContainer" containerID="797ceacc17d3fa9be268b289489ef1b7885988ebf2ff2af55deecc500227f030" Jan 26 12:41:53 crc kubenswrapper[4881]: I0126 12:41:53.823724 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67ea8d33-d11e-420e-b566-8d0c2301ce94-kube-api-access-vbwfz" (OuterVolumeSpecName: "kube-api-access-vbwfz") pod "67ea8d33-d11e-420e-b566-8d0c2301ce94" (UID: "67ea8d33-d11e-420e-b566-8d0c2301ce94"). InnerVolumeSpecName "kube-api-access-vbwfz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:41:53 crc kubenswrapper[4881]: I0126 12:41:53.826673 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7157505d-d18a-42a4-8037-96ad9a7825ce-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "7157505d-d18a-42a4-8037-96ad9a7825ce" (UID: "7157505d-d18a-42a4-8037-96ad9a7825ce"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:41:53 crc kubenswrapper[4881]: E0126 12:41:53.827952 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"797ceacc17d3fa9be268b289489ef1b7885988ebf2ff2af55deecc500227f030\": container with ID starting with 797ceacc17d3fa9be268b289489ef1b7885988ebf2ff2af55deecc500227f030 not found: ID does not exist" containerID="797ceacc17d3fa9be268b289489ef1b7885988ebf2ff2af55deecc500227f030" Jan 26 12:41:53 crc kubenswrapper[4881]: I0126 12:41:53.828327 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"797ceacc17d3fa9be268b289489ef1b7885988ebf2ff2af55deecc500227f030"} err="failed to get container status \"797ceacc17d3fa9be268b289489ef1b7885988ebf2ff2af55deecc500227f030\": rpc error: code = NotFound desc = could not find container \"797ceacc17d3fa9be268b289489ef1b7885988ebf2ff2af55deecc500227f030\": container with ID starting with 797ceacc17d3fa9be268b289489ef1b7885988ebf2ff2af55deecc500227f030 not found: ID does not exist" Jan 26 12:41:53 crc kubenswrapper[4881]: I0126 12:41:53.828770 4881 scope.go:117] "RemoveContainer" containerID="ae71a9b09c044d22cfcd9cb9e96bedf25ee5b6f4fd6d915441ad713d0a158ed2" Jan 26 12:41:53 crc kubenswrapper[4881]: I0126 12:41:53.830803 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67ea8d33-d11e-420e-b566-8d0c2301ce94-utilities" (OuterVolumeSpecName: "utilities") pod "67ea8d33-d11e-420e-b566-8d0c2301ce94" (UID: "67ea8d33-d11e-420e-b566-8d0c2301ce94"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 12:41:53 crc kubenswrapper[4881]: E0126 12:41:53.834895 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae71a9b09c044d22cfcd9cb9e96bedf25ee5b6f4fd6d915441ad713d0a158ed2\": container with ID starting with ae71a9b09c044d22cfcd9cb9e96bedf25ee5b6f4fd6d915441ad713d0a158ed2 not found: ID does not exist" containerID="ae71a9b09c044d22cfcd9cb9e96bedf25ee5b6f4fd6d915441ad713d0a158ed2" Jan 26 12:41:53 crc kubenswrapper[4881]: I0126 12:41:53.834969 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae71a9b09c044d22cfcd9cb9e96bedf25ee5b6f4fd6d915441ad713d0a158ed2"} err="failed to get container status \"ae71a9b09c044d22cfcd9cb9e96bedf25ee5b6f4fd6d915441ad713d0a158ed2\": rpc error: code = NotFound desc = could not find container \"ae71a9b09c044d22cfcd9cb9e96bedf25ee5b6f4fd6d915441ad713d0a158ed2\": container with ID starting with ae71a9b09c044d22cfcd9cb9e96bedf25ee5b6f4fd6d915441ad713d0a158ed2 not found: ID does not exist" Jan 26 12:41:53 crc kubenswrapper[4881]: I0126 12:41:53.835006 4881 scope.go:117] "RemoveContainer" containerID="8a876ee18c128f1b9135115f0183a594bdacb4a6aefd9c3c441c54ce91b72daf" Jan 26 12:41:53 crc kubenswrapper[4881]: E0126 12:41:53.837044 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a876ee18c128f1b9135115f0183a594bdacb4a6aefd9c3c441c54ce91b72daf\": container with ID starting with 8a876ee18c128f1b9135115f0183a594bdacb4a6aefd9c3c441c54ce91b72daf not found: ID does not exist" containerID="8a876ee18c128f1b9135115f0183a594bdacb4a6aefd9c3c441c54ce91b72daf" Jan 26 12:41:53 crc kubenswrapper[4881]: I0126 12:41:53.837078 4881 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"8a876ee18c128f1b9135115f0183a594bdacb4a6aefd9c3c441c54ce91b72daf"} err="failed to get container status \"8a876ee18c128f1b9135115f0183a594bdacb4a6aefd9c3c441c54ce91b72daf\": rpc error: code = NotFound desc = could not find container \"8a876ee18c128f1b9135115f0183a594bdacb4a6aefd9c3c441c54ce91b72daf\": container with ID starting with 8a876ee18c128f1b9135115f0183a594bdacb4a6aefd9c3c441c54ce91b72daf not found: ID does not exist" Jan 26 12:41:53 crc kubenswrapper[4881]: I0126 12:41:53.849354 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67ea8d33-d11e-420e-b566-8d0c2301ce94-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "67ea8d33-d11e-420e-b566-8d0c2301ce94" (UID: "67ea8d33-d11e-420e-b566-8d0c2301ce94"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 12:41:53 crc kubenswrapper[4881]: I0126 12:41:53.919952 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67ea8d33-d11e-420e-b566-8d0c2301ce94-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 12:41:53 crc kubenswrapper[4881]: I0126 12:41:53.919986 4881 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7157505d-d18a-42a4-8037-96ad9a7825ce-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 26 12:41:53 crc kubenswrapper[4881]: I0126 12:41:53.919997 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67ea8d33-d11e-420e-b566-8d0c2301ce94-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 12:41:53 crc kubenswrapper[4881]: I0126 12:41:53.920006 4881 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7157505d-d18a-42a4-8037-96ad9a7825ce-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 12:41:53 crc kubenswrapper[4881]: I0126 12:41:53.920014 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vbwfz\" (UniqueName: \"kubernetes.io/projected/67ea8d33-d11e-420e-b566-8d0c2301ce94-kube-api-access-vbwfz\") on node \"crc\" DevicePath \"\"" Jan 26 12:41:53 crc kubenswrapper[4881]: I0126 12:41:53.920023 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sz2dn\" (UniqueName: \"kubernetes.io/projected/7157505d-d18a-42a4-8037-96ad9a7825ce-kube-api-access-sz2dn\") on node \"crc\" DevicePath \"\"" Jan 26 12:41:53 crc kubenswrapper[4881]: I0126 12:41:53.991702 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-v4z92"] Jan 26 12:41:53 crc kubenswrapper[4881]: I0126 12:41:53.995322 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-v4z92"] Jan 26 12:41:54 crc kubenswrapper[4881]: I0126 12:41:54.021211 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19d4e6cf-8b9f-45ce-b93a-af4e9957b93e-utilities\") pod \"19d4e6cf-8b9f-45ce-b93a-af4e9957b93e\" (UID: \"19d4e6cf-8b9f-45ce-b93a-af4e9957b93e\") " Jan 26 12:41:54 crc kubenswrapper[4881]: I0126 12:41:54.021347 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hknp5\" (UniqueName: \"kubernetes.io/projected/19d4e6cf-8b9f-45ce-b93a-af4e9957b93e-kube-api-access-hknp5\") pod 
\"19d4e6cf-8b9f-45ce-b93a-af4e9957b93e\" (UID: \"19d4e6cf-8b9f-45ce-b93a-af4e9957b93e\") " Jan 26 12:41:54 crc kubenswrapper[4881]: I0126 12:41:54.021392 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19d4e6cf-8b9f-45ce-b93a-af4e9957b93e-catalog-content\") pod \"19d4e6cf-8b9f-45ce-b93a-af4e9957b93e\" (UID: \"19d4e6cf-8b9f-45ce-b93a-af4e9957b93e\") " Jan 26 12:41:54 crc kubenswrapper[4881]: I0126 12:41:54.022810 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/19d4e6cf-8b9f-45ce-b93a-af4e9957b93e-utilities" (OuterVolumeSpecName: "utilities") pod "19d4e6cf-8b9f-45ce-b93a-af4e9957b93e" (UID: "19d4e6cf-8b9f-45ce-b93a-af4e9957b93e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 12:41:54 crc kubenswrapper[4881]: I0126 12:41:54.025659 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19d4e6cf-8b9f-45ce-b93a-af4e9957b93e-kube-api-access-hknp5" (OuterVolumeSpecName: "kube-api-access-hknp5") pod "19d4e6cf-8b9f-45ce-b93a-af4e9957b93e" (UID: "19d4e6cf-8b9f-45ce-b93a-af4e9957b93e"). InnerVolumeSpecName "kube-api-access-hknp5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:41:54 crc kubenswrapper[4881]: I0126 12:41:54.078383 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/19d4e6cf-8b9f-45ce-b93a-af4e9957b93e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "19d4e6cf-8b9f-45ce-b93a-af4e9957b93e" (UID: "19d4e6cf-8b9f-45ce-b93a-af4e9957b93e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 12:41:54 crc kubenswrapper[4881]: I0126 12:41:54.087841 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c91a464-e748-4f02-9aab-d89a0076cb8d" path="/var/lib/kubelet/pods/7c91a464-e748-4f02-9aab-d89a0076cb8d/volumes" Jan 26 12:41:54 crc kubenswrapper[4881]: I0126 12:41:54.088856 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85dcb696-76f6-47f5-aaef-12b0ebc2d8c1" path="/var/lib/kubelet/pods/85dcb696-76f6-47f5-aaef-12b0ebc2d8c1/volumes" Jan 26 12:41:54 crc kubenswrapper[4881]: I0126 12:41:54.123140 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hknp5\" (UniqueName: \"kubernetes.io/projected/19d4e6cf-8b9f-45ce-b93a-af4e9957b93e-kube-api-access-hknp5\") on node \"crc\" DevicePath \"\"" Jan 26 12:41:54 crc kubenswrapper[4881]: I0126 12:41:54.123619 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19d4e6cf-8b9f-45ce-b93a-af4e9957b93e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 12:41:54 crc kubenswrapper[4881]: I0126 12:41:54.123657 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19d4e6cf-8b9f-45ce-b93a-af4e9957b93e-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 12:41:54 crc kubenswrapper[4881]: I0126 12:41:54.702813 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-drv9q" event={"ID":"7157505d-d18a-42a4-8037-96ad9a7825ce","Type":"ContainerDied","Data":"729dd497508ee615581b3a039285e39d222671d48a4b30a28f9c71feb7f79f3c"} Jan 26 12:41:54 crc kubenswrapper[4881]: I0126 12:41:54.702834 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-drv9q" Jan 26 12:41:54 crc kubenswrapper[4881]: I0126 12:41:54.703159 4881 scope.go:117] "RemoveContainer" containerID="02ecd3da27e858bf8b74970c57d8876aca3a0ab2dd7be59babeaeafbe580b60e" Jan 26 12:41:54 crc kubenswrapper[4881]: I0126 12:41:54.709848 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-ghn75" event={"ID":"c3ce8c88-e7f5-461d-ad61-e035c0ca7631","Type":"ContainerStarted","Data":"6f89e2288135ac19cd1b2aa18f39970c8f2c3cc7026c0658740f14e22e70a0bd"} Jan 26 12:41:54 crc kubenswrapper[4881]: I0126 12:41:54.713358 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lxbt8" event={"ID":"19d4e6cf-8b9f-45ce-b93a-af4e9957b93e","Type":"ContainerDied","Data":"1f9bc593b7bd37b48c2c758b54edcb987d11e7a7888c834e60f56fbd51586664"} Jan 26 12:41:54 crc kubenswrapper[4881]: I0126 12:41:54.713385 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lxbt8" Jan 26 12:41:54 crc kubenswrapper[4881]: I0126 12:41:54.718407 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lxw6m" event={"ID":"67ea8d33-d11e-420e-b566-8d0c2301ce94","Type":"ContainerDied","Data":"8883025e3ed785dfbdd7bfa72c310e8452ce1cd4814e9c6051ffe82e2761e988"} Jan 26 12:41:54 crc kubenswrapper[4881]: I0126 12:41:54.718554 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lxw6m" Jan 26 12:41:54 crc kubenswrapper[4881]: I0126 12:41:54.731786 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-drv9q"] Jan 26 12:41:54 crc kubenswrapper[4881]: I0126 12:41:54.738673 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-drv9q"] Jan 26 12:41:54 crc kubenswrapper[4881]: I0126 12:41:54.742587 4881 scope.go:117] "RemoveContainer" containerID="fba0257fb386aa86437e20a8617b6d222e147ae2445183dd162813c143ccb68d" Jan 26 12:41:54 crc kubenswrapper[4881]: I0126 12:41:54.754755 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lxw6m"] Jan 26 12:41:54 crc kubenswrapper[4881]: I0126 12:41:54.761667 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-lxw6m"] Jan 26 12:41:54 crc kubenswrapper[4881]: I0126 12:41:54.765324 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lxbt8"] Jan 26 12:41:54 crc kubenswrapper[4881]: I0126 12:41:54.768652 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-lxbt8"] Jan 26 12:41:54 crc kubenswrapper[4881]: I0126 12:41:54.770812 4881 scope.go:117] "RemoveContainer" containerID="a2e9bce8e2fd2eca3bc0284df3a79d36a93851e5d2519d0f85cc8854783de2de" Jan 26 12:41:54 crc kubenswrapper[4881]: I0126 12:41:54.786710 4881 scope.go:117] "RemoveContainer" containerID="2e0fe96a4a93d74438f783b72424fe8f2a774d62290fec43775a79bf01808033" Jan 26 12:41:54 crc kubenswrapper[4881]: I0126 12:41:54.789801 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fwlbz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: 
connect: connection refused" start-of-body= Jan 26 12:41:54 crc kubenswrapper[4881]: I0126 12:41:54.789873 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 12:41:54 crc kubenswrapper[4881]: I0126 12:41:54.815975 4881 scope.go:117] "RemoveContainer" containerID="443fd1e7d97536e4e7c7243442f873138001b1b918d9ea75104056d92d0f60e3" Jan 26 12:41:54 crc kubenswrapper[4881]: I0126 12:41:54.838031 4881 scope.go:117] "RemoveContainer" containerID="335d5c0c2174e12f622f9dadd2f95c6e8141ab38b7865174b63c3eacc663d8dd" Jan 26 12:41:54 crc kubenswrapper[4881]: I0126 12:41:54.852249 4881 scope.go:117] "RemoveContainer" containerID="f94d3a2ed73a1c5ed3e1660606b5ebdae83d64d417ef051b560e400107e1c7db" Jan 26 12:41:55 crc kubenswrapper[4881]: I0126 12:41:55.669729 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-j57t5"] Jan 26 12:41:55 crc kubenswrapper[4881]: E0126 12:41:55.669988 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19d4e6cf-8b9f-45ce-b93a-af4e9957b93e" containerName="extract-utilities" Jan 26 12:41:55 crc kubenswrapper[4881]: I0126 12:41:55.670021 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="19d4e6cf-8b9f-45ce-b93a-af4e9957b93e" containerName="extract-utilities" Jan 26 12:41:55 crc kubenswrapper[4881]: E0126 12:41:55.670041 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7157505d-d18a-42a4-8037-96ad9a7825ce" containerName="marketplace-operator" Jan 26 12:41:55 crc kubenswrapper[4881]: I0126 12:41:55.670054 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="7157505d-d18a-42a4-8037-96ad9a7825ce" containerName="marketplace-operator" Jan 26 12:41:55 crc kubenswrapper[4881]: E0126 12:41:55.670072 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19d4e6cf-8b9f-45ce-b93a-af4e9957b93e" containerName="registry-server" Jan 26 12:41:55 crc kubenswrapper[4881]: I0126 12:41:55.670085 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="19d4e6cf-8b9f-45ce-b93a-af4e9957b93e" containerName="registry-server" Jan 26 12:41:55 crc kubenswrapper[4881]: E0126 12:41:55.670108 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c91a464-e748-4f02-9aab-d89a0076cb8d" containerName="extract-utilities" Jan 26 12:41:55 crc kubenswrapper[4881]: I0126 12:41:55.670123 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c91a464-e748-4f02-9aab-d89a0076cb8d" containerName="extract-utilities" Jan 26 12:41:55 crc kubenswrapper[4881]: E0126 12:41:55.670145 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c91a464-e748-4f02-9aab-d89a0076cb8d" containerName="registry-server" Jan 26 12:41:55 crc kubenswrapper[4881]: I0126 12:41:55.670157 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c91a464-e748-4f02-9aab-d89a0076cb8d" containerName="registry-server" Jan 26 12:41:55 crc kubenswrapper[4881]: E0126 12:41:55.670179 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67ea8d33-d11e-420e-b566-8d0c2301ce94" containerName="extract-content" Jan 26 12:41:55 crc kubenswrapper[4881]: I0126 12:41:55.670191 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="67ea8d33-d11e-420e-b566-8d0c2301ce94" containerName="extract-content" Jan 26 
12:41:55 crc kubenswrapper[4881]: E0126 12:41:55.670208 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85dcb696-76f6-47f5-aaef-12b0ebc2d8c1" containerName="registry-server" Jan 26 12:41:55 crc kubenswrapper[4881]: I0126 12:41:55.670219 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="85dcb696-76f6-47f5-aaef-12b0ebc2d8c1" containerName="registry-server" Jan 26 12:41:55 crc kubenswrapper[4881]: E0126 12:41:55.670238 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c91a464-e748-4f02-9aab-d89a0076cb8d" containerName="extract-content" Jan 26 12:41:55 crc kubenswrapper[4881]: I0126 12:41:55.670249 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c91a464-e748-4f02-9aab-d89a0076cb8d" containerName="extract-content" Jan 26 12:41:55 crc kubenswrapper[4881]: E0126 12:41:55.670269 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85dcb696-76f6-47f5-aaef-12b0ebc2d8c1" containerName="extract-content" Jan 26 12:41:55 crc kubenswrapper[4881]: I0126 12:41:55.670280 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="85dcb696-76f6-47f5-aaef-12b0ebc2d8c1" containerName="extract-content" Jan 26 12:41:55 crc kubenswrapper[4881]: E0126 12:41:55.670297 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19d4e6cf-8b9f-45ce-b93a-af4e9957b93e" containerName="extract-content" Jan 26 12:41:55 crc kubenswrapper[4881]: I0126 12:41:55.670309 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="19d4e6cf-8b9f-45ce-b93a-af4e9957b93e" containerName="extract-content" Jan 26 12:41:55 crc kubenswrapper[4881]: E0126 12:41:55.670329 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67ea8d33-d11e-420e-b566-8d0c2301ce94" containerName="registry-server" Jan 26 12:41:55 crc kubenswrapper[4881]: I0126 12:41:55.670342 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="67ea8d33-d11e-420e-b566-8d0c2301ce94" containerName="registry-server" Jan 26 12:41:55 crc kubenswrapper[4881]: E0126 12:41:55.670355 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85dcb696-76f6-47f5-aaef-12b0ebc2d8c1" containerName="extract-utilities" Jan 26 12:41:55 crc kubenswrapper[4881]: I0126 12:41:55.670366 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="85dcb696-76f6-47f5-aaef-12b0ebc2d8c1" containerName="extract-utilities" Jan 26 12:41:55 crc kubenswrapper[4881]: E0126 12:41:55.670382 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67ea8d33-d11e-420e-b566-8d0c2301ce94" containerName="extract-utilities" Jan 26 12:41:55 crc kubenswrapper[4881]: I0126 12:41:55.670394 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="67ea8d33-d11e-420e-b566-8d0c2301ce94" containerName="extract-utilities" Jan 26 12:41:55 crc kubenswrapper[4881]: I0126 12:41:55.670596 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c91a464-e748-4f02-9aab-d89a0076cb8d" containerName="registry-server" Jan 26 12:41:55 crc kubenswrapper[4881]: I0126 12:41:55.670617 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="19d4e6cf-8b9f-45ce-b93a-af4e9957b93e" containerName="registry-server" Jan 26 12:41:55 crc kubenswrapper[4881]: I0126 12:41:55.670646 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="67ea8d33-d11e-420e-b566-8d0c2301ce94" containerName="registry-server" Jan 26 12:41:55 crc kubenswrapper[4881]: I0126 12:41:55.670663 4881 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="85dcb696-76f6-47f5-aaef-12b0ebc2d8c1" containerName="registry-server" Jan 26 12:41:55 crc kubenswrapper[4881]: I0126 12:41:55.670686 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="7157505d-d18a-42a4-8037-96ad9a7825ce" containerName="marketplace-operator" Jan 26 12:41:55 crc kubenswrapper[4881]: I0126 12:41:55.672001 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-j57t5" Jan 26 12:41:55 crc kubenswrapper[4881]: I0126 12:41:55.674341 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 26 12:41:55 crc kubenswrapper[4881]: I0126 12:41:55.693114 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-j57t5"] Jan 26 12:41:55 crc kubenswrapper[4881]: I0126 12:41:55.850381 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b7492c80-8cb7-4b48-95c7-ecec74b07dc3-catalog-content\") pod \"certified-operators-j57t5\" (UID: \"b7492c80-8cb7-4b48-95c7-ecec74b07dc3\") " pod="openshift-marketplace/certified-operators-j57t5" Jan 26 12:41:55 crc kubenswrapper[4881]: I0126 12:41:55.850435 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwzq2\" (UniqueName: \"kubernetes.io/projected/b7492c80-8cb7-4b48-95c7-ecec74b07dc3-kube-api-access-hwzq2\") pod \"certified-operators-j57t5\" (UID: \"b7492c80-8cb7-4b48-95c7-ecec74b07dc3\") " pod="openshift-marketplace/certified-operators-j57t5" Jan 26 12:41:55 crc kubenswrapper[4881]: I0126 12:41:55.850509 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7492c80-8cb7-4b48-95c7-ecec74b07dc3-utilities\") pod \"certified-operators-j57t5\" (UID: \"b7492c80-8cb7-4b48-95c7-ecec74b07dc3\") " pod="openshift-marketplace/certified-operators-j57t5" Jan 26 12:41:55 crc kubenswrapper[4881]: I0126 12:41:55.952031 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7492c80-8cb7-4b48-95c7-ecec74b07dc3-utilities\") pod \"certified-operators-j57t5\" (UID: \"b7492c80-8cb7-4b48-95c7-ecec74b07dc3\") " pod="openshift-marketplace/certified-operators-j57t5" Jan 26 12:41:55 crc kubenswrapper[4881]: I0126 12:41:55.952393 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b7492c80-8cb7-4b48-95c7-ecec74b07dc3-catalog-content\") pod \"certified-operators-j57t5\" (UID: \"b7492c80-8cb7-4b48-95c7-ecec74b07dc3\") " pod="openshift-marketplace/certified-operators-j57t5" Jan 26 12:41:55 crc kubenswrapper[4881]: I0126 12:41:55.952528 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hwzq2\" (UniqueName: \"kubernetes.io/projected/b7492c80-8cb7-4b48-95c7-ecec74b07dc3-kube-api-access-hwzq2\") pod \"certified-operators-j57t5\" (UID: \"b7492c80-8cb7-4b48-95c7-ecec74b07dc3\") " pod="openshift-marketplace/certified-operators-j57t5" Jan 26 12:41:55 crc kubenswrapper[4881]: I0126 12:41:55.952906 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b7492c80-8cb7-4b48-95c7-ecec74b07dc3-catalog-content\") pod 
\"certified-operators-j57t5\" (UID: \"b7492c80-8cb7-4b48-95c7-ecec74b07dc3\") " pod="openshift-marketplace/certified-operators-j57t5" Jan 26 12:41:55 crc kubenswrapper[4881]: I0126 12:41:55.952924 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7492c80-8cb7-4b48-95c7-ecec74b07dc3-utilities\") pod \"certified-operators-j57t5\" (UID: \"b7492c80-8cb7-4b48-95c7-ecec74b07dc3\") " pod="openshift-marketplace/certified-operators-j57t5" Jan 26 12:41:55 crc kubenswrapper[4881]: I0126 12:41:55.977158 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hwzq2\" (UniqueName: \"kubernetes.io/projected/b7492c80-8cb7-4b48-95c7-ecec74b07dc3-kube-api-access-hwzq2\") pod \"certified-operators-j57t5\" (UID: \"b7492c80-8cb7-4b48-95c7-ecec74b07dc3\") " pod="openshift-marketplace/certified-operators-j57t5" Jan 26 12:41:55 crc kubenswrapper[4881]: I0126 12:41:55.996423 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-j57t5" Jan 26 12:41:56 crc kubenswrapper[4881]: I0126 12:41:56.091398 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19d4e6cf-8b9f-45ce-b93a-af4e9957b93e" path="/var/lib/kubelet/pods/19d4e6cf-8b9f-45ce-b93a-af4e9957b93e/volumes" Jan 26 12:41:56 crc kubenswrapper[4881]: I0126 12:41:56.092229 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67ea8d33-d11e-420e-b566-8d0c2301ce94" path="/var/lib/kubelet/pods/67ea8d33-d11e-420e-b566-8d0c2301ce94/volumes" Jan 26 12:41:56 crc kubenswrapper[4881]: I0126 12:41:56.093054 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7157505d-d18a-42a4-8037-96ad9a7825ce" path="/var/lib/kubelet/pods/7157505d-d18a-42a4-8037-96ad9a7825ce/volumes" Jan 26 12:41:56 crc kubenswrapper[4881]: I0126 12:41:56.265175 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-rghg7"] Jan 26 12:41:56 crc kubenswrapper[4881]: I0126 12:41:56.266912 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rghg7" Jan 26 12:41:56 crc kubenswrapper[4881]: I0126 12:41:56.269991 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 26 12:41:56 crc kubenswrapper[4881]: I0126 12:41:56.287977 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rghg7"] Jan 26 12:41:56 crc kubenswrapper[4881]: I0126 12:41:56.460207 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66cbadc1-43e8-44b8-a92b-87c37e6f895f-utilities\") pod \"redhat-operators-rghg7\" (UID: \"66cbadc1-43e8-44b8-a92b-87c37e6f895f\") " pod="openshift-marketplace/redhat-operators-rghg7" Jan 26 12:41:56 crc kubenswrapper[4881]: I0126 12:41:56.460245 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66cbadc1-43e8-44b8-a92b-87c37e6f895f-catalog-content\") pod \"redhat-operators-rghg7\" (UID: \"66cbadc1-43e8-44b8-a92b-87c37e6f895f\") " pod="openshift-marketplace/redhat-operators-rghg7" Jan 26 12:41:56 crc kubenswrapper[4881]: I0126 12:41:56.460277 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6qvt\" (UniqueName: \"kubernetes.io/projected/66cbadc1-43e8-44b8-a92b-87c37e6f895f-kube-api-access-m6qvt\") pod \"redhat-operators-rghg7\" (UID: \"66cbadc1-43e8-44b8-a92b-87c37e6f895f\") " pod="openshift-marketplace/redhat-operators-rghg7" Jan 26 12:41:56 crc kubenswrapper[4881]: I0126 12:41:56.468239 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-j57t5"] Jan 26 12:41:56 crc kubenswrapper[4881]: W0126 12:41:56.478099 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb7492c80_8cb7_4b48_95c7_ecec74b07dc3.slice/crio-1da123535a68792ce4d0ca4145b23a9a591971745ef12a1eeca76399f976932f WatchSource:0}: Error finding container 1da123535a68792ce4d0ca4145b23a9a591971745ef12a1eeca76399f976932f: Status 404 returned error can't find the container with id 1da123535a68792ce4d0ca4145b23a9a591971745ef12a1eeca76399f976932f Jan 26 12:41:56 crc kubenswrapper[4881]: I0126 12:41:56.561407 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66cbadc1-43e8-44b8-a92b-87c37e6f895f-catalog-content\") pod \"redhat-operators-rghg7\" (UID: \"66cbadc1-43e8-44b8-a92b-87c37e6f895f\") " pod="openshift-marketplace/redhat-operators-rghg7" Jan 26 12:41:56 crc kubenswrapper[4881]: I0126 12:41:56.561767 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66cbadc1-43e8-44b8-a92b-87c37e6f895f-utilities\") pod \"redhat-operators-rghg7\" (UID: \"66cbadc1-43e8-44b8-a92b-87c37e6f895f\") " pod="openshift-marketplace/redhat-operators-rghg7" Jan 26 12:41:56 crc kubenswrapper[4881]: I0126 12:41:56.561803 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6qvt\" (UniqueName: \"kubernetes.io/projected/66cbadc1-43e8-44b8-a92b-87c37e6f895f-kube-api-access-m6qvt\") pod \"redhat-operators-rghg7\" (UID: \"66cbadc1-43e8-44b8-a92b-87c37e6f895f\") " pod="openshift-marketplace/redhat-operators-rghg7" Jan 
26 12:41:56 crc kubenswrapper[4881]: I0126 12:41:56.561877 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66cbadc1-43e8-44b8-a92b-87c37e6f895f-catalog-content\") pod \"redhat-operators-rghg7\" (UID: \"66cbadc1-43e8-44b8-a92b-87c37e6f895f\") " pod="openshift-marketplace/redhat-operators-rghg7" Jan 26 12:41:56 crc kubenswrapper[4881]: I0126 12:41:56.562257 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66cbadc1-43e8-44b8-a92b-87c37e6f895f-utilities\") pod \"redhat-operators-rghg7\" (UID: \"66cbadc1-43e8-44b8-a92b-87c37e6f895f\") " pod="openshift-marketplace/redhat-operators-rghg7" Jan 26 12:41:56 crc kubenswrapper[4881]: I0126 12:41:56.583820 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6qvt\" (UniqueName: \"kubernetes.io/projected/66cbadc1-43e8-44b8-a92b-87c37e6f895f-kube-api-access-m6qvt\") pod \"redhat-operators-rghg7\" (UID: \"66cbadc1-43e8-44b8-a92b-87c37e6f895f\") " pod="openshift-marketplace/redhat-operators-rghg7" Jan 26 12:41:56 crc kubenswrapper[4881]: I0126 12:41:56.592701 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rghg7" Jan 26 12:41:56 crc kubenswrapper[4881]: I0126 12:41:56.735754 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j57t5" event={"ID":"b7492c80-8cb7-4b48-95c7-ecec74b07dc3","Type":"ContainerStarted","Data":"1da123535a68792ce4d0ca4145b23a9a591971745ef12a1eeca76399f976932f"} Jan 26 12:41:56 crc kubenswrapper[4881]: I0126 12:41:56.736001 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-ghn75" Jan 26 12:41:56 crc kubenswrapper[4881]: I0126 12:41:56.739041 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-ghn75" Jan 26 12:41:56 crc kubenswrapper[4881]: I0126 12:41:56.754814 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-ghn75" podStartSLOduration=4.7547909189999995 podStartE2EDuration="4.754790919s" podCreationTimestamp="2026-01-26 12:41:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:41:56.749808318 +0000 UTC m=+389.229118344" watchObservedRunningTime="2026-01-26 12:41:56.754790919 +0000 UTC m=+389.234100945" Jan 26 12:41:56 crc kubenswrapper[4881]: I0126 12:41:56.953657 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rghg7"] Jan 26 12:41:56 crc kubenswrapper[4881]: W0126 12:41:56.965788 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod66cbadc1_43e8_44b8_a92b_87c37e6f895f.slice/crio-7ddd6ed89c2044b5cf0d0fa8d838b3649d2869493eee29b9d50206df1d4cc4be WatchSource:0}: Error finding container 7ddd6ed89c2044b5cf0d0fa8d838b3649d2869493eee29b9d50206df1d4cc4be: Status 404 returned error can't find the container with id 7ddd6ed89c2044b5cf0d0fa8d838b3649d2869493eee29b9d50206df1d4cc4be Jan 26 12:41:57 crc kubenswrapper[4881]: I0126 12:41:57.744086 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rghg7" 
event={"ID":"66cbadc1-43e8-44b8-a92b-87c37e6f895f","Type":"ContainerStarted","Data":"a094b35d885531b2cbeae5d5810343b0a6633cb7c48393b6d7ff89f980e7b602"} Jan 26 12:41:57 crc kubenswrapper[4881]: I0126 12:41:57.744330 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rghg7" event={"ID":"66cbadc1-43e8-44b8-a92b-87c37e6f895f","Type":"ContainerStarted","Data":"7ddd6ed89c2044b5cf0d0fa8d838b3649d2869493eee29b9d50206df1d4cc4be"} Jan 26 12:41:57 crc kubenswrapper[4881]: I0126 12:41:57.745629 4881 generic.go:334] "Generic (PLEG): container finished" podID="b7492c80-8cb7-4b48-95c7-ecec74b07dc3" containerID="6e69a6be059b8b91a13796e9fe6895819a4eed29d8e8d54ec43a4bac84035cfc" exitCode=0 Jan 26 12:41:57 crc kubenswrapper[4881]: I0126 12:41:57.746211 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j57t5" event={"ID":"b7492c80-8cb7-4b48-95c7-ecec74b07dc3","Type":"ContainerDied","Data":"6e69a6be059b8b91a13796e9fe6895819a4eed29d8e8d54ec43a4bac84035cfc"} Jan 26 12:41:58 crc kubenswrapper[4881]: I0126 12:41:58.072672 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-892tg"] Jan 26 12:41:58 crc kubenswrapper[4881]: I0126 12:41:58.077120 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-892tg" Jan 26 12:41:58 crc kubenswrapper[4881]: I0126 12:41:58.086374 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 26 12:41:58 crc kubenswrapper[4881]: I0126 12:41:58.110394 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-892tg"] Jan 26 12:41:58 crc kubenswrapper[4881]: I0126 12:41:58.208398 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4k8s\" (UniqueName: \"kubernetes.io/projected/a5b1bc2d-d349-4de7-bcf8-52fc979a3ac2-kube-api-access-c4k8s\") pod \"community-operators-892tg\" (UID: \"a5b1bc2d-d349-4de7-bcf8-52fc979a3ac2\") " pod="openshift-marketplace/community-operators-892tg" Jan 26 12:41:58 crc kubenswrapper[4881]: I0126 12:41:58.209642 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5b1bc2d-d349-4de7-bcf8-52fc979a3ac2-catalog-content\") pod \"community-operators-892tg\" (UID: \"a5b1bc2d-d349-4de7-bcf8-52fc979a3ac2\") " pod="openshift-marketplace/community-operators-892tg" Jan 26 12:41:58 crc kubenswrapper[4881]: I0126 12:41:58.209769 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5b1bc2d-d349-4de7-bcf8-52fc979a3ac2-utilities\") pod \"community-operators-892tg\" (UID: \"a5b1bc2d-d349-4de7-bcf8-52fc979a3ac2\") " pod="openshift-marketplace/community-operators-892tg" Jan 26 12:41:58 crc kubenswrapper[4881]: I0126 12:41:58.310602 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c4k8s\" (UniqueName: \"kubernetes.io/projected/a5b1bc2d-d349-4de7-bcf8-52fc979a3ac2-kube-api-access-c4k8s\") pod \"community-operators-892tg\" (UID: \"a5b1bc2d-d349-4de7-bcf8-52fc979a3ac2\") " pod="openshift-marketplace/community-operators-892tg" Jan 26 12:41:58 crc kubenswrapper[4881]: I0126 12:41:58.310658 4881 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5b1bc2d-d349-4de7-bcf8-52fc979a3ac2-catalog-content\") pod \"community-operators-892tg\" (UID: \"a5b1bc2d-d349-4de7-bcf8-52fc979a3ac2\") " pod="openshift-marketplace/community-operators-892tg" Jan 26 12:41:58 crc kubenswrapper[4881]: I0126 12:41:58.310687 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5b1bc2d-d349-4de7-bcf8-52fc979a3ac2-utilities\") pod \"community-operators-892tg\" (UID: \"a5b1bc2d-d349-4de7-bcf8-52fc979a3ac2\") " pod="openshift-marketplace/community-operators-892tg" Jan 26 12:41:58 crc kubenswrapper[4881]: I0126 12:41:58.311096 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5b1bc2d-d349-4de7-bcf8-52fc979a3ac2-catalog-content\") pod \"community-operators-892tg\" (UID: \"a5b1bc2d-d349-4de7-bcf8-52fc979a3ac2\") " pod="openshift-marketplace/community-operators-892tg" Jan 26 12:41:58 crc kubenswrapper[4881]: I0126 12:41:58.311252 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5b1bc2d-d349-4de7-bcf8-52fc979a3ac2-utilities\") pod \"community-operators-892tg\" (UID: \"a5b1bc2d-d349-4de7-bcf8-52fc979a3ac2\") " pod="openshift-marketplace/community-operators-892tg" Jan 26 12:41:58 crc kubenswrapper[4881]: I0126 12:41:58.331611 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c4k8s\" (UniqueName: \"kubernetes.io/projected/a5b1bc2d-d349-4de7-bcf8-52fc979a3ac2-kube-api-access-c4k8s\") pod \"community-operators-892tg\" (UID: \"a5b1bc2d-d349-4de7-bcf8-52fc979a3ac2\") " pod="openshift-marketplace/community-operators-892tg" Jan 26 12:41:58 crc kubenswrapper[4881]: I0126 12:41:58.411199 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-892tg" Jan 26 12:41:58 crc kubenswrapper[4881]: I0126 12:41:58.671975 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-22x8z"] Jan 26 12:41:58 crc kubenswrapper[4881]: I0126 12:41:58.673081 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-22x8z" Jan 26 12:41:58 crc kubenswrapper[4881]: I0126 12:41:58.674889 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 26 12:41:58 crc kubenswrapper[4881]: I0126 12:41:58.678485 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-22x8z"] Jan 26 12:41:58 crc kubenswrapper[4881]: I0126 12:41:58.774957 4881 generic.go:334] "Generic (PLEG): container finished" podID="66cbadc1-43e8-44b8-a92b-87c37e6f895f" containerID="a094b35d885531b2cbeae5d5810343b0a6633cb7c48393b6d7ff89f980e7b602" exitCode=0 Jan 26 12:41:58 crc kubenswrapper[4881]: I0126 12:41:58.775075 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rghg7" event={"ID":"66cbadc1-43e8-44b8-a92b-87c37e6f895f","Type":"ContainerDied","Data":"a094b35d885531b2cbeae5d5810343b0a6633cb7c48393b6d7ff89f980e7b602"} Jan 26 12:41:58 crc kubenswrapper[4881]: I0126 12:41:58.815803 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de3087aa-1e19-49ef-8d77-17654472881a-utilities\") pod \"redhat-marketplace-22x8z\" (UID: \"de3087aa-1e19-49ef-8d77-17654472881a\") " pod="openshift-marketplace/redhat-marketplace-22x8z" Jan 26 12:41:58 crc kubenswrapper[4881]: I0126 12:41:58.816169 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xn4cx\" (UniqueName: \"kubernetes.io/projected/de3087aa-1e19-49ef-8d77-17654472881a-kube-api-access-xn4cx\") pod \"redhat-marketplace-22x8z\" (UID: \"de3087aa-1e19-49ef-8d77-17654472881a\") " pod="openshift-marketplace/redhat-marketplace-22x8z" Jan 26 12:41:58 crc kubenswrapper[4881]: I0126 12:41:58.816213 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de3087aa-1e19-49ef-8d77-17654472881a-catalog-content\") pod \"redhat-marketplace-22x8z\" (UID: \"de3087aa-1e19-49ef-8d77-17654472881a\") " pod="openshift-marketplace/redhat-marketplace-22x8z" Jan 26 12:41:58 crc kubenswrapper[4881]: I0126 12:41:58.832581 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-892tg"] Jan 26 12:41:58 crc kubenswrapper[4881]: W0126 12:41:58.841143 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda5b1bc2d_d349_4de7_bcf8_52fc979a3ac2.slice/crio-1ca40c88bfa883e6fc974403fc023f64dbf490598486cd434eedaa47b30557ee WatchSource:0}: Error finding container 1ca40c88bfa883e6fc974403fc023f64dbf490598486cd434eedaa47b30557ee: Status 404 returned error can't find the container with id 1ca40c88bfa883e6fc974403fc023f64dbf490598486cd434eedaa47b30557ee Jan 26 12:41:58 crc kubenswrapper[4881]: I0126 12:41:58.917475 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de3087aa-1e19-49ef-8d77-17654472881a-catalog-content\") pod \"redhat-marketplace-22x8z\" (UID: \"de3087aa-1e19-49ef-8d77-17654472881a\") " pod="openshift-marketplace/redhat-marketplace-22x8z" Jan 26 12:41:58 crc kubenswrapper[4881]: I0126 12:41:58.917683 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/de3087aa-1e19-49ef-8d77-17654472881a-utilities\") pod \"redhat-marketplace-22x8z\" (UID: \"de3087aa-1e19-49ef-8d77-17654472881a\") " pod="openshift-marketplace/redhat-marketplace-22x8z" Jan 26 12:41:58 crc kubenswrapper[4881]: I0126 12:41:58.917724 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xn4cx\" (UniqueName: \"kubernetes.io/projected/de3087aa-1e19-49ef-8d77-17654472881a-kube-api-access-xn4cx\") pod \"redhat-marketplace-22x8z\" (UID: \"de3087aa-1e19-49ef-8d77-17654472881a\") " pod="openshift-marketplace/redhat-marketplace-22x8z" Jan 26 12:41:58 crc kubenswrapper[4881]: I0126 12:41:58.918134 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de3087aa-1e19-49ef-8d77-17654472881a-catalog-content\") pod \"redhat-marketplace-22x8z\" (UID: \"de3087aa-1e19-49ef-8d77-17654472881a\") " pod="openshift-marketplace/redhat-marketplace-22x8z" Jan 26 12:41:58 crc kubenswrapper[4881]: I0126 12:41:58.918143 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de3087aa-1e19-49ef-8d77-17654472881a-utilities\") pod \"redhat-marketplace-22x8z\" (UID: \"de3087aa-1e19-49ef-8d77-17654472881a\") " pod="openshift-marketplace/redhat-marketplace-22x8z" Jan 26 12:41:58 crc kubenswrapper[4881]: I0126 12:41:58.942017 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xn4cx\" (UniqueName: \"kubernetes.io/projected/de3087aa-1e19-49ef-8d77-17654472881a-kube-api-access-xn4cx\") pod \"redhat-marketplace-22x8z\" (UID: \"de3087aa-1e19-49ef-8d77-17654472881a\") " pod="openshift-marketplace/redhat-marketplace-22x8z" Jan 26 12:41:58 crc kubenswrapper[4881]: I0126 12:41:58.991074 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-22x8z" Jan 26 12:41:59 crc kubenswrapper[4881]: I0126 12:41:59.477014 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-22x8z"] Jan 26 12:41:59 crc kubenswrapper[4881]: W0126 12:41:59.487128 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podde3087aa_1e19_49ef_8d77_17654472881a.slice/crio-284d4595dee5f7c5a7e2e87218c5b881e70889c14619f02f26972e729202ff1b WatchSource:0}: Error finding container 284d4595dee5f7c5a7e2e87218c5b881e70889c14619f02f26972e729202ff1b: Status 404 returned error can't find the container with id 284d4595dee5f7c5a7e2e87218c5b881e70889c14619f02f26972e729202ff1b Jan 26 12:41:59 crc kubenswrapper[4881]: I0126 12:41:59.781809 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-22x8z" event={"ID":"de3087aa-1e19-49ef-8d77-17654472881a","Type":"ContainerStarted","Data":"284d4595dee5f7c5a7e2e87218c5b881e70889c14619f02f26972e729202ff1b"} Jan 26 12:41:59 crc kubenswrapper[4881]: I0126 12:41:59.782720 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-892tg" event={"ID":"a5b1bc2d-d349-4de7-bcf8-52fc979a3ac2","Type":"ContainerStarted","Data":"1ca40c88bfa883e6fc974403fc023f64dbf490598486cd434eedaa47b30557ee"} Jan 26 12:42:00 crc kubenswrapper[4881]: I0126 12:42:00.790939 4881 generic.go:334] "Generic (PLEG): container finished" podID="de3087aa-1e19-49ef-8d77-17654472881a" containerID="b57e54a669b89c377954d79c342accbe054411d69a51cc86cdcc9ac820e1fec0" exitCode=0 Jan 26 12:42:00 crc kubenswrapper[4881]: I0126 12:42:00.791059 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-22x8z" event={"ID":"de3087aa-1e19-49ef-8d77-17654472881a","Type":"ContainerDied","Data":"b57e54a669b89c377954d79c342accbe054411d69a51cc86cdcc9ac820e1fec0"} Jan 26 12:42:00 crc kubenswrapper[4881]: I0126 12:42:00.793127 4881 generic.go:334] "Generic (PLEG): container finished" podID="a5b1bc2d-d349-4de7-bcf8-52fc979a3ac2" containerID="4e8ac6e21714cde45ee35bc9b5552c52e6a608e1cb73a6916b4bb2e245191ec7" exitCode=0 Jan 26 12:42:00 crc kubenswrapper[4881]: I0126 12:42:00.793192 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-892tg" event={"ID":"a5b1bc2d-d349-4de7-bcf8-52fc979a3ac2","Type":"ContainerDied","Data":"4e8ac6e21714cde45ee35bc9b5552c52e6a608e1cb73a6916b4bb2e245191ec7"} Jan 26 12:42:09 crc kubenswrapper[4881]: I0126 12:42:09.352818 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-dfb766cdf-7n5cs"] Jan 26 12:42:09 crc kubenswrapper[4881]: I0126 12:42:09.355139 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-dfb766cdf-7n5cs" podUID="f03ad41a-5524-406a-938b-a54266ebd7a0" containerName="route-controller-manager" containerID="cri-o://f204cf1c0debf2cfd5e70bef92895c772d77ad445998aaefe712318998dec6bf" gracePeriod=30 Jan 26 12:42:10 crc kubenswrapper[4881]: I0126 12:42:10.854874 4881 generic.go:334] "Generic (PLEG): container finished" podID="f03ad41a-5524-406a-938b-a54266ebd7a0" containerID="f204cf1c0debf2cfd5e70bef92895c772d77ad445998aaefe712318998dec6bf" exitCode=0 Jan 26 12:42:10 crc kubenswrapper[4881]: I0126 12:42:10.854973 4881 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-dfb766cdf-7n5cs" event={"ID":"f03ad41a-5524-406a-938b-a54266ebd7a0","Type":"ContainerDied","Data":"f204cf1c0debf2cfd5e70bef92895c772d77ad445998aaefe712318998dec6bf"} Jan 26 12:42:11 crc kubenswrapper[4881]: I0126 12:42:11.092439 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-dfb766cdf-7n5cs" Jan 26 12:42:11 crc kubenswrapper[4881]: I0126 12:42:11.127487 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f03ad41a-5524-406a-938b-a54266ebd7a0-config\") pod \"f03ad41a-5524-406a-938b-a54266ebd7a0\" (UID: \"f03ad41a-5524-406a-938b-a54266ebd7a0\") " Jan 26 12:42:11 crc kubenswrapper[4881]: I0126 12:42:11.127730 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f03ad41a-5524-406a-938b-a54266ebd7a0-client-ca\") pod \"f03ad41a-5524-406a-938b-a54266ebd7a0\" (UID: \"f03ad41a-5524-406a-938b-a54266ebd7a0\") " Jan 26 12:42:11 crc kubenswrapper[4881]: I0126 12:42:11.127793 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f03ad41a-5524-406a-938b-a54266ebd7a0-serving-cert\") pod \"f03ad41a-5524-406a-938b-a54266ebd7a0\" (UID: \"f03ad41a-5524-406a-938b-a54266ebd7a0\") " Jan 26 12:42:11 crc kubenswrapper[4881]: I0126 12:42:11.127835 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xtg25\" (UniqueName: \"kubernetes.io/projected/f03ad41a-5524-406a-938b-a54266ebd7a0-kube-api-access-xtg25\") pod \"f03ad41a-5524-406a-938b-a54266ebd7a0\" (UID: \"f03ad41a-5524-406a-938b-a54266ebd7a0\") " Jan 26 12:42:11 crc kubenswrapper[4881]: I0126 12:42:11.130000 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f03ad41a-5524-406a-938b-a54266ebd7a0-config" (OuterVolumeSpecName: "config") pod "f03ad41a-5524-406a-938b-a54266ebd7a0" (UID: "f03ad41a-5524-406a-938b-a54266ebd7a0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:42:11 crc kubenswrapper[4881]: I0126 12:42:11.130781 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f03ad41a-5524-406a-938b-a54266ebd7a0-client-ca" (OuterVolumeSpecName: "client-ca") pod "f03ad41a-5524-406a-938b-a54266ebd7a0" (UID: "f03ad41a-5524-406a-938b-a54266ebd7a0"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:42:11 crc kubenswrapper[4881]: I0126 12:42:11.131251 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7959bf8dcd-wvjvf"] Jan 26 12:42:11 crc kubenswrapper[4881]: E0126 12:42:11.131587 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f03ad41a-5524-406a-938b-a54266ebd7a0" containerName="route-controller-manager" Jan 26 12:42:11 crc kubenswrapper[4881]: I0126 12:42:11.131604 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="f03ad41a-5524-406a-938b-a54266ebd7a0" containerName="route-controller-manager" Jan 26 12:42:11 crc kubenswrapper[4881]: I0126 12:42:11.131736 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="f03ad41a-5524-406a-938b-a54266ebd7a0" containerName="route-controller-manager" Jan 26 12:42:11 crc kubenswrapper[4881]: I0126 12:42:11.132284 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7959bf8dcd-wvjvf" Jan 26 12:42:11 crc kubenswrapper[4881]: I0126 12:42:11.152969 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f03ad41a-5524-406a-938b-a54266ebd7a0-kube-api-access-xtg25" (OuterVolumeSpecName: "kube-api-access-xtg25") pod "f03ad41a-5524-406a-938b-a54266ebd7a0" (UID: "f03ad41a-5524-406a-938b-a54266ebd7a0"). InnerVolumeSpecName "kube-api-access-xtg25". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:42:11 crc kubenswrapper[4881]: I0126 12:42:11.153444 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f03ad41a-5524-406a-938b-a54266ebd7a0-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f03ad41a-5524-406a-938b-a54266ebd7a0" (UID: "f03ad41a-5524-406a-938b-a54266ebd7a0"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:42:11 crc kubenswrapper[4881]: I0126 12:42:11.162202 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7959bf8dcd-wvjvf"] Jan 26 12:42:11 crc kubenswrapper[4881]: I0126 12:42:11.229581 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0dfa5931-6fc9-4184-bd70-97158de811d7-client-ca\") pod \"route-controller-manager-7959bf8dcd-wvjvf\" (UID: \"0dfa5931-6fc9-4184-bd70-97158de811d7\") " pod="openshift-route-controller-manager/route-controller-manager-7959bf8dcd-wvjvf" Jan 26 12:42:11 crc kubenswrapper[4881]: I0126 12:42:11.229636 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0dfa5931-6fc9-4184-bd70-97158de811d7-config\") pod \"route-controller-manager-7959bf8dcd-wvjvf\" (UID: \"0dfa5931-6fc9-4184-bd70-97158de811d7\") " pod="openshift-route-controller-manager/route-controller-manager-7959bf8dcd-wvjvf" Jan 26 12:42:11 crc kubenswrapper[4881]: I0126 12:42:11.229718 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0dfa5931-6fc9-4184-bd70-97158de811d7-serving-cert\") pod \"route-controller-manager-7959bf8dcd-wvjvf\" (UID: \"0dfa5931-6fc9-4184-bd70-97158de811d7\") " pod="openshift-route-controller-manager/route-controller-manager-7959bf8dcd-wvjvf" Jan 26 12:42:11 crc kubenswrapper[4881]: I0126 12:42:11.229755 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62vkd\" (UniqueName: \"kubernetes.io/projected/0dfa5931-6fc9-4184-bd70-97158de811d7-kube-api-access-62vkd\") pod \"route-controller-manager-7959bf8dcd-wvjvf\" (UID: \"0dfa5931-6fc9-4184-bd70-97158de811d7\") " pod="openshift-route-controller-manager/route-controller-manager-7959bf8dcd-wvjvf" Jan 26 12:42:11 crc kubenswrapper[4881]: I0126 12:42:11.229823 4881 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f03ad41a-5524-406a-938b-a54266ebd7a0-client-ca\") on node \"crc\" DevicePath \"\"" Jan 26 12:42:11 crc kubenswrapper[4881]: I0126 12:42:11.229837 4881 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f03ad41a-5524-406a-938b-a54266ebd7a0-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 12:42:11 crc kubenswrapper[4881]: I0126 12:42:11.229848 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xtg25\" (UniqueName: \"kubernetes.io/projected/f03ad41a-5524-406a-938b-a54266ebd7a0-kube-api-access-xtg25\") on node \"crc\" DevicePath \"\"" Jan 26 12:42:11 crc kubenswrapper[4881]: I0126 12:42:11.229858 4881 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f03ad41a-5524-406a-938b-a54266ebd7a0-config\") on node \"crc\" DevicePath \"\"" Jan 26 12:42:11 crc kubenswrapper[4881]: I0126 12:42:11.331203 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0dfa5931-6fc9-4184-bd70-97158de811d7-client-ca\") pod \"route-controller-manager-7959bf8dcd-wvjvf\" (UID: \"0dfa5931-6fc9-4184-bd70-97158de811d7\") " pod="openshift-route-controller-manager/route-controller-manager-7959bf8dcd-wvjvf" 
Jan 26 12:42:11 crc kubenswrapper[4881]: I0126 12:42:11.331266 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0dfa5931-6fc9-4184-bd70-97158de811d7-config\") pod \"route-controller-manager-7959bf8dcd-wvjvf\" (UID: \"0dfa5931-6fc9-4184-bd70-97158de811d7\") " pod="openshift-route-controller-manager/route-controller-manager-7959bf8dcd-wvjvf" Jan 26 12:42:11 crc kubenswrapper[4881]: I0126 12:42:11.331331 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0dfa5931-6fc9-4184-bd70-97158de811d7-serving-cert\") pod \"route-controller-manager-7959bf8dcd-wvjvf\" (UID: \"0dfa5931-6fc9-4184-bd70-97158de811d7\") " pod="openshift-route-controller-manager/route-controller-manager-7959bf8dcd-wvjvf" Jan 26 12:42:11 crc kubenswrapper[4881]: I0126 12:42:11.331372 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62vkd\" (UniqueName: \"kubernetes.io/projected/0dfa5931-6fc9-4184-bd70-97158de811d7-kube-api-access-62vkd\") pod \"route-controller-manager-7959bf8dcd-wvjvf\" (UID: \"0dfa5931-6fc9-4184-bd70-97158de811d7\") " pod="openshift-route-controller-manager/route-controller-manager-7959bf8dcd-wvjvf" Jan 26 12:42:11 crc kubenswrapper[4881]: I0126 12:42:11.332832 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0dfa5931-6fc9-4184-bd70-97158de811d7-client-ca\") pod \"route-controller-manager-7959bf8dcd-wvjvf\" (UID: \"0dfa5931-6fc9-4184-bd70-97158de811d7\") " pod="openshift-route-controller-manager/route-controller-manager-7959bf8dcd-wvjvf" Jan 26 12:42:11 crc kubenswrapper[4881]: I0126 12:42:11.333948 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0dfa5931-6fc9-4184-bd70-97158de811d7-config\") pod \"route-controller-manager-7959bf8dcd-wvjvf\" (UID: \"0dfa5931-6fc9-4184-bd70-97158de811d7\") " pod="openshift-route-controller-manager/route-controller-manager-7959bf8dcd-wvjvf" Jan 26 12:42:11 crc kubenswrapper[4881]: I0126 12:42:11.336224 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0dfa5931-6fc9-4184-bd70-97158de811d7-serving-cert\") pod \"route-controller-manager-7959bf8dcd-wvjvf\" (UID: \"0dfa5931-6fc9-4184-bd70-97158de811d7\") " pod="openshift-route-controller-manager/route-controller-manager-7959bf8dcd-wvjvf" Jan 26 12:42:11 crc kubenswrapper[4881]: I0126 12:42:11.350538 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62vkd\" (UniqueName: \"kubernetes.io/projected/0dfa5931-6fc9-4184-bd70-97158de811d7-kube-api-access-62vkd\") pod \"route-controller-manager-7959bf8dcd-wvjvf\" (UID: \"0dfa5931-6fc9-4184-bd70-97158de811d7\") " pod="openshift-route-controller-manager/route-controller-manager-7959bf8dcd-wvjvf" Jan 26 12:42:11 crc kubenswrapper[4881]: I0126 12:42:11.400713 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-jqg2h" Jan 26 12:42:11 crc kubenswrapper[4881]: I0126 12:42:11.456359 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-dr6gf"] Jan 26 12:42:11 crc kubenswrapper[4881]: I0126 12:42:11.491930 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7959bf8dcd-wvjvf" Jan 26 12:42:11 crc kubenswrapper[4881]: I0126 12:42:11.862923 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-dfb766cdf-7n5cs" event={"ID":"f03ad41a-5524-406a-938b-a54266ebd7a0","Type":"ContainerDied","Data":"35c025e1d5decabf020eb331484420a3f9a0bb0c1c249a05a7528a26821218c9"} Jan 26 12:42:11 crc kubenswrapper[4881]: I0126 12:42:11.863231 4881 scope.go:117] "RemoveContainer" containerID="f204cf1c0debf2cfd5e70bef92895c772d77ad445998aaefe712318998dec6bf" Jan 26 12:42:11 crc kubenswrapper[4881]: I0126 12:42:11.863360 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-dfb766cdf-7n5cs" Jan 26 12:42:11 crc kubenswrapper[4881]: I0126 12:42:11.901445 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-dfb766cdf-7n5cs"] Jan 26 12:42:11 crc kubenswrapper[4881]: I0126 12:42:11.907704 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-dfb766cdf-7n5cs"] Jan 26 12:42:12 crc kubenswrapper[4881]: I0126 12:42:12.100009 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f03ad41a-5524-406a-938b-a54266ebd7a0" path="/var/lib/kubelet/pods/f03ad41a-5524-406a-938b-a54266ebd7a0/volumes" Jan 26 12:42:12 crc kubenswrapper[4881]: I0126 12:42:12.189032 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7959bf8dcd-wvjvf"] Jan 26 12:42:12 crc kubenswrapper[4881]: W0126 12:42:12.197667 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0dfa5931_6fc9_4184_bd70_97158de811d7.slice/crio-d4337edef92bcb6c08ed994ee031517654ac2910c89e0052c48527c972186e6f WatchSource:0}: Error finding container d4337edef92bcb6c08ed994ee031517654ac2910c89e0052c48527c972186e6f: Status 404 returned error can't find the container with id d4337edef92bcb6c08ed994ee031517654ac2910c89e0052c48527c972186e6f Jan 26 12:42:12 crc kubenswrapper[4881]: I0126 12:42:12.871345 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j57t5" event={"ID":"b7492c80-8cb7-4b48-95c7-ecec74b07dc3","Type":"ContainerStarted","Data":"eccb06bcef98b94a871884f27d1b4d20a99f686feb1a36a215cde6f16d70251c"} Jan 26 12:42:12 crc kubenswrapper[4881]: I0126 12:42:12.876142 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7959bf8dcd-wvjvf" event={"ID":"0dfa5931-6fc9-4184-bd70-97158de811d7","Type":"ContainerStarted","Data":"d4337edef92bcb6c08ed994ee031517654ac2910c89e0052c48527c972186e6f"} Jan 26 12:42:13 crc kubenswrapper[4881]: I0126 12:42:13.883703 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7959bf8dcd-wvjvf" event={"ID":"0dfa5931-6fc9-4184-bd70-97158de811d7","Type":"ContainerStarted","Data":"3af3666b2ba9b018c0fd6a4797186114c43496e4912efbd231c0e143eda2f417"} Jan 26 12:42:13 crc kubenswrapper[4881]: I0126 12:42:13.884977 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7959bf8dcd-wvjvf" Jan 26 12:42:13 crc kubenswrapper[4881]: I0126 
12:42:13.887450 4881 generic.go:334] "Generic (PLEG): container finished" podID="a5b1bc2d-d349-4de7-bcf8-52fc979a3ac2" containerID="66944fa2dab77c47d581c5b9fca67b2dbea7934057b7f15846ba8a0092ef71ca" exitCode=0 Jan 26 12:42:13 crc kubenswrapper[4881]: I0126 12:42:13.887501 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-892tg" event={"ID":"a5b1bc2d-d349-4de7-bcf8-52fc979a3ac2","Type":"ContainerDied","Data":"66944fa2dab77c47d581c5b9fca67b2dbea7934057b7f15846ba8a0092ef71ca"} Jan 26 12:42:13 crc kubenswrapper[4881]: I0126 12:42:13.890204 4881 generic.go:334] "Generic (PLEG): container finished" podID="66cbadc1-43e8-44b8-a92b-87c37e6f895f" containerID="4f2af412ea65f2dfebc79a80dfc1109fdcae0ab47fba2f6b100ad1eea1a702dd" exitCode=0 Jan 26 12:42:13 crc kubenswrapper[4881]: I0126 12:42:13.890264 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rghg7" event={"ID":"66cbadc1-43e8-44b8-a92b-87c37e6f895f","Type":"ContainerDied","Data":"4f2af412ea65f2dfebc79a80dfc1109fdcae0ab47fba2f6b100ad1eea1a702dd"} Jan 26 12:42:13 crc kubenswrapper[4881]: I0126 12:42:13.893121 4881 generic.go:334] "Generic (PLEG): container finished" podID="b7492c80-8cb7-4b48-95c7-ecec74b07dc3" containerID="eccb06bcef98b94a871884f27d1b4d20a99f686feb1a36a215cde6f16d70251c" exitCode=0 Jan 26 12:42:13 crc kubenswrapper[4881]: I0126 12:42:13.893179 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j57t5" event={"ID":"b7492c80-8cb7-4b48-95c7-ecec74b07dc3","Type":"ContainerDied","Data":"eccb06bcef98b94a871884f27d1b4d20a99f686feb1a36a215cde6f16d70251c"} Jan 26 12:42:13 crc kubenswrapper[4881]: I0126 12:42:13.895037 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7959bf8dcd-wvjvf" Jan 26 12:42:13 crc kubenswrapper[4881]: I0126 12:42:13.897168 4881 generic.go:334] "Generic (PLEG): container finished" podID="de3087aa-1e19-49ef-8d77-17654472881a" containerID="8044882c7443a3f2a4b8ae85e317c88464eec9d5b4007be231344f472def28e7" exitCode=0 Jan 26 12:42:13 crc kubenswrapper[4881]: I0126 12:42:13.897216 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-22x8z" event={"ID":"de3087aa-1e19-49ef-8d77-17654472881a","Type":"ContainerDied","Data":"8044882c7443a3f2a4b8ae85e317c88464eec9d5b4007be231344f472def28e7"} Jan 26 12:42:13 crc kubenswrapper[4881]: I0126 12:42:13.906968 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7959bf8dcd-wvjvf" podStartSLOduration=4.906939775 podStartE2EDuration="4.906939775s" podCreationTimestamp="2026-01-26 12:42:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:42:13.905358726 +0000 UTC m=+406.384668832" watchObservedRunningTime="2026-01-26 12:42:13.906939775 +0000 UTC m=+406.386249801" Jan 26 12:42:14 crc kubenswrapper[4881]: I0126 12:42:14.904680 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rghg7" event={"ID":"66cbadc1-43e8-44b8-a92b-87c37e6f895f","Type":"ContainerStarted","Data":"e3e6e2f2b222c29f7acc28780a9067d201ca8380731556bc27e81b1aefeda4c5"} Jan 26 12:42:14 crc kubenswrapper[4881]: I0126 12:42:14.928152 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/redhat-operators-rghg7" podStartSLOduration=3.31221994 podStartE2EDuration="18.928121731s" podCreationTimestamp="2026-01-26 12:41:56 +0000 UTC" firstStartedPulling="2026-01-26 12:41:58.776644417 +0000 UTC m=+391.255954443" lastFinishedPulling="2026-01-26 12:42:14.392546198 +0000 UTC m=+406.871856234" observedRunningTime="2026-01-26 12:42:14.922890463 +0000 UTC m=+407.402200489" watchObservedRunningTime="2026-01-26 12:42:14.928121731 +0000 UTC m=+407.407431757" Jan 26 12:42:15 crc kubenswrapper[4881]: I0126 12:42:15.909981 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-22x8z" event={"ID":"de3087aa-1e19-49ef-8d77-17654472881a","Type":"ContainerStarted","Data":"371a2a727e1c1df2ffc3d335dd6ce8aa775b72c2d8324d3325e71d7d2cc73af8"} Jan 26 12:42:15 crc kubenswrapper[4881]: I0126 12:42:15.912421 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-892tg" event={"ID":"a5b1bc2d-d349-4de7-bcf8-52fc979a3ac2","Type":"ContainerStarted","Data":"05a642c293346517dca93722779e1c0a19544d38f8c545985d94aaf4b925a25e"} Jan 26 12:42:15 crc kubenswrapper[4881]: I0126 12:42:15.914491 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j57t5" event={"ID":"b7492c80-8cb7-4b48-95c7-ecec74b07dc3","Type":"ContainerStarted","Data":"d25942e0b092942d3b6f89089a097263b0c61e6a31d6e9913f0800f4f7c2fe4d"} Jan 26 12:42:15 crc kubenswrapper[4881]: I0126 12:42:15.936195 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-j57t5" podStartSLOduration=3.9214476449999998 podStartE2EDuration="20.936165997s" podCreationTimestamp="2026-01-26 12:41:55 +0000 UTC" firstStartedPulling="2026-01-26 12:41:57.750178205 +0000 UTC m=+390.229488231" lastFinishedPulling="2026-01-26 12:42:14.764896557 +0000 UTC m=+407.244206583" observedRunningTime="2026-01-26 12:42:15.933099841 +0000 UTC m=+408.412409887" watchObservedRunningTime="2026-01-26 12:42:15.936165997 +0000 UTC m=+408.415476033" Jan 26 12:42:15 crc kubenswrapper[4881]: I0126 12:42:15.997584 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-j57t5" Jan 26 12:42:15 crc kubenswrapper[4881]: I0126 12:42:15.997653 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-j57t5" Jan 26 12:42:16 crc kubenswrapper[4881]: I0126 12:42:16.593507 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-rghg7" Jan 26 12:42:16 crc kubenswrapper[4881]: I0126 12:42:16.593576 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-rghg7" Jan 26 12:42:16 crc kubenswrapper[4881]: I0126 12:42:16.952712 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-892tg" podStartSLOduration=4.92926476 podStartE2EDuration="18.952687219s" podCreationTimestamp="2026-01-26 12:41:58 +0000 UTC" firstStartedPulling="2026-01-26 12:42:00.807217563 +0000 UTC m=+393.286527629" lastFinishedPulling="2026-01-26 12:42:14.830640062 +0000 UTC m=+407.309950088" observedRunningTime="2026-01-26 12:42:16.948027085 +0000 UTC m=+409.427337121" watchObservedRunningTime="2026-01-26 12:42:16.952687219 +0000 UTC m=+409.431997255" Jan 26 12:42:17 crc kubenswrapper[4881]: I0126 12:42:17.033121 4881 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-j57t5" podUID="b7492c80-8cb7-4b48-95c7-ecec74b07dc3" containerName="registry-server" probeResult="failure" output=< Jan 26 12:42:17 crc kubenswrapper[4881]: timeout: failed to connect service ":50051" within 1s Jan 26 12:42:17 crc kubenswrapper[4881]: > Jan 26 12:42:17 crc kubenswrapper[4881]: I0126 12:42:17.641152 4881 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-rghg7" podUID="66cbadc1-43e8-44b8-a92b-87c37e6f895f" containerName="registry-server" probeResult="failure" output=< Jan 26 12:42:17 crc kubenswrapper[4881]: timeout: failed to connect service ":50051" within 1s Jan 26 12:42:17 crc kubenswrapper[4881]: > Jan 26 12:42:17 crc kubenswrapper[4881]: I0126 12:42:17.951575 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-22x8z" podStartSLOduration=5.777786732 podStartE2EDuration="19.95156118s" podCreationTimestamp="2026-01-26 12:41:58 +0000 UTC" firstStartedPulling="2026-01-26 12:42:00.794483042 +0000 UTC m=+393.273793068" lastFinishedPulling="2026-01-26 12:42:14.96825749 +0000 UTC m=+407.447567516" observedRunningTime="2026-01-26 12:42:17.947554412 +0000 UTC m=+410.426864428" watchObservedRunningTime="2026-01-26 12:42:17.95156118 +0000 UTC m=+410.430871206" Jan 26 12:42:18 crc kubenswrapper[4881]: I0126 12:42:18.412355 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-892tg" Jan 26 12:42:18 crc kubenswrapper[4881]: I0126 12:42:18.412433 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-892tg" Jan 26 12:42:18 crc kubenswrapper[4881]: I0126 12:42:18.453588 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-892tg" Jan 26 12:42:18 crc kubenswrapper[4881]: I0126 12:42:18.991690 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-22x8z" Jan 26 12:42:18 crc kubenswrapper[4881]: I0126 12:42:18.991788 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-22x8z" Jan 26 12:42:19 crc kubenswrapper[4881]: I0126 12:42:19.042439 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-22x8z" Jan 26 12:42:21 crc kubenswrapper[4881]: I0126 12:42:21.013657 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-22x8z" Jan 26 12:42:24 crc kubenswrapper[4881]: I0126 12:42:24.789819 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fwlbz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 12:42:24 crc kubenswrapper[4881]: I0126 12:42:24.790204 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 12:42:26 crc kubenswrapper[4881]: I0126 12:42:26.038339 4881 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-j57t5" Jan 26 12:42:26 crc kubenswrapper[4881]: I0126 12:42:26.076260 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-j57t5" Jan 26 12:42:26 crc kubenswrapper[4881]: I0126 12:42:26.659547 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-rghg7" Jan 26 12:42:26 crc kubenswrapper[4881]: I0126 12:42:26.709577 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-rghg7" Jan 26 12:42:28 crc kubenswrapper[4881]: I0126 12:42:28.464896 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-892tg" Jan 26 12:42:36 crc kubenswrapper[4881]: I0126 12:42:36.497344 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-dr6gf" podUID="b92eec64-c286-4244-9e62-a5cd7ab680ae" containerName="registry" containerID="cri-o://13f18f4170beb7314d9e9fa89f236544a7d706858108b2108994b76731980774" gracePeriod=30 Jan 26 12:42:40 crc kubenswrapper[4881]: I0126 12:42:40.076810 4881 generic.go:334] "Generic (PLEG): container finished" podID="b92eec64-c286-4244-9e62-a5cd7ab680ae" containerID="13f18f4170beb7314d9e9fa89f236544a7d706858108b2108994b76731980774" exitCode=0 Jan 26 12:42:40 crc kubenswrapper[4881]: I0126 12:42:40.076919 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-dr6gf" event={"ID":"b92eec64-c286-4244-9e62-a5cd7ab680ae","Type":"ContainerDied","Data":"13f18f4170beb7314d9e9fa89f236544a7d706858108b2108994b76731980774"} Jan 26 12:42:42 crc kubenswrapper[4881]: I0126 12:42:42.338246 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-dr6gf" Jan 26 12:42:42 crc kubenswrapper[4881]: I0126 12:42:42.508156 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"b92eec64-c286-4244-9e62-a5cd7ab680ae\" (UID: \"b92eec64-c286-4244-9e62-a5cd7ab680ae\") " Jan 26 12:42:42 crc kubenswrapper[4881]: I0126 12:42:42.508270 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9nzc\" (UniqueName: \"kubernetes.io/projected/b92eec64-c286-4244-9e62-a5cd7ab680ae-kube-api-access-w9nzc\") pod \"b92eec64-c286-4244-9e62-a5cd7ab680ae\" (UID: \"b92eec64-c286-4244-9e62-a5cd7ab680ae\") " Jan 26 12:42:42 crc kubenswrapper[4881]: I0126 12:42:42.508325 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b92eec64-c286-4244-9e62-a5cd7ab680ae-installation-pull-secrets\") pod \"b92eec64-c286-4244-9e62-a5cd7ab680ae\" (UID: \"b92eec64-c286-4244-9e62-a5cd7ab680ae\") " Jan 26 12:42:42 crc kubenswrapper[4881]: I0126 12:42:42.508382 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b92eec64-c286-4244-9e62-a5cd7ab680ae-registry-tls\") pod \"b92eec64-c286-4244-9e62-a5cd7ab680ae\" (UID: \"b92eec64-c286-4244-9e62-a5cd7ab680ae\") " Jan 26 12:42:42 crc kubenswrapper[4881]: I0126 12:42:42.508427 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b92eec64-c286-4244-9e62-a5cd7ab680ae-trusted-ca\") pod \"b92eec64-c286-4244-9e62-a5cd7ab680ae\" (UID: \"b92eec64-c286-4244-9e62-a5cd7ab680ae\") " Jan 26 12:42:42 crc kubenswrapper[4881]: I0126 12:42:42.508475 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b92eec64-c286-4244-9e62-a5cd7ab680ae-bound-sa-token\") pod \"b92eec64-c286-4244-9e62-a5cd7ab680ae\" (UID: \"b92eec64-c286-4244-9e62-a5cd7ab680ae\") " Jan 26 12:42:42 crc kubenswrapper[4881]: I0126 12:42:42.508535 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b92eec64-c286-4244-9e62-a5cd7ab680ae-registry-certificates\") pod \"b92eec64-c286-4244-9e62-a5cd7ab680ae\" (UID: \"b92eec64-c286-4244-9e62-a5cd7ab680ae\") " Jan 26 12:42:42 crc kubenswrapper[4881]: I0126 12:42:42.508621 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b92eec64-c286-4244-9e62-a5cd7ab680ae-ca-trust-extracted\") pod \"b92eec64-c286-4244-9e62-a5cd7ab680ae\" (UID: \"b92eec64-c286-4244-9e62-a5cd7ab680ae\") " Jan 26 12:42:42 crc kubenswrapper[4881]: I0126 12:42:42.510174 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b92eec64-c286-4244-9e62-a5cd7ab680ae-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "b92eec64-c286-4244-9e62-a5cd7ab680ae" (UID: "b92eec64-c286-4244-9e62-a5cd7ab680ae"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:42:42 crc kubenswrapper[4881]: I0126 12:42:42.512255 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b92eec64-c286-4244-9e62-a5cd7ab680ae-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "b92eec64-c286-4244-9e62-a5cd7ab680ae" (UID: "b92eec64-c286-4244-9e62-a5cd7ab680ae"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:42:42 crc kubenswrapper[4881]: I0126 12:42:42.515051 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b92eec64-c286-4244-9e62-a5cd7ab680ae-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "b92eec64-c286-4244-9e62-a5cd7ab680ae" (UID: "b92eec64-c286-4244-9e62-a5cd7ab680ae"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:42:42 crc kubenswrapper[4881]: I0126 12:42:42.515424 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b92eec64-c286-4244-9e62-a5cd7ab680ae-kube-api-access-w9nzc" (OuterVolumeSpecName: "kube-api-access-w9nzc") pod "b92eec64-c286-4244-9e62-a5cd7ab680ae" (UID: "b92eec64-c286-4244-9e62-a5cd7ab680ae"). InnerVolumeSpecName "kube-api-access-w9nzc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:42:42 crc kubenswrapper[4881]: I0126 12:42:42.516491 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b92eec64-c286-4244-9e62-a5cd7ab680ae-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "b92eec64-c286-4244-9e62-a5cd7ab680ae" (UID: "b92eec64-c286-4244-9e62-a5cd7ab680ae"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:42:42 crc kubenswrapper[4881]: I0126 12:42:42.521586 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b92eec64-c286-4244-9e62-a5cd7ab680ae-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "b92eec64-c286-4244-9e62-a5cd7ab680ae" (UID: "b92eec64-c286-4244-9e62-a5cd7ab680ae"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:42:42 crc kubenswrapper[4881]: I0126 12:42:42.529318 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b92eec64-c286-4244-9e62-a5cd7ab680ae-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "b92eec64-c286-4244-9e62-a5cd7ab680ae" (UID: "b92eec64-c286-4244-9e62-a5cd7ab680ae"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 12:42:42 crc kubenswrapper[4881]: I0126 12:42:42.589783 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "b92eec64-c286-4244-9e62-a5cd7ab680ae" (UID: "b92eec64-c286-4244-9e62-a5cd7ab680ae"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 26 12:42:42 crc kubenswrapper[4881]: I0126 12:42:42.610532 4881 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b92eec64-c286-4244-9e62-a5cd7ab680ae-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 26 12:42:42 crc kubenswrapper[4881]: I0126 12:42:42.610589 4881 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b92eec64-c286-4244-9e62-a5cd7ab680ae-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 26 12:42:42 crc kubenswrapper[4881]: I0126 12:42:42.610599 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9nzc\" (UniqueName: \"kubernetes.io/projected/b92eec64-c286-4244-9e62-a5cd7ab680ae-kube-api-access-w9nzc\") on node \"crc\" DevicePath \"\"" Jan 26 12:42:42 crc kubenswrapper[4881]: I0126 12:42:42.610609 4881 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b92eec64-c286-4244-9e62-a5cd7ab680ae-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 26 12:42:42 crc kubenswrapper[4881]: I0126 12:42:42.610619 4881 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b92eec64-c286-4244-9e62-a5cd7ab680ae-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 26 12:42:42 crc kubenswrapper[4881]: I0126 12:42:42.610627 4881 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b92eec64-c286-4244-9e62-a5cd7ab680ae-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 12:42:42 crc kubenswrapper[4881]: I0126 12:42:42.610635 4881 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b92eec64-c286-4244-9e62-a5cd7ab680ae-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 26 12:42:43 crc kubenswrapper[4881]: I0126 12:42:43.098832 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-dr6gf" event={"ID":"b92eec64-c286-4244-9e62-a5cd7ab680ae","Type":"ContainerDied","Data":"017beffecd221a67757f7fb92c13b5580236c43e5d1ddb9b81298b02fbe7e0e2"} Jan 26 12:42:43 crc kubenswrapper[4881]: I0126 12:42:43.098904 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-dr6gf" Jan 26 12:42:43 crc kubenswrapper[4881]: I0126 12:42:43.099366 4881 scope.go:117] "RemoveContainer" containerID="13f18f4170beb7314d9e9fa89f236544a7d706858108b2108994b76731980774" Jan 26 12:42:43 crc kubenswrapper[4881]: I0126 12:42:43.145631 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-dr6gf"] Jan 26 12:42:43 crc kubenswrapper[4881]: I0126 12:42:43.152205 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-dr6gf"] Jan 26 12:42:44 crc kubenswrapper[4881]: I0126 12:42:44.095794 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b92eec64-c286-4244-9e62-a5cd7ab680ae" path="/var/lib/kubelet/pods/b92eec64-c286-4244-9e62-a5cd7ab680ae/volumes" Jan 26 12:42:49 crc kubenswrapper[4881]: I0126 12:42:49.338503 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-c9b497d69-74tvm"] Jan 26 12:42:49 crc kubenswrapper[4881]: I0126 12:42:49.339082 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-c9b497d69-74tvm" podUID="c074245a-8ee8-43e5-a6ac-42865c01f8da" containerName="controller-manager" containerID="cri-o://6b92413138e562292e90616074cdb806eec835bd108cd92c5c0030516d9d6c7f" gracePeriod=30 Jan 26 12:42:50 crc kubenswrapper[4881]: I0126 12:42:50.140215 4881 generic.go:334] "Generic (PLEG): container finished" podID="c074245a-8ee8-43e5-a6ac-42865c01f8da" containerID="6b92413138e562292e90616074cdb806eec835bd108cd92c5c0030516d9d6c7f" exitCode=0 Jan 26 12:42:50 crc kubenswrapper[4881]: I0126 12:42:50.140789 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-c9b497d69-74tvm" event={"ID":"c074245a-8ee8-43e5-a6ac-42865c01f8da","Type":"ContainerDied","Data":"6b92413138e562292e90616074cdb806eec835bd108cd92c5c0030516d9d6c7f"} Jan 26 12:42:50 crc kubenswrapper[4881]: I0126 12:42:50.195010 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-c9b497d69-74tvm" Jan 26 12:42:50 crc kubenswrapper[4881]: I0126 12:42:50.245117 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c074245a-8ee8-43e5-a6ac-42865c01f8da-client-ca\") pod \"c074245a-8ee8-43e5-a6ac-42865c01f8da\" (UID: \"c074245a-8ee8-43e5-a6ac-42865c01f8da\") " Jan 26 12:42:50 crc kubenswrapper[4881]: I0126 12:42:50.245181 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c074245a-8ee8-43e5-a6ac-42865c01f8da-serving-cert\") pod \"c074245a-8ee8-43e5-a6ac-42865c01f8da\" (UID: \"c074245a-8ee8-43e5-a6ac-42865c01f8da\") " Jan 26 12:42:50 crc kubenswrapper[4881]: I0126 12:42:50.245231 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c074245a-8ee8-43e5-a6ac-42865c01f8da-proxy-ca-bundles\") pod \"c074245a-8ee8-43e5-a6ac-42865c01f8da\" (UID: \"c074245a-8ee8-43e5-a6ac-42865c01f8da\") " Jan 26 12:42:50 crc kubenswrapper[4881]: I0126 12:42:50.245251 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c074245a-8ee8-43e5-a6ac-42865c01f8da-config\") pod \"c074245a-8ee8-43e5-a6ac-42865c01f8da\" (UID: \"c074245a-8ee8-43e5-a6ac-42865c01f8da\") " Jan 26 12:42:50 crc kubenswrapper[4881]: I0126 12:42:50.246121 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c074245a-8ee8-43e5-a6ac-42865c01f8da-config" (OuterVolumeSpecName: "config") pod "c074245a-8ee8-43e5-a6ac-42865c01f8da" (UID: "c074245a-8ee8-43e5-a6ac-42865c01f8da"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:42:50 crc kubenswrapper[4881]: I0126 12:42:50.246578 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c074245a-8ee8-43e5-a6ac-42865c01f8da-client-ca" (OuterVolumeSpecName: "client-ca") pod "c074245a-8ee8-43e5-a6ac-42865c01f8da" (UID: "c074245a-8ee8-43e5-a6ac-42865c01f8da"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:42:50 crc kubenswrapper[4881]: I0126 12:42:50.247334 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf9jc\" (UniqueName: \"kubernetes.io/projected/c074245a-8ee8-43e5-a6ac-42865c01f8da-kube-api-access-gf9jc\") pod \"c074245a-8ee8-43e5-a6ac-42865c01f8da\" (UID: \"c074245a-8ee8-43e5-a6ac-42865c01f8da\") " Jan 26 12:42:50 crc kubenswrapper[4881]: I0126 12:42:50.247604 4881 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c074245a-8ee8-43e5-a6ac-42865c01f8da-client-ca\") on node \"crc\" DevicePath \"\"" Jan 26 12:42:50 crc kubenswrapper[4881]: I0126 12:42:50.247621 4881 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c074245a-8ee8-43e5-a6ac-42865c01f8da-config\") on node \"crc\" DevicePath \"\"" Jan 26 12:42:50 crc kubenswrapper[4881]: I0126 12:42:50.248084 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c074245a-8ee8-43e5-a6ac-42865c01f8da-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "c074245a-8ee8-43e5-a6ac-42865c01f8da" (UID: "c074245a-8ee8-43e5-a6ac-42865c01f8da"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:42:50 crc kubenswrapper[4881]: I0126 12:42:50.251649 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c074245a-8ee8-43e5-a6ac-42865c01f8da-kube-api-access-gf9jc" (OuterVolumeSpecName: "kube-api-access-gf9jc") pod "c074245a-8ee8-43e5-a6ac-42865c01f8da" (UID: "c074245a-8ee8-43e5-a6ac-42865c01f8da"). InnerVolumeSpecName "kube-api-access-gf9jc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:42:50 crc kubenswrapper[4881]: I0126 12:42:50.256074 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c074245a-8ee8-43e5-a6ac-42865c01f8da-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c074245a-8ee8-43e5-a6ac-42865c01f8da" (UID: "c074245a-8ee8-43e5-a6ac-42865c01f8da"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:42:50 crc kubenswrapper[4881]: I0126 12:42:50.349180 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf9jc\" (UniqueName: \"kubernetes.io/projected/c074245a-8ee8-43e5-a6ac-42865c01f8da-kube-api-access-gf9jc\") on node \"crc\" DevicePath \"\"" Jan 26 12:42:50 crc kubenswrapper[4881]: I0126 12:42:50.349970 4881 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c074245a-8ee8-43e5-a6ac-42865c01f8da-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 12:42:50 crc kubenswrapper[4881]: I0126 12:42:50.350061 4881 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c074245a-8ee8-43e5-a6ac-42865c01f8da-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 26 12:42:51 crc kubenswrapper[4881]: I0126 12:42:51.146891 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-c9b497d69-74tvm" event={"ID":"c074245a-8ee8-43e5-a6ac-42865c01f8da","Type":"ContainerDied","Data":"f988629a2904d68eeabb41daa415831ba39cdfc97ac4937452b0a95d5eb82ecf"} Jan 26 12:42:51 crc kubenswrapper[4881]: I0126 12:42:51.146935 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-c9b497d69-74tvm" Jan 26 12:42:51 crc kubenswrapper[4881]: I0126 12:42:51.147163 4881 scope.go:117] "RemoveContainer" containerID="6b92413138e562292e90616074cdb806eec835bd108cd92c5c0030516d9d6c7f" Jan 26 12:42:51 crc kubenswrapper[4881]: I0126 12:42:51.165132 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7b956f9588-twthx"] Jan 26 12:42:51 crc kubenswrapper[4881]: E0126 12:42:51.165365 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b92eec64-c286-4244-9e62-a5cd7ab680ae" containerName="registry" Jan 26 12:42:51 crc kubenswrapper[4881]: I0126 12:42:51.165389 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="b92eec64-c286-4244-9e62-a5cd7ab680ae" containerName="registry" Jan 26 12:42:51 crc kubenswrapper[4881]: E0126 12:42:51.165410 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c074245a-8ee8-43e5-a6ac-42865c01f8da" containerName="controller-manager" Jan 26 12:42:51 crc kubenswrapper[4881]: I0126 12:42:51.165419 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="c074245a-8ee8-43e5-a6ac-42865c01f8da" containerName="controller-manager" Jan 26 12:42:51 crc kubenswrapper[4881]: I0126 12:42:51.165535 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="c074245a-8ee8-43e5-a6ac-42865c01f8da" containerName="controller-manager" Jan 26 12:42:51 crc kubenswrapper[4881]: I0126 12:42:51.165559 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="b92eec64-c286-4244-9e62-a5cd7ab680ae" containerName="registry" Jan 26 12:42:51 crc kubenswrapper[4881]: I0126 12:42:51.165982 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7b956f9588-twthx" Jan 26 12:42:51 crc kubenswrapper[4881]: I0126 12:42:51.168208 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 26 12:42:51 crc kubenswrapper[4881]: I0126 12:42:51.170188 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 26 12:42:51 crc kubenswrapper[4881]: I0126 12:42:51.170344 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 26 12:42:51 crc kubenswrapper[4881]: I0126 12:42:51.170422 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 26 12:42:51 crc kubenswrapper[4881]: I0126 12:42:51.170439 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 26 12:42:51 crc kubenswrapper[4881]: I0126 12:42:51.170716 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 26 12:42:51 crc kubenswrapper[4881]: I0126 12:42:51.179698 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 26 12:42:51 crc kubenswrapper[4881]: I0126 12:42:51.183918 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-c9b497d69-74tvm"] Jan 26 12:42:51 crc kubenswrapper[4881]: I0126 12:42:51.187962 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7b956f9588-twthx"] Jan 26 12:42:51 crc 
kubenswrapper[4881]: I0126 12:42:51.193381 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-c9b497d69-74tvm"] Jan 26 12:42:51 crc kubenswrapper[4881]: I0126 12:42:51.262575 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z27ml\" (UniqueName: \"kubernetes.io/projected/31be1381-1937-49a3-8d12-7e47cc3fd2ea-kube-api-access-z27ml\") pod \"controller-manager-7b956f9588-twthx\" (UID: \"31be1381-1937-49a3-8d12-7e47cc3fd2ea\") " pod="openshift-controller-manager/controller-manager-7b956f9588-twthx" Jan 26 12:42:51 crc kubenswrapper[4881]: I0126 12:42:51.262947 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/31be1381-1937-49a3-8d12-7e47cc3fd2ea-serving-cert\") pod \"controller-manager-7b956f9588-twthx\" (UID: \"31be1381-1937-49a3-8d12-7e47cc3fd2ea\") " pod="openshift-controller-manager/controller-manager-7b956f9588-twthx" Jan 26 12:42:51 crc kubenswrapper[4881]: I0126 12:42:51.263057 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/31be1381-1937-49a3-8d12-7e47cc3fd2ea-proxy-ca-bundles\") pod \"controller-manager-7b956f9588-twthx\" (UID: \"31be1381-1937-49a3-8d12-7e47cc3fd2ea\") " pod="openshift-controller-manager/controller-manager-7b956f9588-twthx" Jan 26 12:42:51 crc kubenswrapper[4881]: I0126 12:42:51.263191 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31be1381-1937-49a3-8d12-7e47cc3fd2ea-config\") pod \"controller-manager-7b956f9588-twthx\" (UID: \"31be1381-1937-49a3-8d12-7e47cc3fd2ea\") " pod="openshift-controller-manager/controller-manager-7b956f9588-twthx" Jan 26 12:42:51 crc kubenswrapper[4881]: I0126 12:42:51.263294 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/31be1381-1937-49a3-8d12-7e47cc3fd2ea-client-ca\") pod \"controller-manager-7b956f9588-twthx\" (UID: \"31be1381-1937-49a3-8d12-7e47cc3fd2ea\") " pod="openshift-controller-manager/controller-manager-7b956f9588-twthx" Jan 26 12:42:51 crc kubenswrapper[4881]: I0126 12:42:51.367841 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z27ml\" (UniqueName: \"kubernetes.io/projected/31be1381-1937-49a3-8d12-7e47cc3fd2ea-kube-api-access-z27ml\") pod \"controller-manager-7b956f9588-twthx\" (UID: \"31be1381-1937-49a3-8d12-7e47cc3fd2ea\") " pod="openshift-controller-manager/controller-manager-7b956f9588-twthx" Jan 26 12:42:51 crc kubenswrapper[4881]: I0126 12:42:51.369021 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/31be1381-1937-49a3-8d12-7e47cc3fd2ea-serving-cert\") pod \"controller-manager-7b956f9588-twthx\" (UID: \"31be1381-1937-49a3-8d12-7e47cc3fd2ea\") " pod="openshift-controller-manager/controller-manager-7b956f9588-twthx" Jan 26 12:42:51 crc kubenswrapper[4881]: I0126 12:42:51.369159 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/31be1381-1937-49a3-8d12-7e47cc3fd2ea-proxy-ca-bundles\") pod \"controller-manager-7b956f9588-twthx\" (UID: 
\"31be1381-1937-49a3-8d12-7e47cc3fd2ea\") " pod="openshift-controller-manager/controller-manager-7b956f9588-twthx" Jan 26 12:42:51 crc kubenswrapper[4881]: I0126 12:42:51.370281 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31be1381-1937-49a3-8d12-7e47cc3fd2ea-config\") pod \"controller-manager-7b956f9588-twthx\" (UID: \"31be1381-1937-49a3-8d12-7e47cc3fd2ea\") " pod="openshift-controller-manager/controller-manager-7b956f9588-twthx" Jan 26 12:42:51 crc kubenswrapper[4881]: I0126 12:42:51.370402 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/31be1381-1937-49a3-8d12-7e47cc3fd2ea-client-ca\") pod \"controller-manager-7b956f9588-twthx\" (UID: \"31be1381-1937-49a3-8d12-7e47cc3fd2ea\") " pod="openshift-controller-manager/controller-manager-7b956f9588-twthx" Jan 26 12:42:51 crc kubenswrapper[4881]: I0126 12:42:51.370193 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/31be1381-1937-49a3-8d12-7e47cc3fd2ea-proxy-ca-bundles\") pod \"controller-manager-7b956f9588-twthx\" (UID: \"31be1381-1937-49a3-8d12-7e47cc3fd2ea\") " pod="openshift-controller-manager/controller-manager-7b956f9588-twthx" Jan 26 12:42:51 crc kubenswrapper[4881]: I0126 12:42:51.372237 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/31be1381-1937-49a3-8d12-7e47cc3fd2ea-client-ca\") pod \"controller-manager-7b956f9588-twthx\" (UID: \"31be1381-1937-49a3-8d12-7e47cc3fd2ea\") " pod="openshift-controller-manager/controller-manager-7b956f9588-twthx" Jan 26 12:42:51 crc kubenswrapper[4881]: I0126 12:42:51.373154 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31be1381-1937-49a3-8d12-7e47cc3fd2ea-config\") pod \"controller-manager-7b956f9588-twthx\" (UID: \"31be1381-1937-49a3-8d12-7e47cc3fd2ea\") " pod="openshift-controller-manager/controller-manager-7b956f9588-twthx" Jan 26 12:42:51 crc kubenswrapper[4881]: I0126 12:42:51.373789 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/31be1381-1937-49a3-8d12-7e47cc3fd2ea-serving-cert\") pod \"controller-manager-7b956f9588-twthx\" (UID: \"31be1381-1937-49a3-8d12-7e47cc3fd2ea\") " pod="openshift-controller-manager/controller-manager-7b956f9588-twthx" Jan 26 12:42:51 crc kubenswrapper[4881]: I0126 12:42:51.385437 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z27ml\" (UniqueName: \"kubernetes.io/projected/31be1381-1937-49a3-8d12-7e47cc3fd2ea-kube-api-access-z27ml\") pod \"controller-manager-7b956f9588-twthx\" (UID: \"31be1381-1937-49a3-8d12-7e47cc3fd2ea\") " pod="openshift-controller-manager/controller-manager-7b956f9588-twthx" Jan 26 12:42:51 crc kubenswrapper[4881]: I0126 12:42:51.486980 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7b956f9588-twthx" Jan 26 12:42:51 crc kubenswrapper[4881]: I0126 12:42:51.677245 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7b956f9588-twthx"] Jan 26 12:42:52 crc kubenswrapper[4881]: I0126 12:42:52.099336 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c074245a-8ee8-43e5-a6ac-42865c01f8da" path="/var/lib/kubelet/pods/c074245a-8ee8-43e5-a6ac-42865c01f8da/volumes" Jan 26 12:42:52 crc kubenswrapper[4881]: I0126 12:42:52.153472 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7b956f9588-twthx" event={"ID":"31be1381-1937-49a3-8d12-7e47cc3fd2ea","Type":"ContainerStarted","Data":"069ed30cd4ba52c18a1600f7aa60b6f4c9b789ca8daf1365f6005998d1d9a80c"} Jan 26 12:42:52 crc kubenswrapper[4881]: I0126 12:42:52.153546 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7b956f9588-twthx" event={"ID":"31be1381-1937-49a3-8d12-7e47cc3fd2ea","Type":"ContainerStarted","Data":"ba7f626bf278641ff602cfd4e3666298cec86e5e5ff7cb03ae578477d73a180a"} Jan 26 12:42:52 crc kubenswrapper[4881]: I0126 12:42:52.153790 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7b956f9588-twthx" Jan 26 12:42:52 crc kubenswrapper[4881]: I0126 12:42:52.173961 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7b956f9588-twthx" Jan 26 12:42:52 crc kubenswrapper[4881]: I0126 12:42:52.180217 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7b956f9588-twthx" podStartSLOduration=3.180196943 podStartE2EDuration="3.180196943s" podCreationTimestamp="2026-01-26 12:42:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:42:52.179619878 +0000 UTC m=+444.658929904" watchObservedRunningTime="2026-01-26 12:42:52.180196943 +0000 UTC m=+444.659506969" Jan 26 12:42:54 crc kubenswrapper[4881]: I0126 12:42:54.789270 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fwlbz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 12:42:54 crc kubenswrapper[4881]: I0126 12:42:54.789615 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 12:42:54 crc kubenswrapper[4881]: I0126 12:42:54.789662 4881 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" Jan 26 12:42:54 crc kubenswrapper[4881]: I0126 12:42:54.790181 4881 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c86f563b5fe2c0bb24899b86382a702f023fc4944ac1b02721e98303870ac011"} pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" containerMessage="Container 
machine-config-daemon failed liveness probe, will be restarted" Jan 26 12:42:54 crc kubenswrapper[4881]: I0126 12:42:54.790245 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" containerName="machine-config-daemon" containerID="cri-o://c86f563b5fe2c0bb24899b86382a702f023fc4944ac1b02721e98303870ac011" gracePeriod=600 Jan 26 12:42:55 crc kubenswrapper[4881]: I0126 12:42:55.191776 4881 generic.go:334] "Generic (PLEG): container finished" podID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" containerID="c86f563b5fe2c0bb24899b86382a702f023fc4944ac1b02721e98303870ac011" exitCode=0 Jan 26 12:42:55 crc kubenswrapper[4881]: I0126 12:42:55.191857 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" event={"ID":"ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19","Type":"ContainerDied","Data":"c86f563b5fe2c0bb24899b86382a702f023fc4944ac1b02721e98303870ac011"} Jan 26 12:42:55 crc kubenswrapper[4881]: I0126 12:42:55.192140 4881 scope.go:117] "RemoveContainer" containerID="4e51c1ddc00bfd73fde58f6b2e82757a924047ef05aa76b74d8fcb2de4156ad9" Jan 26 12:42:56 crc kubenswrapper[4881]: I0126 12:42:56.199460 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" event={"ID":"ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19","Type":"ContainerStarted","Data":"fc2d017a964d1b170327be33eae2555dd4c7a4bb2af9e5ca4de2b2177d47849c"} Jan 26 12:44:20 crc kubenswrapper[4881]: I0126 12:44:20.053096 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-5db757fd5b-8bj6j"] Jan 26 12:44:45 crc kubenswrapper[4881]: I0126 12:44:45.097103 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-5db757fd5b-8bj6j" podUID="58f4dede-962e-4db4-9548-05c36728f2f4" containerName="oauth-openshift" containerID="cri-o://69c029003097f1f463d69b4b3d2e2401d5f4f195fd420e0f3143f26799c81eb2" gracePeriod=15 Jan 26 12:44:45 crc kubenswrapper[4881]: I0126 12:44:45.890504 4881 generic.go:334] "Generic (PLEG): container finished" podID="58f4dede-962e-4db4-9548-05c36728f2f4" containerID="69c029003097f1f463d69b4b3d2e2401d5f4f195fd420e0f3143f26799c81eb2" exitCode=0 Jan 26 12:44:45 crc kubenswrapper[4881]: I0126 12:44:45.890567 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5db757fd5b-8bj6j" event={"ID":"58f4dede-962e-4db4-9548-05c36728f2f4","Type":"ContainerDied","Data":"69c029003097f1f463d69b4b3d2e2401d5f4f195fd420e0f3143f26799c81eb2"} Jan 26 12:44:45 crc kubenswrapper[4881]: I0126 12:44:45.930378 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-5db757fd5b-8bj6j" Jan 26 12:44:45 crc kubenswrapper[4881]: I0126 12:44:45.946513 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/58f4dede-962e-4db4-9548-05c36728f2f4-v4-0-config-user-template-provider-selection\") pod \"58f4dede-962e-4db4-9548-05c36728f2f4\" (UID: \"58f4dede-962e-4db4-9548-05c36728f2f4\") " Jan 26 12:44:45 crc kubenswrapper[4881]: I0126 12:44:45.946686 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/58f4dede-962e-4db4-9548-05c36728f2f4-v4-0-config-system-cliconfig\") pod \"58f4dede-962e-4db4-9548-05c36728f2f4\" (UID: \"58f4dede-962e-4db4-9548-05c36728f2f4\") " Jan 26 12:44:45 crc kubenswrapper[4881]: I0126 12:44:45.946744 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/58f4dede-962e-4db4-9548-05c36728f2f4-v4-0-config-system-service-ca\") pod \"58f4dede-962e-4db4-9548-05c36728f2f4\" (UID: \"58f4dede-962e-4db4-9548-05c36728f2f4\") " Jan 26 12:44:45 crc kubenswrapper[4881]: I0126 12:44:45.946780 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/58f4dede-962e-4db4-9548-05c36728f2f4-v4-0-config-system-ocp-branding-template\") pod \"58f4dede-962e-4db4-9548-05c36728f2f4\" (UID: \"58f4dede-962e-4db4-9548-05c36728f2f4\") " Jan 26 12:44:45 crc kubenswrapper[4881]: I0126 12:44:45.946807 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/58f4dede-962e-4db4-9548-05c36728f2f4-v4-0-config-system-session\") pod \"58f4dede-962e-4db4-9548-05c36728f2f4\" (UID: \"58f4dede-962e-4db4-9548-05c36728f2f4\") " Jan 26 12:44:45 crc kubenswrapper[4881]: I0126 12:44:45.946830 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/58f4dede-962e-4db4-9548-05c36728f2f4-v4-0-config-user-template-error\") pod \"58f4dede-962e-4db4-9548-05c36728f2f4\" (UID: \"58f4dede-962e-4db4-9548-05c36728f2f4\") " Jan 26 12:44:45 crc kubenswrapper[4881]: I0126 12:44:45.946867 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/58f4dede-962e-4db4-9548-05c36728f2f4-v4-0-config-user-idp-0-file-data\") pod \"58f4dede-962e-4db4-9548-05c36728f2f4\" (UID: \"58f4dede-962e-4db4-9548-05c36728f2f4\") " Jan 26 12:44:45 crc kubenswrapper[4881]: I0126 12:44:45.946936 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lcf7x\" (UniqueName: \"kubernetes.io/projected/58f4dede-962e-4db4-9548-05c36728f2f4-kube-api-access-lcf7x\") pod \"58f4dede-962e-4db4-9548-05c36728f2f4\" (UID: \"58f4dede-962e-4db4-9548-05c36728f2f4\") " Jan 26 12:44:45 crc kubenswrapper[4881]: I0126 12:44:45.948259 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/58f4dede-962e-4db4-9548-05c36728f2f4-v4-0-config-user-template-login\") pod \"58f4dede-962e-4db4-9548-05c36728f2f4\" (UID: 
\"58f4dede-962e-4db4-9548-05c36728f2f4\") " Jan 26 12:44:45 crc kubenswrapper[4881]: I0126 12:44:45.948541 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/58f4dede-962e-4db4-9548-05c36728f2f4-audit-policies\") pod \"58f4dede-962e-4db4-9548-05c36728f2f4\" (UID: \"58f4dede-962e-4db4-9548-05c36728f2f4\") " Jan 26 12:44:45 crc kubenswrapper[4881]: I0126 12:44:45.949414 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/58f4dede-962e-4db4-9548-05c36728f2f4-audit-dir\") pod \"58f4dede-962e-4db4-9548-05c36728f2f4\" (UID: \"58f4dede-962e-4db4-9548-05c36728f2f4\") " Jan 26 12:44:45 crc kubenswrapper[4881]: I0126 12:44:45.949596 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/58f4dede-962e-4db4-9548-05c36728f2f4-v4-0-config-system-router-certs\") pod \"58f4dede-962e-4db4-9548-05c36728f2f4\" (UID: \"58f4dede-962e-4db4-9548-05c36728f2f4\") " Jan 26 12:44:45 crc kubenswrapper[4881]: I0126 12:44:45.949621 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/58f4dede-962e-4db4-9548-05c36728f2f4-v4-0-config-system-trusted-ca-bundle\") pod \"58f4dede-962e-4db4-9548-05c36728f2f4\" (UID: \"58f4dede-962e-4db4-9548-05c36728f2f4\") " Jan 26 12:44:45 crc kubenswrapper[4881]: I0126 12:44:45.949661 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/58f4dede-962e-4db4-9548-05c36728f2f4-v4-0-config-system-serving-cert\") pod \"58f4dede-962e-4db4-9548-05c36728f2f4\" (UID: \"58f4dede-962e-4db4-9548-05c36728f2f4\") " Jan 26 12:44:45 crc kubenswrapper[4881]: I0126 12:44:45.950904 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/58f4dede-962e-4db4-9548-05c36728f2f4-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "58f4dede-962e-4db4-9548-05c36728f2f4" (UID: "58f4dede-962e-4db4-9548-05c36728f2f4"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:44:45 crc kubenswrapper[4881]: I0126 12:44:45.951435 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/58f4dede-962e-4db4-9548-05c36728f2f4-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "58f4dede-962e-4db4-9548-05c36728f2f4" (UID: "58f4dede-962e-4db4-9548-05c36728f2f4"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:44:45 crc kubenswrapper[4881]: I0126 12:44:45.951832 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/58f4dede-962e-4db4-9548-05c36728f2f4-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "58f4dede-962e-4db4-9548-05c36728f2f4" (UID: "58f4dede-962e-4db4-9548-05c36728f2f4"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 12:44:45 crc kubenswrapper[4881]: I0126 12:44:45.952510 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/58f4dede-962e-4db4-9548-05c36728f2f4-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "58f4dede-962e-4db4-9548-05c36728f2f4" (UID: "58f4dede-962e-4db4-9548-05c36728f2f4"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:44:45 crc kubenswrapper[4881]: I0126 12:44:45.955183 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/58f4dede-962e-4db4-9548-05c36728f2f4-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "58f4dede-962e-4db4-9548-05c36728f2f4" (UID: "58f4dede-962e-4db4-9548-05c36728f2f4"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:44:45 crc kubenswrapper[4881]: I0126 12:44:45.957588 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58f4dede-962e-4db4-9548-05c36728f2f4-kube-api-access-lcf7x" (OuterVolumeSpecName: "kube-api-access-lcf7x") pod "58f4dede-962e-4db4-9548-05c36728f2f4" (UID: "58f4dede-962e-4db4-9548-05c36728f2f4"). InnerVolumeSpecName "kube-api-access-lcf7x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:44:45 crc kubenswrapper[4881]: I0126 12:44:45.958182 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58f4dede-962e-4db4-9548-05c36728f2f4-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "58f4dede-962e-4db4-9548-05c36728f2f4" (UID: "58f4dede-962e-4db4-9548-05c36728f2f4"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:44:45 crc kubenswrapper[4881]: I0126 12:44:45.960503 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58f4dede-962e-4db4-9548-05c36728f2f4-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "58f4dede-962e-4db4-9548-05c36728f2f4" (UID: "58f4dede-962e-4db4-9548-05c36728f2f4"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:44:45 crc kubenswrapper[4881]: I0126 12:44:45.962529 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-c8d665689-mft2v"] Jan 26 12:44:45 crc kubenswrapper[4881]: E0126 12:44:45.962759 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58f4dede-962e-4db4-9548-05c36728f2f4" containerName="oauth-openshift" Jan 26 12:44:45 crc kubenswrapper[4881]: I0126 12:44:45.962771 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="58f4dede-962e-4db4-9548-05c36728f2f4" containerName="oauth-openshift" Jan 26 12:44:45 crc kubenswrapper[4881]: I0126 12:44:45.962873 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="58f4dede-962e-4db4-9548-05c36728f2f4" containerName="oauth-openshift" Jan 26 12:44:45 crc kubenswrapper[4881]: I0126 12:44:45.963320 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-c8d665689-mft2v" Jan 26 12:44:45 crc kubenswrapper[4881]: I0126 12:44:45.964360 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58f4dede-962e-4db4-9548-05c36728f2f4-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "58f4dede-962e-4db4-9548-05c36728f2f4" (UID: "58f4dede-962e-4db4-9548-05c36728f2f4"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:44:45 crc kubenswrapper[4881]: I0126 12:44:45.964542 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58f4dede-962e-4db4-9548-05c36728f2f4-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "58f4dede-962e-4db4-9548-05c36728f2f4" (UID: "58f4dede-962e-4db4-9548-05c36728f2f4"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:44:45 crc kubenswrapper[4881]: I0126 12:44:45.964627 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58f4dede-962e-4db4-9548-05c36728f2f4-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "58f4dede-962e-4db4-9548-05c36728f2f4" (UID: "58f4dede-962e-4db4-9548-05c36728f2f4"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:44:45 crc kubenswrapper[4881]: I0126 12:44:45.967112 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58f4dede-962e-4db4-9548-05c36728f2f4-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "58f4dede-962e-4db4-9548-05c36728f2f4" (UID: "58f4dede-962e-4db4-9548-05c36728f2f4"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:44:45 crc kubenswrapper[4881]: I0126 12:44:45.973696 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-c8d665689-mft2v"] Jan 26 12:44:45 crc kubenswrapper[4881]: I0126 12:44:45.973891 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58f4dede-962e-4db4-9548-05c36728f2f4-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "58f4dede-962e-4db4-9548-05c36728f2f4" (UID: "58f4dede-962e-4db4-9548-05c36728f2f4"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:44:45 crc kubenswrapper[4881]: I0126 12:44:45.974771 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58f4dede-962e-4db4-9548-05c36728f2f4-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "58f4dede-962e-4db4-9548-05c36728f2f4" (UID: "58f4dede-962e-4db4-9548-05c36728f2f4"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:44:46 crc kubenswrapper[4881]: I0126 12:44:46.051734 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/fefcfa7b-fb1b-4c87-85bd-0f03cf6530f6-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-c8d665689-mft2v\" (UID: \"fefcfa7b-fb1b-4c87-85bd-0f03cf6530f6\") " pod="openshift-authentication/oauth-openshift-c8d665689-mft2v" Jan 26 12:44:46 crc kubenswrapper[4881]: I0126 12:44:46.052158 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/fefcfa7b-fb1b-4c87-85bd-0f03cf6530f6-audit-policies\") pod \"oauth-openshift-c8d665689-mft2v\" (UID: \"fefcfa7b-fb1b-4c87-85bd-0f03cf6530f6\") " pod="openshift-authentication/oauth-openshift-c8d665689-mft2v" Jan 26 12:44:46 crc kubenswrapper[4881]: I0126 12:44:46.052182 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mhvj\" (UniqueName: \"kubernetes.io/projected/fefcfa7b-fb1b-4c87-85bd-0f03cf6530f6-kube-api-access-9mhvj\") pod \"oauth-openshift-c8d665689-mft2v\" (UID: \"fefcfa7b-fb1b-4c87-85bd-0f03cf6530f6\") " pod="openshift-authentication/oauth-openshift-c8d665689-mft2v" Jan 26 12:44:46 crc kubenswrapper[4881]: I0126 12:44:46.052208 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/fefcfa7b-fb1b-4c87-85bd-0f03cf6530f6-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-c8d665689-mft2v\" (UID: \"fefcfa7b-fb1b-4c87-85bd-0f03cf6530f6\") " pod="openshift-authentication/oauth-openshift-c8d665689-mft2v" Jan 26 12:44:46 crc kubenswrapper[4881]: I0126 12:44:46.052228 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/fefcfa7b-fb1b-4c87-85bd-0f03cf6530f6-v4-0-config-user-template-login\") pod \"oauth-openshift-c8d665689-mft2v\" (UID: \"fefcfa7b-fb1b-4c87-85bd-0f03cf6530f6\") " pod="openshift-authentication/oauth-openshift-c8d665689-mft2v" Jan 26 12:44:46 crc kubenswrapper[4881]: I0126 12:44:46.052252 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fefcfa7b-fb1b-4c87-85bd-0f03cf6530f6-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-c8d665689-mft2v\" (UID: \"fefcfa7b-fb1b-4c87-85bd-0f03cf6530f6\") " pod="openshift-authentication/oauth-openshift-c8d665689-mft2v" Jan 26 12:44:46 crc kubenswrapper[4881]: I0126 12:44:46.052271 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/fefcfa7b-fb1b-4c87-85bd-0f03cf6530f6-v4-0-config-system-router-certs\") pod \"oauth-openshift-c8d665689-mft2v\" (UID: \"fefcfa7b-fb1b-4c87-85bd-0f03cf6530f6\") " pod="openshift-authentication/oauth-openshift-c8d665689-mft2v" Jan 26 12:44:46 crc kubenswrapper[4881]: I0126 12:44:46.052380 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/fefcfa7b-fb1b-4c87-85bd-0f03cf6530f6-v4-0-config-system-cliconfig\") pod \"oauth-openshift-c8d665689-mft2v\" (UID: \"fefcfa7b-fb1b-4c87-85bd-0f03cf6530f6\") " pod="openshift-authentication/oauth-openshift-c8d665689-mft2v" Jan 26 12:44:46 crc kubenswrapper[4881]: I0126 12:44:46.052441 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/fefcfa7b-fb1b-4c87-85bd-0f03cf6530f6-v4-0-config-user-template-error\") pod \"oauth-openshift-c8d665689-mft2v\" (UID: \"fefcfa7b-fb1b-4c87-85bd-0f03cf6530f6\") " pod="openshift-authentication/oauth-openshift-c8d665689-mft2v" Jan 26 12:44:46 crc kubenswrapper[4881]: I0126 12:44:46.052467 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/fefcfa7b-fb1b-4c87-85bd-0f03cf6530f6-v4-0-config-system-service-ca\") pod \"oauth-openshift-c8d665689-mft2v\" (UID: \"fefcfa7b-fb1b-4c87-85bd-0f03cf6530f6\") " pod="openshift-authentication/oauth-openshift-c8d665689-mft2v" Jan 26 12:44:46 crc kubenswrapper[4881]: I0126 12:44:46.052661 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fefcfa7b-fb1b-4c87-85bd-0f03cf6530f6-audit-dir\") pod \"oauth-openshift-c8d665689-mft2v\" (UID: \"fefcfa7b-fb1b-4c87-85bd-0f03cf6530f6\") " pod="openshift-authentication/oauth-openshift-c8d665689-mft2v" Jan 26 12:44:46 crc kubenswrapper[4881]: I0126 12:44:46.052716 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/fefcfa7b-fb1b-4c87-85bd-0f03cf6530f6-v4-0-config-system-serving-cert\") pod \"oauth-openshift-c8d665689-mft2v\" (UID: \"fefcfa7b-fb1b-4c87-85bd-0f03cf6530f6\") " pod="openshift-authentication/oauth-openshift-c8d665689-mft2v" Jan 26 12:44:46 crc kubenswrapper[4881]: I0126 12:44:46.052740 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/fefcfa7b-fb1b-4c87-85bd-0f03cf6530f6-v4-0-config-system-session\") pod \"oauth-openshift-c8d665689-mft2v\" (UID: \"fefcfa7b-fb1b-4c87-85bd-0f03cf6530f6\") " pod="openshift-authentication/oauth-openshift-c8d665689-mft2v" Jan 26 12:44:46 crc kubenswrapper[4881]: I0126 12:44:46.052812 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/fefcfa7b-fb1b-4c87-85bd-0f03cf6530f6-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-c8d665689-mft2v\" (UID: \"fefcfa7b-fb1b-4c87-85bd-0f03cf6530f6\") " pod="openshift-authentication/oauth-openshift-c8d665689-mft2v" Jan 26 12:44:46 crc kubenswrapper[4881]: I0126 12:44:46.052899 4881 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/58f4dede-962e-4db4-9548-05c36728f2f4-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:46 crc kubenswrapper[4881]: I0126 12:44:46.052923 4881 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/58f4dede-962e-4db4-9548-05c36728f2f4-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 26 
12:44:46 crc kubenswrapper[4881]: I0126 12:44:46.052936 4881 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/58f4dede-962e-4db4-9548-05c36728f2f4-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:46 crc kubenswrapper[4881]: I0126 12:44:46.052948 4881 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/58f4dede-962e-4db4-9548-05c36728f2f4-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:46 crc kubenswrapper[4881]: I0126 12:44:46.052959 4881 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/58f4dede-962e-4db4-9548-05c36728f2f4-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:46 crc kubenswrapper[4881]: I0126 12:44:46.052972 4881 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/58f4dede-962e-4db4-9548-05c36728f2f4-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:46 crc kubenswrapper[4881]: I0126 12:44:46.052984 4881 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/58f4dede-962e-4db4-9548-05c36728f2f4-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:46 crc kubenswrapper[4881]: I0126 12:44:46.052998 4881 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/58f4dede-962e-4db4-9548-05c36728f2f4-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:46 crc kubenswrapper[4881]: I0126 12:44:46.053012 4881 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/58f4dede-962e-4db4-9548-05c36728f2f4-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:46 crc kubenswrapper[4881]: I0126 12:44:46.053024 4881 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/58f4dede-962e-4db4-9548-05c36728f2f4-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:46 crc kubenswrapper[4881]: I0126 12:44:46.053037 4881 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/58f4dede-962e-4db4-9548-05c36728f2f4-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:46 crc kubenswrapper[4881]: I0126 12:44:46.053054 4881 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/58f4dede-962e-4db4-9548-05c36728f2f4-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:46 crc kubenswrapper[4881]: I0126 12:44:46.053069 4881 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/58f4dede-962e-4db4-9548-05c36728f2f4-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:46 crc kubenswrapper[4881]: I0126 12:44:46.053083 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lcf7x\" (UniqueName: \"kubernetes.io/projected/58f4dede-962e-4db4-9548-05c36728f2f4-kube-api-access-lcf7x\") on node \"crc\" 
DevicePath \"\"" Jan 26 12:44:46 crc kubenswrapper[4881]: I0126 12:44:46.154091 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mhvj\" (UniqueName: \"kubernetes.io/projected/fefcfa7b-fb1b-4c87-85bd-0f03cf6530f6-kube-api-access-9mhvj\") pod \"oauth-openshift-c8d665689-mft2v\" (UID: \"fefcfa7b-fb1b-4c87-85bd-0f03cf6530f6\") " pod="openshift-authentication/oauth-openshift-c8d665689-mft2v" Jan 26 12:44:46 crc kubenswrapper[4881]: I0126 12:44:46.154261 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/fefcfa7b-fb1b-4c87-85bd-0f03cf6530f6-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-c8d665689-mft2v\" (UID: \"fefcfa7b-fb1b-4c87-85bd-0f03cf6530f6\") " pod="openshift-authentication/oauth-openshift-c8d665689-mft2v" Jan 26 12:44:46 crc kubenswrapper[4881]: I0126 12:44:46.154323 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/fefcfa7b-fb1b-4c87-85bd-0f03cf6530f6-v4-0-config-user-template-login\") pod \"oauth-openshift-c8d665689-mft2v\" (UID: \"fefcfa7b-fb1b-4c87-85bd-0f03cf6530f6\") " pod="openshift-authentication/oauth-openshift-c8d665689-mft2v" Jan 26 12:44:46 crc kubenswrapper[4881]: I0126 12:44:46.154360 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/fefcfa7b-fb1b-4c87-85bd-0f03cf6530f6-v4-0-config-system-router-certs\") pod \"oauth-openshift-c8d665689-mft2v\" (UID: \"fefcfa7b-fb1b-4c87-85bd-0f03cf6530f6\") " pod="openshift-authentication/oauth-openshift-c8d665689-mft2v" Jan 26 12:44:46 crc kubenswrapper[4881]: I0126 12:44:46.154408 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fefcfa7b-fb1b-4c87-85bd-0f03cf6530f6-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-c8d665689-mft2v\" (UID: \"fefcfa7b-fb1b-4c87-85bd-0f03cf6530f6\") " pod="openshift-authentication/oauth-openshift-c8d665689-mft2v" Jan 26 12:44:46 crc kubenswrapper[4881]: I0126 12:44:46.154433 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/fefcfa7b-fb1b-4c87-85bd-0f03cf6530f6-v4-0-config-system-cliconfig\") pod \"oauth-openshift-c8d665689-mft2v\" (UID: \"fefcfa7b-fb1b-4c87-85bd-0f03cf6530f6\") " pod="openshift-authentication/oauth-openshift-c8d665689-mft2v" Jan 26 12:44:46 crc kubenswrapper[4881]: I0126 12:44:46.154479 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/fefcfa7b-fb1b-4c87-85bd-0f03cf6530f6-v4-0-config-user-template-error\") pod \"oauth-openshift-c8d665689-mft2v\" (UID: \"fefcfa7b-fb1b-4c87-85bd-0f03cf6530f6\") " pod="openshift-authentication/oauth-openshift-c8d665689-mft2v" Jan 26 12:44:46 crc kubenswrapper[4881]: I0126 12:44:46.154505 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/fefcfa7b-fb1b-4c87-85bd-0f03cf6530f6-v4-0-config-system-service-ca\") pod \"oauth-openshift-c8d665689-mft2v\" (UID: \"fefcfa7b-fb1b-4c87-85bd-0f03cf6530f6\") " 
pod="openshift-authentication/oauth-openshift-c8d665689-mft2v" Jan 26 12:44:46 crc kubenswrapper[4881]: I0126 12:44:46.154655 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fefcfa7b-fb1b-4c87-85bd-0f03cf6530f6-audit-dir\") pod \"oauth-openshift-c8d665689-mft2v\" (UID: \"fefcfa7b-fb1b-4c87-85bd-0f03cf6530f6\") " pod="openshift-authentication/oauth-openshift-c8d665689-mft2v" Jan 26 12:44:46 crc kubenswrapper[4881]: I0126 12:44:46.154680 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/fefcfa7b-fb1b-4c87-85bd-0f03cf6530f6-v4-0-config-system-serving-cert\") pod \"oauth-openshift-c8d665689-mft2v\" (UID: \"fefcfa7b-fb1b-4c87-85bd-0f03cf6530f6\") " pod="openshift-authentication/oauth-openshift-c8d665689-mft2v" Jan 26 12:44:46 crc kubenswrapper[4881]: I0126 12:44:46.154731 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/fefcfa7b-fb1b-4c87-85bd-0f03cf6530f6-v4-0-config-system-session\") pod \"oauth-openshift-c8d665689-mft2v\" (UID: \"fefcfa7b-fb1b-4c87-85bd-0f03cf6530f6\") " pod="openshift-authentication/oauth-openshift-c8d665689-mft2v" Jan 26 12:44:46 crc kubenswrapper[4881]: I0126 12:44:46.154768 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/fefcfa7b-fb1b-4c87-85bd-0f03cf6530f6-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-c8d665689-mft2v\" (UID: \"fefcfa7b-fb1b-4c87-85bd-0f03cf6530f6\") " pod="openshift-authentication/oauth-openshift-c8d665689-mft2v" Jan 26 12:44:46 crc kubenswrapper[4881]: I0126 12:44:46.154821 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/fefcfa7b-fb1b-4c87-85bd-0f03cf6530f6-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-c8d665689-mft2v\" (UID: \"fefcfa7b-fb1b-4c87-85bd-0f03cf6530f6\") " pod="openshift-authentication/oauth-openshift-c8d665689-mft2v" Jan 26 12:44:46 crc kubenswrapper[4881]: I0126 12:44:46.154850 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/fefcfa7b-fb1b-4c87-85bd-0f03cf6530f6-audit-policies\") pod \"oauth-openshift-c8d665689-mft2v\" (UID: \"fefcfa7b-fb1b-4c87-85bd-0f03cf6530f6\") " pod="openshift-authentication/oauth-openshift-c8d665689-mft2v" Jan 26 12:44:46 crc kubenswrapper[4881]: I0126 12:44:46.156368 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fefcfa7b-fb1b-4c87-85bd-0f03cf6530f6-audit-dir\") pod \"oauth-openshift-c8d665689-mft2v\" (UID: \"fefcfa7b-fb1b-4c87-85bd-0f03cf6530f6\") " pod="openshift-authentication/oauth-openshift-c8d665689-mft2v" Jan 26 12:44:46 crc kubenswrapper[4881]: I0126 12:44:46.156446 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/fefcfa7b-fb1b-4c87-85bd-0f03cf6530f6-v4-0-config-system-cliconfig\") pod \"oauth-openshift-c8d665689-mft2v\" (UID: \"fefcfa7b-fb1b-4c87-85bd-0f03cf6530f6\") " pod="openshift-authentication/oauth-openshift-c8d665689-mft2v" Jan 26 12:44:46 crc kubenswrapper[4881]: I0126 12:44:46.156453 4881 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/fefcfa7b-fb1b-4c87-85bd-0f03cf6530f6-audit-policies\") pod \"oauth-openshift-c8d665689-mft2v\" (UID: \"fefcfa7b-fb1b-4c87-85bd-0f03cf6530f6\") " pod="openshift-authentication/oauth-openshift-c8d665689-mft2v" Jan 26 12:44:46 crc kubenswrapper[4881]: I0126 12:44:46.157963 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/fefcfa7b-fb1b-4c87-85bd-0f03cf6530f6-v4-0-config-system-service-ca\") pod \"oauth-openshift-c8d665689-mft2v\" (UID: \"fefcfa7b-fb1b-4c87-85bd-0f03cf6530f6\") " pod="openshift-authentication/oauth-openshift-c8d665689-mft2v" Jan 26 12:44:46 crc kubenswrapper[4881]: I0126 12:44:46.160060 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/fefcfa7b-fb1b-4c87-85bd-0f03cf6530f6-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-c8d665689-mft2v\" (UID: \"fefcfa7b-fb1b-4c87-85bd-0f03cf6530f6\") " pod="openshift-authentication/oauth-openshift-c8d665689-mft2v" Jan 26 12:44:46 crc kubenswrapper[4881]: I0126 12:44:46.160268 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fefcfa7b-fb1b-4c87-85bd-0f03cf6530f6-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-c8d665689-mft2v\" (UID: \"fefcfa7b-fb1b-4c87-85bd-0f03cf6530f6\") " pod="openshift-authentication/oauth-openshift-c8d665689-mft2v" Jan 26 12:44:46 crc kubenswrapper[4881]: I0126 12:44:46.160606 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/fefcfa7b-fb1b-4c87-85bd-0f03cf6530f6-v4-0-config-user-template-error\") pod \"oauth-openshift-c8d665689-mft2v\" (UID: \"fefcfa7b-fb1b-4c87-85bd-0f03cf6530f6\") " pod="openshift-authentication/oauth-openshift-c8d665689-mft2v" Jan 26 12:44:46 crc kubenswrapper[4881]: I0126 12:44:46.160780 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/fefcfa7b-fb1b-4c87-85bd-0f03cf6530f6-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-c8d665689-mft2v\" (UID: \"fefcfa7b-fb1b-4c87-85bd-0f03cf6530f6\") " pod="openshift-authentication/oauth-openshift-c8d665689-mft2v" Jan 26 12:44:46 crc kubenswrapper[4881]: I0126 12:44:46.161182 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/fefcfa7b-fb1b-4c87-85bd-0f03cf6530f6-v4-0-config-system-serving-cert\") pod \"oauth-openshift-c8d665689-mft2v\" (UID: \"fefcfa7b-fb1b-4c87-85bd-0f03cf6530f6\") " pod="openshift-authentication/oauth-openshift-c8d665689-mft2v" Jan 26 12:44:46 crc kubenswrapper[4881]: I0126 12:44:46.161429 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/fefcfa7b-fb1b-4c87-85bd-0f03cf6530f6-v4-0-config-system-router-certs\") pod \"oauth-openshift-c8d665689-mft2v\" (UID: \"fefcfa7b-fb1b-4c87-85bd-0f03cf6530f6\") " pod="openshift-authentication/oauth-openshift-c8d665689-mft2v" Jan 26 12:44:46 crc kubenswrapper[4881]: I0126 12:44:46.161764 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/fefcfa7b-fb1b-4c87-85bd-0f03cf6530f6-v4-0-config-system-session\") pod \"oauth-openshift-c8d665689-mft2v\" (UID: \"fefcfa7b-fb1b-4c87-85bd-0f03cf6530f6\") " pod="openshift-authentication/oauth-openshift-c8d665689-mft2v" Jan 26 12:44:46 crc kubenswrapper[4881]: I0126 12:44:46.162461 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/fefcfa7b-fb1b-4c87-85bd-0f03cf6530f6-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-c8d665689-mft2v\" (UID: \"fefcfa7b-fb1b-4c87-85bd-0f03cf6530f6\") " pod="openshift-authentication/oauth-openshift-c8d665689-mft2v" Jan 26 12:44:46 crc kubenswrapper[4881]: I0126 12:44:46.163035 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/fefcfa7b-fb1b-4c87-85bd-0f03cf6530f6-v4-0-config-user-template-login\") pod \"oauth-openshift-c8d665689-mft2v\" (UID: \"fefcfa7b-fb1b-4c87-85bd-0f03cf6530f6\") " pod="openshift-authentication/oauth-openshift-c8d665689-mft2v" Jan 26 12:44:46 crc kubenswrapper[4881]: I0126 12:44:46.171764 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mhvj\" (UniqueName: \"kubernetes.io/projected/fefcfa7b-fb1b-4c87-85bd-0f03cf6530f6-kube-api-access-9mhvj\") pod \"oauth-openshift-c8d665689-mft2v\" (UID: \"fefcfa7b-fb1b-4c87-85bd-0f03cf6530f6\") " pod="openshift-authentication/oauth-openshift-c8d665689-mft2v" Jan 26 12:44:46 crc kubenswrapper[4881]: I0126 12:44:46.325264 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-c8d665689-mft2v" Jan 26 12:44:46 crc kubenswrapper[4881]: I0126 12:44:46.785164 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-c8d665689-mft2v"] Jan 26 12:44:46 crc kubenswrapper[4881]: I0126 12:44:46.904346 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-c8d665689-mft2v" event={"ID":"fefcfa7b-fb1b-4c87-85bd-0f03cf6530f6","Type":"ContainerStarted","Data":"aabfe5fd2a2595537f17f35c76ad7d74002dd3ceba40ecd30108aa646c99f160"} Jan 26 12:44:46 crc kubenswrapper[4881]: I0126 12:44:46.906825 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5db757fd5b-8bj6j" event={"ID":"58f4dede-962e-4db4-9548-05c36728f2f4","Type":"ContainerDied","Data":"c4adab774eb529e54a6784bd5700ef472270a43229da309bd4c5a73419de4c9f"} Jan 26 12:44:46 crc kubenswrapper[4881]: I0126 12:44:46.906863 4881 scope.go:117] "RemoveContainer" containerID="69c029003097f1f463d69b4b3d2e2401d5f4f195fd420e0f3143f26799c81eb2" Jan 26 12:44:46 crc kubenswrapper[4881]: I0126 12:44:46.906925 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-5db757fd5b-8bj6j" Jan 26 12:44:46 crc kubenswrapper[4881]: I0126 12:44:46.927408 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-5db757fd5b-8bj6j"] Jan 26 12:44:46 crc kubenswrapper[4881]: I0126 12:44:46.932046 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-5db757fd5b-8bj6j"] Jan 26 12:44:47 crc kubenswrapper[4881]: I0126 12:44:47.912848 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-c8d665689-mft2v" event={"ID":"fefcfa7b-fb1b-4c87-85bd-0f03cf6530f6","Type":"ContainerStarted","Data":"1ceaf96a52e7b12b27fd4b4d90653fb99404b7f32facf57c928e1a885bd07c8b"} Jan 26 12:44:47 crc kubenswrapper[4881]: I0126 12:44:47.913047 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-c8d665689-mft2v" Jan 26 12:44:48 crc kubenswrapper[4881]: I0126 12:44:48.088889 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58f4dede-962e-4db4-9548-05c36728f2f4" path="/var/lib/kubelet/pods/58f4dede-962e-4db4-9548-05c36728f2f4/volumes" Jan 26 12:44:48 crc kubenswrapper[4881]: I0126 12:44:48.287945 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-c8d665689-mft2v" Jan 26 12:44:48 crc kubenswrapper[4881]: I0126 12:44:48.309707 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-c8d665689-mft2v" podStartSLOduration=28.309684823 podStartE2EDuration="28.309684823s" podCreationTimestamp="2026-01-26 12:44:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:44:47.938710938 +0000 UTC m=+560.418020974" watchObservedRunningTime="2026-01-26 12:44:48.309684823 +0000 UTC m=+560.788994859" Jan 26 12:45:00 crc kubenswrapper[4881]: I0126 12:45:00.169071 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490525-5chxq"] Jan 26 12:45:00 crc kubenswrapper[4881]: I0126 12:45:00.170172 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490525-5chxq" Jan 26 12:45:00 crc kubenswrapper[4881]: I0126 12:45:00.172392 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 26 12:45:00 crc kubenswrapper[4881]: I0126 12:45:00.172428 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 26 12:45:00 crc kubenswrapper[4881]: I0126 12:45:00.181397 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490525-5chxq"] Jan 26 12:45:00 crc kubenswrapper[4881]: I0126 12:45:00.340150 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/43796436-22b5-498f-8446-8f08d2f82305-config-volume\") pod \"collect-profiles-29490525-5chxq\" (UID: \"43796436-22b5-498f-8446-8f08d2f82305\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490525-5chxq" Jan 26 12:45:00 crc kubenswrapper[4881]: I0126 12:45:00.340195 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7mrn\" (UniqueName: \"kubernetes.io/projected/43796436-22b5-498f-8446-8f08d2f82305-kube-api-access-s7mrn\") pod \"collect-profiles-29490525-5chxq\" (UID: \"43796436-22b5-498f-8446-8f08d2f82305\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490525-5chxq" Jan 26 12:45:00 crc kubenswrapper[4881]: I0126 12:45:00.340227 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/43796436-22b5-498f-8446-8f08d2f82305-secret-volume\") pod \"collect-profiles-29490525-5chxq\" (UID: \"43796436-22b5-498f-8446-8f08d2f82305\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490525-5chxq" Jan 26 12:45:00 crc kubenswrapper[4881]: I0126 12:45:00.441956 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/43796436-22b5-498f-8446-8f08d2f82305-config-volume\") pod \"collect-profiles-29490525-5chxq\" (UID: \"43796436-22b5-498f-8446-8f08d2f82305\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490525-5chxq" Jan 26 12:45:00 crc kubenswrapper[4881]: I0126 12:45:00.442241 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s7mrn\" (UniqueName: \"kubernetes.io/projected/43796436-22b5-498f-8446-8f08d2f82305-kube-api-access-s7mrn\") pod \"collect-profiles-29490525-5chxq\" (UID: \"43796436-22b5-498f-8446-8f08d2f82305\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490525-5chxq" Jan 26 12:45:00 crc kubenswrapper[4881]: I0126 12:45:00.442359 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/43796436-22b5-498f-8446-8f08d2f82305-secret-volume\") pod \"collect-profiles-29490525-5chxq\" (UID: \"43796436-22b5-498f-8446-8f08d2f82305\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490525-5chxq" Jan 26 12:45:00 crc kubenswrapper[4881]: I0126 12:45:00.444163 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/43796436-22b5-498f-8446-8f08d2f82305-config-volume\") pod 
\"collect-profiles-29490525-5chxq\" (UID: \"43796436-22b5-498f-8446-8f08d2f82305\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490525-5chxq" Jan 26 12:45:00 crc kubenswrapper[4881]: I0126 12:45:00.449455 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/43796436-22b5-498f-8446-8f08d2f82305-secret-volume\") pod \"collect-profiles-29490525-5chxq\" (UID: \"43796436-22b5-498f-8446-8f08d2f82305\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490525-5chxq" Jan 26 12:45:00 crc kubenswrapper[4881]: I0126 12:45:00.458831 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s7mrn\" (UniqueName: \"kubernetes.io/projected/43796436-22b5-498f-8446-8f08d2f82305-kube-api-access-s7mrn\") pod \"collect-profiles-29490525-5chxq\" (UID: \"43796436-22b5-498f-8446-8f08d2f82305\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490525-5chxq" Jan 26 12:45:00 crc kubenswrapper[4881]: I0126 12:45:00.490619 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490525-5chxq" Jan 26 12:45:00 crc kubenswrapper[4881]: I0126 12:45:00.660616 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490525-5chxq"] Jan 26 12:45:00 crc kubenswrapper[4881]: I0126 12:45:00.997685 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490525-5chxq" event={"ID":"43796436-22b5-498f-8446-8f08d2f82305","Type":"ContainerStarted","Data":"396ad44341a835448a07ecf589a836a649fd6b49dadbb4eca7a52c75c4d32bed"} Jan 26 12:45:02 crc kubenswrapper[4881]: I0126 12:45:02.007149 4881 generic.go:334] "Generic (PLEG): container finished" podID="43796436-22b5-498f-8446-8f08d2f82305" containerID="bd28aa9b377d6898a1ca53e5a69b248ef971bf424eb0f043e01fe7d042d1192f" exitCode=0 Jan 26 12:45:02 crc kubenswrapper[4881]: I0126 12:45:02.007219 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490525-5chxq" event={"ID":"43796436-22b5-498f-8446-8f08d2f82305","Type":"ContainerDied","Data":"bd28aa9b377d6898a1ca53e5a69b248ef971bf424eb0f043e01fe7d042d1192f"} Jan 26 12:45:03 crc kubenswrapper[4881]: I0126 12:45:03.321072 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490525-5chxq" Jan 26 12:45:03 crc kubenswrapper[4881]: I0126 12:45:03.485061 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/43796436-22b5-498f-8446-8f08d2f82305-config-volume\") pod \"43796436-22b5-498f-8446-8f08d2f82305\" (UID: \"43796436-22b5-498f-8446-8f08d2f82305\") " Jan 26 12:45:03 crc kubenswrapper[4881]: I0126 12:45:03.485175 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s7mrn\" (UniqueName: \"kubernetes.io/projected/43796436-22b5-498f-8446-8f08d2f82305-kube-api-access-s7mrn\") pod \"43796436-22b5-498f-8446-8f08d2f82305\" (UID: \"43796436-22b5-498f-8446-8f08d2f82305\") " Jan 26 12:45:03 crc kubenswrapper[4881]: I0126 12:45:03.485233 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/43796436-22b5-498f-8446-8f08d2f82305-secret-volume\") pod \"43796436-22b5-498f-8446-8f08d2f82305\" (UID: \"43796436-22b5-498f-8446-8f08d2f82305\") " Jan 26 12:45:03 crc kubenswrapper[4881]: I0126 12:45:03.486390 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43796436-22b5-498f-8446-8f08d2f82305-config-volume" (OuterVolumeSpecName: "config-volume") pod "43796436-22b5-498f-8446-8f08d2f82305" (UID: "43796436-22b5-498f-8446-8f08d2f82305"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:45:03 crc kubenswrapper[4881]: I0126 12:45:03.497131 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43796436-22b5-498f-8446-8f08d2f82305-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "43796436-22b5-498f-8446-8f08d2f82305" (UID: "43796436-22b5-498f-8446-8f08d2f82305"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:45:03 crc kubenswrapper[4881]: I0126 12:45:03.497842 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43796436-22b5-498f-8446-8f08d2f82305-kube-api-access-s7mrn" (OuterVolumeSpecName: "kube-api-access-s7mrn") pod "43796436-22b5-498f-8446-8f08d2f82305" (UID: "43796436-22b5-498f-8446-8f08d2f82305"). InnerVolumeSpecName "kube-api-access-s7mrn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:45:03 crc kubenswrapper[4881]: I0126 12:45:03.586600 4881 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/43796436-22b5-498f-8446-8f08d2f82305-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 12:45:03 crc kubenswrapper[4881]: I0126 12:45:03.586656 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s7mrn\" (UniqueName: \"kubernetes.io/projected/43796436-22b5-498f-8446-8f08d2f82305-kube-api-access-s7mrn\") on node \"crc\" DevicePath \"\"" Jan 26 12:45:03 crc kubenswrapper[4881]: I0126 12:45:03.587278 4881 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/43796436-22b5-498f-8446-8f08d2f82305-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 26 12:45:04 crc kubenswrapper[4881]: I0126 12:45:04.024911 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490525-5chxq" event={"ID":"43796436-22b5-498f-8446-8f08d2f82305","Type":"ContainerDied","Data":"396ad44341a835448a07ecf589a836a649fd6b49dadbb4eca7a52c75c4d32bed"} Jan 26 12:45:04 crc kubenswrapper[4881]: I0126 12:45:04.024971 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490525-5chxq" Jan 26 12:45:04 crc kubenswrapper[4881]: I0126 12:45:04.025007 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="396ad44341a835448a07ecf589a836a649fd6b49dadbb4eca7a52c75c4d32bed" Jan 26 12:45:24 crc kubenswrapper[4881]: I0126 12:45:24.789508 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fwlbz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 12:45:24 crc kubenswrapper[4881]: I0126 12:45:24.790325 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 12:45:54 crc kubenswrapper[4881]: I0126 12:45:54.789512 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fwlbz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 12:45:54 crc kubenswrapper[4881]: I0126 12:45:54.790277 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 12:46:24 crc kubenswrapper[4881]: I0126 12:46:24.789892 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fwlbz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 12:46:24 crc kubenswrapper[4881]: I0126 
12:46:24.790560 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 12:46:24 crc kubenswrapper[4881]: I0126 12:46:24.790633 4881 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz"
Jan 26 12:46:24 crc kubenswrapper[4881]: I0126 12:46:24.791355 4881 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"fc2d017a964d1b170327be33eae2555dd4c7a4bb2af9e5ca4de2b2177d47849c"} pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 26 12:46:24 crc kubenswrapper[4881]: I0126 12:46:24.791428 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" containerName="machine-config-daemon" containerID="cri-o://fc2d017a964d1b170327be33eae2555dd4c7a4bb2af9e5ca4de2b2177d47849c" gracePeriod=600
Jan 26 12:46:25 crc kubenswrapper[4881]: I0126 12:46:25.669065 4881 generic.go:334] "Generic (PLEG): container finished" podID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" containerID="fc2d017a964d1b170327be33eae2555dd4c7a4bb2af9e5ca4de2b2177d47849c" exitCode=0
Jan 26 12:46:25 crc kubenswrapper[4881]: I0126 12:46:25.669132 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" event={"ID":"ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19","Type":"ContainerDied","Data":"fc2d017a964d1b170327be33eae2555dd4c7a4bb2af9e5ca4de2b2177d47849c"}
Jan 26 12:46:25 crc kubenswrapper[4881]: I0126 12:46:25.669395 4881 scope.go:117] "RemoveContainer" containerID="c86f563b5fe2c0bb24899b86382a702f023fc4944ac1b02721e98303870ac011"
Jan 26 12:46:27 crc kubenswrapper[4881]: I0126 12:46:27.691888 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" event={"ID":"ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19","Type":"ContainerStarted","Data":"658714d5f6b987a3a780334d9ffc01082e8c5ae88ec8118662e97cc52e9126d6"}
Jan 26 12:48:10 crc kubenswrapper[4881]: I0126 12:48:10.287467 4881 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 26 12:48:54 crc kubenswrapper[4881]: I0126 12:48:54.790228 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fwlbz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 12:48:54 crc kubenswrapper[4881]: I0126 12:48:54.791303 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 12:49:24 crc kubenswrapper[4881]: I0126 12:49:24.789287 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fwlbz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 12:49:24 crc kubenswrapper[4881]: I0126 12:49:24.789755 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 12:49:41 crc kubenswrapper[4881]: I0126 12:49:41.143255 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-hsq4v"]
Jan 26 12:49:41 crc kubenswrapper[4881]: E0126 12:49:41.144185 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43796436-22b5-498f-8446-8f08d2f82305" containerName="collect-profiles"
Jan 26 12:49:41 crc kubenswrapper[4881]: I0126 12:49:41.144208 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="43796436-22b5-498f-8446-8f08d2f82305" containerName="collect-profiles"
Jan 26 12:49:41 crc kubenswrapper[4881]: I0126 12:49:41.144372 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="43796436-22b5-498f-8446-8f08d2f82305" containerName="collect-profiles"
Jan 26 12:49:41 crc kubenswrapper[4881]: I0126 12:49:41.145680 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hsq4v"
Jan 26 12:49:41 crc kubenswrapper[4881]: I0126 12:49:41.158898 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hsq4v"]
Jan 26 12:49:41 crc kubenswrapper[4881]: I0126 12:49:41.329086 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjtqp\" (UniqueName: \"kubernetes.io/projected/ef736dec-7a3e-4255-8633-c4d380045cfd-kube-api-access-tjtqp\") pod \"certified-operators-hsq4v\" (UID: \"ef736dec-7a3e-4255-8633-c4d380045cfd\") " pod="openshift-marketplace/certified-operators-hsq4v"
Jan 26 12:49:41 crc kubenswrapper[4881]: I0126 12:49:41.329490 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef736dec-7a3e-4255-8633-c4d380045cfd-utilities\") pod \"certified-operators-hsq4v\" (UID: \"ef736dec-7a3e-4255-8633-c4d380045cfd\") " pod="openshift-marketplace/certified-operators-hsq4v"
Jan 26 12:49:41 crc kubenswrapper[4881]: I0126 12:49:41.329545 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef736dec-7a3e-4255-8633-c4d380045cfd-catalog-content\") pod \"certified-operators-hsq4v\" (UID: \"ef736dec-7a3e-4255-8633-c4d380045cfd\") " pod="openshift-marketplace/certified-operators-hsq4v"
Jan 26 12:49:41 crc kubenswrapper[4881]: I0126 12:49:41.430478 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tjtqp\" (UniqueName: \"kubernetes.io/projected/ef736dec-7a3e-4255-8633-c4d380045cfd-kube-api-access-tjtqp\") pod \"certified-operators-hsq4v\" (UID: \"ef736dec-7a3e-4255-8633-c4d380045cfd\") " pod="openshift-marketplace/certified-operators-hsq4v"
Jan 26 12:49:41 crc kubenswrapper[4881]: I0126 12:49:41.430603 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef736dec-7a3e-4255-8633-c4d380045cfd-utilities\") pod \"certified-operators-hsq4v\" (UID: \"ef736dec-7a3e-4255-8633-c4d380045cfd\") " pod="openshift-marketplace/certified-operators-hsq4v"
Jan 26 12:49:41 crc kubenswrapper[4881]: I0126 12:49:41.430638 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef736dec-7a3e-4255-8633-c4d380045cfd-catalog-content\") pod \"certified-operators-hsq4v\" (UID: \"ef736dec-7a3e-4255-8633-c4d380045cfd\") " pod="openshift-marketplace/certified-operators-hsq4v"
Jan 26 12:49:41 crc kubenswrapper[4881]: I0126 12:49:41.431093 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef736dec-7a3e-4255-8633-c4d380045cfd-catalog-content\") pod \"certified-operators-hsq4v\" (UID: \"ef736dec-7a3e-4255-8633-c4d380045cfd\") " pod="openshift-marketplace/certified-operators-hsq4v"
Jan 26 12:49:41 crc kubenswrapper[4881]: I0126 12:49:41.431129 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef736dec-7a3e-4255-8633-c4d380045cfd-utilities\") pod \"certified-operators-hsq4v\" (UID: \"ef736dec-7a3e-4255-8633-c4d380045cfd\") " pod="openshift-marketplace/certified-operators-hsq4v"
Jan 26 12:49:41 crc kubenswrapper[4881]: I0126 12:49:41.458424 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tjtqp\" (UniqueName: \"kubernetes.io/projected/ef736dec-7a3e-4255-8633-c4d380045cfd-kube-api-access-tjtqp\") pod \"certified-operators-hsq4v\" (UID: \"ef736dec-7a3e-4255-8633-c4d380045cfd\") " pod="openshift-marketplace/certified-operators-hsq4v"
Jan 26 12:49:41 crc kubenswrapper[4881]: I0126 12:49:41.463846 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hsq4v"
Jan 26 12:49:41 crc kubenswrapper[4881]: I0126 12:49:41.700136 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hsq4v"]
Jan 26 12:49:41 crc kubenswrapper[4881]: I0126 12:49:41.896921 4881 generic.go:334] "Generic (PLEG): container finished" podID="ef736dec-7a3e-4255-8633-c4d380045cfd" containerID="af7ac99483cd154e6fd27f76748dd54c69de6b26ba0367538017f1285efcd550" exitCode=0
Jan 26 12:49:41 crc kubenswrapper[4881]: I0126 12:49:41.896963 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hsq4v" event={"ID":"ef736dec-7a3e-4255-8633-c4d380045cfd","Type":"ContainerDied","Data":"af7ac99483cd154e6fd27f76748dd54c69de6b26ba0367538017f1285efcd550"}
Jan 26 12:49:41 crc kubenswrapper[4881]: I0126 12:49:41.896988 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hsq4v" event={"ID":"ef736dec-7a3e-4255-8633-c4d380045cfd","Type":"ContainerStarted","Data":"79df0209290f32639583c9cb3041d89bc072c2ad4fcc0eb328b10bfa7160b8ff"}
Jan 26 12:49:41 crc kubenswrapper[4881]: I0126 12:49:41.898396 4881 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 26 12:49:42 crc kubenswrapper[4881]: I0126 12:49:42.906967 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hsq4v" event={"ID":"ef736dec-7a3e-4255-8633-c4d380045cfd","Type":"ContainerStarted","Data":"723f24e6d361515decc705550fc16728e220f5800332caf08be7bfa552ace868"}
Jan 26 12:49:43 crc kubenswrapper[4881]: I0126 12:49:43.916993 4881 generic.go:334] "Generic (PLEG): container finished" podID="ef736dec-7a3e-4255-8633-c4d380045cfd" containerID="723f24e6d361515decc705550fc16728e220f5800332caf08be7bfa552ace868" exitCode=0
Jan 26 12:49:43 crc kubenswrapper[4881]: I0126 12:49:43.917071 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hsq4v" event={"ID":"ef736dec-7a3e-4255-8633-c4d380045cfd","Type":"ContainerDied","Data":"723f24e6d361515decc705550fc16728e220f5800332caf08be7bfa552ace868"}
Jan 26 12:49:44 crc kubenswrapper[4881]: I0126 12:49:44.928478 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hsq4v" event={"ID":"ef736dec-7a3e-4255-8633-c4d380045cfd","Type":"ContainerStarted","Data":"9d6ef35dbdfd710248af0e52162fb31e242f22c59993e967af7591e8206f50fb"}
Jan 26 12:49:44 crc kubenswrapper[4881]: I0126 12:49:44.955567 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-hsq4v" podStartSLOduration=1.521424531 podStartE2EDuration="3.955503896s" podCreationTimestamp="2026-01-26 12:49:41 +0000 UTC" firstStartedPulling="2026-01-26 12:49:41.898213383 +0000 UTC m=+854.377523409" lastFinishedPulling="2026-01-26 12:49:44.332292738 +0000 UTC m=+856.811602774" observedRunningTime="2026-01-26 12:49:44.953149919 +0000 UTC m=+857.432459985" watchObservedRunningTime="2026-01-26 12:49:44.955503896 +0000 UTC m=+857.434813962"
Jan 26 12:49:51 crc kubenswrapper[4881]: I0126 12:49:51.464723 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-hsq4v"
Jan 26 12:49:51 crc kubenswrapper[4881]: I0126 12:49:51.465096 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-hsq4v"
Jan 26 12:49:51 crc kubenswrapper[4881]: I0126 12:49:51.500034 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-hsq4v"
Jan 26 12:49:52 crc kubenswrapper[4881]: I0126 12:49:52.022410 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-hsq4v"
Jan 26 12:49:52 crc kubenswrapper[4881]: I0126 12:49:52.060961 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hsq4v"]
Jan 26 12:49:53 crc kubenswrapper[4881]: I0126 12:49:53.987678 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-hsq4v" podUID="ef736dec-7a3e-4255-8633-c4d380045cfd" containerName="registry-server" containerID="cri-o://9d6ef35dbdfd710248af0e52162fb31e242f22c59993e967af7591e8206f50fb" gracePeriod=2
Jan 26 12:49:54 crc kubenswrapper[4881]: I0126 12:49:54.789998 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fwlbz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 12:49:54 crc kubenswrapper[4881]: I0126 12:49:54.790096 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 12:49:54 crc kubenswrapper[4881]: I0126 12:49:54.790165 4881 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz"
Jan 26 12:49:54 crc kubenswrapper[4881]: I0126 12:49:54.791073 4881 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"658714d5f6b987a3a780334d9ffc01082e8c5ae88ec8118662e97cc52e9126d6"} pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 26 12:49:54 crc kubenswrapper[4881]: I0126 12:49:54.791178 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" containerName="machine-config-daemon" containerID="cri-o://658714d5f6b987a3a780334d9ffc01082e8c5ae88ec8118662e97cc52e9126d6" gracePeriod=600
Jan 26 12:49:56 crc kubenswrapper[4881]: I0126 12:49:55.999700 4881 generic.go:334] "Generic (PLEG): container finished" podID="ef736dec-7a3e-4255-8633-c4d380045cfd" containerID="9d6ef35dbdfd710248af0e52162fb31e242f22c59993e967af7591e8206f50fb" exitCode=0
Jan 26 12:49:56 crc kubenswrapper[4881]: I0126 12:49:55.999864 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hsq4v" event={"ID":"ef736dec-7a3e-4255-8633-c4d380045cfd","Type":"ContainerDied","Data":"9d6ef35dbdfd710248af0e52162fb31e242f22c59993e967af7591e8206f50fb"}
Jan 26 12:49:56 crc kubenswrapper[4881]: I0126 12:49:56.002890 4881 generic.go:334] "Generic (PLEG): container finished" podID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" containerID="658714d5f6b987a3a780334d9ffc01082e8c5ae88ec8118662e97cc52e9126d6" exitCode=0
Jan 26 12:49:56 crc kubenswrapper[4881]: I0126 12:49:56.002935 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" event={"ID":"ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19","Type":"ContainerDied","Data":"658714d5f6b987a3a780334d9ffc01082e8c5ae88ec8118662e97cc52e9126d6"}
Jan 26 12:49:56 crc kubenswrapper[4881]: I0126 12:49:56.002961 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" event={"ID":"ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19","Type":"ContainerStarted","Data":"4632219e9eec673e5473a279c2fc2f5646ce521828ec90d160527916fe6cbc92"}
Jan 26 12:49:56 crc kubenswrapper[4881]: I0126 12:49:56.002978 4881 scope.go:117] "RemoveContainer" containerID="fc2d017a964d1b170327be33eae2555dd4c7a4bb2af9e5ca4de2b2177d47849c"
Jan 26 12:49:56 crc kubenswrapper[4881]: I0126 12:49:56.145124 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hsq4v"
Jan 26 12:49:56 crc kubenswrapper[4881]: I0126 12:49:56.241062 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef736dec-7a3e-4255-8633-c4d380045cfd-catalog-content\") pod \"ef736dec-7a3e-4255-8633-c4d380045cfd\" (UID: \"ef736dec-7a3e-4255-8633-c4d380045cfd\") "
Jan 26 12:49:56 crc kubenswrapper[4881]: I0126 12:49:56.241118 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef736dec-7a3e-4255-8633-c4d380045cfd-utilities\") pod \"ef736dec-7a3e-4255-8633-c4d380045cfd\" (UID: \"ef736dec-7a3e-4255-8633-c4d380045cfd\") "
Jan 26 12:49:56 crc kubenswrapper[4881]: I0126 12:49:56.241154 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tjtqp\" (UniqueName: \"kubernetes.io/projected/ef736dec-7a3e-4255-8633-c4d380045cfd-kube-api-access-tjtqp\") pod \"ef736dec-7a3e-4255-8633-c4d380045cfd\" (UID: \"ef736dec-7a3e-4255-8633-c4d380045cfd\") "
Jan 26 12:49:56 crc kubenswrapper[4881]: I0126 12:49:56.242228 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef736dec-7a3e-4255-8633-c4d380045cfd-utilities" (OuterVolumeSpecName: "utilities") pod "ef736dec-7a3e-4255-8633-c4d380045cfd" (UID: "ef736dec-7a3e-4255-8633-c4d380045cfd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 12:49:56 crc kubenswrapper[4881]: I0126 12:49:56.247076 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef736dec-7a3e-4255-8633-c4d380045cfd-kube-api-access-tjtqp" (OuterVolumeSpecName: "kube-api-access-tjtqp") pod "ef736dec-7a3e-4255-8633-c4d380045cfd" (UID: "ef736dec-7a3e-4255-8633-c4d380045cfd"). InnerVolumeSpecName "kube-api-access-tjtqp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 12:49:56 crc kubenswrapper[4881]: I0126 12:49:56.294738 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef736dec-7a3e-4255-8633-c4d380045cfd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ef736dec-7a3e-4255-8633-c4d380045cfd" (UID: "ef736dec-7a3e-4255-8633-c4d380045cfd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 12:49:56 crc kubenswrapper[4881]: I0126 12:49:56.343171 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef736dec-7a3e-4255-8633-c4d380045cfd-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 26 12:49:56 crc kubenswrapper[4881]: I0126 12:49:56.343205 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef736dec-7a3e-4255-8633-c4d380045cfd-utilities\") on node \"crc\" DevicePath \"\""
Jan 26 12:49:56 crc kubenswrapper[4881]: I0126 12:49:56.343217 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tjtqp\" (UniqueName: \"kubernetes.io/projected/ef736dec-7a3e-4255-8633-c4d380045cfd-kube-api-access-tjtqp\") on node \"crc\" DevicePath \"\""
Jan 26 12:49:57 crc kubenswrapper[4881]: I0126 12:49:57.016179 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hsq4v" event={"ID":"ef736dec-7a3e-4255-8633-c4d380045cfd","Type":"ContainerDied","Data":"79df0209290f32639583c9cb3041d89bc072c2ad4fcc0eb328b10bfa7160b8ff"}
Jan 26 12:49:57 crc kubenswrapper[4881]: I0126 12:49:57.016679 4881 scope.go:117] "RemoveContainer" containerID="9d6ef35dbdfd710248af0e52162fb31e242f22c59993e967af7591e8206f50fb"
Jan 26 12:49:57 crc kubenswrapper[4881]: I0126 12:49:57.016254 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hsq4v"
Jan 26 12:49:57 crc kubenswrapper[4881]: I0126 12:49:57.037070 4881 scope.go:117] "RemoveContainer" containerID="723f24e6d361515decc705550fc16728e220f5800332caf08be7bfa552ace868"
Jan 26 12:49:57 crc kubenswrapper[4881]: I0126 12:49:57.054964 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hsq4v"]
Jan 26 12:49:57 crc kubenswrapper[4881]: I0126 12:49:57.058603 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-hsq4v"]
Jan 26 12:49:57 crc kubenswrapper[4881]: I0126 12:49:57.060569 4881 scope.go:117] "RemoveContainer" containerID="af7ac99483cd154e6fd27f76748dd54c69de6b26ba0367538017f1285efcd550"
Jan 26 12:49:58 crc kubenswrapper[4881]: I0126 12:49:58.094239 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef736dec-7a3e-4255-8633-c4d380045cfd" path="/var/lib/kubelet/pods/ef736dec-7a3e-4255-8633-c4d380045cfd/volumes"
Jan 26 12:50:37 crc kubenswrapper[4881]: I0126 12:50:37.550704 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-gmh7q"]
Jan 26 12:50:37 crc kubenswrapper[4881]: E0126 12:50:37.551294 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef736dec-7a3e-4255-8633-c4d380045cfd" containerName="registry-server"
Jan 26 12:50:37 crc kubenswrapper[4881]: I0126 12:50:37.551305 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef736dec-7a3e-4255-8633-c4d380045cfd" containerName="registry-server"
Jan 26 12:50:37 crc kubenswrapper[4881]: E0126 12:50:37.551321 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef736dec-7a3e-4255-8633-c4d380045cfd" containerName="extract-content"
Jan 26 12:50:37 crc kubenswrapper[4881]: I0126 12:50:37.551329 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef736dec-7a3e-4255-8633-c4d380045cfd" containerName="extract-content"
Jan 26 12:50:37 crc kubenswrapper[4881]: E0126 12:50:37.551340 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef736dec-7a3e-4255-8633-c4d380045cfd" containerName="extract-utilities"
Jan 26 12:50:37 crc kubenswrapper[4881]: I0126 12:50:37.551346 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef736dec-7a3e-4255-8633-c4d380045cfd" containerName="extract-utilities"
Jan 26 12:50:37 crc kubenswrapper[4881]: I0126 12:50:37.551441 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef736dec-7a3e-4255-8633-c4d380045cfd" containerName="registry-server"
Jan 26 12:50:37 crc kubenswrapper[4881]: I0126 12:50:37.551796 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-gmh7q"
Jan 26 12:50:37 crc kubenswrapper[4881]: I0126 12:50:37.554190 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt"
Jan 26 12:50:37 crc kubenswrapper[4881]: I0126 12:50:37.554329 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt"
Jan 26 12:50:37 crc kubenswrapper[4881]: I0126 12:50:37.554992 4881 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-dh5pv"
Jan 26 12:50:37 crc kubenswrapper[4881]: I0126 12:50:37.573295 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-hvlrt"]
Jan 26 12:50:37 crc kubenswrapper[4881]: I0126 12:50:37.573927 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-hvlrt"
Jan 26 12:50:37 crc kubenswrapper[4881]: I0126 12:50:37.576293 4881 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-9xhhl"
Jan 26 12:50:37 crc kubenswrapper[4881]: I0126 12:50:37.577791 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-gmh7q"]
Jan 26 12:50:37 crc kubenswrapper[4881]: I0126 12:50:37.590473 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-5bxtm"]
Jan 26 12:50:37 crc kubenswrapper[4881]: I0126 12:50:37.591330 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-5bxtm"
Jan 26 12:50:37 crc kubenswrapper[4881]: I0126 12:50:37.593179 4881 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-tx4ch"
Jan 26 12:50:37 crc kubenswrapper[4881]: I0126 12:50:37.593807 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-hvlrt"]
Jan 26 12:50:37 crc kubenswrapper[4881]: I0126 12:50:37.601767 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-5bxtm"]
Jan 26 12:50:37 crc kubenswrapper[4881]: I0126 12:50:37.617230 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9cpm\" (UniqueName: \"kubernetes.io/projected/e0a1688c-21a5-4443-9254-78b5b189c9fa-kube-api-access-n9cpm\") pod \"cert-manager-858654f9db-hvlrt\" (UID: \"e0a1688c-21a5-4443-9254-78b5b189c9fa\") " pod="cert-manager/cert-manager-858654f9db-hvlrt"
Jan 26 12:50:37 crc kubenswrapper[4881]: I0126 12:50:37.617324 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lztvw\" (UniqueName: \"kubernetes.io/projected/4bfa393b-f144-4c15-81f7-b2c176f31b61-kube-api-access-lztvw\") pod \"cert-manager-webhook-687f57d79b-5bxtm\" (UID: \"4bfa393b-f144-4c15-81f7-b2c176f31b61\") " pod="cert-manager/cert-manager-webhook-687f57d79b-5bxtm"
Jan 26 12:50:37 crc kubenswrapper[4881]: I0126 12:50:37.617398 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8nb54\" (UniqueName: \"kubernetes.io/projected/28bb7687-5041-4924-a064-a13442fc3766-kube-api-access-8nb54\") pod \"cert-manager-cainjector-cf98fcc89-gmh7q\" (UID: \"28bb7687-5041-4924-a064-a13442fc3766\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-gmh7q"
Jan 26 12:50:37 crc kubenswrapper[4881]: I0126 12:50:37.718298 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8nb54\" (UniqueName: \"kubernetes.io/projected/28bb7687-5041-4924-a064-a13442fc3766-kube-api-access-8nb54\") pod \"cert-manager-cainjector-cf98fcc89-gmh7q\" (UID: \"28bb7687-5041-4924-a064-a13442fc3766\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-gmh7q"
Jan 26 12:50:37 crc kubenswrapper[4881]: I0126 12:50:37.718405 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9cpm\" (UniqueName: \"kubernetes.io/projected/e0a1688c-21a5-4443-9254-78b5b189c9fa-kube-api-access-n9cpm\") pod \"cert-manager-858654f9db-hvlrt\" (UID: \"e0a1688c-21a5-4443-9254-78b5b189c9fa\") " pod="cert-manager/cert-manager-858654f9db-hvlrt"
Jan 26 12:50:37 crc kubenswrapper[4881]: I0126 12:50:37.718455 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lztvw\" (UniqueName: \"kubernetes.io/projected/4bfa393b-f144-4c15-81f7-b2c176f31b61-kube-api-access-lztvw\") pod \"cert-manager-webhook-687f57d79b-5bxtm\" (UID: \"4bfa393b-f144-4c15-81f7-b2c176f31b61\") " pod="cert-manager/cert-manager-webhook-687f57d79b-5bxtm"
Jan 26 12:50:37 crc kubenswrapper[4881]: I0126 12:50:37.741338 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nb54\" (UniqueName: \"kubernetes.io/projected/28bb7687-5041-4924-a064-a13442fc3766-kube-api-access-8nb54\") pod \"cert-manager-cainjector-cf98fcc89-gmh7q\" (UID: \"28bb7687-5041-4924-a064-a13442fc3766\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-gmh7q"
Jan 26 12:50:37 crc kubenswrapper[4881]: I0126 12:50:37.741373 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9cpm\" (UniqueName: \"kubernetes.io/projected/e0a1688c-21a5-4443-9254-78b5b189c9fa-kube-api-access-n9cpm\") pod \"cert-manager-858654f9db-hvlrt\" (UID: \"e0a1688c-21a5-4443-9254-78b5b189c9fa\") " pod="cert-manager/cert-manager-858654f9db-hvlrt"
Jan 26 12:50:37 crc kubenswrapper[4881]: I0126 12:50:37.745299 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lztvw\" (UniqueName: \"kubernetes.io/projected/4bfa393b-f144-4c15-81f7-b2c176f31b61-kube-api-access-lztvw\") pod \"cert-manager-webhook-687f57d79b-5bxtm\" (UID: \"4bfa393b-f144-4c15-81f7-b2c176f31b61\") " pod="cert-manager/cert-manager-webhook-687f57d79b-5bxtm"
Jan 26 12:50:37 crc kubenswrapper[4881]: I0126 12:50:37.865135 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-gmh7q"
Jan 26 12:50:37 crc kubenswrapper[4881]: I0126 12:50:37.894361 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-hvlrt"
Jan 26 12:50:37 crc kubenswrapper[4881]: I0126 12:50:37.907106 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-5bxtm"
Jan 26 12:50:38 crc kubenswrapper[4881]: I0126 12:50:38.118830 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-gmh7q"]
Jan 26 12:50:38 crc kubenswrapper[4881]: I0126 12:50:38.163244 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-hvlrt"]
Jan 26 12:50:38 crc kubenswrapper[4881]: I0126 12:50:38.191385 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-5bxtm"]
Jan 26 12:50:38 crc kubenswrapper[4881]: W0126 12:50:38.193326 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4bfa393b_f144_4c15_81f7_b2c176f31b61.slice/crio-d2058e6a3e85734c3a94b4ffb9185411520bb72f22c692ff56b9800296855704 WatchSource:0}: Error finding container d2058e6a3e85734c3a94b4ffb9185411520bb72f22c692ff56b9800296855704: Status 404 returned error can't find the container with id d2058e6a3e85734c3a94b4ffb9185411520bb72f22c692ff56b9800296855704
Jan 26 12:50:38 crc kubenswrapper[4881]: I0126 12:50:38.285864 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-hvlrt" event={"ID":"e0a1688c-21a5-4443-9254-78b5b189c9fa","Type":"ContainerStarted","Data":"998a3e6c79bcf85c048477331fa5a1ff6fc40952a6515fc64821c85e9af8fe30"}
Jan 26 12:50:38 crc kubenswrapper[4881]: I0126 12:50:38.288207 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-gmh7q" event={"ID":"28bb7687-5041-4924-a064-a13442fc3766","Type":"ContainerStarted","Data":"769b2af40e9ddab7e49d7fc486a370edcd1f97e20a471b1210fc44b57c973652"}
Jan 26 12:50:38 crc kubenswrapper[4881]: I0126 12:50:38.290142 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-5bxtm" event={"ID":"4bfa393b-f144-4c15-81f7-b2c176f31b61","Type":"ContainerStarted","Data":"d2058e6a3e85734c3a94b4ffb9185411520bb72f22c692ff56b9800296855704"}
Jan 26 12:50:46 crc kubenswrapper[4881]: I0126 12:50:46.343389 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-hvlrt" event={"ID":"e0a1688c-21a5-4443-9254-78b5b189c9fa","Type":"ContainerStarted","Data":"9e05e9868966154202ee97e49e817e4add9a3c371b0b1ff7a4758bb945464da2"}
Jan 26 12:50:46 crc kubenswrapper[4881]: I0126 12:50:46.345439 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-gmh7q" event={"ID":"28bb7687-5041-4924-a064-a13442fc3766","Type":"ContainerStarted","Data":"efb9d9f0d593392e27be90054bcdabeb65f7e8129f0cdd4aa3b8764558b6e267"}
Jan 26 12:50:46 crc kubenswrapper[4881]: I0126 12:50:46.358681 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-hvlrt" podStartSLOduration=1.597567277 podStartE2EDuration="9.358662891s" podCreationTimestamp="2026-01-26 12:50:37 +0000 UTC" firstStartedPulling="2026-01-26 12:50:38.169746707 +0000 UTC m=+910.649056733" lastFinishedPulling="2026-01-26 12:50:45.930842331 +0000 UTC m=+918.410152347" observedRunningTime="2026-01-26 12:50:46.35697278 +0000 UTC m=+918.836282816" watchObservedRunningTime="2026-01-26 12:50:46.358662891 +0000 UTC m=+918.837972917"
Jan 26 12:50:46 crc kubenswrapper[4881]: I0126 12:50:46.408263 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-gmh7q" podStartSLOduration=1.674438238 podStartE2EDuration="9.408247245s" podCreationTimestamp="2026-01-26 12:50:37 +0000 UTC" firstStartedPulling="2026-01-26 12:50:38.127171316 +0000 UTC m=+910.606481342" lastFinishedPulling="2026-01-26 12:50:45.860980313 +0000 UTC m=+918.340290349" observedRunningTime="2026-01-26 12:50:46.368913303 +0000 UTC m=+918.848223349" watchObservedRunningTime="2026-01-26 12:50:46.408247245 +0000 UTC m=+918.887557271"
Jan 26 12:50:47 crc kubenswrapper[4881]: I0126 12:50:47.355313 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-5bxtm" event={"ID":"4bfa393b-f144-4c15-81f7-b2c176f31b61","Type":"ContainerStarted","Data":"ec8e7844be223f08e186a18149157c04c132f215205f6778fb62786dae2f0fcf"}
Jan 26 12:50:47 crc kubenswrapper[4881]: I0126 12:50:47.355770 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-5bxtm"
Jan 26 12:50:47 crc kubenswrapper[4881]: I0126 12:50:47.388781 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-5bxtm" podStartSLOduration=1.417988907 podStartE2EDuration="10.388757449s" podCreationTimestamp="2026-01-26 12:50:37 +0000 UTC" firstStartedPulling="2026-01-26 12:50:38.19521006 +0000 UTC m=+910.674520086" lastFinishedPulling="2026-01-26 12:50:47.165978602 +0000 UTC m=+919.645288628" observedRunningTime="2026-01-26 12:50:47.374596003 +0000 UTC m=+919.853906119" watchObservedRunningTime="2026-01-26 12:50:47.388757449 +0000 UTC m=+919.868067485"
Jan 26 12:50:47 crc kubenswrapper[4881]: I0126 12:50:47.619750 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-kbjm9"]
Jan 26 12:50:47 crc kubenswrapper[4881]: I0126 12:50:47.624797 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" podUID="d272c950-9665-4b60-98a2-20c18d02d5a2" containerName="ovn-controller" containerID="cri-o://6fd4c549d12c308cae9727fdb35eec136e95b94990bfca2e3c2baf3cf5724ae3" gracePeriod=30
Jan 26 12:50:47 crc kubenswrapper[4881]: I0126 12:50:47.624806 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" podUID="d272c950-9665-4b60-98a2-20c18d02d5a2" containerName="sbdb" containerID="cri-o://c9aaca9d54edc83ad154b9cc876c0f2990f7949b4966e052314e947d0cbaf2f5" gracePeriod=30
Jan 26 12:50:47 crc kubenswrapper[4881]: I0126 12:50:47.624892 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" podUID="d272c950-9665-4b60-98a2-20c18d02d5a2" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://54beaa25a2174c9f18bf5be7fba4c314e374e2ea05eaee1c5b62dfd7351138f5" gracePeriod=30
Jan 26 12:50:47 crc kubenswrapper[4881]: I0126 12:50:47.624919 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" podUID="d272c950-9665-4b60-98a2-20c18d02d5a2" containerName="northd" containerID="cri-o://4526fe39791d5692af1a573249bcc2f73f2ede55766ea580867dbb009d247ddf" gracePeriod=30
Jan 26 12:50:47 crc kubenswrapper[4881]: I0126 12:50:47.624945 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" podUID="d272c950-9665-4b60-98a2-20c18d02d5a2" containerName="kube-rbac-proxy-node" containerID="cri-o://525d6dcbf02a6ff1670c1cfff7ea9769a3e6e99d4cbc9f67b4ab9a4838f85355" gracePeriod=30
Jan 26 12:50:47 crc kubenswrapper[4881]: I0126 12:50:47.624968 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" podUID="d272c950-9665-4b60-98a2-20c18d02d5a2" containerName="ovn-acl-logging" containerID="cri-o://87e87c69a0fa95e071cd3453794764480d9ed943e2d6dbe7d08e03e711be8de2" gracePeriod=30
Jan 26 12:50:47 crc kubenswrapper[4881]: I0126 12:50:47.624938 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" podUID="d272c950-9665-4b60-98a2-20c18d02d5a2" containerName="nbdb" containerID="cri-o://11ae8e0538c02d437ba2c7a5f53004fb5c5b7bf6c232f75b6bd737d2ec4f6a24" gracePeriod=30
Jan 26 12:50:47 crc kubenswrapper[4881]: I0126 12:50:47.679673 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" podUID="d272c950-9665-4b60-98a2-20c18d02d5a2" containerName="ovnkube-controller" containerID="cri-o://9289f146ecd788b045ebeedd944eb6dd144ed67c799dfb987793f605b45aeb88" gracePeriod=30
Jan 26 12:50:47 crc kubenswrapper[4881]: I0126 12:50:47.927567 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-kbjm9_d272c950-9665-4b60-98a2-20c18d02d5a2/ovnkube-controller/3.log"
Jan 26 12:50:47 crc kubenswrapper[4881]: I0126 12:50:47.930024 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-kbjm9_d272c950-9665-4b60-98a2-20c18d02d5a2/ovn-acl-logging/0.log"
Jan 26 12:50:47 crc kubenswrapper[4881]: I0126 12:50:47.930552 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-kbjm9_d272c950-9665-4b60-98a2-20c18d02d5a2/ovn-controller/0.log"
Jan 26 12:50:47 crc kubenswrapper[4881]: I0126 12:50:47.931500 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9"
Jan 26 12:50:47 crc kubenswrapper[4881]: I0126 12:50:47.965339 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d272c950-9665-4b60-98a2-20c18d02d5a2-systemd-units\") pod \"d272c950-9665-4b60-98a2-20c18d02d5a2\" (UID: \"d272c950-9665-4b60-98a2-20c18d02d5a2\") "
Jan 26 12:50:47 crc kubenswrapper[4881]: I0126 12:50:47.965404 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d272c950-9665-4b60-98a2-20c18d02d5a2-ovn-node-metrics-cert\") pod \"d272c950-9665-4b60-98a2-20c18d02d5a2\" (UID: \"d272c950-9665-4b60-98a2-20c18d02d5a2\") "
Jan 26 12:50:47 crc kubenswrapper[4881]: I0126 12:50:47.965441 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d272c950-9665-4b60-98a2-20c18d02d5a2-env-overrides\") pod \"d272c950-9665-4b60-98a2-20c18d02d5a2\" (UID: \"d272c950-9665-4b60-98a2-20c18d02d5a2\") "
Jan 26 12:50:47 crc kubenswrapper[4881]: I0126 12:50:47.965467 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d272c950-9665-4b60-98a2-20c18d02d5a2-node-log\") pod \"d272c950-9665-4b60-98a2-20c18d02d5a2\" (UID: \"d272c950-9665-4b60-98a2-20c18d02d5a2\") "
Jan 26 12:50:47 crc kubenswrapper[4881]: I0126 12:50:47.965490 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d272c950-9665-4b60-98a2-20c18d02d5a2-ovnkube-script-lib\") pod \"d272c950-9665-4b60-98a2-20c18d02d5a2\" (UID: \"d272c950-9665-4b60-98a2-20c18d02d5a2\") "
Jan 26 12:50:47 crc kubenswrapper[4881]: I0126 12:50:47.965530 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d272c950-9665-4b60-98a2-20c18d02d5a2-var-lib-openvswitch\") pod \"d272c950-9665-4b60-98a2-20c18d02d5a2\" (UID: \"d272c950-9665-4b60-98a2-20c18d02d5a2\") "
Jan 26 12:50:47 crc kubenswrapper[4881]: I0126 12:50:47.965550 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d272c950-9665-4b60-98a2-20c18d02d5a2-host-slash\") pod \"d272c950-9665-4b60-98a2-20c18d02d5a2\" (UID: \"d272c950-9665-4b60-98a2-20c18d02d5a2\") "
Jan 26 12:50:47 crc kubenswrapper[4881]: I0126 12:50:47.965573 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d272c950-9665-4b60-98a2-20c18d02d5a2-run-ovn\") pod \"d272c950-9665-4b60-98a2-20c18d02d5a2\" (UID: \"d272c950-9665-4b60-98a2-20c18d02d5a2\") "
Jan 26 12:50:47 crc kubenswrapper[4881]: I0126 12:50:47.965596 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d272c950-9665-4b60-98a2-20c18d02d5a2-host-kubelet\") pod \"d272c950-9665-4b60-98a2-20c18d02d5a2\" (UID: \"d272c950-9665-4b60-98a2-20c18d02d5a2\") "
Jan 26 12:50:47 crc kubenswrapper[4881]: I0126 12:50:47.965626 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d272c950-9665-4b60-98a2-20c18d02d5a2-host-cni-netd\") pod \"d272c950-9665-4b60-98a2-20c18d02d5a2\" (UID: \"d272c950-9665-4b60-98a2-20c18d02d5a2\") "
Jan 26 12:50:47 crc kubenswrapper[4881]: I0126 12:50:47.965646 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d272c950-9665-4b60-98a2-20c18d02d5a2-host-cni-bin\") pod \"d272c950-9665-4b60-98a2-20c18d02d5a2\" (UID: \"d272c950-9665-4b60-98a2-20c18d02d5a2\") "
Jan 26 12:50:47 crc kubenswrapper[4881]: I0126 12:50:47.965680 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d272c950-9665-4b60-98a2-20c18d02d5a2-log-socket\") pod \"d272c950-9665-4b60-98a2-20c18d02d5a2\" (UID: \"d272c950-9665-4b60-98a2-20c18d02d5a2\") "
Jan 26 12:50:47 crc kubenswrapper[4881]: I0126 12:50:47.965704 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d272c950-9665-4b60-98a2-20c18d02d5a2-run-openvswitch\") pod \"d272c950-9665-4b60-98a2-20c18d02d5a2\" (UID: \"d272c950-9665-4b60-98a2-20c18d02d5a2\") "
Jan 26 12:50:47 crc kubenswrapper[4881]: I0126 12:50:47.965725 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d272c950-9665-4b60-98a2-20c18d02d5a2-host-run-netns\") pod \"d272c950-9665-4b60-98a2-20c18d02d5a2\" (UID: \"d272c950-9665-4b60-98a2-20c18d02d5a2\") "
Jan 26 12:50:47 crc kubenswrapper[4881]: I0126 12:50:47.965748 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d272c950-9665-4b60-98a2-20c18d02d5a2-run-systemd\") pod \"d272c950-9665-4b60-98a2-20c18d02d5a2\" (UID: \"d272c950-9665-4b60-98a2-20c18d02d5a2\") "
Jan 26 12:50:47 crc kubenswrapper[4881]: I0126 12:50:47.965766 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d272c950-9665-4b60-98a2-20c18d02d5a2-etc-openvswitch\") pod \"d272c950-9665-4b60-98a2-20c18d02d5a2\" (UID: \"d272c950-9665-4b60-98a2-20c18d02d5a2\") "
Jan 26 12:50:47 crc kubenswrapper[4881]: I0126 12:50:47.965800 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d272c950-9665-4b60-98a2-20c18d02d5a2-host-run-ovn-kubernetes\") pod \"d272c950-9665-4b60-98a2-20c18d02d5a2\" (UID: \"d272c950-9665-4b60-98a2-20c18d02d5a2\") "
Jan 26 12:50:47 crc kubenswrapper[4881]: I0126 12:50:47.965822 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-crn6f\" (UniqueName: \"kubernetes.io/projected/d272c950-9665-4b60-98a2-20c18d02d5a2-kube-api-access-crn6f\") pod \"d272c950-9665-4b60-98a2-20c18d02d5a2\" (UID: \"d272c950-9665-4b60-98a2-20c18d02d5a2\") "
Jan 26 12:50:47 crc kubenswrapper[4881]: I0126 12:50:47.965843 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d272c950-9665-4b60-98a2-20c18d02d5a2-host-var-lib-cni-networks-ovn-kubernetes\") pod \"d272c950-9665-4b60-98a2-20c18d02d5a2\" (UID: \"d272c950-9665-4b60-98a2-20c18d02d5a2\") "
Jan 26 12:50:47 crc kubenswrapper[4881]: I0126 12:50:47.965870 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d272c950-9665-4b60-98a2-20c18d02d5a2-ovnkube-config\") pod \"d272c950-9665-4b60-98a2-20c18d02d5a2\" (UID: \"d272c950-9665-4b60-98a2-20c18d02d5a2\") "
Jan 26 12:50:47 crc kubenswrapper[4881]: I0126 12:50:47.965512 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d272c950-9665-4b60-98a2-20c18d02d5a2-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "d272c950-9665-4b60-98a2-20c18d02d5a2" (UID: "d272c950-9665-4b60-98a2-20c18d02d5a2"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 26 12:50:47 crc kubenswrapper[4881]: I0126 12:50:47.966267 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d272c950-9665-4b60-98a2-20c18d02d5a2-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "d272c950-9665-4b60-98a2-20c18d02d5a2" (UID: "d272c950-9665-4b60-98a2-20c18d02d5a2"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 26 12:50:47 crc kubenswrapper[4881]: I0126 12:50:47.965585 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d272c950-9665-4b60-98a2-20c18d02d5a2-node-log" (OuterVolumeSpecName: "node-log") pod "d272c950-9665-4b60-98a2-20c18d02d5a2" (UID: "d272c950-9665-4b60-98a2-20c18d02d5a2"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 26 12:50:47 crc kubenswrapper[4881]: I0126 12:50:47.966173 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d272c950-9665-4b60-98a2-20c18d02d5a2-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "d272c950-9665-4b60-98a2-20c18d02d5a2" (UID: "d272c950-9665-4b60-98a2-20c18d02d5a2"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 26 12:50:47 crc kubenswrapper[4881]: I0126 12:50:47.966218 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d272c950-9665-4b60-98a2-20c18d02d5a2-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "d272c950-9665-4b60-98a2-20c18d02d5a2" (UID: "d272c950-9665-4b60-98a2-20c18d02d5a2"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 12:50:47 crc kubenswrapper[4881]: I0126 12:50:47.966327 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d272c950-9665-4b60-98a2-20c18d02d5a2-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "d272c950-9665-4b60-98a2-20c18d02d5a2" (UID: "d272c950-9665-4b60-98a2-20c18d02d5a2"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 26 12:50:47 crc kubenswrapper[4881]: I0126 12:50:47.966365 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d272c950-9665-4b60-98a2-20c18d02d5a2-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "d272c950-9665-4b60-98a2-20c18d02d5a2" (UID: "d272c950-9665-4b60-98a2-20c18d02d5a2"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 26 12:50:47 crc kubenswrapper[4881]: I0126 12:50:47.966392 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d272c950-9665-4b60-98a2-20c18d02d5a2-host-slash" (OuterVolumeSpecName: "host-slash") pod "d272c950-9665-4b60-98a2-20c18d02d5a2" (UID: "d272c950-9665-4b60-98a2-20c18d02d5a2"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 26 12:50:47 crc kubenswrapper[4881]: I0126 12:50:47.966381 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d272c950-9665-4b60-98a2-20c18d02d5a2-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "d272c950-9665-4b60-98a2-20c18d02d5a2" (UID: "d272c950-9665-4b60-98a2-20c18d02d5a2"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 26 12:50:47 crc kubenswrapper[4881]: I0126 12:50:47.966442 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d272c950-9665-4b60-98a2-20c18d02d5a2-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "d272c950-9665-4b60-98a2-20c18d02d5a2" (UID: "d272c950-9665-4b60-98a2-20c18d02d5a2"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 12:50:47 crc kubenswrapper[4881]: I0126 12:50:47.966461 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d272c950-9665-4b60-98a2-20c18d02d5a2-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "d272c950-9665-4b60-98a2-20c18d02d5a2" (UID: "d272c950-9665-4b60-98a2-20c18d02d5a2"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 12:50:47 crc kubenswrapper[4881]: I0126 12:50:47.966457 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d272c950-9665-4b60-98a2-20c18d02d5a2-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "d272c950-9665-4b60-98a2-20c18d02d5a2" (UID: "d272c950-9665-4b60-98a2-20c18d02d5a2"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 26 12:50:47 crc kubenswrapper[4881]: I0126 12:50:47.966480 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d272c950-9665-4b60-98a2-20c18d02d5a2-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "d272c950-9665-4b60-98a2-20c18d02d5a2" (UID: "d272c950-9665-4b60-98a2-20c18d02d5a2"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 26 12:50:47 crc kubenswrapper[4881]: I0126 12:50:47.966478 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d272c950-9665-4b60-98a2-20c18d02d5a2-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "d272c950-9665-4b60-98a2-20c18d02d5a2" (UID: "d272c950-9665-4b60-98a2-20c18d02d5a2"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 26 12:50:47 crc kubenswrapper[4881]: I0126 12:50:47.966501 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d272c950-9665-4b60-98a2-20c18d02d5a2-log-socket" (OuterVolumeSpecName: "log-socket") pod "d272c950-9665-4b60-98a2-20c18d02d5a2" (UID: "d272c950-9665-4b60-98a2-20c18d02d5a2"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 26 12:50:47 crc kubenswrapper[4881]: I0126 12:50:47.966497 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d272c950-9665-4b60-98a2-20c18d02d5a2-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "d272c950-9665-4b60-98a2-20c18d02d5a2" (UID: "d272c950-9665-4b60-98a2-20c18d02d5a2"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 26 12:50:47 crc kubenswrapper[4881]: I0126 12:50:47.966570 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d272c950-9665-4b60-98a2-20c18d02d5a2-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "d272c950-9665-4b60-98a2-20c18d02d5a2" (UID: "d272c950-9665-4b60-98a2-20c18d02d5a2"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 26 12:50:47 crc kubenswrapper[4881]: I0126 12:50:47.974881 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d272c950-9665-4b60-98a2-20c18d02d5a2-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "d272c950-9665-4b60-98a2-20c18d02d5a2" (UID: "d272c950-9665-4b60-98a2-20c18d02d5a2"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 12:50:47 crc kubenswrapper[4881]: I0126 12:50:47.976932 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d272c950-9665-4b60-98a2-20c18d02d5a2-kube-api-access-crn6f" (OuterVolumeSpecName: "kube-api-access-crn6f") pod "d272c950-9665-4b60-98a2-20c18d02d5a2" (UID: "d272c950-9665-4b60-98a2-20c18d02d5a2"). InnerVolumeSpecName "kube-api-access-crn6f". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 12:50:47 crc kubenswrapper[4881]: I0126 12:50:47.980083 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d272c950-9665-4b60-98a2-20c18d02d5a2-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "d272c950-9665-4b60-98a2-20c18d02d5a2" (UID: "d272c950-9665-4b60-98a2-20c18d02d5a2"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.004974 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-mld5t"]
Jan 26 12:50:48 crc kubenswrapper[4881]: E0126 12:50:48.005289 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d272c950-9665-4b60-98a2-20c18d02d5a2" containerName="kubecfg-setup"
Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.005320 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="d272c950-9665-4b60-98a2-20c18d02d5a2" containerName="kubecfg-setup"
Jan 26 12:50:48 crc kubenswrapper[4881]: E0126 12:50:48.005336 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d272c950-9665-4b60-98a2-20c18d02d5a2" containerName="sbdb"
Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.005349 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="d272c950-9665-4b60-98a2-20c18d02d5a2" containerName="sbdb"
Jan 26 12:50:48 crc kubenswrapper[4881]: E0126 12:50:48.005368 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d272c950-9665-4b60-98a2-20c18d02d5a2" containerName="kube-rbac-proxy-node"
Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.005382 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="d272c950-9665-4b60-98a2-20c18d02d5a2" containerName="kube-rbac-proxy-node"
Jan 26 12:50:48 crc kubenswrapper[4881]: E0126 12:50:48.005397 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d272c950-9665-4b60-98a2-20c18d02d5a2" containerName="nbdb"
Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.005408 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="d272c950-9665-4b60-98a2-20c18d02d5a2" containerName="nbdb"
Jan 26 12:50:48 crc kubenswrapper[4881]: E0126 12:50:48.005433 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d272c950-9665-4b60-98a2-20c18d02d5a2" containerName="northd"
Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.005446 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="d272c950-9665-4b60-98a2-20c18d02d5a2" containerName="northd"
Jan 26 12:50:48 crc kubenswrapper[4881]: E0126 12:50:48.005465 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d272c950-9665-4b60-98a2-20c18d02d5a2" containerName="ovn-controller"
Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.005477 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="d272c950-9665-4b60-98a2-20c18d02d5a2" containerName="ovn-controller"
Jan 26 12:50:48 crc kubenswrapper[4881]: E0126 12:50:48.005500 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d272c950-9665-4b60-98a2-20c18d02d5a2" containerName="ovnkube-controller"
Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.005536 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="d272c950-9665-4b60-98a2-20c18d02d5a2" containerName="ovnkube-controller"
Jan 26 12:50:48 crc kubenswrapper[4881]: E0126 12:50:48.005555 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d272c950-9665-4b60-98a2-20c18d02d5a2" containerName="ovnkube-controller"
Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.005567 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="d272c950-9665-4b60-98a2-20c18d02d5a2" containerName="ovnkube-controller"
Jan 26 12:50:48 crc kubenswrapper[4881]: E0126 12:50:48.005582 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d272c950-9665-4b60-98a2-20c18d02d5a2" containerName="ovnkube-controller"
Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.005595 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="d272c950-9665-4b60-98a2-20c18d02d5a2" containerName="ovnkube-controller"
Jan 26 12:50:48 crc kubenswrapper[4881]: E0126 12:50:48.005607 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d272c950-9665-4b60-98a2-20c18d02d5a2" containerName="ovnkube-controller"
Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.005620 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="d272c950-9665-4b60-98a2-20c18d02d5a2" containerName="ovnkube-controller"
Jan 26 12:50:48 crc kubenswrapper[4881]: E0126 12:50:48.005636 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d272c950-9665-4b60-98a2-20c18d02d5a2" containerName="ovn-acl-logging"
Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.005649 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="d272c950-9665-4b60-98a2-20c18d02d5a2" containerName="ovn-acl-logging"
Jan 26 12:50:48 crc kubenswrapper[4881]: E0126 12:50:48.005666 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d272c950-9665-4b60-98a2-20c18d02d5a2" containerName="kube-rbac-proxy-ovn-metrics"
Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.005678 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="d272c950-9665-4b60-98a2-20c18d02d5a2" containerName="kube-rbac-proxy-ovn-metrics"
Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.005834 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="d272c950-9665-4b60-98a2-20c18d02d5a2" containerName="ovnkube-controller"
Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.005854 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="d272c950-9665-4b60-98a2-20c18d02d5a2" containerName="kube-rbac-proxy-node"
Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.005872 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="d272c950-9665-4b60-98a2-20c18d02d5a2" containerName="ovnkube-controller"
Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.005888 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="d272c950-9665-4b60-98a2-20c18d02d5a2" containerName="kube-rbac-proxy-ovn-metrics"
Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.005907 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="d272c950-9665-4b60-98a2-20c18d02d5a2" containerName="ovn-acl-logging"
Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.005921 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="d272c950-9665-4b60-98a2-20c18d02d5a2" containerName="nbdb"
Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.005935 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="d272c950-9665-4b60-98a2-20c18d02d5a2" containerName="ovnkube-controller"
Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.005951 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="d272c950-9665-4b60-98a2-20c18d02d5a2" containerName="sbdb"
Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.005965 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="d272c950-9665-4b60-98a2-20c18d02d5a2" containerName="ovn-controller"
Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.005980 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="d272c950-9665-4b60-98a2-20c18d02d5a2" containerName="northd"
Jan 26 12:50:48 crc kubenswrapper[4881]: E0126 12:50:48.006148 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d272c950-9665-4b60-98a2-20c18d02d5a2" containerName="ovnkube-controller"
Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.006162 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="d272c950-9665-4b60-98a2-20c18d02d5a2" containerName="ovnkube-controller"
Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.006327 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="d272c950-9665-4b60-98a2-20c18d02d5a2" containerName="ovnkube-controller"
Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.006354 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="d272c950-9665-4b60-98a2-20c18d02d5a2" containerName="ovnkube-controller"
Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.009715 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-mld5t"
Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.067295 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cd8874b7-eb31-4d09-9dc1-a6c91c4e0bc3-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-mld5t\" (UID: \"cd8874b7-eb31-4d09-9dc1-a6c91c4e0bc3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mld5t"
Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.067340 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cd8874b7-eb31-4d09-9dc1-a6c91c4e0bc3-run-openvswitch\") pod \"ovnkube-node-mld5t\" (UID: \"cd8874b7-eb31-4d09-9dc1-a6c91c4e0bc3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mld5t"
Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.067363 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/cd8874b7-eb31-4d09-9dc1-a6c91c4e0bc3-host-slash\") pod \"ovnkube-node-mld5t\" (UID: \"cd8874b7-eb31-4d09-9dc1-a6c91c4e0bc3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mld5t"
Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.067378 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cd8874b7-eb31-4d09-9dc1-a6c91c4e0bc3-ovnkube-config\") pod \"ovnkube-node-mld5t\" (UID: \"cd8874b7-eb31-4d09-9dc1-a6c91c4e0bc3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mld5t"
Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.067394 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cd8874b7-eb31-4d09-9dc1-a6c91c4e0bc3-host-cni-netd\") pod \"ovnkube-node-mld5t\" (UID: \"cd8874b7-eb31-4d09-9dc1-a6c91c4e0bc3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mld5t"
Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.067424 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cd8874b7-eb31-4d09-9dc1-a6c91c4e0bc3-ovn-node-metrics-cert\") pod \"ovnkube-node-mld5t\" (UID: \"cd8874b7-eb31-4d09-9dc1-a6c91c4e0bc3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mld5t"
Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.067456 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/cd8874b7-eb31-4d09-9dc1-a6c91c4e0bc3-host-kubelet\") pod \"ovnkube-node-mld5t\" (UID: \"cd8874b7-eb31-4d09-9dc1-a6c91c4e0bc3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mld5t"
Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.067508 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cd8874b7-eb31-4d09-9dc1-a6c91c4e0bc3-etc-openvswitch\") pod \"ovnkube-node-mld5t\" (UID: \"cd8874b7-eb31-4d09-9dc1-a6c91c4e0bc3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mld5t"
Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.067681 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/cd8874b7-eb31-4d09-9dc1-a6c91c4e0bc3-node-log\") pod \"ovnkube-node-mld5t\" (UID: \"cd8874b7-eb31-4d09-9dc1-a6c91c4e0bc3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mld5t"
Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.067737 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/cd8874b7-eb31-4d09-9dc1-a6c91c4e0bc3-run-systemd\") pod \"ovnkube-node-mld5t\" (UID: \"cd8874b7-eb31-4d09-9dc1-a6c91c4e0bc3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mld5t"
Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.067792 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/cd8874b7-eb31-4d09-9dc1-a6c91c4e0bc3-ovnkube-script-lib\") pod \"ovnkube-node-mld5t\" (UID: \"cd8874b7-eb31-4d09-9dc1-a6c91c4e0bc3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mld5t"
Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.067813 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/cd8874b7-eb31-4d09-9dc1-a6c91c4e0bc3-run-ovn\") pod \"ovnkube-node-mld5t\" (UID: \"cd8874b7-eb31-4d09-9dc1-a6c91c4e0bc3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mld5t"
Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.067828 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cd8874b7-eb31-4d09-9dc1-a6c91c4e0bc3-host-run-netns\") pod \"ovnkube-node-mld5t\" (UID: \"cd8874b7-eb31-4d09-9dc1-a6c91c4e0bc3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mld5t"
Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.067840 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/cd8874b7-eb31-4d09-9dc1-a6c91c4e0bc3-log-socket\") pod \"ovnkube-node-mld5t\" (UID: \"cd8874b7-eb31-4d09-9dc1-a6c91c4e0bc3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mld5t"
Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.067871 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2g2z8\" (UniqueName: \"kubernetes.io/projected/cd8874b7-eb31-4d09-9dc1-a6c91c4e0bc3-kube-api-access-2g2z8\") pod \"ovnkube-node-mld5t\" (UID: \"cd8874b7-eb31-4d09-9dc1-a6c91c4e0bc3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mld5t"
Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.067887 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cd8874b7-eb31-4d09-9dc1-a6c91c4e0bc3-var-lib-openvswitch\") pod \"ovnkube-node-mld5t\" (UID: \"cd8874b7-eb31-4d09-9dc1-a6c91c4e0bc3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mld5t"
Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.067990 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cd8874b7-eb31-4d09-9dc1-a6c91c4e0bc3-host-run-ovn-kubernetes\") pod \"ovnkube-node-mld5t\" (UID: \"cd8874b7-eb31-4d09-9dc1-a6c91c4e0bc3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mld5t"
Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.068051 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cd8874b7-eb31-4d09-9dc1-a6c91c4e0bc3-env-overrides\") pod \"ovnkube-node-mld5t\" (UID: \"cd8874b7-eb31-4d09-9dc1-a6c91c4e0bc3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mld5t"
Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.068087 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cd8874b7-eb31-4d09-9dc1-a6c91c4e0bc3-host-cni-bin\") pod \"ovnkube-node-mld5t\" (UID: \"cd8874b7-eb31-4d09-9dc1-a6c91c4e0bc3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mld5t"
Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.068107 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/cd8874b7-eb31-4d09-9dc1-a6c91c4e0bc3-systemd-units\") pod \"ovnkube-node-mld5t\" (UID: \"cd8874b7-eb31-4d09-9dc1-a6c91c4e0bc3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mld5t"
Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.068148 4881 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d272c950-9665-4b60-98a2-20c18d02d5a2-ovnkube-script-lib\") on node \"crc\" DevicePath \"\""
Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.068159 4881 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d272c950-9665-4b60-98a2-20c18d02d5a2-host-slash\") on node \"crc\" DevicePath \"\""
Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.068168 4881 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d272c950-9665-4b60-98a2-20c18d02d5a2-var-lib-openvswitch\") on node \"crc\" DevicePath \"\""
Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.068176 4881 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d272c950-9665-4b60-98a2-20c18d02d5a2-run-ovn\") on node \"crc\" DevicePath \"\""
Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.068184 4881 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d272c950-9665-4b60-98a2-20c18d02d5a2-host-kubelet\") on node \"crc\" DevicePath \"\""
Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.068193 4881 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d272c950-9665-4b60-98a2-20c18d02d5a2-host-cni-netd\") on node \"crc\"
DevicePath \"\"" Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.068200 4881 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d272c950-9665-4b60-98a2-20c18d02d5a2-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.068208 4881 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d272c950-9665-4b60-98a2-20c18d02d5a2-log-socket\") on node \"crc\" DevicePath \"\"" Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.068217 4881 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d272c950-9665-4b60-98a2-20c18d02d5a2-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.068225 4881 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d272c950-9665-4b60-98a2-20c18d02d5a2-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.068234 4881 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d272c950-9665-4b60-98a2-20c18d02d5a2-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.068243 4881 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d272c950-9665-4b60-98a2-20c18d02d5a2-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.068252 4881 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d272c950-9665-4b60-98a2-20c18d02d5a2-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.068273 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-crn6f\" (UniqueName: \"kubernetes.io/projected/d272c950-9665-4b60-98a2-20c18d02d5a2-kube-api-access-crn6f\") on node \"crc\" DevicePath \"\"" Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.068282 4881 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d272c950-9665-4b60-98a2-20c18d02d5a2-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.068290 4881 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d272c950-9665-4b60-98a2-20c18d02d5a2-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.068301 4881 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d272c950-9665-4b60-98a2-20c18d02d5a2-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.068310 4881 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d272c950-9665-4b60-98a2-20c18d02d5a2-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.068318 4881 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d272c950-9665-4b60-98a2-20c18d02d5a2-env-overrides\") on node 
\"crc\" DevicePath \"\"" Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.068325 4881 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d272c950-9665-4b60-98a2-20c18d02d5a2-node-log\") on node \"crc\" DevicePath \"\"" Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.168922 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cd8874b7-eb31-4d09-9dc1-a6c91c4e0bc3-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-mld5t\" (UID: \"cd8874b7-eb31-4d09-9dc1-a6c91c4e0bc3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mld5t" Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.169025 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cd8874b7-eb31-4d09-9dc1-a6c91c4e0bc3-run-openvswitch\") pod \"ovnkube-node-mld5t\" (UID: \"cd8874b7-eb31-4d09-9dc1-a6c91c4e0bc3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mld5t" Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.169049 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/cd8874b7-eb31-4d09-9dc1-a6c91c4e0bc3-host-slash\") pod \"ovnkube-node-mld5t\" (UID: \"cd8874b7-eb31-4d09-9dc1-a6c91c4e0bc3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mld5t" Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.169068 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cd8874b7-eb31-4d09-9dc1-a6c91c4e0bc3-ovnkube-config\") pod \"ovnkube-node-mld5t\" (UID: \"cd8874b7-eb31-4d09-9dc1-a6c91c4e0bc3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mld5t" Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.169090 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cd8874b7-eb31-4d09-9dc1-a6c91c4e0bc3-host-cni-netd\") pod \"ovnkube-node-mld5t\" (UID: \"cd8874b7-eb31-4d09-9dc1-a6c91c4e0bc3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mld5t" Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.169131 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cd8874b7-eb31-4d09-9dc1-a6c91c4e0bc3-ovn-node-metrics-cert\") pod \"ovnkube-node-mld5t\" (UID: \"cd8874b7-eb31-4d09-9dc1-a6c91c4e0bc3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mld5t" Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.169179 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/cd8874b7-eb31-4d09-9dc1-a6c91c4e0bc3-host-kubelet\") pod \"ovnkube-node-mld5t\" (UID: \"cd8874b7-eb31-4d09-9dc1-a6c91c4e0bc3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mld5t" Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.169200 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cd8874b7-eb31-4d09-9dc1-a6c91c4e0bc3-etc-openvswitch\") pod \"ovnkube-node-mld5t\" (UID: \"cd8874b7-eb31-4d09-9dc1-a6c91c4e0bc3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mld5t" Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.169219 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"node-log\" (UniqueName: \"kubernetes.io/host-path/cd8874b7-eb31-4d09-9dc1-a6c91c4e0bc3-node-log\") pod \"ovnkube-node-mld5t\" (UID: \"cd8874b7-eb31-4d09-9dc1-a6c91c4e0bc3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mld5t" Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.169251 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/cd8874b7-eb31-4d09-9dc1-a6c91c4e0bc3-run-systemd\") pod \"ovnkube-node-mld5t\" (UID: \"cd8874b7-eb31-4d09-9dc1-a6c91c4e0bc3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mld5t" Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.169256 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cd8874b7-eb31-4d09-9dc1-a6c91c4e0bc3-host-cni-netd\") pod \"ovnkube-node-mld5t\" (UID: \"cd8874b7-eb31-4d09-9dc1-a6c91c4e0bc3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mld5t" Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.169277 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/cd8874b7-eb31-4d09-9dc1-a6c91c4e0bc3-ovnkube-script-lib\") pod \"ovnkube-node-mld5t\" (UID: \"cd8874b7-eb31-4d09-9dc1-a6c91c4e0bc3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mld5t" Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.169367 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/cd8874b7-eb31-4d09-9dc1-a6c91c4e0bc3-run-ovn\") pod \"ovnkube-node-mld5t\" (UID: \"cd8874b7-eb31-4d09-9dc1-a6c91c4e0bc3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mld5t" Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.169406 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cd8874b7-eb31-4d09-9dc1-a6c91c4e0bc3-host-run-netns\") pod \"ovnkube-node-mld5t\" (UID: \"cd8874b7-eb31-4d09-9dc1-a6c91c4e0bc3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mld5t" Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.169438 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/cd8874b7-eb31-4d09-9dc1-a6c91c4e0bc3-log-socket\") pod \"ovnkube-node-mld5t\" (UID: \"cd8874b7-eb31-4d09-9dc1-a6c91c4e0bc3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mld5t" Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.169559 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2g2z8\" (UniqueName: \"kubernetes.io/projected/cd8874b7-eb31-4d09-9dc1-a6c91c4e0bc3-kube-api-access-2g2z8\") pod \"ovnkube-node-mld5t\" (UID: \"cd8874b7-eb31-4d09-9dc1-a6c91c4e0bc3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mld5t" Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.169597 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cd8874b7-eb31-4d09-9dc1-a6c91c4e0bc3-var-lib-openvswitch\") pod \"ovnkube-node-mld5t\" (UID: \"cd8874b7-eb31-4d09-9dc1-a6c91c4e0bc3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mld5t" Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.169626 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/cd8874b7-eb31-4d09-9dc1-a6c91c4e0bc3-host-run-ovn-kubernetes\") pod \"ovnkube-node-mld5t\" (UID: \"cd8874b7-eb31-4d09-9dc1-a6c91c4e0bc3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mld5t" Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.169658 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cd8874b7-eb31-4d09-9dc1-a6c91c4e0bc3-env-overrides\") pod \"ovnkube-node-mld5t\" (UID: \"cd8874b7-eb31-4d09-9dc1-a6c91c4e0bc3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mld5t" Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.169693 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cd8874b7-eb31-4d09-9dc1-a6c91c4e0bc3-host-cni-bin\") pod \"ovnkube-node-mld5t\" (UID: \"cd8874b7-eb31-4d09-9dc1-a6c91c4e0bc3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mld5t" Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.169724 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/cd8874b7-eb31-4d09-9dc1-a6c91c4e0bc3-systemd-units\") pod \"ovnkube-node-mld5t\" (UID: \"cd8874b7-eb31-4d09-9dc1-a6c91c4e0bc3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mld5t" Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.169875 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/cd8874b7-eb31-4d09-9dc1-a6c91c4e0bc3-systemd-units\") pod \"ovnkube-node-mld5t\" (UID: \"cd8874b7-eb31-4d09-9dc1-a6c91c4e0bc3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mld5t" Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.170081 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/cd8874b7-eb31-4d09-9dc1-a6c91c4e0bc3-ovnkube-script-lib\") pod \"ovnkube-node-mld5t\" (UID: \"cd8874b7-eb31-4d09-9dc1-a6c91c4e0bc3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mld5t" Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.170151 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/cd8874b7-eb31-4d09-9dc1-a6c91c4e0bc3-host-kubelet\") pod \"ovnkube-node-mld5t\" (UID: \"cd8874b7-eb31-4d09-9dc1-a6c91c4e0bc3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mld5t" Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.170183 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/cd8874b7-eb31-4d09-9dc1-a6c91c4e0bc3-run-systemd\") pod \"ovnkube-node-mld5t\" (UID: \"cd8874b7-eb31-4d09-9dc1-a6c91c4e0bc3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mld5t" Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.170251 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/cd8874b7-eb31-4d09-9dc1-a6c91c4e0bc3-node-log\") pod \"ovnkube-node-mld5t\" (UID: \"cd8874b7-eb31-4d09-9dc1-a6c91c4e0bc3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mld5t" Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.170367 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cd8874b7-eb31-4d09-9dc1-a6c91c4e0bc3-host-run-netns\") pod \"ovnkube-node-mld5t\" (UID: \"cd8874b7-eb31-4d09-9dc1-a6c91c4e0bc3\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-mld5t" Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.170369 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cd8874b7-eb31-4d09-9dc1-a6c91c4e0bc3-etc-openvswitch\") pod \"ovnkube-node-mld5t\" (UID: \"cd8874b7-eb31-4d09-9dc1-a6c91c4e0bc3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mld5t" Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.170427 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/cd8874b7-eb31-4d09-9dc1-a6c91c4e0bc3-run-ovn\") pod \"ovnkube-node-mld5t\" (UID: \"cd8874b7-eb31-4d09-9dc1-a6c91c4e0bc3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mld5t" Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.170461 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cd8874b7-eb31-4d09-9dc1-a6c91c4e0bc3-run-openvswitch\") pod \"ovnkube-node-mld5t\" (UID: \"cd8874b7-eb31-4d09-9dc1-a6c91c4e0bc3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mld5t" Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.170452 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cd8874b7-eb31-4d09-9dc1-a6c91c4e0bc3-var-lib-openvswitch\") pod \"ovnkube-node-mld5t\" (UID: \"cd8874b7-eb31-4d09-9dc1-a6c91c4e0bc3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mld5t" Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.170576 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/cd8874b7-eb31-4d09-9dc1-a6c91c4e0bc3-host-slash\") pod \"ovnkube-node-mld5t\" (UID: \"cd8874b7-eb31-4d09-9dc1-a6c91c4e0bc3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mld5t" Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.170594 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/cd8874b7-eb31-4d09-9dc1-a6c91c4e0bc3-log-socket\") pod \"ovnkube-node-mld5t\" (UID: \"cd8874b7-eb31-4d09-9dc1-a6c91c4e0bc3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mld5t" Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.170661 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cd8874b7-eb31-4d09-9dc1-a6c91c4e0bc3-host-run-ovn-kubernetes\") pod \"ovnkube-node-mld5t\" (UID: \"cd8874b7-eb31-4d09-9dc1-a6c91c4e0bc3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mld5t" Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.170711 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cd8874b7-eb31-4d09-9dc1-a6c91c4e0bc3-host-cni-bin\") pod \"ovnkube-node-mld5t\" (UID: \"cd8874b7-eb31-4d09-9dc1-a6c91c4e0bc3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mld5t" Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.171305 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cd8874b7-eb31-4d09-9dc1-a6c91c4e0bc3-ovnkube-config\") pod \"ovnkube-node-mld5t\" (UID: \"cd8874b7-eb31-4d09-9dc1-a6c91c4e0bc3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mld5t" Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.171357 4881 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cd8874b7-eb31-4d09-9dc1-a6c91c4e0bc3-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-mld5t\" (UID: \"cd8874b7-eb31-4d09-9dc1-a6c91c4e0bc3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mld5t" Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.171702 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cd8874b7-eb31-4d09-9dc1-a6c91c4e0bc3-env-overrides\") pod \"ovnkube-node-mld5t\" (UID: \"cd8874b7-eb31-4d09-9dc1-a6c91c4e0bc3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mld5t" Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.176999 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cd8874b7-eb31-4d09-9dc1-a6c91c4e0bc3-ovn-node-metrics-cert\") pod \"ovnkube-node-mld5t\" (UID: \"cd8874b7-eb31-4d09-9dc1-a6c91c4e0bc3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mld5t" Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.203687 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2g2z8\" (UniqueName: \"kubernetes.io/projected/cd8874b7-eb31-4d09-9dc1-a6c91c4e0bc3-kube-api-access-2g2z8\") pod \"ovnkube-node-mld5t\" (UID: \"cd8874b7-eb31-4d09-9dc1-a6c91c4e0bc3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mld5t" Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.343494 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-mld5t" Jan 26 12:50:48 crc kubenswrapper[4881]: W0126 12:50:48.363917 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcd8874b7_eb31_4d09_9dc1_a6c91c4e0bc3.slice/crio-99bddda4cc699ebb68d579bb15c8861bb0ad72f4d1dee0fa9081e2226d521f9e WatchSource:0}: Error finding container 99bddda4cc699ebb68d579bb15c8861bb0ad72f4d1dee0fa9081e2226d521f9e: Status 404 returned error can't find the container with id 99bddda4cc699ebb68d579bb15c8861bb0ad72f4d1dee0fa9081e2226d521f9e Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.365801 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-csrkv_d24cc7d2-c2db-45ee-b405-fa56157f807c/kube-multus/2.log" Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.366945 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-csrkv_d24cc7d2-c2db-45ee-b405-fa56157f807c/kube-multus/1.log" Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.367042 4881 generic.go:334] "Generic (PLEG): container finished" podID="d24cc7d2-c2db-45ee-b405-fa56157f807c" containerID="3ececde13ab7d1af9a740578861d9b8810a114b31668f5c683af712e19dfac3f" exitCode=2 Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.367127 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-csrkv" event={"ID":"d24cc7d2-c2db-45ee-b405-fa56157f807c","Type":"ContainerDied","Data":"3ececde13ab7d1af9a740578861d9b8810a114b31668f5c683af712e19dfac3f"} Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.367184 4881 scope.go:117] "RemoveContainer" containerID="1cbe64dce8c7a8b2880354aac794adb5954b255c66ea597355f9b9b1ee476252" Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.368151 4881 scope.go:117] "RemoveContainer" containerID="3ececde13ab7d1af9a740578861d9b8810a114b31668f5c683af712e19dfac3f" Jan 26 12:50:48 crc 
kubenswrapper[4881]: I0126 12:50:48.372650 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-kbjm9_d272c950-9665-4b60-98a2-20c18d02d5a2/ovnkube-controller/3.log" Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.383567 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-kbjm9_d272c950-9665-4b60-98a2-20c18d02d5a2/ovn-acl-logging/0.log" Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.385245 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-kbjm9_d272c950-9665-4b60-98a2-20c18d02d5a2/ovn-controller/0.log" Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.385608 4881 generic.go:334] "Generic (PLEG): container finished" podID="d272c950-9665-4b60-98a2-20c18d02d5a2" containerID="9289f146ecd788b045ebeedd944eb6dd144ed67c799dfb987793f605b45aeb88" exitCode=0 Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.385629 4881 generic.go:334] "Generic (PLEG): container finished" podID="d272c950-9665-4b60-98a2-20c18d02d5a2" containerID="c9aaca9d54edc83ad154b9cc876c0f2990f7949b4966e052314e947d0cbaf2f5" exitCode=0 Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.385638 4881 generic.go:334] "Generic (PLEG): container finished" podID="d272c950-9665-4b60-98a2-20c18d02d5a2" containerID="11ae8e0538c02d437ba2c7a5f53004fb5c5b7bf6c232f75b6bd737d2ec4f6a24" exitCode=0 Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.385646 4881 generic.go:334] "Generic (PLEG): container finished" podID="d272c950-9665-4b60-98a2-20c18d02d5a2" containerID="4526fe39791d5692af1a573249bcc2f73f2ede55766ea580867dbb009d247ddf" exitCode=0 Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.385653 4881 generic.go:334] "Generic (PLEG): container finished" podID="d272c950-9665-4b60-98a2-20c18d02d5a2" containerID="54beaa25a2174c9f18bf5be7fba4c314e374e2ea05eaee1c5b62dfd7351138f5" exitCode=0 Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.385659 4881 generic.go:334] "Generic (PLEG): container finished" podID="d272c950-9665-4b60-98a2-20c18d02d5a2" containerID="525d6dcbf02a6ff1670c1cfff7ea9769a3e6e99d4cbc9f67b4ab9a4838f85355" exitCode=0 Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.385664 4881 generic.go:334] "Generic (PLEG): container finished" podID="d272c950-9665-4b60-98a2-20c18d02d5a2" containerID="87e87c69a0fa95e071cd3453794764480d9ed943e2d6dbe7d08e03e711be8de2" exitCode=143 Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.385671 4881 generic.go:334] "Generic (PLEG): container finished" podID="d272c950-9665-4b60-98a2-20c18d02d5a2" containerID="6fd4c549d12c308cae9727fdb35eec136e95b94990bfca2e3c2baf3cf5724ae3" exitCode=143 Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.385750 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.385748 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" event={"ID":"d272c950-9665-4b60-98a2-20c18d02d5a2","Type":"ContainerDied","Data":"9289f146ecd788b045ebeedd944eb6dd144ed67c799dfb987793f605b45aeb88"} Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.385908 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" event={"ID":"d272c950-9665-4b60-98a2-20c18d02d5a2","Type":"ContainerDied","Data":"c9aaca9d54edc83ad154b9cc876c0f2990f7949b4966e052314e947d0cbaf2f5"} Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.385939 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" event={"ID":"d272c950-9665-4b60-98a2-20c18d02d5a2","Type":"ContainerDied","Data":"11ae8e0538c02d437ba2c7a5f53004fb5c5b7bf6c232f75b6bd737d2ec4f6a24"} Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.386032 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" event={"ID":"d272c950-9665-4b60-98a2-20c18d02d5a2","Type":"ContainerDied","Data":"4526fe39791d5692af1a573249bcc2f73f2ede55766ea580867dbb009d247ddf"} Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.386068 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" event={"ID":"d272c950-9665-4b60-98a2-20c18d02d5a2","Type":"ContainerDied","Data":"54beaa25a2174c9f18bf5be7fba4c314e374e2ea05eaee1c5b62dfd7351138f5"} Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.386098 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" event={"ID":"d272c950-9665-4b60-98a2-20c18d02d5a2","Type":"ContainerDied","Data":"525d6dcbf02a6ff1670c1cfff7ea9769a3e6e99d4cbc9f67b4ab9a4838f85355"} Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.386125 4881 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9289f146ecd788b045ebeedd944eb6dd144ed67c799dfb987793f605b45aeb88"} Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.386146 4881 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"302110f5451ad1f0e6c86a66d215ed879600e372bb9afcadbe2f2c4d5d091087"} Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.386162 4881 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c9aaca9d54edc83ad154b9cc876c0f2990f7949b4966e052314e947d0cbaf2f5"} Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.386177 4881 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"11ae8e0538c02d437ba2c7a5f53004fb5c5b7bf6c232f75b6bd737d2ec4f6a24"} Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.386191 4881 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4526fe39791d5692af1a573249bcc2f73f2ede55766ea580867dbb009d247ddf"} Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.386205 4881 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"54beaa25a2174c9f18bf5be7fba4c314e374e2ea05eaee1c5b62dfd7351138f5"} Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.386220 4881 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"525d6dcbf02a6ff1670c1cfff7ea9769a3e6e99d4cbc9f67b4ab9a4838f85355"} Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.386233 4881 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"87e87c69a0fa95e071cd3453794764480d9ed943e2d6dbe7d08e03e711be8de2"} Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.386248 4881 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6fd4c549d12c308cae9727fdb35eec136e95b94990bfca2e3c2baf3cf5724ae3"} Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.386263 4881 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0995c1855fbad9dc3799727d7456b17ef9cd900ed1f4241bde85718099ede875"} Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.386284 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" event={"ID":"d272c950-9665-4b60-98a2-20c18d02d5a2","Type":"ContainerDied","Data":"87e87c69a0fa95e071cd3453794764480d9ed943e2d6dbe7d08e03e711be8de2"} Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.386306 4881 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9289f146ecd788b045ebeedd944eb6dd144ed67c799dfb987793f605b45aeb88"} Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.386324 4881 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"302110f5451ad1f0e6c86a66d215ed879600e372bb9afcadbe2f2c4d5d091087"} Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.386339 4881 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c9aaca9d54edc83ad154b9cc876c0f2990f7949b4966e052314e947d0cbaf2f5"} Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.386353 4881 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"11ae8e0538c02d437ba2c7a5f53004fb5c5b7bf6c232f75b6bd737d2ec4f6a24"} Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.386369 4881 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4526fe39791d5692af1a573249bcc2f73f2ede55766ea580867dbb009d247ddf"} Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.386383 4881 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"54beaa25a2174c9f18bf5be7fba4c314e374e2ea05eaee1c5b62dfd7351138f5"} Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.386396 4881 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"525d6dcbf02a6ff1670c1cfff7ea9769a3e6e99d4cbc9f67b4ab9a4838f85355"} Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.386410 4881 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"87e87c69a0fa95e071cd3453794764480d9ed943e2d6dbe7d08e03e711be8de2"} Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.386424 4881 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6fd4c549d12c308cae9727fdb35eec136e95b94990bfca2e3c2baf3cf5724ae3"} Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.386439 4881 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0995c1855fbad9dc3799727d7456b17ef9cd900ed1f4241bde85718099ede875"} Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.386458 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" event={"ID":"d272c950-9665-4b60-98a2-20c18d02d5a2","Type":"ContainerDied","Data":"6fd4c549d12c308cae9727fdb35eec136e95b94990bfca2e3c2baf3cf5724ae3"} Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.386480 4881 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9289f146ecd788b045ebeedd944eb6dd144ed67c799dfb987793f605b45aeb88"} Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.386496 4881 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"302110f5451ad1f0e6c86a66d215ed879600e372bb9afcadbe2f2c4d5d091087"} Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.386510 4881 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c9aaca9d54edc83ad154b9cc876c0f2990f7949b4966e052314e947d0cbaf2f5"} Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.386568 4881 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"11ae8e0538c02d437ba2c7a5f53004fb5c5b7bf6c232f75b6bd737d2ec4f6a24"} Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.386582 4881 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4526fe39791d5692af1a573249bcc2f73f2ede55766ea580867dbb009d247ddf"} Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.386596 4881 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"54beaa25a2174c9f18bf5be7fba4c314e374e2ea05eaee1c5b62dfd7351138f5"} Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.386610 4881 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"525d6dcbf02a6ff1670c1cfff7ea9769a3e6e99d4cbc9f67b4ab9a4838f85355"} Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.386623 4881 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"87e87c69a0fa95e071cd3453794764480d9ed943e2d6dbe7d08e03e711be8de2"} Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.386636 4881 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6fd4c549d12c308cae9727fdb35eec136e95b94990bfca2e3c2baf3cf5724ae3"} Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.386648 4881 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0995c1855fbad9dc3799727d7456b17ef9cd900ed1f4241bde85718099ede875"} Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.386728 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kbjm9" event={"ID":"d272c950-9665-4b60-98a2-20c18d02d5a2","Type":"ContainerDied","Data":"6d18357b4d8f7dd19f67e03cfd005ebef61ac08d9898c2f87bd04958383c210d"} Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.386753 4881 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9289f146ecd788b045ebeedd944eb6dd144ed67c799dfb987793f605b45aeb88"} 
Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.386770 4881 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"302110f5451ad1f0e6c86a66d215ed879600e372bb9afcadbe2f2c4d5d091087"} Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.386788 4881 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c9aaca9d54edc83ad154b9cc876c0f2990f7949b4966e052314e947d0cbaf2f5"} Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.386802 4881 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"11ae8e0538c02d437ba2c7a5f53004fb5c5b7bf6c232f75b6bd737d2ec4f6a24"} Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.386816 4881 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4526fe39791d5692af1a573249bcc2f73f2ede55766ea580867dbb009d247ddf"} Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.386830 4881 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"54beaa25a2174c9f18bf5be7fba4c314e374e2ea05eaee1c5b62dfd7351138f5"} Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.386844 4881 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"525d6dcbf02a6ff1670c1cfff7ea9769a3e6e99d4cbc9f67b4ab9a4838f85355"} Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.386857 4881 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"87e87c69a0fa95e071cd3453794764480d9ed943e2d6dbe7d08e03e711be8de2"} Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.386871 4881 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6fd4c549d12c308cae9727fdb35eec136e95b94990bfca2e3c2baf3cf5724ae3"} Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.386885 4881 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0995c1855fbad9dc3799727d7456b17ef9cd900ed1f4241bde85718099ede875"} Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.420034 4881 scope.go:117] "RemoveContainer" containerID="9289f146ecd788b045ebeedd944eb6dd144ed67c799dfb987793f605b45aeb88" Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.426721 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-kbjm9"] Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.432596 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-kbjm9"] Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.457097 4881 scope.go:117] "RemoveContainer" containerID="302110f5451ad1f0e6c86a66d215ed879600e372bb9afcadbe2f2c4d5d091087" Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.490235 4881 scope.go:117] "RemoveContainer" containerID="c9aaca9d54edc83ad154b9cc876c0f2990f7949b4966e052314e947d0cbaf2f5" Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.509458 4881 scope.go:117] "RemoveContainer" containerID="11ae8e0538c02d437ba2c7a5f53004fb5c5b7bf6c232f75b6bd737d2ec4f6a24" Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.531232 4881 scope.go:117] "RemoveContainer" containerID="4526fe39791d5692af1a573249bcc2f73f2ede55766ea580867dbb009d247ddf" Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.597321 4881 
scope.go:117] "RemoveContainer" containerID="54beaa25a2174c9f18bf5be7fba4c314e374e2ea05eaee1c5b62dfd7351138f5" Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.613202 4881 scope.go:117] "RemoveContainer" containerID="525d6dcbf02a6ff1670c1cfff7ea9769a3e6e99d4cbc9f67b4ab9a4838f85355" Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.628244 4881 scope.go:117] "RemoveContainer" containerID="87e87c69a0fa95e071cd3453794764480d9ed943e2d6dbe7d08e03e711be8de2" Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.643966 4881 scope.go:117] "RemoveContainer" containerID="6fd4c549d12c308cae9727fdb35eec136e95b94990bfca2e3c2baf3cf5724ae3" Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.658721 4881 scope.go:117] "RemoveContainer" containerID="0995c1855fbad9dc3799727d7456b17ef9cd900ed1f4241bde85718099ede875" Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.672351 4881 scope.go:117] "RemoveContainer" containerID="9289f146ecd788b045ebeedd944eb6dd144ed67c799dfb987793f605b45aeb88" Jan 26 12:50:48 crc kubenswrapper[4881]: E0126 12:50:48.672670 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9289f146ecd788b045ebeedd944eb6dd144ed67c799dfb987793f605b45aeb88\": container with ID starting with 9289f146ecd788b045ebeedd944eb6dd144ed67c799dfb987793f605b45aeb88 not found: ID does not exist" containerID="9289f146ecd788b045ebeedd944eb6dd144ed67c799dfb987793f605b45aeb88" Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.672695 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9289f146ecd788b045ebeedd944eb6dd144ed67c799dfb987793f605b45aeb88"} err="failed to get container status \"9289f146ecd788b045ebeedd944eb6dd144ed67c799dfb987793f605b45aeb88\": rpc error: code = NotFound desc = could not find container \"9289f146ecd788b045ebeedd944eb6dd144ed67c799dfb987793f605b45aeb88\": container with ID starting with 9289f146ecd788b045ebeedd944eb6dd144ed67c799dfb987793f605b45aeb88 not found: ID does not exist" Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.672729 4881 scope.go:117] "RemoveContainer" containerID="302110f5451ad1f0e6c86a66d215ed879600e372bb9afcadbe2f2c4d5d091087" Jan 26 12:50:48 crc kubenswrapper[4881]: E0126 12:50:48.673248 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"302110f5451ad1f0e6c86a66d215ed879600e372bb9afcadbe2f2c4d5d091087\": container with ID starting with 302110f5451ad1f0e6c86a66d215ed879600e372bb9afcadbe2f2c4d5d091087 not found: ID does not exist" containerID="302110f5451ad1f0e6c86a66d215ed879600e372bb9afcadbe2f2c4d5d091087" Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.673319 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"302110f5451ad1f0e6c86a66d215ed879600e372bb9afcadbe2f2c4d5d091087"} err="failed to get container status \"302110f5451ad1f0e6c86a66d215ed879600e372bb9afcadbe2f2c4d5d091087\": rpc error: code = NotFound desc = could not find container \"302110f5451ad1f0e6c86a66d215ed879600e372bb9afcadbe2f2c4d5d091087\": container with ID starting with 302110f5451ad1f0e6c86a66d215ed879600e372bb9afcadbe2f2c4d5d091087 not found: ID does not exist" Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.673367 4881 scope.go:117] "RemoveContainer" containerID="c9aaca9d54edc83ad154b9cc876c0f2990f7949b4966e052314e947d0cbaf2f5" Jan 26 12:50:48 crc kubenswrapper[4881]: E0126 12:50:48.673896 4881 
log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c9aaca9d54edc83ad154b9cc876c0f2990f7949b4966e052314e947d0cbaf2f5\": container with ID starting with c9aaca9d54edc83ad154b9cc876c0f2990f7949b4966e052314e947d0cbaf2f5 not found: ID does not exist" containerID="c9aaca9d54edc83ad154b9cc876c0f2990f7949b4966e052314e947d0cbaf2f5" Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.673916 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9aaca9d54edc83ad154b9cc876c0f2990f7949b4966e052314e947d0cbaf2f5"} err="failed to get container status \"c9aaca9d54edc83ad154b9cc876c0f2990f7949b4966e052314e947d0cbaf2f5\": rpc error: code = NotFound desc = could not find container \"c9aaca9d54edc83ad154b9cc876c0f2990f7949b4966e052314e947d0cbaf2f5\": container with ID starting with c9aaca9d54edc83ad154b9cc876c0f2990f7949b4966e052314e947d0cbaf2f5 not found: ID does not exist" Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.673929 4881 scope.go:117] "RemoveContainer" containerID="11ae8e0538c02d437ba2c7a5f53004fb5c5b7bf6c232f75b6bd737d2ec4f6a24" Jan 26 12:50:48 crc kubenswrapper[4881]: E0126 12:50:48.674299 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"11ae8e0538c02d437ba2c7a5f53004fb5c5b7bf6c232f75b6bd737d2ec4f6a24\": container with ID starting with 11ae8e0538c02d437ba2c7a5f53004fb5c5b7bf6c232f75b6bd737d2ec4f6a24 not found: ID does not exist" containerID="11ae8e0538c02d437ba2c7a5f53004fb5c5b7bf6c232f75b6bd737d2ec4f6a24" Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.674346 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11ae8e0538c02d437ba2c7a5f53004fb5c5b7bf6c232f75b6bd737d2ec4f6a24"} err="failed to get container status \"11ae8e0538c02d437ba2c7a5f53004fb5c5b7bf6c232f75b6bd737d2ec4f6a24\": rpc error: code = NotFound desc = could not find container \"11ae8e0538c02d437ba2c7a5f53004fb5c5b7bf6c232f75b6bd737d2ec4f6a24\": container with ID starting with 11ae8e0538c02d437ba2c7a5f53004fb5c5b7bf6c232f75b6bd737d2ec4f6a24 not found: ID does not exist" Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.674375 4881 scope.go:117] "RemoveContainer" containerID="4526fe39791d5692af1a573249bcc2f73f2ede55766ea580867dbb009d247ddf" Jan 26 12:50:48 crc kubenswrapper[4881]: E0126 12:50:48.674760 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4526fe39791d5692af1a573249bcc2f73f2ede55766ea580867dbb009d247ddf\": container with ID starting with 4526fe39791d5692af1a573249bcc2f73f2ede55766ea580867dbb009d247ddf not found: ID does not exist" containerID="4526fe39791d5692af1a573249bcc2f73f2ede55766ea580867dbb009d247ddf" Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.674782 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4526fe39791d5692af1a573249bcc2f73f2ede55766ea580867dbb009d247ddf"} err="failed to get container status \"4526fe39791d5692af1a573249bcc2f73f2ede55766ea580867dbb009d247ddf\": rpc error: code = NotFound desc = could not find container \"4526fe39791d5692af1a573249bcc2f73f2ede55766ea580867dbb009d247ddf\": container with ID starting with 4526fe39791d5692af1a573249bcc2f73f2ede55766ea580867dbb009d247ddf not found: ID does not exist" Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.674795 4881 scope.go:117] 
"RemoveContainer" containerID="54beaa25a2174c9f18bf5be7fba4c314e374e2ea05eaee1c5b62dfd7351138f5" Jan 26 12:50:48 crc kubenswrapper[4881]: E0126 12:50:48.675664 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"54beaa25a2174c9f18bf5be7fba4c314e374e2ea05eaee1c5b62dfd7351138f5\": container with ID starting with 54beaa25a2174c9f18bf5be7fba4c314e374e2ea05eaee1c5b62dfd7351138f5 not found: ID does not exist" containerID="54beaa25a2174c9f18bf5be7fba4c314e374e2ea05eaee1c5b62dfd7351138f5" Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.675716 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54beaa25a2174c9f18bf5be7fba4c314e374e2ea05eaee1c5b62dfd7351138f5"} err="failed to get container status \"54beaa25a2174c9f18bf5be7fba4c314e374e2ea05eaee1c5b62dfd7351138f5\": rpc error: code = NotFound desc = could not find container \"54beaa25a2174c9f18bf5be7fba4c314e374e2ea05eaee1c5b62dfd7351138f5\": container with ID starting with 54beaa25a2174c9f18bf5be7fba4c314e374e2ea05eaee1c5b62dfd7351138f5 not found: ID does not exist" Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.675748 4881 scope.go:117] "RemoveContainer" containerID="525d6dcbf02a6ff1670c1cfff7ea9769a3e6e99d4cbc9f67b4ab9a4838f85355" Jan 26 12:50:48 crc kubenswrapper[4881]: E0126 12:50:48.676080 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"525d6dcbf02a6ff1670c1cfff7ea9769a3e6e99d4cbc9f67b4ab9a4838f85355\": container with ID starting with 525d6dcbf02a6ff1670c1cfff7ea9769a3e6e99d4cbc9f67b4ab9a4838f85355 not found: ID does not exist" containerID="525d6dcbf02a6ff1670c1cfff7ea9769a3e6e99d4cbc9f67b4ab9a4838f85355" Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.676101 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"525d6dcbf02a6ff1670c1cfff7ea9769a3e6e99d4cbc9f67b4ab9a4838f85355"} err="failed to get container status \"525d6dcbf02a6ff1670c1cfff7ea9769a3e6e99d4cbc9f67b4ab9a4838f85355\": rpc error: code = NotFound desc = could not find container \"525d6dcbf02a6ff1670c1cfff7ea9769a3e6e99d4cbc9f67b4ab9a4838f85355\": container with ID starting with 525d6dcbf02a6ff1670c1cfff7ea9769a3e6e99d4cbc9f67b4ab9a4838f85355 not found: ID does not exist" Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.676115 4881 scope.go:117] "RemoveContainer" containerID="87e87c69a0fa95e071cd3453794764480d9ed943e2d6dbe7d08e03e711be8de2" Jan 26 12:50:48 crc kubenswrapper[4881]: E0126 12:50:48.676546 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"87e87c69a0fa95e071cd3453794764480d9ed943e2d6dbe7d08e03e711be8de2\": container with ID starting with 87e87c69a0fa95e071cd3453794764480d9ed943e2d6dbe7d08e03e711be8de2 not found: ID does not exist" containerID="87e87c69a0fa95e071cd3453794764480d9ed943e2d6dbe7d08e03e711be8de2" Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.676596 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87e87c69a0fa95e071cd3453794764480d9ed943e2d6dbe7d08e03e711be8de2"} err="failed to get container status \"87e87c69a0fa95e071cd3453794764480d9ed943e2d6dbe7d08e03e711be8de2\": rpc error: code = NotFound desc = could not find container \"87e87c69a0fa95e071cd3453794764480d9ed943e2d6dbe7d08e03e711be8de2\": container with ID starting with 
87e87c69a0fa95e071cd3453794764480d9ed943e2d6dbe7d08e03e711be8de2 not found: ID does not exist" Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.676626 4881 scope.go:117] "RemoveContainer" containerID="6fd4c549d12c308cae9727fdb35eec136e95b94990bfca2e3c2baf3cf5724ae3" Jan 26 12:50:48 crc kubenswrapper[4881]: E0126 12:50:48.676922 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6fd4c549d12c308cae9727fdb35eec136e95b94990bfca2e3c2baf3cf5724ae3\": container with ID starting with 6fd4c549d12c308cae9727fdb35eec136e95b94990bfca2e3c2baf3cf5724ae3 not found: ID does not exist" containerID="6fd4c549d12c308cae9727fdb35eec136e95b94990bfca2e3c2baf3cf5724ae3" Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.676940 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6fd4c549d12c308cae9727fdb35eec136e95b94990bfca2e3c2baf3cf5724ae3"} err="failed to get container status \"6fd4c549d12c308cae9727fdb35eec136e95b94990bfca2e3c2baf3cf5724ae3\": rpc error: code = NotFound desc = could not find container \"6fd4c549d12c308cae9727fdb35eec136e95b94990bfca2e3c2baf3cf5724ae3\": container with ID starting with 6fd4c549d12c308cae9727fdb35eec136e95b94990bfca2e3c2baf3cf5724ae3 not found: ID does not exist" Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.676953 4881 scope.go:117] "RemoveContainer" containerID="0995c1855fbad9dc3799727d7456b17ef9cd900ed1f4241bde85718099ede875" Jan 26 12:50:48 crc kubenswrapper[4881]: E0126 12:50:48.677296 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0995c1855fbad9dc3799727d7456b17ef9cd900ed1f4241bde85718099ede875\": container with ID starting with 0995c1855fbad9dc3799727d7456b17ef9cd900ed1f4241bde85718099ede875 not found: ID does not exist" containerID="0995c1855fbad9dc3799727d7456b17ef9cd900ed1f4241bde85718099ede875" Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.677316 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0995c1855fbad9dc3799727d7456b17ef9cd900ed1f4241bde85718099ede875"} err="failed to get container status \"0995c1855fbad9dc3799727d7456b17ef9cd900ed1f4241bde85718099ede875\": rpc error: code = NotFound desc = could not find container \"0995c1855fbad9dc3799727d7456b17ef9cd900ed1f4241bde85718099ede875\": container with ID starting with 0995c1855fbad9dc3799727d7456b17ef9cd900ed1f4241bde85718099ede875 not found: ID does not exist" Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.677329 4881 scope.go:117] "RemoveContainer" containerID="9289f146ecd788b045ebeedd944eb6dd144ed67c799dfb987793f605b45aeb88" Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.677672 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9289f146ecd788b045ebeedd944eb6dd144ed67c799dfb987793f605b45aeb88"} err="failed to get container status \"9289f146ecd788b045ebeedd944eb6dd144ed67c799dfb987793f605b45aeb88\": rpc error: code = NotFound desc = could not find container \"9289f146ecd788b045ebeedd944eb6dd144ed67c799dfb987793f605b45aeb88\": container with ID starting with 9289f146ecd788b045ebeedd944eb6dd144ed67c799dfb987793f605b45aeb88 not found: ID does not exist" Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.677704 4881 scope.go:117] "RemoveContainer" containerID="302110f5451ad1f0e6c86a66d215ed879600e372bb9afcadbe2f2c4d5d091087" Jan 26 12:50:48 crc 
kubenswrapper[4881]: I0126 12:50:48.677964 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"302110f5451ad1f0e6c86a66d215ed879600e372bb9afcadbe2f2c4d5d091087"} err="failed to get container status \"302110f5451ad1f0e6c86a66d215ed879600e372bb9afcadbe2f2c4d5d091087\": rpc error: code = NotFound desc = could not find container \"302110f5451ad1f0e6c86a66d215ed879600e372bb9afcadbe2f2c4d5d091087\": container with ID starting with 302110f5451ad1f0e6c86a66d215ed879600e372bb9afcadbe2f2c4d5d091087 not found: ID does not exist" Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.677984 4881 scope.go:117] "RemoveContainer" containerID="c9aaca9d54edc83ad154b9cc876c0f2990f7949b4966e052314e947d0cbaf2f5" Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.678271 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9aaca9d54edc83ad154b9cc876c0f2990f7949b4966e052314e947d0cbaf2f5"} err="failed to get container status \"c9aaca9d54edc83ad154b9cc876c0f2990f7949b4966e052314e947d0cbaf2f5\": rpc error: code = NotFound desc = could not find container \"c9aaca9d54edc83ad154b9cc876c0f2990f7949b4966e052314e947d0cbaf2f5\": container with ID starting with c9aaca9d54edc83ad154b9cc876c0f2990f7949b4966e052314e947d0cbaf2f5 not found: ID does not exist" Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.678324 4881 scope.go:117] "RemoveContainer" containerID="11ae8e0538c02d437ba2c7a5f53004fb5c5b7bf6c232f75b6bd737d2ec4f6a24" Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.678767 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11ae8e0538c02d437ba2c7a5f53004fb5c5b7bf6c232f75b6bd737d2ec4f6a24"} err="failed to get container status \"11ae8e0538c02d437ba2c7a5f53004fb5c5b7bf6c232f75b6bd737d2ec4f6a24\": rpc error: code = NotFound desc = could not find container \"11ae8e0538c02d437ba2c7a5f53004fb5c5b7bf6c232f75b6bd737d2ec4f6a24\": container with ID starting with 11ae8e0538c02d437ba2c7a5f53004fb5c5b7bf6c232f75b6bd737d2ec4f6a24 not found: ID does not exist" Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.678801 4881 scope.go:117] "RemoveContainer" containerID="4526fe39791d5692af1a573249bcc2f73f2ede55766ea580867dbb009d247ddf" Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.679047 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4526fe39791d5692af1a573249bcc2f73f2ede55766ea580867dbb009d247ddf"} err="failed to get container status \"4526fe39791d5692af1a573249bcc2f73f2ede55766ea580867dbb009d247ddf\": rpc error: code = NotFound desc = could not find container \"4526fe39791d5692af1a573249bcc2f73f2ede55766ea580867dbb009d247ddf\": container with ID starting with 4526fe39791d5692af1a573249bcc2f73f2ede55766ea580867dbb009d247ddf not found: ID does not exist" Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.679103 4881 scope.go:117] "RemoveContainer" containerID="54beaa25a2174c9f18bf5be7fba4c314e374e2ea05eaee1c5b62dfd7351138f5" Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.679373 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54beaa25a2174c9f18bf5be7fba4c314e374e2ea05eaee1c5b62dfd7351138f5"} err="failed to get container status \"54beaa25a2174c9f18bf5be7fba4c314e374e2ea05eaee1c5b62dfd7351138f5\": rpc error: code = NotFound desc = could not find container \"54beaa25a2174c9f18bf5be7fba4c314e374e2ea05eaee1c5b62dfd7351138f5\": container with ID 
starting with 54beaa25a2174c9f18bf5be7fba4c314e374e2ea05eaee1c5b62dfd7351138f5 not found: ID does not exist" Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.679401 4881 scope.go:117] "RemoveContainer" containerID="525d6dcbf02a6ff1670c1cfff7ea9769a3e6e99d4cbc9f67b4ab9a4838f85355" Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.679752 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"525d6dcbf02a6ff1670c1cfff7ea9769a3e6e99d4cbc9f67b4ab9a4838f85355"} err="failed to get container status \"525d6dcbf02a6ff1670c1cfff7ea9769a3e6e99d4cbc9f67b4ab9a4838f85355\": rpc error: code = NotFound desc = could not find container \"525d6dcbf02a6ff1670c1cfff7ea9769a3e6e99d4cbc9f67b4ab9a4838f85355\": container with ID starting with 525d6dcbf02a6ff1670c1cfff7ea9769a3e6e99d4cbc9f67b4ab9a4838f85355 not found: ID does not exist" Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.679770 4881 scope.go:117] "RemoveContainer" containerID="87e87c69a0fa95e071cd3453794764480d9ed943e2d6dbe7d08e03e711be8de2" Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.680048 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87e87c69a0fa95e071cd3453794764480d9ed943e2d6dbe7d08e03e711be8de2"} err="failed to get container status \"87e87c69a0fa95e071cd3453794764480d9ed943e2d6dbe7d08e03e711be8de2\": rpc error: code = NotFound desc = could not find container \"87e87c69a0fa95e071cd3453794764480d9ed943e2d6dbe7d08e03e711be8de2\": container with ID starting with 87e87c69a0fa95e071cd3453794764480d9ed943e2d6dbe7d08e03e711be8de2 not found: ID does not exist" Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.680076 4881 scope.go:117] "RemoveContainer" containerID="6fd4c549d12c308cae9727fdb35eec136e95b94990bfca2e3c2baf3cf5724ae3" Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.680315 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6fd4c549d12c308cae9727fdb35eec136e95b94990bfca2e3c2baf3cf5724ae3"} err="failed to get container status \"6fd4c549d12c308cae9727fdb35eec136e95b94990bfca2e3c2baf3cf5724ae3\": rpc error: code = NotFound desc = could not find container \"6fd4c549d12c308cae9727fdb35eec136e95b94990bfca2e3c2baf3cf5724ae3\": container with ID starting with 6fd4c549d12c308cae9727fdb35eec136e95b94990bfca2e3c2baf3cf5724ae3 not found: ID does not exist" Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.680356 4881 scope.go:117] "RemoveContainer" containerID="0995c1855fbad9dc3799727d7456b17ef9cd900ed1f4241bde85718099ede875" Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.680742 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0995c1855fbad9dc3799727d7456b17ef9cd900ed1f4241bde85718099ede875"} err="failed to get container status \"0995c1855fbad9dc3799727d7456b17ef9cd900ed1f4241bde85718099ede875\": rpc error: code = NotFound desc = could not find container \"0995c1855fbad9dc3799727d7456b17ef9cd900ed1f4241bde85718099ede875\": container with ID starting with 0995c1855fbad9dc3799727d7456b17ef9cd900ed1f4241bde85718099ede875 not found: ID does not exist" Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.680761 4881 scope.go:117] "RemoveContainer" containerID="9289f146ecd788b045ebeedd944eb6dd144ed67c799dfb987793f605b45aeb88" Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.681036 4881 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"9289f146ecd788b045ebeedd944eb6dd144ed67c799dfb987793f605b45aeb88"} err="failed to get container status \"9289f146ecd788b045ebeedd944eb6dd144ed67c799dfb987793f605b45aeb88\": rpc error: code = NotFound desc = could not find container \"9289f146ecd788b045ebeedd944eb6dd144ed67c799dfb987793f605b45aeb88\": container with ID starting with 9289f146ecd788b045ebeedd944eb6dd144ed67c799dfb987793f605b45aeb88 not found: ID does not exist" Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.681052 4881 scope.go:117] "RemoveContainer" containerID="302110f5451ad1f0e6c86a66d215ed879600e372bb9afcadbe2f2c4d5d091087" Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.681314 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"302110f5451ad1f0e6c86a66d215ed879600e372bb9afcadbe2f2c4d5d091087"} err="failed to get container status \"302110f5451ad1f0e6c86a66d215ed879600e372bb9afcadbe2f2c4d5d091087\": rpc error: code = NotFound desc = could not find container \"302110f5451ad1f0e6c86a66d215ed879600e372bb9afcadbe2f2c4d5d091087\": container with ID starting with 302110f5451ad1f0e6c86a66d215ed879600e372bb9afcadbe2f2c4d5d091087 not found: ID does not exist" Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.681331 4881 scope.go:117] "RemoveContainer" containerID="c9aaca9d54edc83ad154b9cc876c0f2990f7949b4966e052314e947d0cbaf2f5" Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.681593 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9aaca9d54edc83ad154b9cc876c0f2990f7949b4966e052314e947d0cbaf2f5"} err="failed to get container status \"c9aaca9d54edc83ad154b9cc876c0f2990f7949b4966e052314e947d0cbaf2f5\": rpc error: code = NotFound desc = could not find container \"c9aaca9d54edc83ad154b9cc876c0f2990f7949b4966e052314e947d0cbaf2f5\": container with ID starting with c9aaca9d54edc83ad154b9cc876c0f2990f7949b4966e052314e947d0cbaf2f5 not found: ID does not exist" Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.681621 4881 scope.go:117] "RemoveContainer" containerID="11ae8e0538c02d437ba2c7a5f53004fb5c5b7bf6c232f75b6bd737d2ec4f6a24" Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.681847 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11ae8e0538c02d437ba2c7a5f53004fb5c5b7bf6c232f75b6bd737d2ec4f6a24"} err="failed to get container status \"11ae8e0538c02d437ba2c7a5f53004fb5c5b7bf6c232f75b6bd737d2ec4f6a24\": rpc error: code = NotFound desc = could not find container \"11ae8e0538c02d437ba2c7a5f53004fb5c5b7bf6c232f75b6bd737d2ec4f6a24\": container with ID starting with 11ae8e0538c02d437ba2c7a5f53004fb5c5b7bf6c232f75b6bd737d2ec4f6a24 not found: ID does not exist" Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.681867 4881 scope.go:117] "RemoveContainer" containerID="4526fe39791d5692af1a573249bcc2f73f2ede55766ea580867dbb009d247ddf" Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.682159 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4526fe39791d5692af1a573249bcc2f73f2ede55766ea580867dbb009d247ddf"} err="failed to get container status \"4526fe39791d5692af1a573249bcc2f73f2ede55766ea580867dbb009d247ddf\": rpc error: code = NotFound desc = could not find container \"4526fe39791d5692af1a573249bcc2f73f2ede55766ea580867dbb009d247ddf\": container with ID starting with 4526fe39791d5692af1a573249bcc2f73f2ede55766ea580867dbb009d247ddf not found: ID does not exist" Jan 
Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.682175 4881 scope.go:117] "RemoveContainer" containerID="54beaa25a2174c9f18bf5be7fba4c314e374e2ea05eaee1c5b62dfd7351138f5"
Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.682406 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54beaa25a2174c9f18bf5be7fba4c314e374e2ea05eaee1c5b62dfd7351138f5"} err="failed to get container status \"54beaa25a2174c9f18bf5be7fba4c314e374e2ea05eaee1c5b62dfd7351138f5\": rpc error: code = NotFound desc = could not find container \"54beaa25a2174c9f18bf5be7fba4c314e374e2ea05eaee1c5b62dfd7351138f5\": container with ID starting with 54beaa25a2174c9f18bf5be7fba4c314e374e2ea05eaee1c5b62dfd7351138f5 not found: ID does not exist"
Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.682436 4881 scope.go:117] "RemoveContainer" containerID="525d6dcbf02a6ff1670c1cfff7ea9769a3e6e99d4cbc9f67b4ab9a4838f85355"
Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.682741 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"525d6dcbf02a6ff1670c1cfff7ea9769a3e6e99d4cbc9f67b4ab9a4838f85355"} err="failed to get container status \"525d6dcbf02a6ff1670c1cfff7ea9769a3e6e99d4cbc9f67b4ab9a4838f85355\": rpc error: code = NotFound desc = could not find container \"525d6dcbf02a6ff1670c1cfff7ea9769a3e6e99d4cbc9f67b4ab9a4838f85355\": container with ID starting with 525d6dcbf02a6ff1670c1cfff7ea9769a3e6e99d4cbc9f67b4ab9a4838f85355 not found: ID does not exist"
Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.682763 4881 scope.go:117] "RemoveContainer" containerID="87e87c69a0fa95e071cd3453794764480d9ed943e2d6dbe7d08e03e711be8de2"
Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.682986 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87e87c69a0fa95e071cd3453794764480d9ed943e2d6dbe7d08e03e711be8de2"} err="failed to get container status \"87e87c69a0fa95e071cd3453794764480d9ed943e2d6dbe7d08e03e711be8de2\": rpc error: code = NotFound desc = could not find container \"87e87c69a0fa95e071cd3453794764480d9ed943e2d6dbe7d08e03e711be8de2\": container with ID starting with 87e87c69a0fa95e071cd3453794764480d9ed943e2d6dbe7d08e03e711be8de2 not found: ID does not exist"
Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.683005 4881 scope.go:117] "RemoveContainer" containerID="6fd4c549d12c308cae9727fdb35eec136e95b94990bfca2e3c2baf3cf5724ae3"
Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.683336 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6fd4c549d12c308cae9727fdb35eec136e95b94990bfca2e3c2baf3cf5724ae3"} err="failed to get container status \"6fd4c549d12c308cae9727fdb35eec136e95b94990bfca2e3c2baf3cf5724ae3\": rpc error: code = NotFound desc = could not find container \"6fd4c549d12c308cae9727fdb35eec136e95b94990bfca2e3c2baf3cf5724ae3\": container with ID starting with 6fd4c549d12c308cae9727fdb35eec136e95b94990bfca2e3c2baf3cf5724ae3 not found: ID does not exist"
Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.683363 4881 scope.go:117] "RemoveContainer" containerID="0995c1855fbad9dc3799727d7456b17ef9cd900ed1f4241bde85718099ede875"
Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.683621 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0995c1855fbad9dc3799727d7456b17ef9cd900ed1f4241bde85718099ede875"} err="failed to get container status \"0995c1855fbad9dc3799727d7456b17ef9cd900ed1f4241bde85718099ede875\": rpc error: code = NotFound desc = could not find container \"0995c1855fbad9dc3799727d7456b17ef9cd900ed1f4241bde85718099ede875\": container with ID starting with 0995c1855fbad9dc3799727d7456b17ef9cd900ed1f4241bde85718099ede875 not found: ID does not exist"
Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.683666 4881 scope.go:117] "RemoveContainer" containerID="9289f146ecd788b045ebeedd944eb6dd144ed67c799dfb987793f605b45aeb88"
Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.684415 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9289f146ecd788b045ebeedd944eb6dd144ed67c799dfb987793f605b45aeb88"} err="failed to get container status \"9289f146ecd788b045ebeedd944eb6dd144ed67c799dfb987793f605b45aeb88\": rpc error: code = NotFound desc = could not find container \"9289f146ecd788b045ebeedd944eb6dd144ed67c799dfb987793f605b45aeb88\": container with ID starting with 9289f146ecd788b045ebeedd944eb6dd144ed67c799dfb987793f605b45aeb88 not found: ID does not exist"
Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.684441 4881 scope.go:117] "RemoveContainer" containerID="302110f5451ad1f0e6c86a66d215ed879600e372bb9afcadbe2f2c4d5d091087"
Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.684753 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"302110f5451ad1f0e6c86a66d215ed879600e372bb9afcadbe2f2c4d5d091087"} err="failed to get container status \"302110f5451ad1f0e6c86a66d215ed879600e372bb9afcadbe2f2c4d5d091087\": rpc error: code = NotFound desc = could not find container \"302110f5451ad1f0e6c86a66d215ed879600e372bb9afcadbe2f2c4d5d091087\": container with ID starting with 302110f5451ad1f0e6c86a66d215ed879600e372bb9afcadbe2f2c4d5d091087 not found: ID does not exist"
Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.684796 4881 scope.go:117] "RemoveContainer" containerID="c9aaca9d54edc83ad154b9cc876c0f2990f7949b4966e052314e947d0cbaf2f5"
Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.685141 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9aaca9d54edc83ad154b9cc876c0f2990f7949b4966e052314e947d0cbaf2f5"} err="failed to get container status \"c9aaca9d54edc83ad154b9cc876c0f2990f7949b4966e052314e947d0cbaf2f5\": rpc error: code = NotFound desc = could not find container \"c9aaca9d54edc83ad154b9cc876c0f2990f7949b4966e052314e947d0cbaf2f5\": container with ID starting with c9aaca9d54edc83ad154b9cc876c0f2990f7949b4966e052314e947d0cbaf2f5 not found: ID does not exist"
Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.685173 4881 scope.go:117] "RemoveContainer" containerID="11ae8e0538c02d437ba2c7a5f53004fb5c5b7bf6c232f75b6bd737d2ec4f6a24"
Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.685455 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11ae8e0538c02d437ba2c7a5f53004fb5c5b7bf6c232f75b6bd737d2ec4f6a24"} err="failed to get container status \"11ae8e0538c02d437ba2c7a5f53004fb5c5b7bf6c232f75b6bd737d2ec4f6a24\": rpc error: code = NotFound desc = could not find container \"11ae8e0538c02d437ba2c7a5f53004fb5c5b7bf6c232f75b6bd737d2ec4f6a24\": container with ID starting with 11ae8e0538c02d437ba2c7a5f53004fb5c5b7bf6c232f75b6bd737d2ec4f6a24 not found: ID does not exist"
Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.685481 4881 scope.go:117] "RemoveContainer" containerID="4526fe39791d5692af1a573249bcc2f73f2ede55766ea580867dbb009d247ddf"
Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.685753 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4526fe39791d5692af1a573249bcc2f73f2ede55766ea580867dbb009d247ddf"} err="failed to get container status \"4526fe39791d5692af1a573249bcc2f73f2ede55766ea580867dbb009d247ddf\": rpc error: code = NotFound desc = could not find container \"4526fe39791d5692af1a573249bcc2f73f2ede55766ea580867dbb009d247ddf\": container with ID starting with 4526fe39791d5692af1a573249bcc2f73f2ede55766ea580867dbb009d247ddf not found: ID does not exist"
Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.685788 4881 scope.go:117] "RemoveContainer" containerID="54beaa25a2174c9f18bf5be7fba4c314e374e2ea05eaee1c5b62dfd7351138f5"
Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.686108 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54beaa25a2174c9f18bf5be7fba4c314e374e2ea05eaee1c5b62dfd7351138f5"} err="failed to get container status \"54beaa25a2174c9f18bf5be7fba4c314e374e2ea05eaee1c5b62dfd7351138f5\": rpc error: code = NotFound desc = could not find container \"54beaa25a2174c9f18bf5be7fba4c314e374e2ea05eaee1c5b62dfd7351138f5\": container with ID starting with 54beaa25a2174c9f18bf5be7fba4c314e374e2ea05eaee1c5b62dfd7351138f5 not found: ID does not exist"
Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.686145 4881 scope.go:117] "RemoveContainer" containerID="525d6dcbf02a6ff1670c1cfff7ea9769a3e6e99d4cbc9f67b4ab9a4838f85355"
Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.686438 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"525d6dcbf02a6ff1670c1cfff7ea9769a3e6e99d4cbc9f67b4ab9a4838f85355"} err="failed to get container status \"525d6dcbf02a6ff1670c1cfff7ea9769a3e6e99d4cbc9f67b4ab9a4838f85355\": rpc error: code = NotFound desc = could not find container \"525d6dcbf02a6ff1670c1cfff7ea9769a3e6e99d4cbc9f67b4ab9a4838f85355\": container with ID starting with 525d6dcbf02a6ff1670c1cfff7ea9769a3e6e99d4cbc9f67b4ab9a4838f85355 not found: ID does not exist"
Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.686470 4881 scope.go:117] "RemoveContainer" containerID="87e87c69a0fa95e071cd3453794764480d9ed943e2d6dbe7d08e03e711be8de2"
Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.686745 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87e87c69a0fa95e071cd3453794764480d9ed943e2d6dbe7d08e03e711be8de2"} err="failed to get container status \"87e87c69a0fa95e071cd3453794764480d9ed943e2d6dbe7d08e03e711be8de2\": rpc error: code = NotFound desc = could not find container \"87e87c69a0fa95e071cd3453794764480d9ed943e2d6dbe7d08e03e711be8de2\": container with ID starting with 87e87c69a0fa95e071cd3453794764480d9ed943e2d6dbe7d08e03e711be8de2 not found: ID does not exist"
Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.686771 4881 scope.go:117] "RemoveContainer" containerID="6fd4c549d12c308cae9727fdb35eec136e95b94990bfca2e3c2baf3cf5724ae3"
Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.687011 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6fd4c549d12c308cae9727fdb35eec136e95b94990bfca2e3c2baf3cf5724ae3"} err="failed to get container status \"6fd4c549d12c308cae9727fdb35eec136e95b94990bfca2e3c2baf3cf5724ae3\": rpc error: code = NotFound desc = could not find container \"6fd4c549d12c308cae9727fdb35eec136e95b94990bfca2e3c2baf3cf5724ae3\": container with ID starting with 6fd4c549d12c308cae9727fdb35eec136e95b94990bfca2e3c2baf3cf5724ae3 not found: ID does not exist"
Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.687031 4881 scope.go:117] "RemoveContainer" containerID="0995c1855fbad9dc3799727d7456b17ef9cd900ed1f4241bde85718099ede875"
Jan 26 12:50:48 crc kubenswrapper[4881]: I0126 12:50:48.687364 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0995c1855fbad9dc3799727d7456b17ef9cd900ed1f4241bde85718099ede875"} err="failed to get container status \"0995c1855fbad9dc3799727d7456b17ef9cd900ed1f4241bde85718099ede875\": rpc error: code = NotFound desc = could not find container \"0995c1855fbad9dc3799727d7456b17ef9cd900ed1f4241bde85718099ede875\": container with ID starting with 0995c1855fbad9dc3799727d7456b17ef9cd900ed1f4241bde85718099ede875 not found: ID does not exist"
Jan 26 12:50:49 crc kubenswrapper[4881]: I0126 12:50:49.391238 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-csrkv_d24cc7d2-c2db-45ee-b405-fa56157f807c/kube-multus/2.log"
Jan 26 12:50:49 crc kubenswrapper[4881]: I0126 12:50:49.391323 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-csrkv" event={"ID":"d24cc7d2-c2db-45ee-b405-fa56157f807c","Type":"ContainerStarted","Data":"a3d7d3fb5896e87d01e87e132977bc09a2302377c8858fd54dbde97b78149123"}
Jan 26 12:50:49 crc kubenswrapper[4881]: I0126 12:50:49.394743 4881 generic.go:334] "Generic (PLEG): container finished" podID="cd8874b7-eb31-4d09-9dc1-a6c91c4e0bc3" containerID="c23f5dbdcf7490c539381e03cc22922aa0ea1bf2cf692100f0e8c603c6130ee7" exitCode=0
Jan 26 12:50:49 crc kubenswrapper[4881]: I0126 12:50:49.394772 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mld5t" event={"ID":"cd8874b7-eb31-4d09-9dc1-a6c91c4e0bc3","Type":"ContainerDied","Data":"c23f5dbdcf7490c539381e03cc22922aa0ea1bf2cf692100f0e8c603c6130ee7"}
Jan 26 12:50:49 crc kubenswrapper[4881]: I0126 12:50:49.394789 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mld5t" event={"ID":"cd8874b7-eb31-4d09-9dc1-a6c91c4e0bc3","Type":"ContainerStarted","Data":"99bddda4cc699ebb68d579bb15c8861bb0ad72f4d1dee0fa9081e2226d521f9e"}
Jan 26 12:50:50 crc kubenswrapper[4881]: I0126 12:50:50.088966 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d272c950-9665-4b60-98a2-20c18d02d5a2" path="/var/lib/kubelet/pods/d272c950-9665-4b60-98a2-20c18d02d5a2/volumes"
Jan 26 12:50:50 crc kubenswrapper[4881]: I0126 12:50:50.413700 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mld5t" event={"ID":"cd8874b7-eb31-4d09-9dc1-a6c91c4e0bc3","Type":"ContainerStarted","Data":"e5501355bc4eba935a217990b33aea9f626ac6487d563b2f62fffdac1fb9270b"}
Jan 26 12:50:50 crc kubenswrapper[4881]: I0126 12:50:50.413759 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mld5t" event={"ID":"cd8874b7-eb31-4d09-9dc1-a6c91c4e0bc3","Type":"ContainerStarted","Data":"806b2646df27fa88746719f6664bf970575d3c4ac866fe1df00c0349bbd5d545"}
event={"ID":"cd8874b7-eb31-4d09-9dc1-a6c91c4e0bc3","Type":"ContainerStarted","Data":"ca97a753e3dc7cd0a1cf71121923d0ff8ee5ce0bfcb1a9622f0be8ad019eab93"} Jan 26 12:50:50 crc kubenswrapper[4881]: I0126 12:50:50.413799 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mld5t" event={"ID":"cd8874b7-eb31-4d09-9dc1-a6c91c4e0bc3","Type":"ContainerStarted","Data":"bc248900aaae337f5dab322547393edd08a78f464e20a0e2fca5733d523d60fb"} Jan 26 12:50:50 crc kubenswrapper[4881]: I0126 12:50:50.413819 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mld5t" event={"ID":"cd8874b7-eb31-4d09-9dc1-a6c91c4e0bc3","Type":"ContainerStarted","Data":"b54487aced0d65a26c7ef5fc1b1a6a3444fc807bfd585041750006f13ac3332c"} Jan 26 12:50:50 crc kubenswrapper[4881]: I0126 12:50:50.413835 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mld5t" event={"ID":"cd8874b7-eb31-4d09-9dc1-a6c91c4e0bc3","Type":"ContainerStarted","Data":"ca7f7776670afe819b710cde9d435110d1381b5391402a388cb396c9de41e5e1"} Jan 26 12:50:52 crc kubenswrapper[4881]: I0126 12:50:52.911591 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-5bxtm" Jan 26 12:50:53 crc kubenswrapper[4881]: I0126 12:50:53.484865 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mld5t" event={"ID":"cd8874b7-eb31-4d09-9dc1-a6c91c4e0bc3","Type":"ContainerStarted","Data":"ee3b7510ae3d522d6c3f9325e3d34e9259454d6eed2fce244835addd2ff5b7c8"} Jan 26 12:50:58 crc kubenswrapper[4881]: I0126 12:50:58.520875 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mld5t" event={"ID":"cd8874b7-eb31-4d09-9dc1-a6c91c4e0bc3","Type":"ContainerStarted","Data":"1f7ff782a4da167caa81f80e1a7251f9275a125ae3c6019912c6f19a437b4afb"} Jan 26 12:50:58 crc kubenswrapper[4881]: I0126 12:50:58.521615 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-mld5t" Jan 26 12:50:58 crc kubenswrapper[4881]: I0126 12:50:58.521632 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-mld5t" Jan 26 12:50:58 crc kubenswrapper[4881]: I0126 12:50:58.564985 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-mld5t" Jan 26 12:50:58 crc kubenswrapper[4881]: I0126 12:50:58.591404 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-mld5t" podStartSLOduration=11.591388401 podStartE2EDuration="11.591388401s" podCreationTimestamp="2026-01-26 12:50:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:50:58.554901039 +0000 UTC m=+931.034211065" watchObservedRunningTime="2026-01-26 12:50:58.591388401 +0000 UTC m=+931.070698427" Jan 26 12:50:59 crc kubenswrapper[4881]: I0126 12:50:59.525220 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-mld5t" Jan 26 12:50:59 crc kubenswrapper[4881]: I0126 12:50:59.549040 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-mld5t" Jan 26 12:51:16 crc kubenswrapper[4881]: I0126 12:51:16.512949 4881 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/community-operators-pl5b8"] Jan 26 12:51:16 crc kubenswrapper[4881]: I0126 12:51:16.515237 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pl5b8" Jan 26 12:51:16 crc kubenswrapper[4881]: I0126 12:51:16.524071 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pl5b8"] Jan 26 12:51:16 crc kubenswrapper[4881]: I0126 12:51:16.610313 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d7c048b-0533-47b7-a19e-5b61dc0d92ff-catalog-content\") pod \"community-operators-pl5b8\" (UID: \"3d7c048b-0533-47b7-a19e-5b61dc0d92ff\") " pod="openshift-marketplace/community-operators-pl5b8" Jan 26 12:51:16 crc kubenswrapper[4881]: I0126 12:51:16.610380 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d7c048b-0533-47b7-a19e-5b61dc0d92ff-utilities\") pod \"community-operators-pl5b8\" (UID: \"3d7c048b-0533-47b7-a19e-5b61dc0d92ff\") " pod="openshift-marketplace/community-operators-pl5b8" Jan 26 12:51:16 crc kubenswrapper[4881]: I0126 12:51:16.610405 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kgwx\" (UniqueName: \"kubernetes.io/projected/3d7c048b-0533-47b7-a19e-5b61dc0d92ff-kube-api-access-7kgwx\") pod \"community-operators-pl5b8\" (UID: \"3d7c048b-0533-47b7-a19e-5b61dc0d92ff\") " pod="openshift-marketplace/community-operators-pl5b8" Jan 26 12:51:16 crc kubenswrapper[4881]: I0126 12:51:16.711298 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d7c048b-0533-47b7-a19e-5b61dc0d92ff-catalog-content\") pod \"community-operators-pl5b8\" (UID: \"3d7c048b-0533-47b7-a19e-5b61dc0d92ff\") " pod="openshift-marketplace/community-operators-pl5b8" Jan 26 12:51:16 crc kubenswrapper[4881]: I0126 12:51:16.711349 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d7c048b-0533-47b7-a19e-5b61dc0d92ff-utilities\") pod \"community-operators-pl5b8\" (UID: \"3d7c048b-0533-47b7-a19e-5b61dc0d92ff\") " pod="openshift-marketplace/community-operators-pl5b8" Jan 26 12:51:16 crc kubenswrapper[4881]: I0126 12:51:16.711374 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7kgwx\" (UniqueName: \"kubernetes.io/projected/3d7c048b-0533-47b7-a19e-5b61dc0d92ff-kube-api-access-7kgwx\") pod \"community-operators-pl5b8\" (UID: \"3d7c048b-0533-47b7-a19e-5b61dc0d92ff\") " pod="openshift-marketplace/community-operators-pl5b8" Jan 26 12:51:16 crc kubenswrapper[4881]: I0126 12:51:16.711780 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d7c048b-0533-47b7-a19e-5b61dc0d92ff-catalog-content\") pod \"community-operators-pl5b8\" (UID: \"3d7c048b-0533-47b7-a19e-5b61dc0d92ff\") " pod="openshift-marketplace/community-operators-pl5b8" Jan 26 12:51:16 crc kubenswrapper[4881]: I0126 12:51:16.711938 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d7c048b-0533-47b7-a19e-5b61dc0d92ff-utilities\") pod \"community-operators-pl5b8\" (UID: 
\"3d7c048b-0533-47b7-a19e-5b61dc0d92ff\") " pod="openshift-marketplace/community-operators-pl5b8" Jan 26 12:51:16 crc kubenswrapper[4881]: I0126 12:51:16.728853 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7kgwx\" (UniqueName: \"kubernetes.io/projected/3d7c048b-0533-47b7-a19e-5b61dc0d92ff-kube-api-access-7kgwx\") pod \"community-operators-pl5b8\" (UID: \"3d7c048b-0533-47b7-a19e-5b61dc0d92ff\") " pod="openshift-marketplace/community-operators-pl5b8" Jan 26 12:51:16 crc kubenswrapper[4881]: I0126 12:51:16.881433 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pl5b8" Jan 26 12:51:17 crc kubenswrapper[4881]: I0126 12:51:17.123997 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pl5b8"] Jan 26 12:51:17 crc kubenswrapper[4881]: I0126 12:51:17.647453 4881 generic.go:334] "Generic (PLEG): container finished" podID="3d7c048b-0533-47b7-a19e-5b61dc0d92ff" containerID="a6a1a83b8eb0f71ae784ef630e57a75389ff36adc4b35613a9e6e4e95620dd23" exitCode=0 Jan 26 12:51:17 crc kubenswrapper[4881]: I0126 12:51:17.647551 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pl5b8" event={"ID":"3d7c048b-0533-47b7-a19e-5b61dc0d92ff","Type":"ContainerDied","Data":"a6a1a83b8eb0f71ae784ef630e57a75389ff36adc4b35613a9e6e4e95620dd23"} Jan 26 12:51:17 crc kubenswrapper[4881]: I0126 12:51:17.647944 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pl5b8" event={"ID":"3d7c048b-0533-47b7-a19e-5b61dc0d92ff","Type":"ContainerStarted","Data":"6232944879a380c09f216116bcf421aafcc8bb0ce93e8457f2853b4341f75daa"} Jan 26 12:51:18 crc kubenswrapper[4881]: I0126 12:51:18.381853 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-mld5t" Jan 26 12:51:18 crc kubenswrapper[4881]: I0126 12:51:18.654297 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pl5b8" event={"ID":"3d7c048b-0533-47b7-a19e-5b61dc0d92ff","Type":"ContainerStarted","Data":"d5d48c6979620a2e4e48ddea5327a1ad2314f6513009aec098acabc28308ede7"} Jan 26 12:51:19 crc kubenswrapper[4881]: I0126 12:51:19.490413 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-zbhjm"] Jan 26 12:51:19 crc kubenswrapper[4881]: I0126 12:51:19.492876 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zbhjm" Jan 26 12:51:19 crc kubenswrapper[4881]: I0126 12:51:19.503135 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zbhjm"] Jan 26 12:51:19 crc kubenswrapper[4881]: I0126 12:51:19.652369 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd64b8a7-af19-47c5-bf02-a7fc0208db48-utilities\") pod \"redhat-marketplace-zbhjm\" (UID: \"dd64b8a7-af19-47c5-bf02-a7fc0208db48\") " pod="openshift-marketplace/redhat-marketplace-zbhjm" Jan 26 12:51:19 crc kubenswrapper[4881]: I0126 12:51:19.652433 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slpnp\" (UniqueName: \"kubernetes.io/projected/dd64b8a7-af19-47c5-bf02-a7fc0208db48-kube-api-access-slpnp\") pod \"redhat-marketplace-zbhjm\" (UID: \"dd64b8a7-af19-47c5-bf02-a7fc0208db48\") " pod="openshift-marketplace/redhat-marketplace-zbhjm" Jan 26 12:51:19 crc kubenswrapper[4881]: I0126 12:51:19.652460 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd64b8a7-af19-47c5-bf02-a7fc0208db48-catalog-content\") pod \"redhat-marketplace-zbhjm\" (UID: \"dd64b8a7-af19-47c5-bf02-a7fc0208db48\") " pod="openshift-marketplace/redhat-marketplace-zbhjm" Jan 26 12:51:19 crc kubenswrapper[4881]: I0126 12:51:19.662775 4881 generic.go:334] "Generic (PLEG): container finished" podID="3d7c048b-0533-47b7-a19e-5b61dc0d92ff" containerID="d5d48c6979620a2e4e48ddea5327a1ad2314f6513009aec098acabc28308ede7" exitCode=0 Jan 26 12:51:19 crc kubenswrapper[4881]: I0126 12:51:19.662822 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pl5b8" event={"ID":"3d7c048b-0533-47b7-a19e-5b61dc0d92ff","Type":"ContainerDied","Data":"d5d48c6979620a2e4e48ddea5327a1ad2314f6513009aec098acabc28308ede7"} Jan 26 12:51:19 crc kubenswrapper[4881]: I0126 12:51:19.753446 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd64b8a7-af19-47c5-bf02-a7fc0208db48-utilities\") pod \"redhat-marketplace-zbhjm\" (UID: \"dd64b8a7-af19-47c5-bf02-a7fc0208db48\") " pod="openshift-marketplace/redhat-marketplace-zbhjm" Jan 26 12:51:19 crc kubenswrapper[4881]: I0126 12:51:19.753540 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-slpnp\" (UniqueName: \"kubernetes.io/projected/dd64b8a7-af19-47c5-bf02-a7fc0208db48-kube-api-access-slpnp\") pod \"redhat-marketplace-zbhjm\" (UID: \"dd64b8a7-af19-47c5-bf02-a7fc0208db48\") " pod="openshift-marketplace/redhat-marketplace-zbhjm" Jan 26 12:51:19 crc kubenswrapper[4881]: I0126 12:51:19.753574 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd64b8a7-af19-47c5-bf02-a7fc0208db48-catalog-content\") pod \"redhat-marketplace-zbhjm\" (UID: \"dd64b8a7-af19-47c5-bf02-a7fc0208db48\") " pod="openshift-marketplace/redhat-marketplace-zbhjm" Jan 26 12:51:19 crc kubenswrapper[4881]: I0126 12:51:19.754041 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd64b8a7-af19-47c5-bf02-a7fc0208db48-catalog-content\") pod \"redhat-marketplace-zbhjm\" (UID: 
\"dd64b8a7-af19-47c5-bf02-a7fc0208db48\") " pod="openshift-marketplace/redhat-marketplace-zbhjm" Jan 26 12:51:19 crc kubenswrapper[4881]: I0126 12:51:19.754334 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd64b8a7-af19-47c5-bf02-a7fc0208db48-utilities\") pod \"redhat-marketplace-zbhjm\" (UID: \"dd64b8a7-af19-47c5-bf02-a7fc0208db48\") " pod="openshift-marketplace/redhat-marketplace-zbhjm" Jan 26 12:51:19 crc kubenswrapper[4881]: I0126 12:51:19.780106 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-slpnp\" (UniqueName: \"kubernetes.io/projected/dd64b8a7-af19-47c5-bf02-a7fc0208db48-kube-api-access-slpnp\") pod \"redhat-marketplace-zbhjm\" (UID: \"dd64b8a7-af19-47c5-bf02-a7fc0208db48\") " pod="openshift-marketplace/redhat-marketplace-zbhjm" Jan 26 12:51:19 crc kubenswrapper[4881]: I0126 12:51:19.823473 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zbhjm" Jan 26 12:51:20 crc kubenswrapper[4881]: I0126 12:51:20.025527 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zbhjm"] Jan 26 12:51:20 crc kubenswrapper[4881]: I0126 12:51:20.682089 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pl5b8" event={"ID":"3d7c048b-0533-47b7-a19e-5b61dc0d92ff","Type":"ContainerStarted","Data":"b596cdd1287fe7372f0241251db3cd871efec0ffa2fbe5a2513c7859e65be748"} Jan 26 12:51:20 crc kubenswrapper[4881]: I0126 12:51:20.685193 4881 generic.go:334] "Generic (PLEG): container finished" podID="dd64b8a7-af19-47c5-bf02-a7fc0208db48" containerID="cfc38d7a2bcb734bb3677e807d4f6f2235ef09398e750c6fc5fdde49da0f069d" exitCode=0 Jan 26 12:51:20 crc kubenswrapper[4881]: I0126 12:51:20.685246 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zbhjm" event={"ID":"dd64b8a7-af19-47c5-bf02-a7fc0208db48","Type":"ContainerDied","Data":"cfc38d7a2bcb734bb3677e807d4f6f2235ef09398e750c6fc5fdde49da0f069d"} Jan 26 12:51:20 crc kubenswrapper[4881]: I0126 12:51:20.685306 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zbhjm" event={"ID":"dd64b8a7-af19-47c5-bf02-a7fc0208db48","Type":"ContainerStarted","Data":"48b9b6e8094965923e24f52858aba5dd1ae007c1e8bde3e37fd7d04336a29031"} Jan 26 12:51:20 crc kubenswrapper[4881]: I0126 12:51:20.706951 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-pl5b8" podStartSLOduration=2.226012853 podStartE2EDuration="4.706923863s" podCreationTimestamp="2026-01-26 12:51:16 +0000 UTC" firstStartedPulling="2026-01-26 12:51:17.649851216 +0000 UTC m=+950.129161282" lastFinishedPulling="2026-01-26 12:51:20.130762246 +0000 UTC m=+952.610072292" observedRunningTime="2026-01-26 12:51:20.703056959 +0000 UTC m=+953.182367005" watchObservedRunningTime="2026-01-26 12:51:20.706923863 +0000 UTC m=+953.186233929" Jan 26 12:51:21 crc kubenswrapper[4881]: I0126 12:51:21.695810 4881 generic.go:334] "Generic (PLEG): container finished" podID="dd64b8a7-af19-47c5-bf02-a7fc0208db48" containerID="ebe68aa6cc33a1d4d956663dbef6389c78c5a4053c263c98296378ada7e8744e" exitCode=0 Jan 26 12:51:21 crc kubenswrapper[4881]: I0126 12:51:21.695938 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zbhjm" 
event={"ID":"dd64b8a7-af19-47c5-bf02-a7fc0208db48","Type":"ContainerDied","Data":"ebe68aa6cc33a1d4d956663dbef6389c78c5a4053c263c98296378ada7e8744e"} Jan 26 12:51:22 crc kubenswrapper[4881]: I0126 12:51:22.704995 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zbhjm" event={"ID":"dd64b8a7-af19-47c5-bf02-a7fc0208db48","Type":"ContainerStarted","Data":"690be947081b0158d180100b2611f9c4463977c7f5e279499a5c5dd054229e86"} Jan 26 12:51:22 crc kubenswrapper[4881]: I0126 12:51:22.728238 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-zbhjm" podStartSLOduration=2.328535391 podStartE2EDuration="3.728215205s" podCreationTimestamp="2026-01-26 12:51:19 +0000 UTC" firstStartedPulling="2026-01-26 12:51:20.686634287 +0000 UTC m=+953.165944313" lastFinishedPulling="2026-01-26 12:51:22.086314091 +0000 UTC m=+954.565624127" observedRunningTime="2026-01-26 12:51:22.721379578 +0000 UTC m=+955.200689634" watchObservedRunningTime="2026-01-26 12:51:22.728215205 +0000 UTC m=+955.207525251" Jan 26 12:51:26 crc kubenswrapper[4881]: I0126 12:51:26.490430 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-v8kkw"] Jan 26 12:51:26 crc kubenswrapper[4881]: I0126 12:51:26.492754 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-v8kkw" Jan 26 12:51:26 crc kubenswrapper[4881]: I0126 12:51:26.519212 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-v8kkw"] Jan 26 12:51:26 crc kubenswrapper[4881]: I0126 12:51:26.659649 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ec115c7-7e39-4d8e-9845-9bffe652b5e3-utilities\") pod \"redhat-operators-v8kkw\" (UID: \"3ec115c7-7e39-4d8e-9845-9bffe652b5e3\") " pod="openshift-marketplace/redhat-operators-v8kkw" Jan 26 12:51:26 crc kubenswrapper[4881]: I0126 12:51:26.659824 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5n8zw\" (UniqueName: \"kubernetes.io/projected/3ec115c7-7e39-4d8e-9845-9bffe652b5e3-kube-api-access-5n8zw\") pod \"redhat-operators-v8kkw\" (UID: \"3ec115c7-7e39-4d8e-9845-9bffe652b5e3\") " pod="openshift-marketplace/redhat-operators-v8kkw" Jan 26 12:51:26 crc kubenswrapper[4881]: I0126 12:51:26.659870 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ec115c7-7e39-4d8e-9845-9bffe652b5e3-catalog-content\") pod \"redhat-operators-v8kkw\" (UID: \"3ec115c7-7e39-4d8e-9845-9bffe652b5e3\") " pod="openshift-marketplace/redhat-operators-v8kkw" Jan 26 12:51:26 crc kubenswrapper[4881]: I0126 12:51:26.760823 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ec115c7-7e39-4d8e-9845-9bffe652b5e3-catalog-content\") pod \"redhat-operators-v8kkw\" (UID: \"3ec115c7-7e39-4d8e-9845-9bffe652b5e3\") " pod="openshift-marketplace/redhat-operators-v8kkw" Jan 26 12:51:26 crc kubenswrapper[4881]: I0126 12:51:26.760920 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ec115c7-7e39-4d8e-9845-9bffe652b5e3-utilities\") pod \"redhat-operators-v8kkw\" (UID: 
\"3ec115c7-7e39-4d8e-9845-9bffe652b5e3\") " pod="openshift-marketplace/redhat-operators-v8kkw" Jan 26 12:51:26 crc kubenswrapper[4881]: I0126 12:51:26.761029 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5n8zw\" (UniqueName: \"kubernetes.io/projected/3ec115c7-7e39-4d8e-9845-9bffe652b5e3-kube-api-access-5n8zw\") pod \"redhat-operators-v8kkw\" (UID: \"3ec115c7-7e39-4d8e-9845-9bffe652b5e3\") " pod="openshift-marketplace/redhat-operators-v8kkw" Jan 26 12:51:26 crc kubenswrapper[4881]: I0126 12:51:26.761484 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ec115c7-7e39-4d8e-9845-9bffe652b5e3-catalog-content\") pod \"redhat-operators-v8kkw\" (UID: \"3ec115c7-7e39-4d8e-9845-9bffe652b5e3\") " pod="openshift-marketplace/redhat-operators-v8kkw" Jan 26 12:51:26 crc kubenswrapper[4881]: I0126 12:51:26.761579 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ec115c7-7e39-4d8e-9845-9bffe652b5e3-utilities\") pod \"redhat-operators-v8kkw\" (UID: \"3ec115c7-7e39-4d8e-9845-9bffe652b5e3\") " pod="openshift-marketplace/redhat-operators-v8kkw" Jan 26 12:51:26 crc kubenswrapper[4881]: I0126 12:51:26.778873 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5n8zw\" (UniqueName: \"kubernetes.io/projected/3ec115c7-7e39-4d8e-9845-9bffe652b5e3-kube-api-access-5n8zw\") pod \"redhat-operators-v8kkw\" (UID: \"3ec115c7-7e39-4d8e-9845-9bffe652b5e3\") " pod="openshift-marketplace/redhat-operators-v8kkw" Jan 26 12:51:26 crc kubenswrapper[4881]: I0126 12:51:26.870081 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-v8kkw" Jan 26 12:51:26 crc kubenswrapper[4881]: I0126 12:51:26.882162 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-pl5b8" Jan 26 12:51:26 crc kubenswrapper[4881]: I0126 12:51:26.882360 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-pl5b8" Jan 26 12:51:26 crc kubenswrapper[4881]: I0126 12:51:26.925916 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-pl5b8" Jan 26 12:51:27 crc kubenswrapper[4881]: I0126 12:51:27.285903 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-v8kkw"] Jan 26 12:51:27 crc kubenswrapper[4881]: I0126 12:51:27.737408 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v8kkw" event={"ID":"3ec115c7-7e39-4d8e-9845-9bffe652b5e3","Type":"ContainerStarted","Data":"1c1cb9ef38bb1ef32c9557b287ee158b08ede46cccb806f8505d6363d6d74c8f"} Jan 26 12:51:27 crc kubenswrapper[4881]: I0126 12:51:27.781223 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-pl5b8" Jan 26 12:51:28 crc kubenswrapper[4881]: I0126 12:51:28.746451 4881 generic.go:334] "Generic (PLEG): container finished" podID="3ec115c7-7e39-4d8e-9845-9bffe652b5e3" containerID="77b68ce2b40320a607375cde72cb612940b18555b3c78b7a9ffb4292524a06c1" exitCode=0 Jan 26 12:51:28 crc kubenswrapper[4881]: I0126 12:51:28.746512 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v8kkw" 
event={"ID":"3ec115c7-7e39-4d8e-9845-9bffe652b5e3","Type":"ContainerDied","Data":"77b68ce2b40320a607375cde72cb612940b18555b3c78b7a9ffb4292524a06c1"} Jan 26 12:51:29 crc kubenswrapper[4881]: I0126 12:51:29.824183 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-zbhjm" Jan 26 12:51:29 crc kubenswrapper[4881]: I0126 12:51:29.824627 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-zbhjm" Jan 26 12:51:29 crc kubenswrapper[4881]: I0126 12:51:29.873336 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-zbhjm" Jan 26 12:51:30 crc kubenswrapper[4881]: I0126 12:51:30.482972 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pl5b8"] Jan 26 12:51:30 crc kubenswrapper[4881]: I0126 12:51:30.761903 4881 generic.go:334] "Generic (PLEG): container finished" podID="3ec115c7-7e39-4d8e-9845-9bffe652b5e3" containerID="e1427ac57b928e04b65f326512b8988290218232a410725a0bd41e090baa0b3f" exitCode=0 Jan 26 12:51:30 crc kubenswrapper[4881]: I0126 12:51:30.761991 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v8kkw" event={"ID":"3ec115c7-7e39-4d8e-9845-9bffe652b5e3","Type":"ContainerDied","Data":"e1427ac57b928e04b65f326512b8988290218232a410725a0bd41e090baa0b3f"} Jan 26 12:51:30 crc kubenswrapper[4881]: I0126 12:51:30.762826 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-pl5b8" podUID="3d7c048b-0533-47b7-a19e-5b61dc0d92ff" containerName="registry-server" containerID="cri-o://b596cdd1287fe7372f0241251db3cd871efec0ffa2fbe5a2513c7859e65be748" gracePeriod=2 Jan 26 12:51:30 crc kubenswrapper[4881]: I0126 12:51:30.822275 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-zbhjm" Jan 26 12:51:31 crc kubenswrapper[4881]: I0126 12:51:31.148071 4881 util.go:48] "No ready sandbox for pod can be found. 
Jan 26 12:51:31 crc kubenswrapper[4881]: I0126 12:51:31.148071 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pl5b8"
Jan 26 12:51:31 crc kubenswrapper[4881]: I0126 12:51:31.327142 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7kgwx\" (UniqueName: \"kubernetes.io/projected/3d7c048b-0533-47b7-a19e-5b61dc0d92ff-kube-api-access-7kgwx\") pod \"3d7c048b-0533-47b7-a19e-5b61dc0d92ff\" (UID: \"3d7c048b-0533-47b7-a19e-5b61dc0d92ff\") "
Jan 26 12:51:31 crc kubenswrapper[4881]: I0126 12:51:31.327707 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d7c048b-0533-47b7-a19e-5b61dc0d92ff-utilities\") pod \"3d7c048b-0533-47b7-a19e-5b61dc0d92ff\" (UID: \"3d7c048b-0533-47b7-a19e-5b61dc0d92ff\") "
Jan 26 12:51:31 crc kubenswrapper[4881]: I0126 12:51:31.327787 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d7c048b-0533-47b7-a19e-5b61dc0d92ff-catalog-content\") pod \"3d7c048b-0533-47b7-a19e-5b61dc0d92ff\" (UID: \"3d7c048b-0533-47b7-a19e-5b61dc0d92ff\") "
Jan 26 12:51:31 crc kubenswrapper[4881]: I0126 12:51:31.334551 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d7c048b-0533-47b7-a19e-5b61dc0d92ff-utilities" (OuterVolumeSpecName: "utilities") pod "3d7c048b-0533-47b7-a19e-5b61dc0d92ff" (UID: "3d7c048b-0533-47b7-a19e-5b61dc0d92ff"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 12:51:31 crc kubenswrapper[4881]: I0126 12:51:31.339147 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d7c048b-0533-47b7-a19e-5b61dc0d92ff-kube-api-access-7kgwx" (OuterVolumeSpecName: "kube-api-access-7kgwx") pod "3d7c048b-0533-47b7-a19e-5b61dc0d92ff" (UID: "3d7c048b-0533-47b7-a19e-5b61dc0d92ff"). InnerVolumeSpecName "kube-api-access-7kgwx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 12:51:31 crc kubenswrapper[4881]: I0126 12:51:31.404286 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d7c048b-0533-47b7-a19e-5b61dc0d92ff-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3d7c048b-0533-47b7-a19e-5b61dc0d92ff" (UID: "3d7c048b-0533-47b7-a19e-5b61dc0d92ff"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 12:51:31 crc kubenswrapper[4881]: I0126 12:51:31.429954 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7kgwx\" (UniqueName: \"kubernetes.io/projected/3d7c048b-0533-47b7-a19e-5b61dc0d92ff-kube-api-access-7kgwx\") on node \"crc\" DevicePath \"\""
Jan 26 12:51:31 crc kubenswrapper[4881]: I0126 12:51:31.429996 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d7c048b-0533-47b7-a19e-5b61dc0d92ff-utilities\") on node \"crc\" DevicePath \"\""
Jan 26 12:51:31 crc kubenswrapper[4881]: I0126 12:51:31.430009 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d7c048b-0533-47b7-a19e-5b61dc0d92ff-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 26 12:51:31 crc kubenswrapper[4881]: I0126 12:51:31.769979 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v8kkw" event={"ID":"3ec115c7-7e39-4d8e-9845-9bffe652b5e3","Type":"ContainerStarted","Data":"47d21b8178daf7e95602017eb8602f420e92c63dfae338028de32f26e64def0c"}
Jan 26 12:51:31 crc kubenswrapper[4881]: I0126 12:51:31.772592 4881 generic.go:334] "Generic (PLEG): container finished" podID="3d7c048b-0533-47b7-a19e-5b61dc0d92ff" containerID="b596cdd1287fe7372f0241251db3cd871efec0ffa2fbe5a2513c7859e65be748" exitCode=0
Jan 26 12:51:31 crc kubenswrapper[4881]: I0126 12:51:31.772655 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pl5b8" event={"ID":"3d7c048b-0533-47b7-a19e-5b61dc0d92ff","Type":"ContainerDied","Data":"b596cdd1287fe7372f0241251db3cd871efec0ffa2fbe5a2513c7859e65be748"}
Jan 26 12:51:31 crc kubenswrapper[4881]: I0126 12:51:31.772692 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pl5b8" event={"ID":"3d7c048b-0533-47b7-a19e-5b61dc0d92ff","Type":"ContainerDied","Data":"6232944879a380c09f216116bcf421aafcc8bb0ce93e8457f2853b4341f75daa"}
Jan 26 12:51:31 crc kubenswrapper[4881]: I0126 12:51:31.772725 4881 scope.go:117] "RemoveContainer" containerID="b596cdd1287fe7372f0241251db3cd871efec0ffa2fbe5a2513c7859e65be748"
Jan 26 12:51:31 crc kubenswrapper[4881]: I0126 12:51:31.772662 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pl5b8"
Jan 26 12:51:31 crc kubenswrapper[4881]: I0126 12:51:31.803258 4881 scope.go:117] "RemoveContainer" containerID="d5d48c6979620a2e4e48ddea5327a1ad2314f6513009aec098acabc28308ede7"
Jan 26 12:51:31 crc kubenswrapper[4881]: I0126 12:51:31.805218 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-v8kkw" podStartSLOduration=3.355313222 podStartE2EDuration="5.805192475s" podCreationTimestamp="2026-01-26 12:51:26 +0000 UTC" firstStartedPulling="2026-01-26 12:51:28.748130996 +0000 UTC m=+961.227441032" lastFinishedPulling="2026-01-26 12:51:31.198010209 +0000 UTC m=+963.677320285" observedRunningTime="2026-01-26 12:51:31.799044344 +0000 UTC m=+964.278354390" watchObservedRunningTime="2026-01-26 12:51:31.805192475 +0000 UTC m=+964.284502521"
Jan 26 12:51:31 crc kubenswrapper[4881]: I0126 12:51:31.821606 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pl5b8"]
Jan 26 12:51:31 crc kubenswrapper[4881]: I0126 12:51:31.827929 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-pl5b8"]
Jan 26 12:51:31 crc kubenswrapper[4881]: I0126 12:51:31.845689 4881 scope.go:117] "RemoveContainer" containerID="a6a1a83b8eb0f71ae784ef630e57a75389ff36adc4b35613a9e6e4e95620dd23"
Jan 26 12:51:31 crc kubenswrapper[4881]: I0126 12:51:31.865306 4881 scope.go:117] "RemoveContainer" containerID="b596cdd1287fe7372f0241251db3cd871efec0ffa2fbe5a2513c7859e65be748"
Jan 26 12:51:31 crc kubenswrapper[4881]: E0126 12:51:31.865871 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b596cdd1287fe7372f0241251db3cd871efec0ffa2fbe5a2513c7859e65be748\": container with ID starting with b596cdd1287fe7372f0241251db3cd871efec0ffa2fbe5a2513c7859e65be748 not found: ID does not exist" containerID="b596cdd1287fe7372f0241251db3cd871efec0ffa2fbe5a2513c7859e65be748"
Jan 26 12:51:31 crc kubenswrapper[4881]: I0126 12:51:31.865911 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b596cdd1287fe7372f0241251db3cd871efec0ffa2fbe5a2513c7859e65be748"} err="failed to get container status \"b596cdd1287fe7372f0241251db3cd871efec0ffa2fbe5a2513c7859e65be748\": rpc error: code = NotFound desc = could not find container \"b596cdd1287fe7372f0241251db3cd871efec0ffa2fbe5a2513c7859e65be748\": container with ID starting with b596cdd1287fe7372f0241251db3cd871efec0ffa2fbe5a2513c7859e65be748 not found: ID does not exist"
Jan 26 12:51:31 crc kubenswrapper[4881]: I0126 12:51:31.865938 4881 scope.go:117] "RemoveContainer" containerID="d5d48c6979620a2e4e48ddea5327a1ad2314f6513009aec098acabc28308ede7"
Jan 26 12:51:31 crc kubenswrapper[4881]: E0126 12:51:31.866302 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d5d48c6979620a2e4e48ddea5327a1ad2314f6513009aec098acabc28308ede7\": container with ID starting with d5d48c6979620a2e4e48ddea5327a1ad2314f6513009aec098acabc28308ede7 not found: ID does not exist" containerID="d5d48c6979620a2e4e48ddea5327a1ad2314f6513009aec098acabc28308ede7"
Jan 26 12:51:31 crc kubenswrapper[4881]: I0126 12:51:31.866398 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d5d48c6979620a2e4e48ddea5327a1ad2314f6513009aec098acabc28308ede7"} err="failed to get container status \"d5d48c6979620a2e4e48ddea5327a1ad2314f6513009aec098acabc28308ede7\": rpc error: code = NotFound desc = could not find container \"d5d48c6979620a2e4e48ddea5327a1ad2314f6513009aec098acabc28308ede7\": container with ID starting with d5d48c6979620a2e4e48ddea5327a1ad2314f6513009aec098acabc28308ede7 not found: ID does not exist"
Jan 26 12:51:31 crc kubenswrapper[4881]: I0126 12:51:31.866431 4881 scope.go:117] "RemoveContainer" containerID="a6a1a83b8eb0f71ae784ef630e57a75389ff36adc4b35613a9e6e4e95620dd23"
Jan 26 12:51:31 crc kubenswrapper[4881]: E0126 12:51:31.866780 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a6a1a83b8eb0f71ae784ef630e57a75389ff36adc4b35613a9e6e4e95620dd23\": container with ID starting with a6a1a83b8eb0f71ae784ef630e57a75389ff36adc4b35613a9e6e4e95620dd23 not found: ID does not exist" containerID="a6a1a83b8eb0f71ae784ef630e57a75389ff36adc4b35613a9e6e4e95620dd23"
Jan 26 12:51:31 crc kubenswrapper[4881]: I0126 12:51:31.866811 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6a1a83b8eb0f71ae784ef630e57a75389ff36adc4b35613a9e6e4e95620dd23"} err="failed to get container status \"a6a1a83b8eb0f71ae784ef630e57a75389ff36adc4b35613a9e6e4e95620dd23\": rpc error: code = NotFound desc = could not find container \"a6a1a83b8eb0f71ae784ef630e57a75389ff36adc4b35613a9e6e4e95620dd23\": container with ID starting with a6a1a83b8eb0f71ae784ef630e57a75389ff36adc4b35613a9e6e4e95620dd23 not found: ID does not exist"
Jan 26 12:51:32 crc kubenswrapper[4881]: I0126 12:51:32.097140 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d7c048b-0533-47b7-a19e-5b61dc0d92ff" path="/var/lib/kubelet/pods/3d7c048b-0533-47b7-a19e-5b61dc0d92ff/volumes"
Jan 26 12:51:34 crc kubenswrapper[4881]: I0126 12:51:34.074110 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zbhjm"]
Jan 26 12:51:34 crc kubenswrapper[4881]: I0126 12:51:34.074609 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-zbhjm" podUID="dd64b8a7-af19-47c5-bf02-a7fc0208db48" containerName="registry-server" containerID="cri-o://690be947081b0158d180100b2611f9c4463977c7f5e279499a5c5dd054229e86" gracePeriod=2
Jan 26 12:51:34 crc kubenswrapper[4881]: I0126 12:51:34.799357 4881 generic.go:334] "Generic (PLEG): container finished" podID="dd64b8a7-af19-47c5-bf02-a7fc0208db48" containerID="690be947081b0158d180100b2611f9c4463977c7f5e279499a5c5dd054229e86" exitCode=0
Jan 26 12:51:34 crc kubenswrapper[4881]: I0126 12:51:34.799398 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zbhjm" event={"ID":"dd64b8a7-af19-47c5-bf02-a7fc0208db48","Type":"ContainerDied","Data":"690be947081b0158d180100b2611f9c4463977c7f5e279499a5c5dd054229e86"}
Jan 26 12:51:34 crc kubenswrapper[4881]: I0126 12:51:34.917932 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zbhjm"
Jan 26 12:51:35 crc kubenswrapper[4881]: I0126 12:51:35.093749 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-slpnp\" (UniqueName: \"kubernetes.io/projected/dd64b8a7-af19-47c5-bf02-a7fc0208db48-kube-api-access-slpnp\") pod \"dd64b8a7-af19-47c5-bf02-a7fc0208db48\" (UID: \"dd64b8a7-af19-47c5-bf02-a7fc0208db48\") "
Jan 26 12:51:35 crc kubenswrapper[4881]: I0126 12:51:35.094077 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd64b8a7-af19-47c5-bf02-a7fc0208db48-catalog-content\") pod \"dd64b8a7-af19-47c5-bf02-a7fc0208db48\" (UID: \"dd64b8a7-af19-47c5-bf02-a7fc0208db48\") "
Jan 26 12:51:35 crc kubenswrapper[4881]: I0126 12:51:35.094126 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd64b8a7-af19-47c5-bf02-a7fc0208db48-utilities\") pod \"dd64b8a7-af19-47c5-bf02-a7fc0208db48\" (UID: \"dd64b8a7-af19-47c5-bf02-a7fc0208db48\") "
Jan 26 12:51:35 crc kubenswrapper[4881]: I0126 12:51:35.095609 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd64b8a7-af19-47c5-bf02-a7fc0208db48-utilities" (OuterVolumeSpecName: "utilities") pod "dd64b8a7-af19-47c5-bf02-a7fc0208db48" (UID: "dd64b8a7-af19-47c5-bf02-a7fc0208db48"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 12:51:35 crc kubenswrapper[4881]: I0126 12:51:35.099329 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd64b8a7-af19-47c5-bf02-a7fc0208db48-kube-api-access-slpnp" (OuterVolumeSpecName: "kube-api-access-slpnp") pod "dd64b8a7-af19-47c5-bf02-a7fc0208db48" (UID: "dd64b8a7-af19-47c5-bf02-a7fc0208db48"). InnerVolumeSpecName "kube-api-access-slpnp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 12:51:35 crc kubenswrapper[4881]: I0126 12:51:35.119071 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd64b8a7-af19-47c5-bf02-a7fc0208db48-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dd64b8a7-af19-47c5-bf02-a7fc0208db48" (UID: "dd64b8a7-af19-47c5-bf02-a7fc0208db48"). InnerVolumeSpecName "catalog-content".
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 12:51:35 crc kubenswrapper[4881]: I0126 12:51:35.195272 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-slpnp\" (UniqueName: \"kubernetes.io/projected/dd64b8a7-af19-47c5-bf02-a7fc0208db48-kube-api-access-slpnp\") on node \"crc\" DevicePath \"\"" Jan 26 12:51:35 crc kubenswrapper[4881]: I0126 12:51:35.195310 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd64b8a7-af19-47c5-bf02-a7fc0208db48-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 12:51:35 crc kubenswrapper[4881]: I0126 12:51:35.195324 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd64b8a7-af19-47c5-bf02-a7fc0208db48-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 12:51:35 crc kubenswrapper[4881]: I0126 12:51:35.810281 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zbhjm" event={"ID":"dd64b8a7-af19-47c5-bf02-a7fc0208db48","Type":"ContainerDied","Data":"48b9b6e8094965923e24f52858aba5dd1ae007c1e8bde3e37fd7d04336a29031"} Jan 26 12:51:35 crc kubenswrapper[4881]: I0126 12:51:35.810371 4881 scope.go:117] "RemoveContainer" containerID="690be947081b0158d180100b2611f9c4463977c7f5e279499a5c5dd054229e86" Jan 26 12:51:35 crc kubenswrapper[4881]: I0126 12:51:35.810402 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zbhjm" Jan 26 12:51:35 crc kubenswrapper[4881]: I0126 12:51:35.841302 4881 scope.go:117] "RemoveContainer" containerID="ebe68aa6cc33a1d4d956663dbef6389c78c5a4053c263c98296378ada7e8744e" Jan 26 12:51:35 crc kubenswrapper[4881]: I0126 12:51:35.862272 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zbhjm"] Jan 26 12:51:35 crc kubenswrapper[4881]: I0126 12:51:35.862414 4881 scope.go:117] "RemoveContainer" containerID="cfc38d7a2bcb734bb3677e807d4f6f2235ef09398e750c6fc5fdde49da0f069d" Jan 26 12:51:35 crc kubenswrapper[4881]: I0126 12:51:35.880296 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-zbhjm"] Jan 26 12:51:35 crc kubenswrapper[4881]: I0126 12:51:35.953477 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084hgnp"] Jan 26 12:51:35 crc kubenswrapper[4881]: E0126 12:51:35.953790 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd64b8a7-af19-47c5-bf02-a7fc0208db48" containerName="registry-server" Jan 26 12:51:35 crc kubenswrapper[4881]: I0126 12:51:35.953819 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd64b8a7-af19-47c5-bf02-a7fc0208db48" containerName="registry-server" Jan 26 12:51:35 crc kubenswrapper[4881]: E0126 12:51:35.953841 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd64b8a7-af19-47c5-bf02-a7fc0208db48" containerName="extract-utilities" Jan 26 12:51:35 crc kubenswrapper[4881]: I0126 12:51:35.953854 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd64b8a7-af19-47c5-bf02-a7fc0208db48" containerName="extract-utilities" Jan 26 12:51:35 crc kubenswrapper[4881]: E0126 12:51:35.953876 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d7c048b-0533-47b7-a19e-5b61dc0d92ff" containerName="extract-utilities" Jan 26 12:51:35 crc kubenswrapper[4881]: I0126 12:51:35.953888 4881 
state_mem.go:107] "Deleted CPUSet assignment" podUID="3d7c048b-0533-47b7-a19e-5b61dc0d92ff" containerName="extract-utilities" Jan 26 12:51:35 crc kubenswrapper[4881]: E0126 12:51:35.953907 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d7c048b-0533-47b7-a19e-5b61dc0d92ff" containerName="extract-content" Jan 26 12:51:35 crc kubenswrapper[4881]: I0126 12:51:35.953918 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d7c048b-0533-47b7-a19e-5b61dc0d92ff" containerName="extract-content" Jan 26 12:51:35 crc kubenswrapper[4881]: E0126 12:51:35.953937 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d7c048b-0533-47b7-a19e-5b61dc0d92ff" containerName="registry-server" Jan 26 12:51:35 crc kubenswrapper[4881]: I0126 12:51:35.953949 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d7c048b-0533-47b7-a19e-5b61dc0d92ff" containerName="registry-server" Jan 26 12:51:35 crc kubenswrapper[4881]: E0126 12:51:35.953972 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd64b8a7-af19-47c5-bf02-a7fc0208db48" containerName="extract-content" Jan 26 12:51:35 crc kubenswrapper[4881]: I0126 12:51:35.953984 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd64b8a7-af19-47c5-bf02-a7fc0208db48" containerName="extract-content" Jan 26 12:51:35 crc kubenswrapper[4881]: I0126 12:51:35.954144 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d7c048b-0533-47b7-a19e-5b61dc0d92ff" containerName="registry-server" Jan 26 12:51:35 crc kubenswrapper[4881]: I0126 12:51:35.954177 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd64b8a7-af19-47c5-bf02-a7fc0208db48" containerName="registry-server" Jan 26 12:51:35 crc kubenswrapper[4881]: I0126 12:51:35.955490 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084hgnp" Jan 26 12:51:35 crc kubenswrapper[4881]: I0126 12:51:35.957341 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 26 12:51:35 crc kubenswrapper[4881]: I0126 12:51:35.961141 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084hgnp"] Jan 26 12:51:36 crc kubenswrapper[4881]: I0126 12:51:36.090024 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd64b8a7-af19-47c5-bf02-a7fc0208db48" path="/var/lib/kubelet/pods/dd64b8a7-af19-47c5-bf02-a7fc0208db48/volumes" Jan 26 12:51:36 crc kubenswrapper[4881]: I0126 12:51:36.108225 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5ccdbf22-7f0b-489c-bf4c-22ce230c429a-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084hgnp\" (UID: \"5ccdbf22-7f0b-489c-bf4c-22ce230c429a\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084hgnp" Jan 26 12:51:36 crc kubenswrapper[4881]: I0126 12:51:36.108323 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9swp\" (UniqueName: \"kubernetes.io/projected/5ccdbf22-7f0b-489c-bf4c-22ce230c429a-kube-api-access-f9swp\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084hgnp\" (UID: \"5ccdbf22-7f0b-489c-bf4c-22ce230c429a\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084hgnp" Jan 26 12:51:36 crc kubenswrapper[4881]: I0126 12:51:36.108370 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5ccdbf22-7f0b-489c-bf4c-22ce230c429a-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084hgnp\" (UID: \"5ccdbf22-7f0b-489c-bf4c-22ce230c429a\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084hgnp" Jan 26 12:51:36 crc kubenswrapper[4881]: I0126 12:51:36.209275 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f9swp\" (UniqueName: \"kubernetes.io/projected/5ccdbf22-7f0b-489c-bf4c-22ce230c429a-kube-api-access-f9swp\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084hgnp\" (UID: \"5ccdbf22-7f0b-489c-bf4c-22ce230c429a\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084hgnp" Jan 26 12:51:36 crc kubenswrapper[4881]: I0126 12:51:36.209385 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5ccdbf22-7f0b-489c-bf4c-22ce230c429a-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084hgnp\" (UID: \"5ccdbf22-7f0b-489c-bf4c-22ce230c429a\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084hgnp" Jan 26 12:51:36 crc kubenswrapper[4881]: I0126 12:51:36.209479 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5ccdbf22-7f0b-489c-bf4c-22ce230c429a-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084hgnp\" (UID: \"5ccdbf22-7f0b-489c-bf4c-22ce230c429a\") " 
pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084hgnp" Jan 26 12:51:36 crc kubenswrapper[4881]: I0126 12:51:36.210205 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5ccdbf22-7f0b-489c-bf4c-22ce230c429a-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084hgnp\" (UID: \"5ccdbf22-7f0b-489c-bf4c-22ce230c429a\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084hgnp" Jan 26 12:51:36 crc kubenswrapper[4881]: I0126 12:51:36.210353 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5ccdbf22-7f0b-489c-bf4c-22ce230c429a-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084hgnp\" (UID: \"5ccdbf22-7f0b-489c-bf4c-22ce230c429a\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084hgnp" Jan 26 12:51:36 crc kubenswrapper[4881]: I0126 12:51:36.236248 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9swp\" (UniqueName: \"kubernetes.io/projected/5ccdbf22-7f0b-489c-bf4c-22ce230c429a-kube-api-access-f9swp\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084hgnp\" (UID: \"5ccdbf22-7f0b-489c-bf4c-22ce230c429a\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084hgnp" Jan 26 12:51:36 crc kubenswrapper[4881]: I0126 12:51:36.276706 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084hgnp" Jan 26 12:51:36 crc kubenswrapper[4881]: I0126 12:51:36.518658 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084hgnp"] Jan 26 12:51:36 crc kubenswrapper[4881]: W0126 12:51:36.526420 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5ccdbf22_7f0b_489c_bf4c_22ce230c429a.slice/crio-db93553489feba4bf9f94bc71a5e55619b6d0ed06e4bc56b18ced2a336f669c8 WatchSource:0}: Error finding container db93553489feba4bf9f94bc71a5e55619b6d0ed06e4bc56b18ced2a336f669c8: Status 404 returned error can't find the container with id db93553489feba4bf9f94bc71a5e55619b6d0ed06e4bc56b18ced2a336f669c8 Jan 26 12:51:36 crc kubenswrapper[4881]: I0126 12:51:36.820076 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084hgnp" event={"ID":"5ccdbf22-7f0b-489c-bf4c-22ce230c429a","Type":"ContainerStarted","Data":"db93553489feba4bf9f94bc71a5e55619b6d0ed06e4bc56b18ced2a336f669c8"} Jan 26 12:51:36 crc kubenswrapper[4881]: I0126 12:51:36.871044 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-v8kkw" Jan 26 12:51:36 crc kubenswrapper[4881]: I0126 12:51:36.871148 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-v8kkw" Jan 26 12:51:37 crc kubenswrapper[4881]: I0126 12:51:37.828975 4881 generic.go:334] "Generic (PLEG): container finished" podID="5ccdbf22-7f0b-489c-bf4c-22ce230c429a" containerID="2e5d4cdbfcc2196420cbc271d035fa5ce486a17ea9b3bc75aefa823ad90fb400" exitCode=0 Jan 26 12:51:37 crc kubenswrapper[4881]: I0126 12:51:37.829116 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084hgnp" event={"ID":"5ccdbf22-7f0b-489c-bf4c-22ce230c429a","Type":"ContainerDied","Data":"2e5d4cdbfcc2196420cbc271d035fa5ce486a17ea9b3bc75aefa823ad90fb400"} Jan 26 12:51:37 crc kubenswrapper[4881]: I0126 12:51:37.963454 4881 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-v8kkw" podUID="3ec115c7-7e39-4d8e-9845-9bffe652b5e3" containerName="registry-server" probeResult="failure" output=< Jan 26 12:51:37 crc kubenswrapper[4881]: timeout: failed to connect service ":50051" within 1s Jan 26 12:51:37 crc kubenswrapper[4881]: > Jan 26 12:51:39 crc kubenswrapper[4881]: I0126 12:51:39.843502 4881 generic.go:334] "Generic (PLEG): container finished" podID="5ccdbf22-7f0b-489c-bf4c-22ce230c429a" containerID="6d30cc6c21ee469ea1f5dc319763a2b5c98f2a7ede04cde2b79cce84f66ba49b" exitCode=0 Jan 26 12:51:39 crc kubenswrapper[4881]: I0126 12:51:39.843592 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084hgnp" event={"ID":"5ccdbf22-7f0b-489c-bf4c-22ce230c429a","Type":"ContainerDied","Data":"6d30cc6c21ee469ea1f5dc319763a2b5c98f2a7ede04cde2b79cce84f66ba49b"} Jan 26 12:51:40 crc kubenswrapper[4881]: I0126 12:51:40.852781 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084hgnp" event={"ID":"5ccdbf22-7f0b-489c-bf4c-22ce230c429a","Type":"ContainerStarted","Data":"45dfd664e9b4a5c0d031a85f89cf43982a6486c38de85f8f22f03411c837c484"} Jan 26 12:51:40 crc kubenswrapper[4881]: I0126 12:51:40.876893 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084hgnp" podStartSLOduration=4.317167722 podStartE2EDuration="5.876860938s" podCreationTimestamp="2026-01-26 12:51:35 +0000 UTC" firstStartedPulling="2026-01-26 12:51:37.831215526 +0000 UTC m=+970.310525582" lastFinishedPulling="2026-01-26 12:51:39.390908762 +0000 UTC m=+971.870218798" observedRunningTime="2026-01-26 12:51:40.876752805 +0000 UTC m=+973.356062861" watchObservedRunningTime="2026-01-26 12:51:40.876860938 +0000 UTC m=+973.356170994" Jan 26 12:51:42 crc kubenswrapper[4881]: I0126 12:51:42.867926 4881 generic.go:334] "Generic (PLEG): container finished" podID="5ccdbf22-7f0b-489c-bf4c-22ce230c429a" containerID="45dfd664e9b4a5c0d031a85f89cf43982a6486c38de85f8f22f03411c837c484" exitCode=0 Jan 26 12:51:42 crc kubenswrapper[4881]: I0126 12:51:42.868308 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084hgnp" event={"ID":"5ccdbf22-7f0b-489c-bf4c-22ce230c429a","Type":"ContainerDied","Data":"45dfd664e9b4a5c0d031a85f89cf43982a6486c38de85f8f22f03411c837c484"} Jan 26 12:51:44 crc kubenswrapper[4881]: I0126 12:51:44.216744 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084hgnp" Jan 26 12:51:44 crc kubenswrapper[4881]: I0126 12:51:44.358548 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5ccdbf22-7f0b-489c-bf4c-22ce230c429a-util\") pod \"5ccdbf22-7f0b-489c-bf4c-22ce230c429a\" (UID: \"5ccdbf22-7f0b-489c-bf4c-22ce230c429a\") " Jan 26 12:51:44 crc kubenswrapper[4881]: I0126 12:51:44.358744 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5ccdbf22-7f0b-489c-bf4c-22ce230c429a-bundle\") pod \"5ccdbf22-7f0b-489c-bf4c-22ce230c429a\" (UID: \"5ccdbf22-7f0b-489c-bf4c-22ce230c429a\") " Jan 26 12:51:44 crc kubenswrapper[4881]: I0126 12:51:44.358811 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f9swp\" (UniqueName: \"kubernetes.io/projected/5ccdbf22-7f0b-489c-bf4c-22ce230c429a-kube-api-access-f9swp\") pod \"5ccdbf22-7f0b-489c-bf4c-22ce230c429a\" (UID: \"5ccdbf22-7f0b-489c-bf4c-22ce230c429a\") " Jan 26 12:51:44 crc kubenswrapper[4881]: I0126 12:51:44.362646 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ccdbf22-7f0b-489c-bf4c-22ce230c429a-bundle" (OuterVolumeSpecName: "bundle") pod "5ccdbf22-7f0b-489c-bf4c-22ce230c429a" (UID: "5ccdbf22-7f0b-489c-bf4c-22ce230c429a"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 12:51:44 crc kubenswrapper[4881]: I0126 12:51:44.365492 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ccdbf22-7f0b-489c-bf4c-22ce230c429a-kube-api-access-f9swp" (OuterVolumeSpecName: "kube-api-access-f9swp") pod "5ccdbf22-7f0b-489c-bf4c-22ce230c429a" (UID: "5ccdbf22-7f0b-489c-bf4c-22ce230c429a"). InnerVolumeSpecName "kube-api-access-f9swp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:51:44 crc kubenswrapper[4881]: I0126 12:51:44.375958 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ccdbf22-7f0b-489c-bf4c-22ce230c429a-util" (OuterVolumeSpecName: "util") pod "5ccdbf22-7f0b-489c-bf4c-22ce230c429a" (UID: "5ccdbf22-7f0b-489c-bf4c-22ce230c429a"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 12:51:44 crc kubenswrapper[4881]: I0126 12:51:44.460136 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f9swp\" (UniqueName: \"kubernetes.io/projected/5ccdbf22-7f0b-489c-bf4c-22ce230c429a-kube-api-access-f9swp\") on node \"crc\" DevicePath \"\"" Jan 26 12:51:44 crc kubenswrapper[4881]: I0126 12:51:44.460202 4881 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5ccdbf22-7f0b-489c-bf4c-22ce230c429a-util\") on node \"crc\" DevicePath \"\"" Jan 26 12:51:44 crc kubenswrapper[4881]: I0126 12:51:44.460233 4881 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5ccdbf22-7f0b-489c-bf4c-22ce230c429a-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 12:51:44 crc kubenswrapper[4881]: I0126 12:51:44.884667 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084hgnp" event={"ID":"5ccdbf22-7f0b-489c-bf4c-22ce230c429a","Type":"ContainerDied","Data":"db93553489feba4bf9f94bc71a5e55619b6d0ed06e4bc56b18ced2a336f669c8"} Jan 26 12:51:44 crc kubenswrapper[4881]: I0126 12:51:44.884707 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="db93553489feba4bf9f94bc71a5e55619b6d0ed06e4bc56b18ced2a336f669c8" Jan 26 12:51:44 crc kubenswrapper[4881]: I0126 12:51:44.885008 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084hgnp" Jan 26 12:51:46 crc kubenswrapper[4881]: I0126 12:51:46.915656 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-v8kkw" Jan 26 12:51:46 crc kubenswrapper[4881]: I0126 12:51:46.964894 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-v8kkw" Jan 26 12:51:49 crc kubenswrapper[4881]: I0126 12:51:49.278833 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-v8kkw"] Jan 26 12:51:49 crc kubenswrapper[4881]: I0126 12:51:49.280225 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-v8kkw" podUID="3ec115c7-7e39-4d8e-9845-9bffe652b5e3" containerName="registry-server" containerID="cri-o://47d21b8178daf7e95602017eb8602f420e92c63dfae338028de32f26e64def0c" gracePeriod=2 Jan 26 12:51:49 crc kubenswrapper[4881]: I0126 12:51:49.732189 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-v8kkw" Jan 26 12:51:49 crc kubenswrapper[4881]: I0126 12:51:49.847868 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ec115c7-7e39-4d8e-9845-9bffe652b5e3-catalog-content\") pod \"3ec115c7-7e39-4d8e-9845-9bffe652b5e3\" (UID: \"3ec115c7-7e39-4d8e-9845-9bffe652b5e3\") " Jan 26 12:51:49 crc kubenswrapper[4881]: I0126 12:51:49.847985 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5n8zw\" (UniqueName: \"kubernetes.io/projected/3ec115c7-7e39-4d8e-9845-9bffe652b5e3-kube-api-access-5n8zw\") pod \"3ec115c7-7e39-4d8e-9845-9bffe652b5e3\" (UID: \"3ec115c7-7e39-4d8e-9845-9bffe652b5e3\") " Jan 26 12:51:49 crc kubenswrapper[4881]: I0126 12:51:49.848021 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ec115c7-7e39-4d8e-9845-9bffe652b5e3-utilities\") pod \"3ec115c7-7e39-4d8e-9845-9bffe652b5e3\" (UID: \"3ec115c7-7e39-4d8e-9845-9bffe652b5e3\") " Jan 26 12:51:49 crc kubenswrapper[4881]: I0126 12:51:49.848913 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ec115c7-7e39-4d8e-9845-9bffe652b5e3-utilities" (OuterVolumeSpecName: "utilities") pod "3ec115c7-7e39-4d8e-9845-9bffe652b5e3" (UID: "3ec115c7-7e39-4d8e-9845-9bffe652b5e3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 12:51:49 crc kubenswrapper[4881]: I0126 12:51:49.856806 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ec115c7-7e39-4d8e-9845-9bffe652b5e3-kube-api-access-5n8zw" (OuterVolumeSpecName: "kube-api-access-5n8zw") pod "3ec115c7-7e39-4d8e-9845-9bffe652b5e3" (UID: "3ec115c7-7e39-4d8e-9845-9bffe652b5e3"). InnerVolumeSpecName "kube-api-access-5n8zw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:51:49 crc kubenswrapper[4881]: I0126 12:51:49.914399 4881 generic.go:334] "Generic (PLEG): container finished" podID="3ec115c7-7e39-4d8e-9845-9bffe652b5e3" containerID="47d21b8178daf7e95602017eb8602f420e92c63dfae338028de32f26e64def0c" exitCode=0 Jan 26 12:51:49 crc kubenswrapper[4881]: I0126 12:51:49.914437 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v8kkw" event={"ID":"3ec115c7-7e39-4d8e-9845-9bffe652b5e3","Type":"ContainerDied","Data":"47d21b8178daf7e95602017eb8602f420e92c63dfae338028de32f26e64def0c"} Jan 26 12:51:49 crc kubenswrapper[4881]: I0126 12:51:49.914453 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-v8kkw" Jan 26 12:51:49 crc kubenswrapper[4881]: I0126 12:51:49.914478 4881 scope.go:117] "RemoveContainer" containerID="47d21b8178daf7e95602017eb8602f420e92c63dfae338028de32f26e64def0c" Jan 26 12:51:49 crc kubenswrapper[4881]: I0126 12:51:49.914463 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v8kkw" event={"ID":"3ec115c7-7e39-4d8e-9845-9bffe652b5e3","Type":"ContainerDied","Data":"1c1cb9ef38bb1ef32c9557b287ee158b08ede46cccb806f8505d6363d6d74c8f"} Jan 26 12:51:49 crc kubenswrapper[4881]: I0126 12:51:49.934362 4881 scope.go:117] "RemoveContainer" containerID="e1427ac57b928e04b65f326512b8988290218232a410725a0bd41e090baa0b3f" Jan 26 12:51:49 crc kubenswrapper[4881]: I0126 12:51:49.949394 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5n8zw\" (UniqueName: \"kubernetes.io/projected/3ec115c7-7e39-4d8e-9845-9bffe652b5e3-kube-api-access-5n8zw\") on node \"crc\" DevicePath \"\"" Jan 26 12:51:49 crc kubenswrapper[4881]: I0126 12:51:49.949452 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ec115c7-7e39-4d8e-9845-9bffe652b5e3-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 12:51:49 crc kubenswrapper[4881]: I0126 12:51:49.966263 4881 scope.go:117] "RemoveContainer" containerID="77b68ce2b40320a607375cde72cb612940b18555b3c78b7a9ffb4292524a06c1" Jan 26 12:51:49 crc kubenswrapper[4881]: I0126 12:51:49.976344 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ec115c7-7e39-4d8e-9845-9bffe652b5e3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3ec115c7-7e39-4d8e-9845-9bffe652b5e3" (UID: "3ec115c7-7e39-4d8e-9845-9bffe652b5e3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 12:51:50 crc kubenswrapper[4881]: I0126 12:51:49.998538 4881 scope.go:117] "RemoveContainer" containerID="47d21b8178daf7e95602017eb8602f420e92c63dfae338028de32f26e64def0c" Jan 26 12:51:50 crc kubenswrapper[4881]: E0126 12:51:49.999946 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"47d21b8178daf7e95602017eb8602f420e92c63dfae338028de32f26e64def0c\": container with ID starting with 47d21b8178daf7e95602017eb8602f420e92c63dfae338028de32f26e64def0c not found: ID does not exist" containerID="47d21b8178daf7e95602017eb8602f420e92c63dfae338028de32f26e64def0c" Jan 26 12:51:50 crc kubenswrapper[4881]: I0126 12:51:49.999999 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47d21b8178daf7e95602017eb8602f420e92c63dfae338028de32f26e64def0c"} err="failed to get container status \"47d21b8178daf7e95602017eb8602f420e92c63dfae338028de32f26e64def0c\": rpc error: code = NotFound desc = could not find container \"47d21b8178daf7e95602017eb8602f420e92c63dfae338028de32f26e64def0c\": container with ID starting with 47d21b8178daf7e95602017eb8602f420e92c63dfae338028de32f26e64def0c not found: ID does not exist" Jan 26 12:51:50 crc kubenswrapper[4881]: I0126 12:51:50.000030 4881 scope.go:117] "RemoveContainer" containerID="e1427ac57b928e04b65f326512b8988290218232a410725a0bd41e090baa0b3f" Jan 26 12:51:50 crc kubenswrapper[4881]: E0126 12:51:50.001952 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e1427ac57b928e04b65f326512b8988290218232a410725a0bd41e090baa0b3f\": container with ID starting with e1427ac57b928e04b65f326512b8988290218232a410725a0bd41e090baa0b3f not found: ID does not exist" containerID="e1427ac57b928e04b65f326512b8988290218232a410725a0bd41e090baa0b3f" Jan 26 12:51:50 crc kubenswrapper[4881]: I0126 12:51:50.001980 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1427ac57b928e04b65f326512b8988290218232a410725a0bd41e090baa0b3f"} err="failed to get container status \"e1427ac57b928e04b65f326512b8988290218232a410725a0bd41e090baa0b3f\": rpc error: code = NotFound desc = could not find container \"e1427ac57b928e04b65f326512b8988290218232a410725a0bd41e090baa0b3f\": container with ID starting with e1427ac57b928e04b65f326512b8988290218232a410725a0bd41e090baa0b3f not found: ID does not exist" Jan 26 12:51:50 crc kubenswrapper[4881]: I0126 12:51:50.001997 4881 scope.go:117] "RemoveContainer" containerID="77b68ce2b40320a607375cde72cb612940b18555b3c78b7a9ffb4292524a06c1" Jan 26 12:51:50 crc kubenswrapper[4881]: E0126 12:51:50.002224 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"77b68ce2b40320a607375cde72cb612940b18555b3c78b7a9ffb4292524a06c1\": container with ID starting with 77b68ce2b40320a607375cde72cb612940b18555b3c78b7a9ffb4292524a06c1 not found: ID does not exist" containerID="77b68ce2b40320a607375cde72cb612940b18555b3c78b7a9ffb4292524a06c1" Jan 26 12:51:50 crc kubenswrapper[4881]: I0126 12:51:50.002244 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77b68ce2b40320a607375cde72cb612940b18555b3c78b7a9ffb4292524a06c1"} err="failed to get container status \"77b68ce2b40320a607375cde72cb612940b18555b3c78b7a9ffb4292524a06c1\": rpc error: code = NotFound desc = could not 
find container \"77b68ce2b40320a607375cde72cb612940b18555b3c78b7a9ffb4292524a06c1\": container with ID starting with 77b68ce2b40320a607375cde72cb612940b18555b3c78b7a9ffb4292524a06c1 not found: ID does not exist" Jan 26 12:51:50 crc kubenswrapper[4881]: I0126 12:51:50.050637 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ec115c7-7e39-4d8e-9845-9bffe652b5e3-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 12:51:50 crc kubenswrapper[4881]: I0126 12:51:50.236463 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-v8kkw"] Jan 26 12:51:50 crc kubenswrapper[4881]: I0126 12:51:50.247729 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-v8kkw"] Jan 26 12:51:52 crc kubenswrapper[4881]: I0126 12:51:52.090025 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ec115c7-7e39-4d8e-9845-9bffe652b5e3" path="/var/lib/kubelet/pods/3ec115c7-7e39-4d8e-9845-9bffe652b5e3/volumes" Jan 26 12:51:53 crc kubenswrapper[4881]: I0126 12:51:53.084610 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-ss7tt"] Jan 26 12:51:53 crc kubenswrapper[4881]: E0126 12:51:53.084886 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ec115c7-7e39-4d8e-9845-9bffe652b5e3" containerName="registry-server" Jan 26 12:51:53 crc kubenswrapper[4881]: I0126 12:51:53.084904 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ec115c7-7e39-4d8e-9845-9bffe652b5e3" containerName="registry-server" Jan 26 12:51:53 crc kubenswrapper[4881]: E0126 12:51:53.084919 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ec115c7-7e39-4d8e-9845-9bffe652b5e3" containerName="extract-utilities" Jan 26 12:51:53 crc kubenswrapper[4881]: I0126 12:51:53.084929 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ec115c7-7e39-4d8e-9845-9bffe652b5e3" containerName="extract-utilities" Jan 26 12:51:53 crc kubenswrapper[4881]: E0126 12:51:53.084949 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ccdbf22-7f0b-489c-bf4c-22ce230c429a" containerName="pull" Jan 26 12:51:53 crc kubenswrapper[4881]: I0126 12:51:53.084960 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ccdbf22-7f0b-489c-bf4c-22ce230c429a" containerName="pull" Jan 26 12:51:53 crc kubenswrapper[4881]: E0126 12:51:53.084977 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ccdbf22-7f0b-489c-bf4c-22ce230c429a" containerName="extract" Jan 26 12:51:53 crc kubenswrapper[4881]: I0126 12:51:53.084988 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ccdbf22-7f0b-489c-bf4c-22ce230c429a" containerName="extract" Jan 26 12:51:53 crc kubenswrapper[4881]: E0126 12:51:53.085005 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ccdbf22-7f0b-489c-bf4c-22ce230c429a" containerName="util" Jan 26 12:51:53 crc kubenswrapper[4881]: I0126 12:51:53.085016 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ccdbf22-7f0b-489c-bf4c-22ce230c429a" containerName="util" Jan 26 12:51:53 crc kubenswrapper[4881]: E0126 12:51:53.085037 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ec115c7-7e39-4d8e-9845-9bffe652b5e3" containerName="extract-content" Jan 26 12:51:53 crc kubenswrapper[4881]: I0126 12:51:53.085049 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ec115c7-7e39-4d8e-9845-9bffe652b5e3" 
containerName="extract-content" Jan 26 12:51:53 crc kubenswrapper[4881]: I0126 12:51:53.085198 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ec115c7-7e39-4d8e-9845-9bffe652b5e3" containerName="registry-server" Jan 26 12:51:53 crc kubenswrapper[4881]: I0126 12:51:53.085228 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ccdbf22-7f0b-489c-bf4c-22ce230c429a" containerName="extract" Jan 26 12:51:53 crc kubenswrapper[4881]: I0126 12:51:53.085782 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-ss7tt" Jan 26 12:51:53 crc kubenswrapper[4881]: I0126 12:51:53.087478 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Jan 26 12:51:53 crc kubenswrapper[4881]: I0126 12:51:53.087817 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Jan 26 12:51:53 crc kubenswrapper[4881]: I0126 12:51:53.088023 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-n6drv" Jan 26 12:51:53 crc kubenswrapper[4881]: I0126 12:51:53.098748 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-ss7tt"] Jan 26 12:51:53 crc kubenswrapper[4881]: I0126 12:51:53.186417 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrk9k\" (UniqueName: \"kubernetes.io/projected/134b4f14-ab8f-4d19-9c5d-90f2642a285e-kube-api-access-zrk9k\") pod \"obo-prometheus-operator-68bc856cb9-ss7tt\" (UID: \"134b4f14-ab8f-4d19-9c5d-90f2642a285e\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-ss7tt" Jan 26 12:51:53 crc kubenswrapper[4881]: I0126 12:51:53.192266 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-bd99787cf-cg7rq"] Jan 26 12:51:53 crc kubenswrapper[4881]: I0126 12:51:53.192933 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-bd99787cf-cg7rq" Jan 26 12:51:53 crc kubenswrapper[4881]: I0126 12:51:53.198257 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Jan 26 12:51:53 crc kubenswrapper[4881]: I0126 12:51:53.198978 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-dczng" Jan 26 12:51:53 crc kubenswrapper[4881]: I0126 12:51:53.229093 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-bd99787cf-9zpqq"] Jan 26 12:51:53 crc kubenswrapper[4881]: I0126 12:51:53.229740 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-bd99787cf-9zpqq" Jan 26 12:51:53 crc kubenswrapper[4881]: I0126 12:51:53.247751 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-bd99787cf-9zpqq"] Jan 26 12:51:53 crc kubenswrapper[4881]: I0126 12:51:53.259980 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-bd99787cf-cg7rq"] Jan 26 12:51:53 crc kubenswrapper[4881]: I0126 12:51:53.287043 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0dff4c41-61ac-4189-a67e-18689e873d2a-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-bd99787cf-cg7rq\" (UID: \"0dff4c41-61ac-4189-a67e-18689e873d2a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-bd99787cf-cg7rq" Jan 26 12:51:53 crc kubenswrapper[4881]: I0126 12:51:53.287127 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrk9k\" (UniqueName: \"kubernetes.io/projected/134b4f14-ab8f-4d19-9c5d-90f2642a285e-kube-api-access-zrk9k\") pod \"obo-prometheus-operator-68bc856cb9-ss7tt\" (UID: \"134b4f14-ab8f-4d19-9c5d-90f2642a285e\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-ss7tt" Jan 26 12:51:53 crc kubenswrapper[4881]: I0126 12:51:53.287158 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0dff4c41-61ac-4189-a67e-18689e873d2a-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-bd99787cf-cg7rq\" (UID: \"0dff4c41-61ac-4189-a67e-18689e873d2a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-bd99787cf-cg7rq" Jan 26 12:51:53 crc kubenswrapper[4881]: I0126 12:51:53.312494 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrk9k\" (UniqueName: \"kubernetes.io/projected/134b4f14-ab8f-4d19-9c5d-90f2642a285e-kube-api-access-zrk9k\") pod \"obo-prometheus-operator-68bc856cb9-ss7tt\" (UID: \"134b4f14-ab8f-4d19-9c5d-90f2642a285e\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-ss7tt" Jan 26 12:51:53 crc kubenswrapper[4881]: I0126 12:51:53.388232 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0dff4c41-61ac-4189-a67e-18689e873d2a-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-bd99787cf-cg7rq\" (UID: \"0dff4c41-61ac-4189-a67e-18689e873d2a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-bd99787cf-cg7rq" Jan 26 12:51:53 crc kubenswrapper[4881]: I0126 12:51:53.388572 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e46bce59-2bf0-4e4f-9988-351d4f1f6bc2-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-bd99787cf-9zpqq\" (UID: \"e46bce59-2bf0-4e4f-9988-351d4f1f6bc2\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-bd99787cf-9zpqq" Jan 26 12:51:53 crc kubenswrapper[4881]: I0126 12:51:53.388595 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e46bce59-2bf0-4e4f-9988-351d4f1f6bc2-webhook-cert\") pod 
\"obo-prometheus-operator-admission-webhook-bd99787cf-9zpqq\" (UID: \"e46bce59-2bf0-4e4f-9988-351d4f1f6bc2\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-bd99787cf-9zpqq" Jan 26 12:51:53 crc kubenswrapper[4881]: I0126 12:51:53.388632 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0dff4c41-61ac-4189-a67e-18689e873d2a-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-bd99787cf-cg7rq\" (UID: \"0dff4c41-61ac-4189-a67e-18689e873d2a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-bd99787cf-cg7rq" Jan 26 12:51:53 crc kubenswrapper[4881]: I0126 12:51:53.392506 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0dff4c41-61ac-4189-a67e-18689e873d2a-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-bd99787cf-cg7rq\" (UID: \"0dff4c41-61ac-4189-a67e-18689e873d2a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-bd99787cf-cg7rq" Jan 26 12:51:53 crc kubenswrapper[4881]: I0126 12:51:53.392667 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0dff4c41-61ac-4189-a67e-18689e873d2a-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-bd99787cf-cg7rq\" (UID: \"0dff4c41-61ac-4189-a67e-18689e873d2a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-bd99787cf-cg7rq" Jan 26 12:51:53 crc kubenswrapper[4881]: I0126 12:51:53.403652 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-ss7tt" Jan 26 12:51:53 crc kubenswrapper[4881]: I0126 12:51:53.407568 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-tf2kf"] Jan 26 12:51:53 crc kubenswrapper[4881]: I0126 12:51:53.408248 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-tf2kf" Jan 26 12:51:53 crc kubenswrapper[4881]: I0126 12:51:53.411113 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Jan 26 12:51:53 crc kubenswrapper[4881]: I0126 12:51:53.411342 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-wjptc" Jan 26 12:51:53 crc kubenswrapper[4881]: I0126 12:51:53.421367 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-tf2kf"] Jan 26 12:51:53 crc kubenswrapper[4881]: I0126 12:51:53.489303 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vdq9\" (UniqueName: \"kubernetes.io/projected/b636a0bf-808d-4fce-9675-621381943903-kube-api-access-8vdq9\") pod \"observability-operator-59bdc8b94-tf2kf\" (UID: \"b636a0bf-808d-4fce-9675-621381943903\") " pod="openshift-operators/observability-operator-59bdc8b94-tf2kf" Jan 26 12:51:53 crc kubenswrapper[4881]: I0126 12:51:53.489389 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e46bce59-2bf0-4e4f-9988-351d4f1f6bc2-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-bd99787cf-9zpqq\" (UID: \"e46bce59-2bf0-4e4f-9988-351d4f1f6bc2\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-bd99787cf-9zpqq" Jan 26 12:51:53 crc kubenswrapper[4881]: I0126 12:51:53.489415 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e46bce59-2bf0-4e4f-9988-351d4f1f6bc2-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-bd99787cf-9zpqq\" (UID: \"e46bce59-2bf0-4e4f-9988-351d4f1f6bc2\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-bd99787cf-9zpqq" Jan 26 12:51:53 crc kubenswrapper[4881]: I0126 12:51:53.489432 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/b636a0bf-808d-4fce-9675-621381943903-observability-operator-tls\") pod \"observability-operator-59bdc8b94-tf2kf\" (UID: \"b636a0bf-808d-4fce-9675-621381943903\") " pod="openshift-operators/observability-operator-59bdc8b94-tf2kf" Jan 26 12:51:53 crc kubenswrapper[4881]: I0126 12:51:53.494023 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e46bce59-2bf0-4e4f-9988-351d4f1f6bc2-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-bd99787cf-9zpqq\" (UID: \"e46bce59-2bf0-4e4f-9988-351d4f1f6bc2\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-bd99787cf-9zpqq" Jan 26 12:51:53 crc kubenswrapper[4881]: I0126 12:51:53.505776 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-bd99787cf-cg7rq" Jan 26 12:51:53 crc kubenswrapper[4881]: I0126 12:51:53.506443 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e46bce59-2bf0-4e4f-9988-351d4f1f6bc2-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-bd99787cf-9zpqq\" (UID: \"e46bce59-2bf0-4e4f-9988-351d4f1f6bc2\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-bd99787cf-9zpqq" Jan 26 12:51:53 crc kubenswrapper[4881]: I0126 12:51:53.515679 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-hprwm"] Jan 26 12:51:53 crc kubenswrapper[4881]: I0126 12:51:53.516304 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-hprwm" Jan 26 12:51:53 crc kubenswrapper[4881]: I0126 12:51:53.518385 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-9bnbv" Jan 26 12:51:53 crc kubenswrapper[4881]: I0126 12:51:53.533953 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-hprwm"] Jan 26 12:51:53 crc kubenswrapper[4881]: I0126 12:51:53.547777 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-bd99787cf-9zpqq" Jan 26 12:51:53 crc kubenswrapper[4881]: I0126 12:51:53.592126 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/b23a30cc-d92b-4491-963d-9f93d3b48547-openshift-service-ca\") pod \"perses-operator-5bf474d74f-hprwm\" (UID: \"b23a30cc-d92b-4491-963d-9f93d3b48547\") " pod="openshift-operators/perses-operator-5bf474d74f-hprwm" Jan 26 12:51:53 crc kubenswrapper[4881]: I0126 12:51:53.592184 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/b636a0bf-808d-4fce-9675-621381943903-observability-operator-tls\") pod \"observability-operator-59bdc8b94-tf2kf\" (UID: \"b636a0bf-808d-4fce-9675-621381943903\") " pod="openshift-operators/observability-operator-59bdc8b94-tf2kf" Jan 26 12:51:53 crc kubenswrapper[4881]: I0126 12:51:53.592230 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8vdq9\" (UniqueName: \"kubernetes.io/projected/b636a0bf-808d-4fce-9675-621381943903-kube-api-access-8vdq9\") pod \"observability-operator-59bdc8b94-tf2kf\" (UID: \"b636a0bf-808d-4fce-9675-621381943903\") " pod="openshift-operators/observability-operator-59bdc8b94-tf2kf" Jan 26 12:51:53 crc kubenswrapper[4881]: I0126 12:51:53.592253 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rph47\" (UniqueName: \"kubernetes.io/projected/b23a30cc-d92b-4491-963d-9f93d3b48547-kube-api-access-rph47\") pod \"perses-operator-5bf474d74f-hprwm\" (UID: \"b23a30cc-d92b-4491-963d-9f93d3b48547\") " pod="openshift-operators/perses-operator-5bf474d74f-hprwm" Jan 26 12:51:53 crc kubenswrapper[4881]: I0126 12:51:53.599113 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/b636a0bf-808d-4fce-9675-621381943903-observability-operator-tls\") pod 
\"observability-operator-59bdc8b94-tf2kf\" (UID: \"b636a0bf-808d-4fce-9675-621381943903\") " pod="openshift-operators/observability-operator-59bdc8b94-tf2kf" Jan 26 12:51:53 crc kubenswrapper[4881]: I0126 12:51:53.620296 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vdq9\" (UniqueName: \"kubernetes.io/projected/b636a0bf-808d-4fce-9675-621381943903-kube-api-access-8vdq9\") pod \"observability-operator-59bdc8b94-tf2kf\" (UID: \"b636a0bf-808d-4fce-9675-621381943903\") " pod="openshift-operators/observability-operator-59bdc8b94-tf2kf" Jan 26 12:51:53 crc kubenswrapper[4881]: I0126 12:51:53.692872 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/b23a30cc-d92b-4491-963d-9f93d3b48547-openshift-service-ca\") pod \"perses-operator-5bf474d74f-hprwm\" (UID: \"b23a30cc-d92b-4491-963d-9f93d3b48547\") " pod="openshift-operators/perses-operator-5bf474d74f-hprwm" Jan 26 12:51:53 crc kubenswrapper[4881]: I0126 12:51:53.692955 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rph47\" (UniqueName: \"kubernetes.io/projected/b23a30cc-d92b-4491-963d-9f93d3b48547-kube-api-access-rph47\") pod \"perses-operator-5bf474d74f-hprwm\" (UID: \"b23a30cc-d92b-4491-963d-9f93d3b48547\") " pod="openshift-operators/perses-operator-5bf474d74f-hprwm" Jan 26 12:51:53 crc kubenswrapper[4881]: I0126 12:51:53.693897 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/b23a30cc-d92b-4491-963d-9f93d3b48547-openshift-service-ca\") pod \"perses-operator-5bf474d74f-hprwm\" (UID: \"b23a30cc-d92b-4491-963d-9f93d3b48547\") " pod="openshift-operators/perses-operator-5bf474d74f-hprwm" Jan 26 12:51:53 crc kubenswrapper[4881]: I0126 12:51:53.717194 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rph47\" (UniqueName: \"kubernetes.io/projected/b23a30cc-d92b-4491-963d-9f93d3b48547-kube-api-access-rph47\") pod \"perses-operator-5bf474d74f-hprwm\" (UID: \"b23a30cc-d92b-4491-963d-9f93d3b48547\") " pod="openshift-operators/perses-operator-5bf474d74f-hprwm" Jan 26 12:51:53 crc kubenswrapper[4881]: I0126 12:51:53.739472 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-ss7tt"] Jan 26 12:51:53 crc kubenswrapper[4881]: W0126 12:51:53.749651 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod134b4f14_ab8f_4d19_9c5d_90f2642a285e.slice/crio-e7786340616ad9dfeba8beb050d27231cbc532ac939417515cf08d635ae4e185 WatchSource:0}: Error finding container e7786340616ad9dfeba8beb050d27231cbc532ac939417515cf08d635ae4e185: Status 404 returned error can't find the container with id e7786340616ad9dfeba8beb050d27231cbc532ac939417515cf08d635ae4e185 Jan 26 12:51:53 crc kubenswrapper[4881]: I0126 12:51:53.754046 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-tf2kf" Jan 26 12:51:53 crc kubenswrapper[4881]: I0126 12:51:53.808215 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-bd99787cf-cg7rq"] Jan 26 12:51:53 crc kubenswrapper[4881]: W0126 12:51:53.829031 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0dff4c41_61ac_4189_a67e_18689e873d2a.slice/crio-4436e443baddef69654134ddba150d6ef53296bba7ae6c6d12a5b68e50d86723 WatchSource:0}: Error finding container 4436e443baddef69654134ddba150d6ef53296bba7ae6c6d12a5b68e50d86723: Status 404 returned error can't find the container with id 4436e443baddef69654134ddba150d6ef53296bba7ae6c6d12a5b68e50d86723 Jan 26 12:51:53 crc kubenswrapper[4881]: I0126 12:51:53.832403 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-hprwm" Jan 26 12:51:53 crc kubenswrapper[4881]: I0126 12:51:53.850217 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-bd99787cf-9zpqq"] Jan 26 12:51:53 crc kubenswrapper[4881]: I0126 12:51:53.962474 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-bd99787cf-9zpqq" event={"ID":"e46bce59-2bf0-4e4f-9988-351d4f1f6bc2","Type":"ContainerStarted","Data":"b91abef026bf5951a2738be23b468772570e42a07e7a8558a1b56357305c28ba"} Jan 26 12:51:53 crc kubenswrapper[4881]: I0126 12:51:53.966184 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-ss7tt" event={"ID":"134b4f14-ab8f-4d19-9c5d-90f2642a285e","Type":"ContainerStarted","Data":"e7786340616ad9dfeba8beb050d27231cbc532ac939417515cf08d635ae4e185"} Jan 26 12:51:53 crc kubenswrapper[4881]: I0126 12:51:53.968195 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-bd99787cf-cg7rq" event={"ID":"0dff4c41-61ac-4189-a67e-18689e873d2a","Type":"ContainerStarted","Data":"4436e443baddef69654134ddba150d6ef53296bba7ae6c6d12a5b68e50d86723"} Jan 26 12:51:53 crc kubenswrapper[4881]: I0126 12:51:53.981415 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-tf2kf"] Jan 26 12:51:54 crc kubenswrapper[4881]: I0126 12:51:54.320625 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-hprwm"] Jan 26 12:51:54 crc kubenswrapper[4881]: W0126 12:51:54.329818 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb23a30cc_d92b_4491_963d_9f93d3b48547.slice/crio-57a4e8285bba1276f218cd06a2c13293dbe5a12b3396c1601b0e07222738ff8e WatchSource:0}: Error finding container 57a4e8285bba1276f218cd06a2c13293dbe5a12b3396c1601b0e07222738ff8e: Status 404 returned error can't find the container with id 57a4e8285bba1276f218cd06a2c13293dbe5a12b3396c1601b0e07222738ff8e Jan 26 12:51:54 crc kubenswrapper[4881]: I0126 12:51:54.982174 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-hprwm" event={"ID":"b23a30cc-d92b-4491-963d-9f93d3b48547","Type":"ContainerStarted","Data":"57a4e8285bba1276f218cd06a2c13293dbe5a12b3396c1601b0e07222738ff8e"} Jan 26 12:51:54 crc kubenswrapper[4881]: I0126 
12:51:54.983877 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-tf2kf" event={"ID":"b636a0bf-808d-4fce-9675-621381943903","Type":"ContainerStarted","Data":"d317e61134b59e705cd50d7fd43695b22fcb4def9c786bbf169d40bb57e7048e"} Jan 26 12:52:05 crc kubenswrapper[4881]: I0126 12:52:05.037657 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-hprwm" event={"ID":"b23a30cc-d92b-4491-963d-9f93d3b48547","Type":"ContainerStarted","Data":"5efe74903982c4cebcd469c3e03c19ad93a1feaa8ee5230ff6353e8d07e16fa0"} Jan 26 12:52:05 crc kubenswrapper[4881]: I0126 12:52:05.038271 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-hprwm" Jan 26 12:52:05 crc kubenswrapper[4881]: I0126 12:52:05.040073 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-bd99787cf-cg7rq" event={"ID":"0dff4c41-61ac-4189-a67e-18689e873d2a","Type":"ContainerStarted","Data":"d71209020a8034c55bd752509e5e9c920bd436616aafa6945c1aee4e2b3eeae7"} Jan 26 12:52:05 crc kubenswrapper[4881]: I0126 12:52:05.041683 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-tf2kf" event={"ID":"b636a0bf-808d-4fce-9675-621381943903","Type":"ContainerStarted","Data":"979fc7e262cbdf4c556d6c3341b7de01ba94ae6f918ae79b8942116e9a2c216a"} Jan 26 12:52:05 crc kubenswrapper[4881]: I0126 12:52:05.042367 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-tf2kf" Jan 26 12:52:05 crc kubenswrapper[4881]: I0126 12:52:05.043537 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-bd99787cf-9zpqq" event={"ID":"e46bce59-2bf0-4e4f-9988-351d4f1f6bc2","Type":"ContainerStarted","Data":"e7303d62d86464e44f741aded42f081fa5a6dd0eb9930a905dff958ec5431033"} Jan 26 12:52:05 crc kubenswrapper[4881]: I0126 12:52:05.045348 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-ss7tt" event={"ID":"134b4f14-ab8f-4d19-9c5d-90f2642a285e","Type":"ContainerStarted","Data":"e5014c3ff577d07468f286bdd48fcdb4406b5b9cdbaa364bfaea66f572d8aa1c"} Jan 26 12:52:05 crc kubenswrapper[4881]: I0126 12:52:05.064259 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-hprwm" podStartSLOduration=1.9476447879999998 podStartE2EDuration="12.064245107s" podCreationTimestamp="2026-01-26 12:51:53 +0000 UTC" firstStartedPulling="2026-01-26 12:51:54.332033225 +0000 UTC m=+986.811343241" lastFinishedPulling="2026-01-26 12:52:04.448633544 +0000 UTC m=+996.927943560" observedRunningTime="2026-01-26 12:52:05.06025065 +0000 UTC m=+997.539560666" watchObservedRunningTime="2026-01-26 12:52:05.064245107 +0000 UTC m=+997.543555133" Jan 26 12:52:05 crc kubenswrapper[4881]: I0126 12:52:05.106364 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-bd99787cf-9zpqq" podStartSLOduration=1.506357007 podStartE2EDuration="12.106347864s" podCreationTimestamp="2026-01-26 12:51:53 +0000 UTC" firstStartedPulling="2026-01-26 12:51:53.871242298 +0000 UTC m=+986.350552324" lastFinishedPulling="2026-01-26 12:52:04.471233155 +0000 UTC m=+996.950543181" 
observedRunningTime="2026-01-26 12:52:05.102771196 +0000 UTC m=+997.582081222" watchObservedRunningTime="2026-01-26 12:52:05.106347864 +0000 UTC m=+997.585657890" Jan 26 12:52:05 crc kubenswrapper[4881]: I0126 12:52:05.106922 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-tf2kf" podStartSLOduration=1.62501739 podStartE2EDuration="12.106916507s" podCreationTimestamp="2026-01-26 12:51:53 +0000 UTC" firstStartedPulling="2026-01-26 12:51:54.005446721 +0000 UTC m=+986.484756747" lastFinishedPulling="2026-01-26 12:52:04.487345838 +0000 UTC m=+996.966655864" observedRunningTime="2026-01-26 12:52:05.087233257 +0000 UTC m=+997.566543283" watchObservedRunningTime="2026-01-26 12:52:05.106916507 +0000 UTC m=+997.586226533" Jan 26 12:52:05 crc kubenswrapper[4881]: I0126 12:52:05.116902 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-tf2kf" Jan 26 12:52:05 crc kubenswrapper[4881]: I0126 12:52:05.129337 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-bd99787cf-cg7rq" podStartSLOduration=1.5420460980000001 podStartE2EDuration="12.129317234s" podCreationTimestamp="2026-01-26 12:51:53 +0000 UTC" firstStartedPulling="2026-01-26 12:51:53.85981904 +0000 UTC m=+986.339129066" lastFinishedPulling="2026-01-26 12:52:04.447090176 +0000 UTC m=+996.926400202" observedRunningTime="2026-01-26 12:52:05.12542327 +0000 UTC m=+997.604733306" watchObservedRunningTime="2026-01-26 12:52:05.129317234 +0000 UTC m=+997.608627270" Jan 26 12:52:05 crc kubenswrapper[4881]: I0126 12:52:05.155957 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-ss7tt" podStartSLOduration=1.461168535 podStartE2EDuration="12.155938373s" podCreationTimestamp="2026-01-26 12:51:53 +0000 UTC" firstStartedPulling="2026-01-26 12:51:53.752313448 +0000 UTC m=+986.231623474" lastFinishedPulling="2026-01-26 12:52:04.447083286 +0000 UTC m=+996.926393312" observedRunningTime="2026-01-26 12:52:05.155639226 +0000 UTC m=+997.634949252" watchObservedRunningTime="2026-01-26 12:52:05.155938373 +0000 UTC m=+997.635248409" Jan 26 12:52:13 crc kubenswrapper[4881]: I0126 12:52:13.836081 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-hprwm" Jan 26 12:52:24 crc kubenswrapper[4881]: I0126 12:52:24.789925 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fwlbz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 12:52:24 crc kubenswrapper[4881]: I0126 12:52:24.791176 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 12:52:32 crc kubenswrapper[4881]: I0126 12:52:32.556625 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135mlgx"] Jan 26 12:52:32 crc kubenswrapper[4881]: I0126 12:52:32.558348 4881 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135mlgx" Jan 26 12:52:32 crc kubenswrapper[4881]: I0126 12:52:32.561459 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 26 12:52:32 crc kubenswrapper[4881]: I0126 12:52:32.586243 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135mlgx"] Jan 26 12:52:32 crc kubenswrapper[4881]: I0126 12:52:32.679757 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2478a5bc-036d-4f8a-bbf2-171ad7dc5e3f-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135mlgx\" (UID: \"2478a5bc-036d-4f8a-bbf2-171ad7dc5e3f\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135mlgx" Jan 26 12:52:32 crc kubenswrapper[4881]: I0126 12:52:32.679831 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2478a5bc-036d-4f8a-bbf2-171ad7dc5e3f-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135mlgx\" (UID: \"2478a5bc-036d-4f8a-bbf2-171ad7dc5e3f\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135mlgx" Jan 26 12:52:32 crc kubenswrapper[4881]: I0126 12:52:32.679870 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qsqkm\" (UniqueName: \"kubernetes.io/projected/2478a5bc-036d-4f8a-bbf2-171ad7dc5e3f-kube-api-access-qsqkm\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135mlgx\" (UID: \"2478a5bc-036d-4f8a-bbf2-171ad7dc5e3f\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135mlgx" Jan 26 12:52:32 crc kubenswrapper[4881]: I0126 12:52:32.780667 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2478a5bc-036d-4f8a-bbf2-171ad7dc5e3f-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135mlgx\" (UID: \"2478a5bc-036d-4f8a-bbf2-171ad7dc5e3f\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135mlgx" Jan 26 12:52:32 crc kubenswrapper[4881]: I0126 12:52:32.780715 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2478a5bc-036d-4f8a-bbf2-171ad7dc5e3f-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135mlgx\" (UID: \"2478a5bc-036d-4f8a-bbf2-171ad7dc5e3f\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135mlgx" Jan 26 12:52:32 crc kubenswrapper[4881]: I0126 12:52:32.780746 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qsqkm\" (UniqueName: \"kubernetes.io/projected/2478a5bc-036d-4f8a-bbf2-171ad7dc5e3f-kube-api-access-qsqkm\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135mlgx\" (UID: \"2478a5bc-036d-4f8a-bbf2-171ad7dc5e3f\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135mlgx" Jan 26 12:52:32 crc kubenswrapper[4881]: I0126 12:52:32.781307 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/2478a5bc-036d-4f8a-bbf2-171ad7dc5e3f-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135mlgx\" (UID: \"2478a5bc-036d-4f8a-bbf2-171ad7dc5e3f\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135mlgx" Jan 26 12:52:32 crc kubenswrapper[4881]: I0126 12:52:32.781561 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2478a5bc-036d-4f8a-bbf2-171ad7dc5e3f-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135mlgx\" (UID: \"2478a5bc-036d-4f8a-bbf2-171ad7dc5e3f\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135mlgx" Jan 26 12:52:32 crc kubenswrapper[4881]: I0126 12:52:32.806849 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qsqkm\" (UniqueName: \"kubernetes.io/projected/2478a5bc-036d-4f8a-bbf2-171ad7dc5e3f-kube-api-access-qsqkm\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135mlgx\" (UID: \"2478a5bc-036d-4f8a-bbf2-171ad7dc5e3f\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135mlgx" Jan 26 12:52:32 crc kubenswrapper[4881]: I0126 12:52:32.889935 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135mlgx" Jan 26 12:52:33 crc kubenswrapper[4881]: I0126 12:52:33.203927 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135mlgx"] Jan 26 12:52:33 crc kubenswrapper[4881]: I0126 12:52:33.471034 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135mlgx" event={"ID":"2478a5bc-036d-4f8a-bbf2-171ad7dc5e3f","Type":"ContainerStarted","Data":"6e62df26161eeb5a96f47f42ef5953ab0c4f162b4dd877627023d999cf37365d"} Jan 26 12:52:34 crc kubenswrapper[4881]: I0126 12:52:34.482385 4881 generic.go:334] "Generic (PLEG): container finished" podID="2478a5bc-036d-4f8a-bbf2-171ad7dc5e3f" containerID="df7081b3dc8518819da30645cca0ee00ea47255f1b175f1f0fc4875bd54a3c82" exitCode=0 Jan 26 12:52:34 crc kubenswrapper[4881]: I0126 12:52:34.482429 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135mlgx" event={"ID":"2478a5bc-036d-4f8a-bbf2-171ad7dc5e3f","Type":"ContainerDied","Data":"df7081b3dc8518819da30645cca0ee00ea47255f1b175f1f0fc4875bd54a3c82"} Jan 26 12:52:39 crc kubenswrapper[4881]: I0126 12:52:39.523303 4881 generic.go:334] "Generic (PLEG): container finished" podID="2478a5bc-036d-4f8a-bbf2-171ad7dc5e3f" containerID="3dde39eb4923fde304fc5079d16223fe61fee286c034d0587a6f33df84b42d1b" exitCode=0 Jan 26 12:52:39 crc kubenswrapper[4881]: I0126 12:52:39.523391 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135mlgx" event={"ID":"2478a5bc-036d-4f8a-bbf2-171ad7dc5e3f","Type":"ContainerDied","Data":"3dde39eb4923fde304fc5079d16223fe61fee286c034d0587a6f33df84b42d1b"} Jan 26 12:52:40 crc kubenswrapper[4881]: I0126 12:52:40.534319 4881 generic.go:334] "Generic (PLEG): container finished" podID="2478a5bc-036d-4f8a-bbf2-171ad7dc5e3f" containerID="82678babd7b2e5413317e4bb0034ebfa7155f5d270646163dcba4a71eb111d26" exitCode=0 Jan 26 12:52:40 crc kubenswrapper[4881]: I0126 
12:52:40.534387 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135mlgx" event={"ID":"2478a5bc-036d-4f8a-bbf2-171ad7dc5e3f","Type":"ContainerDied","Data":"82678babd7b2e5413317e4bb0034ebfa7155f5d270646163dcba4a71eb111d26"} Jan 26 12:52:41 crc kubenswrapper[4881]: I0126 12:52:41.839695 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135mlgx" Jan 26 12:52:41 crc kubenswrapper[4881]: I0126 12:52:41.915166 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2478a5bc-036d-4f8a-bbf2-171ad7dc5e3f-bundle\") pod \"2478a5bc-036d-4f8a-bbf2-171ad7dc5e3f\" (UID: \"2478a5bc-036d-4f8a-bbf2-171ad7dc5e3f\") " Jan 26 12:52:41 crc kubenswrapper[4881]: I0126 12:52:41.915359 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2478a5bc-036d-4f8a-bbf2-171ad7dc5e3f-util\") pod \"2478a5bc-036d-4f8a-bbf2-171ad7dc5e3f\" (UID: \"2478a5bc-036d-4f8a-bbf2-171ad7dc5e3f\") " Jan 26 12:52:41 crc kubenswrapper[4881]: I0126 12:52:41.915501 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qsqkm\" (UniqueName: \"kubernetes.io/projected/2478a5bc-036d-4f8a-bbf2-171ad7dc5e3f-kube-api-access-qsqkm\") pod \"2478a5bc-036d-4f8a-bbf2-171ad7dc5e3f\" (UID: \"2478a5bc-036d-4f8a-bbf2-171ad7dc5e3f\") " Jan 26 12:52:41 crc kubenswrapper[4881]: I0126 12:52:41.916500 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2478a5bc-036d-4f8a-bbf2-171ad7dc5e3f-bundle" (OuterVolumeSpecName: "bundle") pod "2478a5bc-036d-4f8a-bbf2-171ad7dc5e3f" (UID: "2478a5bc-036d-4f8a-bbf2-171ad7dc5e3f"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 12:52:41 crc kubenswrapper[4881]: I0126 12:52:41.925833 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2478a5bc-036d-4f8a-bbf2-171ad7dc5e3f-kube-api-access-qsqkm" (OuterVolumeSpecName: "kube-api-access-qsqkm") pod "2478a5bc-036d-4f8a-bbf2-171ad7dc5e3f" (UID: "2478a5bc-036d-4f8a-bbf2-171ad7dc5e3f"). InnerVolumeSpecName "kube-api-access-qsqkm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:52:41 crc kubenswrapper[4881]: I0126 12:52:41.929975 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2478a5bc-036d-4f8a-bbf2-171ad7dc5e3f-util" (OuterVolumeSpecName: "util") pod "2478a5bc-036d-4f8a-bbf2-171ad7dc5e3f" (UID: "2478a5bc-036d-4f8a-bbf2-171ad7dc5e3f"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 12:52:42 crc kubenswrapper[4881]: I0126 12:52:42.017097 4881 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2478a5bc-036d-4f8a-bbf2-171ad7dc5e3f-util\") on node \"crc\" DevicePath \"\"" Jan 26 12:52:42 crc kubenswrapper[4881]: I0126 12:52:42.017152 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qsqkm\" (UniqueName: \"kubernetes.io/projected/2478a5bc-036d-4f8a-bbf2-171ad7dc5e3f-kube-api-access-qsqkm\") on node \"crc\" DevicePath \"\"" Jan 26 12:52:42 crc kubenswrapper[4881]: I0126 12:52:42.017172 4881 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2478a5bc-036d-4f8a-bbf2-171ad7dc5e3f-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 12:52:42 crc kubenswrapper[4881]: I0126 12:52:42.554923 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135mlgx" event={"ID":"2478a5bc-036d-4f8a-bbf2-171ad7dc5e3f","Type":"ContainerDied","Data":"6e62df26161eeb5a96f47f42ef5953ab0c4f162b4dd877627023d999cf37365d"} Jan 26 12:52:42 crc kubenswrapper[4881]: I0126 12:52:42.555302 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e62df26161eeb5a96f47f42ef5953ab0c4f162b4dd877627023d999cf37365d" Jan 26 12:52:42 crc kubenswrapper[4881]: I0126 12:52:42.555076 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135mlgx" Jan 26 12:52:49 crc kubenswrapper[4881]: I0126 12:52:49.261299 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-7bpbf"] Jan 26 12:52:49 crc kubenswrapper[4881]: E0126 12:52:49.261888 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2478a5bc-036d-4f8a-bbf2-171ad7dc5e3f" containerName="util" Jan 26 12:52:49 crc kubenswrapper[4881]: I0126 12:52:49.261899 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="2478a5bc-036d-4f8a-bbf2-171ad7dc5e3f" containerName="util" Jan 26 12:52:49 crc kubenswrapper[4881]: E0126 12:52:49.261924 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2478a5bc-036d-4f8a-bbf2-171ad7dc5e3f" containerName="pull" Jan 26 12:52:49 crc kubenswrapper[4881]: I0126 12:52:49.261930 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="2478a5bc-036d-4f8a-bbf2-171ad7dc5e3f" containerName="pull" Jan 26 12:52:49 crc kubenswrapper[4881]: E0126 12:52:49.261938 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2478a5bc-036d-4f8a-bbf2-171ad7dc5e3f" containerName="extract" Jan 26 12:52:49 crc kubenswrapper[4881]: I0126 12:52:49.261944 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="2478a5bc-036d-4f8a-bbf2-171ad7dc5e3f" containerName="extract" Jan 26 12:52:49 crc kubenswrapper[4881]: I0126 12:52:49.262032 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="2478a5bc-036d-4f8a-bbf2-171ad7dc5e3f" containerName="extract" Jan 26 12:52:49 crc kubenswrapper[4881]: I0126 12:52:49.262398 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-7bpbf" Jan 26 12:52:49 crc kubenswrapper[4881]: I0126 12:52:49.265499 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Jan 26 12:52:49 crc kubenswrapper[4881]: I0126 12:52:49.265625 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-q5r4z" Jan 26 12:52:49 crc kubenswrapper[4881]: I0126 12:52:49.265625 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Jan 26 12:52:49 crc kubenswrapper[4881]: I0126 12:52:49.280875 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-7bpbf"] Jan 26 12:52:49 crc kubenswrapper[4881]: I0126 12:52:49.313751 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vmtg\" (UniqueName: \"kubernetes.io/projected/84cb8155-415d-4537-872c-bf03652861e0-kube-api-access-4vmtg\") pod \"nmstate-operator-646758c888-7bpbf\" (UID: \"84cb8155-415d-4537-872c-bf03652861e0\") " pod="openshift-nmstate/nmstate-operator-646758c888-7bpbf" Jan 26 12:52:49 crc kubenswrapper[4881]: I0126 12:52:49.414684 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4vmtg\" (UniqueName: \"kubernetes.io/projected/84cb8155-415d-4537-872c-bf03652861e0-kube-api-access-4vmtg\") pod \"nmstate-operator-646758c888-7bpbf\" (UID: \"84cb8155-415d-4537-872c-bf03652861e0\") " pod="openshift-nmstate/nmstate-operator-646758c888-7bpbf" Jan 26 12:52:49 crc kubenswrapper[4881]: I0126 12:52:49.444350 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vmtg\" (UniqueName: \"kubernetes.io/projected/84cb8155-415d-4537-872c-bf03652861e0-kube-api-access-4vmtg\") pod \"nmstate-operator-646758c888-7bpbf\" (UID: \"84cb8155-415d-4537-872c-bf03652861e0\") " pod="openshift-nmstate/nmstate-operator-646758c888-7bpbf" Jan 26 12:52:49 crc kubenswrapper[4881]: I0126 12:52:49.579358 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-7bpbf" Jan 26 12:52:49 crc kubenswrapper[4881]: I0126 12:52:49.903570 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-7bpbf"] Jan 26 12:52:50 crc kubenswrapper[4881]: I0126 12:52:50.633074 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-7bpbf" event={"ID":"84cb8155-415d-4537-872c-bf03652861e0","Type":"ContainerStarted","Data":"1d644e8f02c8b1954ef5e82bf8bbc40fd66591a1075d1b5018c4f70abd4f277a"} Jan 26 12:52:54 crc kubenswrapper[4881]: I0126 12:52:54.663033 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-7bpbf" event={"ID":"84cb8155-415d-4537-872c-bf03652861e0","Type":"ContainerStarted","Data":"55e45e53dc607a86acb9232c415c8fe48978d3f0d36d2779e1b4381e67e2d0e4"} Jan 26 12:52:54 crc kubenswrapper[4881]: I0126 12:52:54.693270 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-7bpbf" podStartSLOduration=2.098465566 podStartE2EDuration="5.693246213s" podCreationTimestamp="2026-01-26 12:52:49 +0000 UTC" firstStartedPulling="2026-01-26 12:52:49.914993584 +0000 UTC m=+1042.394303620" lastFinishedPulling="2026-01-26 12:52:53.509774241 +0000 UTC m=+1045.989084267" observedRunningTime="2026-01-26 12:52:54.6902447 +0000 UTC m=+1047.169554816" watchObservedRunningTime="2026-01-26 12:52:54.693246213 +0000 UTC m=+1047.172556269" Jan 26 12:52:54 crc kubenswrapper[4881]: I0126 12:52:54.789674 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fwlbz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 12:52:54 crc kubenswrapper[4881]: I0126 12:52:54.789731 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 12:52:55 crc kubenswrapper[4881]: I0126 12:52:55.707323 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-4gxlq"] Jan 26 12:52:55 crc kubenswrapper[4881]: I0126 12:52:55.708187 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-4gxlq" Jan 26 12:52:55 crc kubenswrapper[4881]: I0126 12:52:55.710130 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-xl49j" Jan 26 12:52:55 crc kubenswrapper[4881]: I0126 12:52:55.723452 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-4gxlq"] Jan 26 12:52:55 crc kubenswrapper[4881]: I0126 12:52:55.726669 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-mp8sz"] Jan 26 12:52:55 crc kubenswrapper[4881]: I0126 12:52:55.727476 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-mp8sz" Jan 26 12:52:55 crc kubenswrapper[4881]: I0126 12:52:55.729140 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Jan 26 12:52:55 crc kubenswrapper[4881]: I0126 12:52:55.749251 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-mp8sz"] Jan 26 12:52:55 crc kubenswrapper[4881]: I0126 12:52:55.769291 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-hv5dq"] Jan 26 12:52:55 crc kubenswrapper[4881]: I0126 12:52:55.770077 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-hv5dq" Jan 26 12:52:55 crc kubenswrapper[4881]: I0126 12:52:55.804368 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f48cp\" (UniqueName: \"kubernetes.io/projected/2a843206-a177-4422-be4f-bf5ccbdef9f1-kube-api-access-f48cp\") pod \"nmstate-metrics-54757c584b-4gxlq\" (UID: \"2a843206-a177-4422-be4f-bf5ccbdef9f1\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-4gxlq" Jan 26 12:52:55 crc kubenswrapper[4881]: I0126 12:52:55.804411 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/39762078-aa2c-44ae-8ed5-4ac22ebd62be-dbus-socket\") pod \"nmstate-handler-hv5dq\" (UID: \"39762078-aa2c-44ae-8ed5-4ac22ebd62be\") " pod="openshift-nmstate/nmstate-handler-hv5dq" Jan 26 12:52:55 crc kubenswrapper[4881]: I0126 12:52:55.804435 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/39762078-aa2c-44ae-8ed5-4ac22ebd62be-nmstate-lock\") pod \"nmstate-handler-hv5dq\" (UID: \"39762078-aa2c-44ae-8ed5-4ac22ebd62be\") " pod="openshift-nmstate/nmstate-handler-hv5dq" Jan 26 12:52:55 crc kubenswrapper[4881]: I0126 12:52:55.804453 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/39762078-aa2c-44ae-8ed5-4ac22ebd62be-ovs-socket\") pod \"nmstate-handler-hv5dq\" (UID: \"39762078-aa2c-44ae-8ed5-4ac22ebd62be\") " pod="openshift-nmstate/nmstate-handler-hv5dq" Jan 26 12:52:55 crc kubenswrapper[4881]: I0126 12:52:55.804581 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kq675\" (UniqueName: \"kubernetes.io/projected/1869aca8-7499-4174-9154-588bbc7d5c24-kube-api-access-kq675\") pod \"nmstate-webhook-8474b5b9d8-mp8sz\" (UID: \"1869aca8-7499-4174-9154-588bbc7d5c24\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-mp8sz" Jan 26 12:52:55 crc kubenswrapper[4881]: I0126 12:52:55.804679 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/1869aca8-7499-4174-9154-588bbc7d5c24-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-mp8sz\" (UID: \"1869aca8-7499-4174-9154-588bbc7d5c24\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-mp8sz" Jan 26 12:52:55 crc kubenswrapper[4881]: I0126 12:52:55.804725 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jf454\" (UniqueName: \"kubernetes.io/projected/39762078-aa2c-44ae-8ed5-4ac22ebd62be-kube-api-access-jf454\") 
pod \"nmstate-handler-hv5dq\" (UID: \"39762078-aa2c-44ae-8ed5-4ac22ebd62be\") " pod="openshift-nmstate/nmstate-handler-hv5dq" Jan 26 12:52:55 crc kubenswrapper[4881]: I0126 12:52:55.860207 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-s8cmw"] Jan 26 12:52:55 crc kubenswrapper[4881]: I0126 12:52:55.860851 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-s8cmw" Jan 26 12:52:55 crc kubenswrapper[4881]: I0126 12:52:55.862882 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-c9kw5" Jan 26 12:52:55 crc kubenswrapper[4881]: I0126 12:52:55.865700 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Jan 26 12:52:55 crc kubenswrapper[4881]: I0126 12:52:55.865824 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Jan 26 12:52:55 crc kubenswrapper[4881]: I0126 12:52:55.882573 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-s8cmw"] Jan 26 12:52:55 crc kubenswrapper[4881]: I0126 12:52:55.906428 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/39762078-aa2c-44ae-8ed5-4ac22ebd62be-ovs-socket\") pod \"nmstate-handler-hv5dq\" (UID: \"39762078-aa2c-44ae-8ed5-4ac22ebd62be\") " pod="openshift-nmstate/nmstate-handler-hv5dq" Jan 26 12:52:55 crc kubenswrapper[4881]: I0126 12:52:55.906478 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/04037f03-d731-4b56-931b-6883929dc843-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-s8cmw\" (UID: \"04037f03-d731-4b56-931b-6883929dc843\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-s8cmw" Jan 26 12:52:55 crc kubenswrapper[4881]: I0126 12:52:55.906542 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kq675\" (UniqueName: \"kubernetes.io/projected/1869aca8-7499-4174-9154-588bbc7d5c24-kube-api-access-kq675\") pod \"nmstate-webhook-8474b5b9d8-mp8sz\" (UID: \"1869aca8-7499-4174-9154-588bbc7d5c24\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-mp8sz" Jan 26 12:52:55 crc kubenswrapper[4881]: I0126 12:52:55.906581 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/1869aca8-7499-4174-9154-588bbc7d5c24-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-mp8sz\" (UID: \"1869aca8-7499-4174-9154-588bbc7d5c24\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-mp8sz" Jan 26 12:52:55 crc kubenswrapper[4881]: I0126 12:52:55.906609 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/39762078-aa2c-44ae-8ed5-4ac22ebd62be-ovs-socket\") pod \"nmstate-handler-hv5dq\" (UID: \"39762078-aa2c-44ae-8ed5-4ac22ebd62be\") " pod="openshift-nmstate/nmstate-handler-hv5dq" Jan 26 12:52:55 crc kubenswrapper[4881]: I0126 12:52:55.906677 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/04037f03-d731-4b56-931b-6883929dc843-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-s8cmw\" (UID: 
\"04037f03-d731-4b56-931b-6883929dc843\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-s8cmw" Jan 26 12:52:55 crc kubenswrapper[4881]: I0126 12:52:55.906746 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jf454\" (UniqueName: \"kubernetes.io/projected/39762078-aa2c-44ae-8ed5-4ac22ebd62be-kube-api-access-jf454\") pod \"nmstate-handler-hv5dq\" (UID: \"39762078-aa2c-44ae-8ed5-4ac22ebd62be\") " pod="openshift-nmstate/nmstate-handler-hv5dq" Jan 26 12:52:55 crc kubenswrapper[4881]: I0126 12:52:55.906779 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lq8wh\" (UniqueName: \"kubernetes.io/projected/04037f03-d731-4b56-931b-6883929dc843-kube-api-access-lq8wh\") pod \"nmstate-console-plugin-7754f76f8b-s8cmw\" (UID: \"04037f03-d731-4b56-931b-6883929dc843\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-s8cmw" Jan 26 12:52:55 crc kubenswrapper[4881]: I0126 12:52:55.906878 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f48cp\" (UniqueName: \"kubernetes.io/projected/2a843206-a177-4422-be4f-bf5ccbdef9f1-kube-api-access-f48cp\") pod \"nmstate-metrics-54757c584b-4gxlq\" (UID: \"2a843206-a177-4422-be4f-bf5ccbdef9f1\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-4gxlq" Jan 26 12:52:55 crc kubenswrapper[4881]: I0126 12:52:55.906936 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/39762078-aa2c-44ae-8ed5-4ac22ebd62be-dbus-socket\") pod \"nmstate-handler-hv5dq\" (UID: \"39762078-aa2c-44ae-8ed5-4ac22ebd62be\") " pod="openshift-nmstate/nmstate-handler-hv5dq" Jan 26 12:52:55 crc kubenswrapper[4881]: I0126 12:52:55.906982 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/39762078-aa2c-44ae-8ed5-4ac22ebd62be-nmstate-lock\") pod \"nmstate-handler-hv5dq\" (UID: \"39762078-aa2c-44ae-8ed5-4ac22ebd62be\") " pod="openshift-nmstate/nmstate-handler-hv5dq" Jan 26 12:52:55 crc kubenswrapper[4881]: I0126 12:52:55.907278 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/39762078-aa2c-44ae-8ed5-4ac22ebd62be-nmstate-lock\") pod \"nmstate-handler-hv5dq\" (UID: \"39762078-aa2c-44ae-8ed5-4ac22ebd62be\") " pod="openshift-nmstate/nmstate-handler-hv5dq" Jan 26 12:52:55 crc kubenswrapper[4881]: I0126 12:52:55.907395 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/39762078-aa2c-44ae-8ed5-4ac22ebd62be-dbus-socket\") pod \"nmstate-handler-hv5dq\" (UID: \"39762078-aa2c-44ae-8ed5-4ac22ebd62be\") " pod="openshift-nmstate/nmstate-handler-hv5dq" Jan 26 12:52:55 crc kubenswrapper[4881]: I0126 12:52:55.912395 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/1869aca8-7499-4174-9154-588bbc7d5c24-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-mp8sz\" (UID: \"1869aca8-7499-4174-9154-588bbc7d5c24\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-mp8sz" Jan 26 12:52:55 crc kubenswrapper[4881]: I0126 12:52:55.923127 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f48cp\" (UniqueName: \"kubernetes.io/projected/2a843206-a177-4422-be4f-bf5ccbdef9f1-kube-api-access-f48cp\") pod 
\"nmstate-metrics-54757c584b-4gxlq\" (UID: \"2a843206-a177-4422-be4f-bf5ccbdef9f1\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-4gxlq" Jan 26 12:52:55 crc kubenswrapper[4881]: I0126 12:52:55.929444 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jf454\" (UniqueName: \"kubernetes.io/projected/39762078-aa2c-44ae-8ed5-4ac22ebd62be-kube-api-access-jf454\") pod \"nmstate-handler-hv5dq\" (UID: \"39762078-aa2c-44ae-8ed5-4ac22ebd62be\") " pod="openshift-nmstate/nmstate-handler-hv5dq" Jan 26 12:52:55 crc kubenswrapper[4881]: I0126 12:52:55.931560 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kq675\" (UniqueName: \"kubernetes.io/projected/1869aca8-7499-4174-9154-588bbc7d5c24-kube-api-access-kq675\") pod \"nmstate-webhook-8474b5b9d8-mp8sz\" (UID: \"1869aca8-7499-4174-9154-588bbc7d5c24\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-mp8sz" Jan 26 12:52:56 crc kubenswrapper[4881]: I0126 12:52:56.008851 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/04037f03-d731-4b56-931b-6883929dc843-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-s8cmw\" (UID: \"04037f03-d731-4b56-931b-6883929dc843\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-s8cmw" Jan 26 12:52:56 crc kubenswrapper[4881]: I0126 12:52:56.009142 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/04037f03-d731-4b56-931b-6883929dc843-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-s8cmw\" (UID: \"04037f03-d731-4b56-931b-6883929dc843\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-s8cmw" Jan 26 12:52:56 crc kubenswrapper[4881]: E0126 12:52:56.009048 4881 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Jan 26 12:52:56 crc kubenswrapper[4881]: I0126 12:52:56.009176 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lq8wh\" (UniqueName: \"kubernetes.io/projected/04037f03-d731-4b56-931b-6883929dc843-kube-api-access-lq8wh\") pod \"nmstate-console-plugin-7754f76f8b-s8cmw\" (UID: \"04037f03-d731-4b56-931b-6883929dc843\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-s8cmw" Jan 26 12:52:56 crc kubenswrapper[4881]: E0126 12:52:56.009234 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/04037f03-d731-4b56-931b-6883929dc843-plugin-serving-cert podName:04037f03-d731-4b56-931b-6883929dc843 nodeName:}" failed. No retries permitted until 2026-01-26 12:52:56.509215736 +0000 UTC m=+1048.988525762 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/04037f03-d731-4b56-931b-6883929dc843-plugin-serving-cert") pod "nmstate-console-plugin-7754f76f8b-s8cmw" (UID: "04037f03-d731-4b56-931b-6883929dc843") : secret "plugin-serving-cert" not found Jan 26 12:52:56 crc kubenswrapper[4881]: I0126 12:52:56.010308 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/04037f03-d731-4b56-931b-6883929dc843-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-s8cmw\" (UID: \"04037f03-d731-4b56-931b-6883929dc843\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-s8cmw" Jan 26 12:52:56 crc kubenswrapper[4881]: I0126 12:52:56.026570 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lq8wh\" (UniqueName: \"kubernetes.io/projected/04037f03-d731-4b56-931b-6883929dc843-kube-api-access-lq8wh\") pod \"nmstate-console-plugin-7754f76f8b-s8cmw\" (UID: \"04037f03-d731-4b56-931b-6883929dc843\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-s8cmw" Jan 26 12:52:56 crc kubenswrapper[4881]: I0126 12:52:56.045496 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-4gxlq" Jan 26 12:52:56 crc kubenswrapper[4881]: I0126 12:52:56.051024 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-76db6c8975-vskdl"] Jan 26 12:52:56 crc kubenswrapper[4881]: I0126 12:52:56.051975 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-76db6c8975-vskdl" Jan 26 12:52:56 crc kubenswrapper[4881]: I0126 12:52:56.059591 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-mp8sz" Jan 26 12:52:56 crc kubenswrapper[4881]: I0126 12:52:56.064847 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-76db6c8975-vskdl"] Jan 26 12:52:56 crc kubenswrapper[4881]: I0126 12:52:56.081248 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-hv5dq" Jan 26 12:52:56 crc kubenswrapper[4881]: I0126 12:52:56.110885 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6ff035b6-80f8-44d2-b7d7-0ac1c98a83eb-console-oauth-config\") pod \"console-76db6c8975-vskdl\" (UID: \"6ff035b6-80f8-44d2-b7d7-0ac1c98a83eb\") " pod="openshift-console/console-76db6c8975-vskdl" Jan 26 12:52:56 crc kubenswrapper[4881]: I0126 12:52:56.110953 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6ff035b6-80f8-44d2-b7d7-0ac1c98a83eb-oauth-serving-cert\") pod \"console-76db6c8975-vskdl\" (UID: \"6ff035b6-80f8-44d2-b7d7-0ac1c98a83eb\") " pod="openshift-console/console-76db6c8975-vskdl" Jan 26 12:52:56 crc kubenswrapper[4881]: I0126 12:52:56.110975 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hh95w\" (UniqueName: \"kubernetes.io/projected/6ff035b6-80f8-44d2-b7d7-0ac1c98a83eb-kube-api-access-hh95w\") pod \"console-76db6c8975-vskdl\" (UID: \"6ff035b6-80f8-44d2-b7d7-0ac1c98a83eb\") " pod="openshift-console/console-76db6c8975-vskdl" Jan 26 12:52:56 crc kubenswrapper[4881]: I0126 12:52:56.111029 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6ff035b6-80f8-44d2-b7d7-0ac1c98a83eb-trusted-ca-bundle\") pod \"console-76db6c8975-vskdl\" (UID: \"6ff035b6-80f8-44d2-b7d7-0ac1c98a83eb\") " pod="openshift-console/console-76db6c8975-vskdl" Jan 26 12:52:56 crc kubenswrapper[4881]: I0126 12:52:56.111058 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6ff035b6-80f8-44d2-b7d7-0ac1c98a83eb-console-serving-cert\") pod \"console-76db6c8975-vskdl\" (UID: \"6ff035b6-80f8-44d2-b7d7-0ac1c98a83eb\") " pod="openshift-console/console-76db6c8975-vskdl" Jan 26 12:52:56 crc kubenswrapper[4881]: I0126 12:52:56.111080 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6ff035b6-80f8-44d2-b7d7-0ac1c98a83eb-console-config\") pod \"console-76db6c8975-vskdl\" (UID: \"6ff035b6-80f8-44d2-b7d7-0ac1c98a83eb\") " pod="openshift-console/console-76db6c8975-vskdl" Jan 26 12:52:56 crc kubenswrapper[4881]: I0126 12:52:56.111105 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6ff035b6-80f8-44d2-b7d7-0ac1c98a83eb-service-ca\") pod \"console-76db6c8975-vskdl\" (UID: \"6ff035b6-80f8-44d2-b7d7-0ac1c98a83eb\") " pod="openshift-console/console-76db6c8975-vskdl" Jan 26 12:52:56 crc kubenswrapper[4881]: W0126 12:52:56.119027 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod39762078_aa2c_44ae_8ed5_4ac22ebd62be.slice/crio-021f515d56e2cc0eca8a9e299dff75bd646990b5a0906b5af84f40feb43a035e WatchSource:0}: Error finding container 021f515d56e2cc0eca8a9e299dff75bd646990b5a0906b5af84f40feb43a035e: Status 404 returned error can't find the container with id 021f515d56e2cc0eca8a9e299dff75bd646990b5a0906b5af84f40feb43a035e Jan 26 12:52:56 crc kubenswrapper[4881]: I0126 
12:52:56.212253 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6ff035b6-80f8-44d2-b7d7-0ac1c98a83eb-console-oauth-config\") pod \"console-76db6c8975-vskdl\" (UID: \"6ff035b6-80f8-44d2-b7d7-0ac1c98a83eb\") " pod="openshift-console/console-76db6c8975-vskdl" Jan 26 12:52:56 crc kubenswrapper[4881]: I0126 12:52:56.212531 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6ff035b6-80f8-44d2-b7d7-0ac1c98a83eb-oauth-serving-cert\") pod \"console-76db6c8975-vskdl\" (UID: \"6ff035b6-80f8-44d2-b7d7-0ac1c98a83eb\") " pod="openshift-console/console-76db6c8975-vskdl" Jan 26 12:52:56 crc kubenswrapper[4881]: I0126 12:52:56.212551 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hh95w\" (UniqueName: \"kubernetes.io/projected/6ff035b6-80f8-44d2-b7d7-0ac1c98a83eb-kube-api-access-hh95w\") pod \"console-76db6c8975-vskdl\" (UID: \"6ff035b6-80f8-44d2-b7d7-0ac1c98a83eb\") " pod="openshift-console/console-76db6c8975-vskdl" Jan 26 12:52:56 crc kubenswrapper[4881]: I0126 12:52:56.212594 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6ff035b6-80f8-44d2-b7d7-0ac1c98a83eb-trusted-ca-bundle\") pod \"console-76db6c8975-vskdl\" (UID: \"6ff035b6-80f8-44d2-b7d7-0ac1c98a83eb\") " pod="openshift-console/console-76db6c8975-vskdl" Jan 26 12:52:56 crc kubenswrapper[4881]: I0126 12:52:56.212622 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6ff035b6-80f8-44d2-b7d7-0ac1c98a83eb-console-serving-cert\") pod \"console-76db6c8975-vskdl\" (UID: \"6ff035b6-80f8-44d2-b7d7-0ac1c98a83eb\") " pod="openshift-console/console-76db6c8975-vskdl" Jan 26 12:52:56 crc kubenswrapper[4881]: I0126 12:52:56.212642 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6ff035b6-80f8-44d2-b7d7-0ac1c98a83eb-console-config\") pod \"console-76db6c8975-vskdl\" (UID: \"6ff035b6-80f8-44d2-b7d7-0ac1c98a83eb\") " pod="openshift-console/console-76db6c8975-vskdl" Jan 26 12:52:56 crc kubenswrapper[4881]: I0126 12:52:56.212663 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6ff035b6-80f8-44d2-b7d7-0ac1c98a83eb-service-ca\") pod \"console-76db6c8975-vskdl\" (UID: \"6ff035b6-80f8-44d2-b7d7-0ac1c98a83eb\") " pod="openshift-console/console-76db6c8975-vskdl" Jan 26 12:52:56 crc kubenswrapper[4881]: I0126 12:52:56.213616 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6ff035b6-80f8-44d2-b7d7-0ac1c98a83eb-service-ca\") pod \"console-76db6c8975-vskdl\" (UID: \"6ff035b6-80f8-44d2-b7d7-0ac1c98a83eb\") " pod="openshift-console/console-76db6c8975-vskdl" Jan 26 12:52:56 crc kubenswrapper[4881]: I0126 12:52:56.213680 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6ff035b6-80f8-44d2-b7d7-0ac1c98a83eb-console-config\") pod \"console-76db6c8975-vskdl\" (UID: \"6ff035b6-80f8-44d2-b7d7-0ac1c98a83eb\") " pod="openshift-console/console-76db6c8975-vskdl" Jan 26 12:52:56 crc kubenswrapper[4881]: I0126 12:52:56.214312 4881 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6ff035b6-80f8-44d2-b7d7-0ac1c98a83eb-trusted-ca-bundle\") pod \"console-76db6c8975-vskdl\" (UID: \"6ff035b6-80f8-44d2-b7d7-0ac1c98a83eb\") " pod="openshift-console/console-76db6c8975-vskdl" Jan 26 12:52:56 crc kubenswrapper[4881]: I0126 12:52:56.215041 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6ff035b6-80f8-44d2-b7d7-0ac1c98a83eb-oauth-serving-cert\") pod \"console-76db6c8975-vskdl\" (UID: \"6ff035b6-80f8-44d2-b7d7-0ac1c98a83eb\") " pod="openshift-console/console-76db6c8975-vskdl" Jan 26 12:52:56 crc kubenswrapper[4881]: I0126 12:52:56.218057 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6ff035b6-80f8-44d2-b7d7-0ac1c98a83eb-console-serving-cert\") pod \"console-76db6c8975-vskdl\" (UID: \"6ff035b6-80f8-44d2-b7d7-0ac1c98a83eb\") " pod="openshift-console/console-76db6c8975-vskdl" Jan 26 12:52:56 crc kubenswrapper[4881]: I0126 12:52:56.218179 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6ff035b6-80f8-44d2-b7d7-0ac1c98a83eb-console-oauth-config\") pod \"console-76db6c8975-vskdl\" (UID: \"6ff035b6-80f8-44d2-b7d7-0ac1c98a83eb\") " pod="openshift-console/console-76db6c8975-vskdl" Jan 26 12:52:56 crc kubenswrapper[4881]: I0126 12:52:56.230307 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hh95w\" (UniqueName: \"kubernetes.io/projected/6ff035b6-80f8-44d2-b7d7-0ac1c98a83eb-kube-api-access-hh95w\") pod \"console-76db6c8975-vskdl\" (UID: \"6ff035b6-80f8-44d2-b7d7-0ac1c98a83eb\") " pod="openshift-console/console-76db6c8975-vskdl" Jan 26 12:52:56 crc kubenswrapper[4881]: I0126 12:52:56.264392 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-4gxlq"] Jan 26 12:52:56 crc kubenswrapper[4881]: W0126 12:52:56.266343 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2a843206_a177_4422_be4f_bf5ccbdef9f1.slice/crio-8f9ee792786748f6d7962065a366d54f61b1aa71006309f38ee42694395c82fc WatchSource:0}: Error finding container 8f9ee792786748f6d7962065a366d54f61b1aa71006309f38ee42694395c82fc: Status 404 returned error can't find the container with id 8f9ee792786748f6d7962065a366d54f61b1aa71006309f38ee42694395c82fc Jan 26 12:52:56 crc kubenswrapper[4881]: I0126 12:52:56.384963 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-76db6c8975-vskdl" Jan 26 12:52:56 crc kubenswrapper[4881]: I0126 12:52:56.517123 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/04037f03-d731-4b56-931b-6883929dc843-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-s8cmw\" (UID: \"04037f03-d731-4b56-931b-6883929dc843\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-s8cmw" Jan 26 12:52:56 crc kubenswrapper[4881]: I0126 12:52:56.522929 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/04037f03-d731-4b56-931b-6883929dc843-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-s8cmw\" (UID: \"04037f03-d731-4b56-931b-6883929dc843\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-s8cmw" Jan 26 12:52:56 crc kubenswrapper[4881]: I0126 12:52:56.532236 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-mp8sz"] Jan 26 12:52:56 crc kubenswrapper[4881]: W0126 12:52:56.651215 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6ff035b6_80f8_44d2_b7d7_0ac1c98a83eb.slice/crio-4585cd9232b4badc8c32e826359c0e69f93f2b300b75a24669bff608a7013286 WatchSource:0}: Error finding container 4585cd9232b4badc8c32e826359c0e69f93f2b300b75a24669bff608a7013286: Status 404 returned error can't find the container with id 4585cd9232b4badc8c32e826359c0e69f93f2b300b75a24669bff608a7013286 Jan 26 12:52:56 crc kubenswrapper[4881]: I0126 12:52:56.652253 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-76db6c8975-vskdl"] Jan 26 12:52:56 crc kubenswrapper[4881]: I0126 12:52:56.676872 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-hv5dq" event={"ID":"39762078-aa2c-44ae-8ed5-4ac22ebd62be","Type":"ContainerStarted","Data":"021f515d56e2cc0eca8a9e299dff75bd646990b5a0906b5af84f40feb43a035e"} Jan 26 12:52:56 crc kubenswrapper[4881]: I0126 12:52:56.678483 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-76db6c8975-vskdl" event={"ID":"6ff035b6-80f8-44d2-b7d7-0ac1c98a83eb","Type":"ContainerStarted","Data":"4585cd9232b4badc8c32e826359c0e69f93f2b300b75a24669bff608a7013286"} Jan 26 12:52:56 crc kubenswrapper[4881]: I0126 12:52:56.679996 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-mp8sz" event={"ID":"1869aca8-7499-4174-9154-588bbc7d5c24","Type":"ContainerStarted","Data":"ecfa57744ac19fabaa55ee65688d584fbed15e037fc13da5cc3384e0b2d8a4ce"} Jan 26 12:52:56 crc kubenswrapper[4881]: I0126 12:52:56.681143 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-4gxlq" event={"ID":"2a843206-a177-4422-be4f-bf5ccbdef9f1","Type":"ContainerStarted","Data":"8f9ee792786748f6d7962065a366d54f61b1aa71006309f38ee42694395c82fc"} Jan 26 12:52:56 crc kubenswrapper[4881]: I0126 12:52:56.773939 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-s8cmw" Jan 26 12:52:57 crc kubenswrapper[4881]: I0126 12:52:57.084015 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-s8cmw"] Jan 26 12:52:57 crc kubenswrapper[4881]: I0126 12:52:57.690966 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-76db6c8975-vskdl" event={"ID":"6ff035b6-80f8-44d2-b7d7-0ac1c98a83eb","Type":"ContainerStarted","Data":"55770d0fba353d3d0b6706ed3e4e382ff322e32dc571c90d2b970b5ae25aa671"} Jan 26 12:52:57 crc kubenswrapper[4881]: I0126 12:52:57.692383 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-s8cmw" event={"ID":"04037f03-d731-4b56-931b-6883929dc843","Type":"ContainerStarted","Data":"6f5d4275d78bae0b8c7534563e5457984fe857f38e480905f8668ec84baf16e6"} Jan 26 12:52:57 crc kubenswrapper[4881]: I0126 12:52:57.722551 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-76db6c8975-vskdl" podStartSLOduration=1.722440067 podStartE2EDuration="1.722440067s" podCreationTimestamp="2026-01-26 12:52:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:52:57.713627252 +0000 UTC m=+1050.192937278" watchObservedRunningTime="2026-01-26 12:52:57.722440067 +0000 UTC m=+1050.201750133" Jan 26 12:52:59 crc kubenswrapper[4881]: I0126 12:52:59.710149 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-4gxlq" event={"ID":"2a843206-a177-4422-be4f-bf5ccbdef9f1","Type":"ContainerStarted","Data":"a6920bc20af4dd264f2e1da3390dfa24887554fb52595cdddf9629faeeaefb36"} Jan 26 12:52:59 crc kubenswrapper[4881]: I0126 12:52:59.713818 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-hv5dq" event={"ID":"39762078-aa2c-44ae-8ed5-4ac22ebd62be","Type":"ContainerStarted","Data":"c0eee11f7add130f51f6de086798e8778a71e3a420ea0e12f4d70268c6084984"} Jan 26 12:52:59 crc kubenswrapper[4881]: I0126 12:52:59.713938 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-hv5dq" Jan 26 12:52:59 crc kubenswrapper[4881]: I0126 12:52:59.716296 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-mp8sz" event={"ID":"1869aca8-7499-4174-9154-588bbc7d5c24","Type":"ContainerStarted","Data":"bccac7b101865d017346c3be06592b21bd6a539f98c18f66d0fc066e4099c76e"} Jan 26 12:52:59 crc kubenswrapper[4881]: I0126 12:52:59.716496 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-mp8sz" Jan 26 12:52:59 crc kubenswrapper[4881]: I0126 12:52:59.750135 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-mp8sz" podStartSLOduration=2.505971108 podStartE2EDuration="4.750119177s" podCreationTimestamp="2026-01-26 12:52:55 +0000 UTC" firstStartedPulling="2026-01-26 12:52:56.549587874 +0000 UTC m=+1049.028897940" lastFinishedPulling="2026-01-26 12:52:58.793735973 +0000 UTC m=+1051.273046009" observedRunningTime="2026-01-26 12:52:59.748140179 +0000 UTC m=+1052.227450205" watchObservedRunningTime="2026-01-26 12:52:59.750119177 +0000 UTC m=+1052.229429203" Jan 26 12:52:59 crc kubenswrapper[4881]: I0126 12:52:59.751496 4881 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-hv5dq" podStartSLOduration=2.059065389 podStartE2EDuration="4.75147181s" podCreationTimestamp="2026-01-26 12:52:55 +0000 UTC" firstStartedPulling="2026-01-26 12:52:56.123718168 +0000 UTC m=+1048.603028194" lastFinishedPulling="2026-01-26 12:52:58.816124579 +0000 UTC m=+1051.295434615" observedRunningTime="2026-01-26 12:52:59.730741645 +0000 UTC m=+1052.210051701" watchObservedRunningTime="2026-01-26 12:52:59.75147181 +0000 UTC m=+1052.230781836" Jan 26 12:53:00 crc kubenswrapper[4881]: I0126 12:53:00.724114 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-s8cmw" event={"ID":"04037f03-d731-4b56-931b-6883929dc843","Type":"ContainerStarted","Data":"06c4f737f841a051b2e8b821896409939ad272c7ed40e93adbb9a014a8f3c5b5"} Jan 26 12:53:00 crc kubenswrapper[4881]: I0126 12:53:00.741030 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-s8cmw" podStartSLOduration=2.400812613 podStartE2EDuration="5.741008482s" podCreationTimestamp="2026-01-26 12:52:55 +0000 UTC" firstStartedPulling="2026-01-26 12:52:57.107165992 +0000 UTC m=+1049.586476028" lastFinishedPulling="2026-01-26 12:53:00.447361871 +0000 UTC m=+1052.926671897" observedRunningTime="2026-01-26 12:53:00.737930557 +0000 UTC m=+1053.217240613" watchObservedRunningTime="2026-01-26 12:53:00.741008482 +0000 UTC m=+1053.220318548" Jan 26 12:53:01 crc kubenswrapper[4881]: I0126 12:53:01.734471 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-4gxlq" event={"ID":"2a843206-a177-4422-be4f-bf5ccbdef9f1","Type":"ContainerStarted","Data":"af03ae272ec0aec06bc63da98a0f31b8c3bda7a5b43f6531756143ceda46139b"} Jan 26 12:53:01 crc kubenswrapper[4881]: I0126 12:53:01.762747 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-4gxlq" podStartSLOduration=1.669403627 podStartE2EDuration="6.76273271s" podCreationTimestamp="2026-01-26 12:52:55 +0000 UTC" firstStartedPulling="2026-01-26 12:52:56.268329175 +0000 UTC m=+1048.747639201" lastFinishedPulling="2026-01-26 12:53:01.361658258 +0000 UTC m=+1053.840968284" observedRunningTime="2026-01-26 12:53:01.760350851 +0000 UTC m=+1054.239660887" watchObservedRunningTime="2026-01-26 12:53:01.76273271 +0000 UTC m=+1054.242042736" Jan 26 12:53:06 crc kubenswrapper[4881]: I0126 12:53:06.131334 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-hv5dq" Jan 26 12:53:06 crc kubenswrapper[4881]: I0126 12:53:06.385323 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-76db6c8975-vskdl" Jan 26 12:53:06 crc kubenswrapper[4881]: I0126 12:53:06.385380 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-76db6c8975-vskdl" Jan 26 12:53:06 crc kubenswrapper[4881]: I0126 12:53:06.393888 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-76db6c8975-vskdl" Jan 26 12:53:06 crc kubenswrapper[4881]: I0126 12:53:06.781498 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-76db6c8975-vskdl" Jan 26 12:53:06 crc kubenswrapper[4881]: I0126 12:53:06.863448 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-console/console-f9d7485db-cwb4s"] Jan 26 12:53:16 crc kubenswrapper[4881]: I0126 12:53:16.069724 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-mp8sz" Jan 26 12:53:24 crc kubenswrapper[4881]: I0126 12:53:24.790235 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fwlbz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 12:53:24 crc kubenswrapper[4881]: I0126 12:53:24.791168 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 12:53:24 crc kubenswrapper[4881]: I0126 12:53:24.791252 4881 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" Jan 26 12:53:24 crc kubenswrapper[4881]: I0126 12:53:24.792268 4881 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4632219e9eec673e5473a279c2fc2f5646ce521828ec90d160527916fe6cbc92"} pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 12:53:24 crc kubenswrapper[4881]: I0126 12:53:24.792379 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" containerName="machine-config-daemon" containerID="cri-o://4632219e9eec673e5473a279c2fc2f5646ce521828ec90d160527916fe6cbc92" gracePeriod=600 Jan 26 12:53:25 crc kubenswrapper[4881]: I0126 12:53:25.914781 4881 generic.go:334] "Generic (PLEG): container finished" podID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" containerID="4632219e9eec673e5473a279c2fc2f5646ce521828ec90d160527916fe6cbc92" exitCode=0 Jan 26 12:53:25 crc kubenswrapper[4881]: I0126 12:53:25.914858 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" event={"ID":"ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19","Type":"ContainerDied","Data":"4632219e9eec673e5473a279c2fc2f5646ce521828ec90d160527916fe6cbc92"} Jan 26 12:53:25 crc kubenswrapper[4881]: I0126 12:53:25.915347 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" event={"ID":"ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19","Type":"ContainerStarted","Data":"765ce5b5bd8e274c5be74ed55cfedb20d522de35e2e3c381e48dd1db3daac475"} Jan 26 12:53:25 crc kubenswrapper[4881]: I0126 12:53:25.915389 4881 scope.go:117] "RemoveContainer" containerID="658714d5f6b987a3a780334d9ffc01082e8c5ae88ec8118662e97cc52e9126d6" Jan 26 12:53:31 crc kubenswrapper[4881]: I0126 12:53:31.928307 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-cwb4s" podUID="de07a342-44f0-45cc-a461-5fd5a70e34d9" containerName="console" containerID="cri-o://53d04472eabb286f5412f60fb4e75154e1dad697ac8892a4ab7a02bc83a6bc9e" gracePeriod=15 Jan 26 12:53:32 crc kubenswrapper[4881]: I0126 12:53:32.349664 
4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-cwb4s_de07a342-44f0-45cc-a461-5fd5a70e34d9/console/0.log" Jan 26 12:53:32 crc kubenswrapper[4881]: I0126 12:53:32.349930 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-cwb4s" Jan 26 12:53:32 crc kubenswrapper[4881]: I0126 12:53:32.533487 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/de07a342-44f0-45cc-a461-5fd5a70e34d9-console-config\") pod \"de07a342-44f0-45cc-a461-5fd5a70e34d9\" (UID: \"de07a342-44f0-45cc-a461-5fd5a70e34d9\") " Jan 26 12:53:32 crc kubenswrapper[4881]: I0126 12:53:32.533568 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/de07a342-44f0-45cc-a461-5fd5a70e34d9-trusted-ca-bundle\") pod \"de07a342-44f0-45cc-a461-5fd5a70e34d9\" (UID: \"de07a342-44f0-45cc-a461-5fd5a70e34d9\") " Jan 26 12:53:32 crc kubenswrapper[4881]: I0126 12:53:32.533649 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/de07a342-44f0-45cc-a461-5fd5a70e34d9-console-serving-cert\") pod \"de07a342-44f0-45cc-a461-5fd5a70e34d9\" (UID: \"de07a342-44f0-45cc-a461-5fd5a70e34d9\") " Jan 26 12:53:32 crc kubenswrapper[4881]: I0126 12:53:32.533698 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xl7f\" (UniqueName: \"kubernetes.io/projected/de07a342-44f0-45cc-a461-5fd5a70e34d9-kube-api-access-9xl7f\") pod \"de07a342-44f0-45cc-a461-5fd5a70e34d9\" (UID: \"de07a342-44f0-45cc-a461-5fd5a70e34d9\") " Jan 26 12:53:32 crc kubenswrapper[4881]: I0126 12:53:32.533726 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/de07a342-44f0-45cc-a461-5fd5a70e34d9-console-oauth-config\") pod \"de07a342-44f0-45cc-a461-5fd5a70e34d9\" (UID: \"de07a342-44f0-45cc-a461-5fd5a70e34d9\") " Jan 26 12:53:32 crc kubenswrapper[4881]: I0126 12:53:32.533755 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/de07a342-44f0-45cc-a461-5fd5a70e34d9-oauth-serving-cert\") pod \"de07a342-44f0-45cc-a461-5fd5a70e34d9\" (UID: \"de07a342-44f0-45cc-a461-5fd5a70e34d9\") " Jan 26 12:53:32 crc kubenswrapper[4881]: I0126 12:53:32.533776 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/de07a342-44f0-45cc-a461-5fd5a70e34d9-service-ca\") pod \"de07a342-44f0-45cc-a461-5fd5a70e34d9\" (UID: \"de07a342-44f0-45cc-a461-5fd5a70e34d9\") " Jan 26 12:53:32 crc kubenswrapper[4881]: I0126 12:53:32.535140 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de07a342-44f0-45cc-a461-5fd5a70e34d9-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "de07a342-44f0-45cc-a461-5fd5a70e34d9" (UID: "de07a342-44f0-45cc-a461-5fd5a70e34d9"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:53:32 crc kubenswrapper[4881]: I0126 12:53:32.535242 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de07a342-44f0-45cc-a461-5fd5a70e34d9-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "de07a342-44f0-45cc-a461-5fd5a70e34d9" (UID: "de07a342-44f0-45cc-a461-5fd5a70e34d9"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:53:32 crc kubenswrapper[4881]: I0126 12:53:32.535373 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de07a342-44f0-45cc-a461-5fd5a70e34d9-console-config" (OuterVolumeSpecName: "console-config") pod "de07a342-44f0-45cc-a461-5fd5a70e34d9" (UID: "de07a342-44f0-45cc-a461-5fd5a70e34d9"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:53:32 crc kubenswrapper[4881]: I0126 12:53:32.535690 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de07a342-44f0-45cc-a461-5fd5a70e34d9-service-ca" (OuterVolumeSpecName: "service-ca") pod "de07a342-44f0-45cc-a461-5fd5a70e34d9" (UID: "de07a342-44f0-45cc-a461-5fd5a70e34d9"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:53:32 crc kubenswrapper[4881]: I0126 12:53:32.536045 4881 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/de07a342-44f0-45cc-a461-5fd5a70e34d9-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 12:53:32 crc kubenswrapper[4881]: I0126 12:53:32.536075 4881 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/de07a342-44f0-45cc-a461-5fd5a70e34d9-service-ca\") on node \"crc\" DevicePath \"\"" Jan 26 12:53:32 crc kubenswrapper[4881]: I0126 12:53:32.536088 4881 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/de07a342-44f0-45cc-a461-5fd5a70e34d9-console-config\") on node \"crc\" DevicePath \"\"" Jan 26 12:53:32 crc kubenswrapper[4881]: I0126 12:53:32.536099 4881 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/de07a342-44f0-45cc-a461-5fd5a70e34d9-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 12:53:32 crc kubenswrapper[4881]: I0126 12:53:32.539833 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de07a342-44f0-45cc-a461-5fd5a70e34d9-kube-api-access-9xl7f" (OuterVolumeSpecName: "kube-api-access-9xl7f") pod "de07a342-44f0-45cc-a461-5fd5a70e34d9" (UID: "de07a342-44f0-45cc-a461-5fd5a70e34d9"). InnerVolumeSpecName "kube-api-access-9xl7f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:53:32 crc kubenswrapper[4881]: I0126 12:53:32.539891 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de07a342-44f0-45cc-a461-5fd5a70e34d9-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "de07a342-44f0-45cc-a461-5fd5a70e34d9" (UID: "de07a342-44f0-45cc-a461-5fd5a70e34d9"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:53:32 crc kubenswrapper[4881]: I0126 12:53:32.540050 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de07a342-44f0-45cc-a461-5fd5a70e34d9-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "de07a342-44f0-45cc-a461-5fd5a70e34d9" (UID: "de07a342-44f0-45cc-a461-5fd5a70e34d9"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:53:32 crc kubenswrapper[4881]: I0126 12:53:32.636954 4881 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/de07a342-44f0-45cc-a461-5fd5a70e34d9-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 12:53:32 crc kubenswrapper[4881]: I0126 12:53:32.636998 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xl7f\" (UniqueName: \"kubernetes.io/projected/de07a342-44f0-45cc-a461-5fd5a70e34d9-kube-api-access-9xl7f\") on node \"crc\" DevicePath \"\"" Jan 26 12:53:32 crc kubenswrapper[4881]: I0126 12:53:32.637010 4881 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/de07a342-44f0-45cc-a461-5fd5a70e34d9-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 26 12:53:32 crc kubenswrapper[4881]: I0126 12:53:32.976147 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-cwb4s_de07a342-44f0-45cc-a461-5fd5a70e34d9/console/0.log" Jan 26 12:53:32 crc kubenswrapper[4881]: I0126 12:53:32.976195 4881 generic.go:334] "Generic (PLEG): container finished" podID="de07a342-44f0-45cc-a461-5fd5a70e34d9" containerID="53d04472eabb286f5412f60fb4e75154e1dad697ac8892a4ab7a02bc83a6bc9e" exitCode=2 Jan 26 12:53:32 crc kubenswrapper[4881]: I0126 12:53:32.976224 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-cwb4s" event={"ID":"de07a342-44f0-45cc-a461-5fd5a70e34d9","Type":"ContainerDied","Data":"53d04472eabb286f5412f60fb4e75154e1dad697ac8892a4ab7a02bc83a6bc9e"} Jan 26 12:53:32 crc kubenswrapper[4881]: I0126 12:53:32.976251 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-cwb4s" event={"ID":"de07a342-44f0-45cc-a461-5fd5a70e34d9","Type":"ContainerDied","Data":"43a5c54a12bf8ec7474414eff104a73582e9ce3c94ecaa73d5e943e49b4f9a27"} Jan 26 12:53:32 crc kubenswrapper[4881]: I0126 12:53:32.976285 4881 scope.go:117] "RemoveContainer" containerID="53d04472eabb286f5412f60fb4e75154e1dad697ac8892a4ab7a02bc83a6bc9e" Jan 26 12:53:32 crc kubenswrapper[4881]: I0126 12:53:32.976357 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-cwb4s" Jan 26 12:53:33 crc kubenswrapper[4881]: I0126 12:53:33.022080 4881 scope.go:117] "RemoveContainer" containerID="53d04472eabb286f5412f60fb4e75154e1dad697ac8892a4ab7a02bc83a6bc9e" Jan 26 12:53:33 crc kubenswrapper[4881]: E0126 12:53:33.023105 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"53d04472eabb286f5412f60fb4e75154e1dad697ac8892a4ab7a02bc83a6bc9e\": container with ID starting with 53d04472eabb286f5412f60fb4e75154e1dad697ac8892a4ab7a02bc83a6bc9e not found: ID does not exist" containerID="53d04472eabb286f5412f60fb4e75154e1dad697ac8892a4ab7a02bc83a6bc9e" Jan 26 12:53:33 crc kubenswrapper[4881]: I0126 12:53:33.023160 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53d04472eabb286f5412f60fb4e75154e1dad697ac8892a4ab7a02bc83a6bc9e"} err="failed to get container status \"53d04472eabb286f5412f60fb4e75154e1dad697ac8892a4ab7a02bc83a6bc9e\": rpc error: code = NotFound desc = could not find container \"53d04472eabb286f5412f60fb4e75154e1dad697ac8892a4ab7a02bc83a6bc9e\": container with ID starting with 53d04472eabb286f5412f60fb4e75154e1dad697ac8892a4ab7a02bc83a6bc9e not found: ID does not exist" Jan 26 12:53:33 crc kubenswrapper[4881]: I0126 12:53:33.033655 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-cwb4s"] Jan 26 12:53:33 crc kubenswrapper[4881]: I0126 12:53:33.033979 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-cwb4s"] Jan 26 12:53:34 crc kubenswrapper[4881]: I0126 12:53:34.025369 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcvhv5t"] Jan 26 12:53:34 crc kubenswrapper[4881]: E0126 12:53:34.025862 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de07a342-44f0-45cc-a461-5fd5a70e34d9" containerName="console" Jan 26 12:53:34 crc kubenswrapper[4881]: I0126 12:53:34.025875 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="de07a342-44f0-45cc-a461-5fd5a70e34d9" containerName="console" Jan 26 12:53:34 crc kubenswrapper[4881]: I0126 12:53:34.025976 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="de07a342-44f0-45cc-a461-5fd5a70e34d9" containerName="console" Jan 26 12:53:34 crc kubenswrapper[4881]: I0126 12:53:34.026731 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcvhv5t" Jan 26 12:53:34 crc kubenswrapper[4881]: I0126 12:53:34.028342 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 26 12:53:34 crc kubenswrapper[4881]: I0126 12:53:34.037374 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcvhv5t"] Jan 26 12:53:34 crc kubenswrapper[4881]: I0126 12:53:34.085173 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8l4j\" (UniqueName: \"kubernetes.io/projected/d1afd28b-d9f1-4ee3-aa69-85a1c759161a-kube-api-access-x8l4j\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcvhv5t\" (UID: \"d1afd28b-d9f1-4ee3-aa69-85a1c759161a\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcvhv5t" Jan 26 12:53:34 crc kubenswrapper[4881]: I0126 12:53:34.085275 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d1afd28b-d9f1-4ee3-aa69-85a1c759161a-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcvhv5t\" (UID: \"d1afd28b-d9f1-4ee3-aa69-85a1c759161a\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcvhv5t" Jan 26 12:53:34 crc kubenswrapper[4881]: I0126 12:53:34.085301 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d1afd28b-d9f1-4ee3-aa69-85a1c759161a-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcvhv5t\" (UID: \"d1afd28b-d9f1-4ee3-aa69-85a1c759161a\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcvhv5t" Jan 26 12:53:34 crc kubenswrapper[4881]: I0126 12:53:34.093928 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de07a342-44f0-45cc-a461-5fd5a70e34d9" path="/var/lib/kubelet/pods/de07a342-44f0-45cc-a461-5fd5a70e34d9/volumes" Jan 26 12:53:34 crc kubenswrapper[4881]: I0126 12:53:34.187151 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d1afd28b-d9f1-4ee3-aa69-85a1c759161a-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcvhv5t\" (UID: \"d1afd28b-d9f1-4ee3-aa69-85a1c759161a\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcvhv5t" Jan 26 12:53:34 crc kubenswrapper[4881]: I0126 12:53:34.187643 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d1afd28b-d9f1-4ee3-aa69-85a1c759161a-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcvhv5t\" (UID: \"d1afd28b-d9f1-4ee3-aa69-85a1c759161a\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcvhv5t" Jan 26 12:53:34 crc kubenswrapper[4881]: I0126 12:53:34.187694 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d1afd28b-d9f1-4ee3-aa69-85a1c759161a-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcvhv5t\" (UID: \"d1afd28b-d9f1-4ee3-aa69-85a1c759161a\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcvhv5t" 
Jan 26 12:53:34 crc kubenswrapper[4881]: I0126 12:53:34.188373 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8l4j\" (UniqueName: \"kubernetes.io/projected/d1afd28b-d9f1-4ee3-aa69-85a1c759161a-kube-api-access-x8l4j\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcvhv5t\" (UID: \"d1afd28b-d9f1-4ee3-aa69-85a1c759161a\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcvhv5t" Jan 26 12:53:34 crc kubenswrapper[4881]: I0126 12:53:34.188492 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d1afd28b-d9f1-4ee3-aa69-85a1c759161a-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcvhv5t\" (UID: \"d1afd28b-d9f1-4ee3-aa69-85a1c759161a\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcvhv5t" Jan 26 12:53:34 crc kubenswrapper[4881]: I0126 12:53:34.221157 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8l4j\" (UniqueName: \"kubernetes.io/projected/d1afd28b-d9f1-4ee3-aa69-85a1c759161a-kube-api-access-x8l4j\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcvhv5t\" (UID: \"d1afd28b-d9f1-4ee3-aa69-85a1c759161a\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcvhv5t" Jan 26 12:53:34 crc kubenswrapper[4881]: I0126 12:53:34.389901 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcvhv5t" Jan 26 12:53:34 crc kubenswrapper[4881]: I0126 12:53:34.869376 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcvhv5t"] Jan 26 12:53:35 crc kubenswrapper[4881]: I0126 12:53:35.004027 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcvhv5t" event={"ID":"d1afd28b-d9f1-4ee3-aa69-85a1c759161a","Type":"ContainerStarted","Data":"b04bd1fdcec6bf7fe45ea9c808ff1b86f98df3ea54bc620deb22c5caf4cba8ed"} Jan 26 12:53:36 crc kubenswrapper[4881]: I0126 12:53:36.014217 4881 generic.go:334] "Generic (PLEG): container finished" podID="d1afd28b-d9f1-4ee3-aa69-85a1c759161a" containerID="ae1df46fb71b2142052576fa70679f821341ea86714ac04154c2299f43df0c19" exitCode=0 Jan 26 12:53:36 crc kubenswrapper[4881]: I0126 12:53:36.014344 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcvhv5t" event={"ID":"d1afd28b-d9f1-4ee3-aa69-85a1c759161a","Type":"ContainerDied","Data":"ae1df46fb71b2142052576fa70679f821341ea86714ac04154c2299f43df0c19"} Jan 26 12:53:38 crc kubenswrapper[4881]: I0126 12:53:38.030260 4881 generic.go:334] "Generic (PLEG): container finished" podID="d1afd28b-d9f1-4ee3-aa69-85a1c759161a" containerID="08e0f80658e9f143b7eff2eb15cae432b252140e2f6ffa608250745e9c7efecc" exitCode=0 Jan 26 12:53:38 crc kubenswrapper[4881]: I0126 12:53:38.030331 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcvhv5t" event={"ID":"d1afd28b-d9f1-4ee3-aa69-85a1c759161a","Type":"ContainerDied","Data":"08e0f80658e9f143b7eff2eb15cae432b252140e2f6ffa608250745e9c7efecc"} Jan 26 12:53:39 crc kubenswrapper[4881]: I0126 12:53:39.044434 4881 generic.go:334] "Generic (PLEG): container 
finished" podID="d1afd28b-d9f1-4ee3-aa69-85a1c759161a" containerID="793878f7891d773831c9e06a512c1efd681e4ce0eb679c574b4a887eeb1f23ec" exitCode=0 Jan 26 12:53:39 crc kubenswrapper[4881]: I0126 12:53:39.044707 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcvhv5t" event={"ID":"d1afd28b-d9f1-4ee3-aa69-85a1c759161a","Type":"ContainerDied","Data":"793878f7891d773831c9e06a512c1efd681e4ce0eb679c574b4a887eeb1f23ec"} Jan 26 12:53:40 crc kubenswrapper[4881]: I0126 12:53:40.351333 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcvhv5t" Jan 26 12:53:40 crc kubenswrapper[4881]: I0126 12:53:40.379254 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d1afd28b-d9f1-4ee3-aa69-85a1c759161a-util\") pod \"d1afd28b-d9f1-4ee3-aa69-85a1c759161a\" (UID: \"d1afd28b-d9f1-4ee3-aa69-85a1c759161a\") " Jan 26 12:53:40 crc kubenswrapper[4881]: I0126 12:53:40.379314 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x8l4j\" (UniqueName: \"kubernetes.io/projected/d1afd28b-d9f1-4ee3-aa69-85a1c759161a-kube-api-access-x8l4j\") pod \"d1afd28b-d9f1-4ee3-aa69-85a1c759161a\" (UID: \"d1afd28b-d9f1-4ee3-aa69-85a1c759161a\") " Jan 26 12:53:40 crc kubenswrapper[4881]: I0126 12:53:40.379416 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d1afd28b-d9f1-4ee3-aa69-85a1c759161a-bundle\") pod \"d1afd28b-d9f1-4ee3-aa69-85a1c759161a\" (UID: \"d1afd28b-d9f1-4ee3-aa69-85a1c759161a\") " Jan 26 12:53:40 crc kubenswrapper[4881]: I0126 12:53:40.381010 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d1afd28b-d9f1-4ee3-aa69-85a1c759161a-bundle" (OuterVolumeSpecName: "bundle") pod "d1afd28b-d9f1-4ee3-aa69-85a1c759161a" (UID: "d1afd28b-d9f1-4ee3-aa69-85a1c759161a"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 12:53:40 crc kubenswrapper[4881]: I0126 12:53:40.387027 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1afd28b-d9f1-4ee3-aa69-85a1c759161a-kube-api-access-x8l4j" (OuterVolumeSpecName: "kube-api-access-x8l4j") pod "d1afd28b-d9f1-4ee3-aa69-85a1c759161a" (UID: "d1afd28b-d9f1-4ee3-aa69-85a1c759161a"). InnerVolumeSpecName "kube-api-access-x8l4j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:53:40 crc kubenswrapper[4881]: I0126 12:53:40.407593 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d1afd28b-d9f1-4ee3-aa69-85a1c759161a-util" (OuterVolumeSpecName: "util") pod "d1afd28b-d9f1-4ee3-aa69-85a1c759161a" (UID: "d1afd28b-d9f1-4ee3-aa69-85a1c759161a"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 12:53:40 crc kubenswrapper[4881]: I0126 12:53:40.480873 4881 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d1afd28b-d9f1-4ee3-aa69-85a1c759161a-util\") on node \"crc\" DevicePath \"\"" Jan 26 12:53:40 crc kubenswrapper[4881]: I0126 12:53:40.480924 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x8l4j\" (UniqueName: \"kubernetes.io/projected/d1afd28b-d9f1-4ee3-aa69-85a1c759161a-kube-api-access-x8l4j\") on node \"crc\" DevicePath \"\"" Jan 26 12:53:40 crc kubenswrapper[4881]: I0126 12:53:40.480947 4881 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d1afd28b-d9f1-4ee3-aa69-85a1c759161a-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 12:53:41 crc kubenswrapper[4881]: I0126 12:53:41.063081 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcvhv5t" event={"ID":"d1afd28b-d9f1-4ee3-aa69-85a1c759161a","Type":"ContainerDied","Data":"b04bd1fdcec6bf7fe45ea9c808ff1b86f98df3ea54bc620deb22c5caf4cba8ed"} Jan 26 12:53:41 crc kubenswrapper[4881]: I0126 12:53:41.063139 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b04bd1fdcec6bf7fe45ea9c808ff1b86f98df3ea54bc620deb22c5caf4cba8ed" Jan 26 12:53:41 crc kubenswrapper[4881]: I0126 12:53:41.063238 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcvhv5t" Jan 26 12:53:53 crc kubenswrapper[4881]: I0126 12:53:53.327581 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-79649b4ffb-kpsrh"] Jan 26 12:53:53 crc kubenswrapper[4881]: E0126 12:53:53.328189 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1afd28b-d9f1-4ee3-aa69-85a1c759161a" containerName="extract" Jan 26 12:53:53 crc kubenswrapper[4881]: I0126 12:53:53.328200 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1afd28b-d9f1-4ee3-aa69-85a1c759161a" containerName="extract" Jan 26 12:53:53 crc kubenswrapper[4881]: E0126 12:53:53.328210 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1afd28b-d9f1-4ee3-aa69-85a1c759161a" containerName="util" Jan 26 12:53:53 crc kubenswrapper[4881]: I0126 12:53:53.328215 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1afd28b-d9f1-4ee3-aa69-85a1c759161a" containerName="util" Jan 26 12:53:53 crc kubenswrapper[4881]: E0126 12:53:53.328226 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1afd28b-d9f1-4ee3-aa69-85a1c759161a" containerName="pull" Jan 26 12:53:53 crc kubenswrapper[4881]: I0126 12:53:53.328232 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1afd28b-d9f1-4ee3-aa69-85a1c759161a" containerName="pull" Jan 26 12:53:53 crc kubenswrapper[4881]: I0126 12:53:53.328325 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1afd28b-d9f1-4ee3-aa69-85a1c759161a" containerName="extract" Jan 26 12:53:53 crc kubenswrapper[4881]: I0126 12:53:53.328722 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-79649b4ffb-kpsrh" Jan 26 12:53:53 crc kubenswrapper[4881]: I0126 12:53:53.330964 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 26 12:53:53 crc kubenswrapper[4881]: I0126 12:53:53.331077 4881 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-x7bzp" Jan 26 12:53:53 crc kubenswrapper[4881]: I0126 12:53:53.331225 4881 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Jan 26 12:53:53 crc kubenswrapper[4881]: I0126 12:53:53.331746 4881 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Jan 26 12:53:53 crc kubenswrapper[4881]: I0126 12:53:53.332594 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 26 12:53:53 crc kubenswrapper[4881]: I0126 12:53:53.348301 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-79649b4ffb-kpsrh"] Jan 26 12:53:53 crc kubenswrapper[4881]: I0126 12:53:53.362883 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2483eb0f-5e2f-4df8-8385-4095077aa351-apiservice-cert\") pod \"metallb-operator-controller-manager-79649b4ffb-kpsrh\" (UID: \"2483eb0f-5e2f-4df8-8385-4095077aa351\") " pod="metallb-system/metallb-operator-controller-manager-79649b4ffb-kpsrh" Jan 26 12:53:53 crc kubenswrapper[4881]: I0126 12:53:53.362939 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r84jz\" (UniqueName: \"kubernetes.io/projected/2483eb0f-5e2f-4df8-8385-4095077aa351-kube-api-access-r84jz\") pod \"metallb-operator-controller-manager-79649b4ffb-kpsrh\" (UID: \"2483eb0f-5e2f-4df8-8385-4095077aa351\") " pod="metallb-system/metallb-operator-controller-manager-79649b4ffb-kpsrh" Jan 26 12:53:53 crc kubenswrapper[4881]: I0126 12:53:53.362969 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2483eb0f-5e2f-4df8-8385-4095077aa351-webhook-cert\") pod \"metallb-operator-controller-manager-79649b4ffb-kpsrh\" (UID: \"2483eb0f-5e2f-4df8-8385-4095077aa351\") " pod="metallb-system/metallb-operator-controller-manager-79649b4ffb-kpsrh" Jan 26 12:53:53 crc kubenswrapper[4881]: I0126 12:53:53.463741 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2483eb0f-5e2f-4df8-8385-4095077aa351-apiservice-cert\") pod \"metallb-operator-controller-manager-79649b4ffb-kpsrh\" (UID: \"2483eb0f-5e2f-4df8-8385-4095077aa351\") " pod="metallb-system/metallb-operator-controller-manager-79649b4ffb-kpsrh" Jan 26 12:53:53 crc kubenswrapper[4881]: I0126 12:53:53.463814 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r84jz\" (UniqueName: \"kubernetes.io/projected/2483eb0f-5e2f-4df8-8385-4095077aa351-kube-api-access-r84jz\") pod \"metallb-operator-controller-manager-79649b4ffb-kpsrh\" (UID: \"2483eb0f-5e2f-4df8-8385-4095077aa351\") " pod="metallb-system/metallb-operator-controller-manager-79649b4ffb-kpsrh" Jan 26 12:53:53 crc kubenswrapper[4881]: I0126 12:53:53.463859 
4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2483eb0f-5e2f-4df8-8385-4095077aa351-webhook-cert\") pod \"metallb-operator-controller-manager-79649b4ffb-kpsrh\" (UID: \"2483eb0f-5e2f-4df8-8385-4095077aa351\") " pod="metallb-system/metallb-operator-controller-manager-79649b4ffb-kpsrh" Jan 26 12:53:53 crc kubenswrapper[4881]: I0126 12:53:53.468906 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2483eb0f-5e2f-4df8-8385-4095077aa351-webhook-cert\") pod \"metallb-operator-controller-manager-79649b4ffb-kpsrh\" (UID: \"2483eb0f-5e2f-4df8-8385-4095077aa351\") " pod="metallb-system/metallb-operator-controller-manager-79649b4ffb-kpsrh" Jan 26 12:53:53 crc kubenswrapper[4881]: I0126 12:53:53.469222 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2483eb0f-5e2f-4df8-8385-4095077aa351-apiservice-cert\") pod \"metallb-operator-controller-manager-79649b4ffb-kpsrh\" (UID: \"2483eb0f-5e2f-4df8-8385-4095077aa351\") " pod="metallb-system/metallb-operator-controller-manager-79649b4ffb-kpsrh" Jan 26 12:53:53 crc kubenswrapper[4881]: I0126 12:53:53.484424 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r84jz\" (UniqueName: \"kubernetes.io/projected/2483eb0f-5e2f-4df8-8385-4095077aa351-kube-api-access-r84jz\") pod \"metallb-operator-controller-manager-79649b4ffb-kpsrh\" (UID: \"2483eb0f-5e2f-4df8-8385-4095077aa351\") " pod="metallb-system/metallb-operator-controller-manager-79649b4ffb-kpsrh" Jan 26 12:53:53 crc kubenswrapper[4881]: I0126 12:53:53.566917 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-64dc64df49-qlh66"] Jan 26 12:53:53 crc kubenswrapper[4881]: I0126 12:53:53.567602 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-64dc64df49-qlh66" Jan 26 12:53:53 crc kubenswrapper[4881]: I0126 12:53:53.569668 4881 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-nnl52" Jan 26 12:53:53 crc kubenswrapper[4881]: I0126 12:53:53.569748 4881 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 26 12:53:53 crc kubenswrapper[4881]: I0126 12:53:53.569761 4881 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 26 12:53:53 crc kubenswrapper[4881]: I0126 12:53:53.581166 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-64dc64df49-qlh66"] Jan 26 12:53:53 crc kubenswrapper[4881]: I0126 12:53:53.642037 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-79649b4ffb-kpsrh" Jan 26 12:53:53 crc kubenswrapper[4881]: I0126 12:53:53.766201 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1a51e914-e793-4f03-b58a-65628089e71a-apiservice-cert\") pod \"metallb-operator-webhook-server-64dc64df49-qlh66\" (UID: \"1a51e914-e793-4f03-b58a-65628089e71a\") " pod="metallb-system/metallb-operator-webhook-server-64dc64df49-qlh66" Jan 26 12:53:53 crc kubenswrapper[4881]: I0126 12:53:53.766597 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n74vr\" (UniqueName: \"kubernetes.io/projected/1a51e914-e793-4f03-b58a-65628089e71a-kube-api-access-n74vr\") pod \"metallb-operator-webhook-server-64dc64df49-qlh66\" (UID: \"1a51e914-e793-4f03-b58a-65628089e71a\") " pod="metallb-system/metallb-operator-webhook-server-64dc64df49-qlh66" Jan 26 12:53:53 crc kubenswrapper[4881]: I0126 12:53:53.766640 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1a51e914-e793-4f03-b58a-65628089e71a-webhook-cert\") pod \"metallb-operator-webhook-server-64dc64df49-qlh66\" (UID: \"1a51e914-e793-4f03-b58a-65628089e71a\") " pod="metallb-system/metallb-operator-webhook-server-64dc64df49-qlh66" Jan 26 12:53:53 crc kubenswrapper[4881]: I0126 12:53:53.868479 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n74vr\" (UniqueName: \"kubernetes.io/projected/1a51e914-e793-4f03-b58a-65628089e71a-kube-api-access-n74vr\") pod \"metallb-operator-webhook-server-64dc64df49-qlh66\" (UID: \"1a51e914-e793-4f03-b58a-65628089e71a\") " pod="metallb-system/metallb-operator-webhook-server-64dc64df49-qlh66" Jan 26 12:53:53 crc kubenswrapper[4881]: I0126 12:53:53.868579 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1a51e914-e793-4f03-b58a-65628089e71a-webhook-cert\") pod \"metallb-operator-webhook-server-64dc64df49-qlh66\" (UID: \"1a51e914-e793-4f03-b58a-65628089e71a\") " pod="metallb-system/metallb-operator-webhook-server-64dc64df49-qlh66" Jan 26 12:53:53 crc kubenswrapper[4881]: I0126 12:53:53.868637 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1a51e914-e793-4f03-b58a-65628089e71a-apiservice-cert\") pod \"metallb-operator-webhook-server-64dc64df49-qlh66\" (UID: \"1a51e914-e793-4f03-b58a-65628089e71a\") " pod="metallb-system/metallb-operator-webhook-server-64dc64df49-qlh66" Jan 26 12:53:53 crc kubenswrapper[4881]: I0126 12:53:53.873599 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1a51e914-e793-4f03-b58a-65628089e71a-apiservice-cert\") pod \"metallb-operator-webhook-server-64dc64df49-qlh66\" (UID: \"1a51e914-e793-4f03-b58a-65628089e71a\") " pod="metallb-system/metallb-operator-webhook-server-64dc64df49-qlh66" Jan 26 12:53:53 crc kubenswrapper[4881]: I0126 12:53:53.873634 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1a51e914-e793-4f03-b58a-65628089e71a-webhook-cert\") pod \"metallb-operator-webhook-server-64dc64df49-qlh66\" (UID: \"1a51e914-e793-4f03-b58a-65628089e71a\") " 
pod="metallb-system/metallb-operator-webhook-server-64dc64df49-qlh66" Jan 26 12:53:53 crc kubenswrapper[4881]: I0126 12:53:53.882779 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n74vr\" (UniqueName: \"kubernetes.io/projected/1a51e914-e793-4f03-b58a-65628089e71a-kube-api-access-n74vr\") pod \"metallb-operator-webhook-server-64dc64df49-qlh66\" (UID: \"1a51e914-e793-4f03-b58a-65628089e71a\") " pod="metallb-system/metallb-operator-webhook-server-64dc64df49-qlh66" Jan 26 12:53:54 crc kubenswrapper[4881]: I0126 12:53:54.053552 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-79649b4ffb-kpsrh"] Jan 26 12:53:54 crc kubenswrapper[4881]: I0126 12:53:54.160774 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-79649b4ffb-kpsrh" event={"ID":"2483eb0f-5e2f-4df8-8385-4095077aa351","Type":"ContainerStarted","Data":"8b93d26da80979067b4cc301d701aaf5b111c9e387e90c96820ab59c99ca9237"} Jan 26 12:53:54 crc kubenswrapper[4881]: I0126 12:53:54.181552 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-64dc64df49-qlh66" Jan 26 12:53:54 crc kubenswrapper[4881]: I0126 12:53:54.434417 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-64dc64df49-qlh66"] Jan 26 12:53:54 crc kubenswrapper[4881]: W0126 12:53:54.443903 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1a51e914_e793_4f03_b58a_65628089e71a.slice/crio-23dec6b006bdd2cbae9ac890815643dc7db82549cfd9c87acf3ff44398f5bccc WatchSource:0}: Error finding container 23dec6b006bdd2cbae9ac890815643dc7db82549cfd9c87acf3ff44398f5bccc: Status 404 returned error can't find the container with id 23dec6b006bdd2cbae9ac890815643dc7db82549cfd9c87acf3ff44398f5bccc Jan 26 12:53:55 crc kubenswrapper[4881]: I0126 12:53:55.169825 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-64dc64df49-qlh66" event={"ID":"1a51e914-e793-4f03-b58a-65628089e71a","Type":"ContainerStarted","Data":"23dec6b006bdd2cbae9ac890815643dc7db82549cfd9c87acf3ff44398f5bccc"} Jan 26 12:53:57 crc kubenswrapper[4881]: I0126 12:53:57.198375 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-79649b4ffb-kpsrh" event={"ID":"2483eb0f-5e2f-4df8-8385-4095077aa351","Type":"ContainerStarted","Data":"39454acb088b6d50cb8d63b6a598bd777389f6e61cf289f31844ad72d6a08f18"} Jan 26 12:53:57 crc kubenswrapper[4881]: I0126 12:53:57.198915 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-79649b4ffb-kpsrh" Jan 26 12:53:57 crc kubenswrapper[4881]: I0126 12:53:57.232754 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-79649b4ffb-kpsrh" podStartSLOduration=1.3547342549999999 podStartE2EDuration="4.232717571s" podCreationTimestamp="2026-01-26 12:53:53 +0000 UTC" firstStartedPulling="2026-01-26 12:53:54.058041429 +0000 UTC m=+1106.537351495" lastFinishedPulling="2026-01-26 12:53:56.936024785 +0000 UTC m=+1109.415334811" observedRunningTime="2026-01-26 12:53:57.227907894 +0000 UTC m=+1109.707217920" watchObservedRunningTime="2026-01-26 12:53:57.232717571 +0000 UTC m=+1109.712027597" Jan 26 12:54:01 
crc kubenswrapper[4881]: I0126 12:54:01.227330 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-64dc64df49-qlh66" event={"ID":"1a51e914-e793-4f03-b58a-65628089e71a","Type":"ContainerStarted","Data":"9b78ea22aa74da30967734c59b45c0b40d2939e66632a62af70da9eb6f81e78a"} Jan 26 12:54:01 crc kubenswrapper[4881]: I0126 12:54:01.228057 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-64dc64df49-qlh66" Jan 26 12:54:01 crc kubenswrapper[4881]: I0126 12:54:01.259650 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-64dc64df49-qlh66" podStartSLOduration=2.064461193 podStartE2EDuration="8.259618647s" podCreationTimestamp="2026-01-26 12:53:53 +0000 UTC" firstStartedPulling="2026-01-26 12:53:54.449220519 +0000 UTC m=+1106.928530555" lastFinishedPulling="2026-01-26 12:54:00.644377973 +0000 UTC m=+1113.123688009" observedRunningTime="2026-01-26 12:54:01.256617204 +0000 UTC m=+1113.735927270" watchObservedRunningTime="2026-01-26 12:54:01.259618647 +0000 UTC m=+1113.738928703" Jan 26 12:54:14 crc kubenswrapper[4881]: I0126 12:54:14.188485 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-64dc64df49-qlh66" Jan 26 12:54:33 crc kubenswrapper[4881]: I0126 12:54:33.645716 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-79649b4ffb-kpsrh" Jan 26 12:54:34 crc kubenswrapper[4881]: I0126 12:54:34.462592 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-8rjxh"] Jan 26 12:54:34 crc kubenswrapper[4881]: I0126 12:54:34.466198 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-8rjxh" Jan 26 12:54:34 crc kubenswrapper[4881]: I0126 12:54:34.469863 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-gmnh8"] Jan 26 12:54:34 crc kubenswrapper[4881]: I0126 12:54:34.470679 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-gmnh8" Jan 26 12:54:34 crc kubenswrapper[4881]: I0126 12:54:34.478860 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Jan 26 12:54:34 crc kubenswrapper[4881]: I0126 12:54:34.480565 4881 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Jan 26 12:54:34 crc kubenswrapper[4881]: I0126 12:54:34.481064 4881 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Jan 26 12:54:34 crc kubenswrapper[4881]: I0126 12:54:34.481231 4881 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-4s96z" Jan 26 12:54:34 crc kubenswrapper[4881]: I0126 12:54:34.508104 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-gmnh8"] Jan 26 12:54:34 crc kubenswrapper[4881]: I0126 12:54:34.609882 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-kg9bx"] Jan 26 12:54:34 crc kubenswrapper[4881]: I0126 12:54:34.610740 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-kg9bx" Jan 26 12:54:34 crc kubenswrapper[4881]: I0126 12:54:34.617444 4881 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Jan 26 12:54:34 crc kubenswrapper[4881]: I0126 12:54:34.620827 4881 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Jan 26 12:54:34 crc kubenswrapper[4881]: I0126 12:54:34.621016 4881 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-pxqxx" Jan 26 12:54:34 crc kubenswrapper[4881]: I0126 12:54:34.621168 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Jan 26 12:54:34 crc kubenswrapper[4881]: I0126 12:54:34.626094 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-8vg98"] Jan 26 12:54:34 crc kubenswrapper[4881]: I0126 12:54:34.632657 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-8vg98" Jan 26 12:54:34 crc kubenswrapper[4881]: I0126 12:54:34.632783 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/769ed86e-fb54-4e5a-a315-2cc85e6b0f3e-frr-conf\") pod \"frr-k8s-8rjxh\" (UID: \"769ed86e-fb54-4e5a-a315-2cc85e6b0f3e\") " pod="metallb-system/frr-k8s-8rjxh" Jan 26 12:54:34 crc kubenswrapper[4881]: I0126 12:54:34.632828 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/03597099-e9a6-4f59-9f54-700638dcf570-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-gmnh8\" (UID: \"03597099-e9a6-4f59-9f54-700638dcf570\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-gmnh8" Jan 26 12:54:34 crc kubenswrapper[4881]: I0126 12:54:34.632870 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/769ed86e-fb54-4e5a-a315-2cc85e6b0f3e-frr-sockets\") pod \"frr-k8s-8rjxh\" (UID: \"769ed86e-fb54-4e5a-a315-2cc85e6b0f3e\") " pod="metallb-system/frr-k8s-8rjxh" Jan 26 12:54:34 crc kubenswrapper[4881]: I0126 12:54:34.632890 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/769ed86e-fb54-4e5a-a315-2cc85e6b0f3e-reloader\") pod \"frr-k8s-8rjxh\" (UID: \"769ed86e-fb54-4e5a-a315-2cc85e6b0f3e\") " pod="metallb-system/frr-k8s-8rjxh" Jan 26 12:54:34 crc kubenswrapper[4881]: I0126 12:54:34.632912 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/769ed86e-fb54-4e5a-a315-2cc85e6b0f3e-frr-startup\") pod \"frr-k8s-8rjxh\" (UID: \"769ed86e-fb54-4e5a-a315-2cc85e6b0f3e\") " pod="metallb-system/frr-k8s-8rjxh" Jan 26 12:54:34 crc kubenswrapper[4881]: I0126 12:54:34.632931 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/769ed86e-fb54-4e5a-a315-2cc85e6b0f3e-metrics\") pod \"frr-k8s-8rjxh\" (UID: \"769ed86e-fb54-4e5a-a315-2cc85e6b0f3e\") " pod="metallb-system/frr-k8s-8rjxh" Jan 26 12:54:34 crc kubenswrapper[4881]: I0126 12:54:34.632959 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-rhrnw\" (UniqueName: \"kubernetes.io/projected/03597099-e9a6-4f59-9f54-700638dcf570-kube-api-access-rhrnw\") pod \"frr-k8s-webhook-server-7df86c4f6c-gmnh8\" (UID: \"03597099-e9a6-4f59-9f54-700638dcf570\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-gmnh8" Jan 26 12:54:34 crc kubenswrapper[4881]: I0126 12:54:34.632995 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjsnl\" (UniqueName: \"kubernetes.io/projected/769ed86e-fb54-4e5a-a315-2cc85e6b0f3e-kube-api-access-gjsnl\") pod \"frr-k8s-8rjxh\" (UID: \"769ed86e-fb54-4e5a-a315-2cc85e6b0f3e\") " pod="metallb-system/frr-k8s-8rjxh" Jan 26 12:54:34 crc kubenswrapper[4881]: I0126 12:54:34.633033 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/769ed86e-fb54-4e5a-a315-2cc85e6b0f3e-metrics-certs\") pod \"frr-k8s-8rjxh\" (UID: \"769ed86e-fb54-4e5a-a315-2cc85e6b0f3e\") " pod="metallb-system/frr-k8s-8rjxh" Jan 26 12:54:34 crc kubenswrapper[4881]: I0126 12:54:34.635417 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-8vg98"] Jan 26 12:54:34 crc kubenswrapper[4881]: I0126 12:54:34.635648 4881 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Jan 26 12:54:34 crc kubenswrapper[4881]: I0126 12:54:34.734163 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/901f2a44-aecd-4a72-8802-b24d3bb902af-metrics-certs\") pod \"controller-6968d8fdc4-8vg98\" (UID: \"901f2a44-aecd-4a72-8802-b24d3bb902af\") " pod="metallb-system/controller-6968d8fdc4-8vg98" Jan 26 12:54:34 crc kubenswrapper[4881]: I0126 12:54:34.734223 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjsnl\" (UniqueName: \"kubernetes.io/projected/769ed86e-fb54-4e5a-a315-2cc85e6b0f3e-kube-api-access-gjsnl\") pod \"frr-k8s-8rjxh\" (UID: \"769ed86e-fb54-4e5a-a315-2cc85e6b0f3e\") " pod="metallb-system/frr-k8s-8rjxh" Jan 26 12:54:34 crc kubenswrapper[4881]: I0126 12:54:34.734255 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9f6w\" (UniqueName: \"kubernetes.io/projected/0323a529-06f7-4ee1-ac63-e9226b67ae3a-kube-api-access-d9f6w\") pod \"speaker-kg9bx\" (UID: \"0323a529-06f7-4ee1-ac63-e9226b67ae3a\") " pod="metallb-system/speaker-kg9bx" Jan 26 12:54:34 crc kubenswrapper[4881]: I0126 12:54:34.734296 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/769ed86e-fb54-4e5a-a315-2cc85e6b0f3e-metrics-certs\") pod \"frr-k8s-8rjxh\" (UID: \"769ed86e-fb54-4e5a-a315-2cc85e6b0f3e\") " pod="metallb-system/frr-k8s-8rjxh" Jan 26 12:54:34 crc kubenswrapper[4881]: I0126 12:54:34.734320 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/901f2a44-aecd-4a72-8802-b24d3bb902af-cert\") pod \"controller-6968d8fdc4-8vg98\" (UID: \"901f2a44-aecd-4a72-8802-b24d3bb902af\") " pod="metallb-system/controller-6968d8fdc4-8vg98" Jan 26 12:54:34 crc kubenswrapper[4881]: I0126 12:54:34.734350 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: 
\"kubernetes.io/configmap/0323a529-06f7-4ee1-ac63-e9226b67ae3a-metallb-excludel2\") pod \"speaker-kg9bx\" (UID: \"0323a529-06f7-4ee1-ac63-e9226b67ae3a\") " pod="metallb-system/speaker-kg9bx" Jan 26 12:54:34 crc kubenswrapper[4881]: I0126 12:54:34.734373 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0323a529-06f7-4ee1-ac63-e9226b67ae3a-metrics-certs\") pod \"speaker-kg9bx\" (UID: \"0323a529-06f7-4ee1-ac63-e9226b67ae3a\") " pod="metallb-system/speaker-kg9bx" Jan 26 12:54:34 crc kubenswrapper[4881]: I0126 12:54:34.734498 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/769ed86e-fb54-4e5a-a315-2cc85e6b0f3e-frr-conf\") pod \"frr-k8s-8rjxh\" (UID: \"769ed86e-fb54-4e5a-a315-2cc85e6b0f3e\") " pod="metallb-system/frr-k8s-8rjxh" Jan 26 12:54:34 crc kubenswrapper[4881]: I0126 12:54:34.734572 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/03597099-e9a6-4f59-9f54-700638dcf570-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-gmnh8\" (UID: \"03597099-e9a6-4f59-9f54-700638dcf570\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-gmnh8" Jan 26 12:54:34 crc kubenswrapper[4881]: I0126 12:54:34.734610 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpttn\" (UniqueName: \"kubernetes.io/projected/901f2a44-aecd-4a72-8802-b24d3bb902af-kube-api-access-zpttn\") pod \"controller-6968d8fdc4-8vg98\" (UID: \"901f2a44-aecd-4a72-8802-b24d3bb902af\") " pod="metallb-system/controller-6968d8fdc4-8vg98" Jan 26 12:54:34 crc kubenswrapper[4881]: I0126 12:54:34.734696 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/769ed86e-fb54-4e5a-a315-2cc85e6b0f3e-frr-sockets\") pod \"frr-k8s-8rjxh\" (UID: \"769ed86e-fb54-4e5a-a315-2cc85e6b0f3e\") " pod="metallb-system/frr-k8s-8rjxh" Jan 26 12:54:34 crc kubenswrapper[4881]: I0126 12:54:34.734721 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/769ed86e-fb54-4e5a-a315-2cc85e6b0f3e-reloader\") pod \"frr-k8s-8rjxh\" (UID: \"769ed86e-fb54-4e5a-a315-2cc85e6b0f3e\") " pod="metallb-system/frr-k8s-8rjxh" Jan 26 12:54:34 crc kubenswrapper[4881]: I0126 12:54:34.734749 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/769ed86e-fb54-4e5a-a315-2cc85e6b0f3e-frr-startup\") pod \"frr-k8s-8rjxh\" (UID: \"769ed86e-fb54-4e5a-a315-2cc85e6b0f3e\") " pod="metallb-system/frr-k8s-8rjxh" Jan 26 12:54:34 crc kubenswrapper[4881]: I0126 12:54:34.734770 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/769ed86e-fb54-4e5a-a315-2cc85e6b0f3e-metrics\") pod \"frr-k8s-8rjxh\" (UID: \"769ed86e-fb54-4e5a-a315-2cc85e6b0f3e\") " pod="metallb-system/frr-k8s-8rjxh" Jan 26 12:54:34 crc kubenswrapper[4881]: I0126 12:54:34.734812 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/0323a529-06f7-4ee1-ac63-e9226b67ae3a-memberlist\") pod \"speaker-kg9bx\" (UID: \"0323a529-06f7-4ee1-ac63-e9226b67ae3a\") " pod="metallb-system/speaker-kg9bx" Jan 26 12:54:34 crc 
kubenswrapper[4881]: I0126 12:54:34.734840 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rhrnw\" (UniqueName: \"kubernetes.io/projected/03597099-e9a6-4f59-9f54-700638dcf570-kube-api-access-rhrnw\") pod \"frr-k8s-webhook-server-7df86c4f6c-gmnh8\" (UID: \"03597099-e9a6-4f59-9f54-700638dcf570\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-gmnh8" Jan 26 12:54:34 crc kubenswrapper[4881]: I0126 12:54:34.734992 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/769ed86e-fb54-4e5a-a315-2cc85e6b0f3e-frr-conf\") pod \"frr-k8s-8rjxh\" (UID: \"769ed86e-fb54-4e5a-a315-2cc85e6b0f3e\") " pod="metallb-system/frr-k8s-8rjxh" Jan 26 12:54:34 crc kubenswrapper[4881]: I0126 12:54:34.735080 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/769ed86e-fb54-4e5a-a315-2cc85e6b0f3e-frr-sockets\") pod \"frr-k8s-8rjxh\" (UID: \"769ed86e-fb54-4e5a-a315-2cc85e6b0f3e\") " pod="metallb-system/frr-k8s-8rjxh" Jan 26 12:54:34 crc kubenswrapper[4881]: I0126 12:54:34.735262 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/769ed86e-fb54-4e5a-a315-2cc85e6b0f3e-reloader\") pod \"frr-k8s-8rjxh\" (UID: \"769ed86e-fb54-4e5a-a315-2cc85e6b0f3e\") " pod="metallb-system/frr-k8s-8rjxh" Jan 26 12:54:34 crc kubenswrapper[4881]: I0126 12:54:34.735352 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/769ed86e-fb54-4e5a-a315-2cc85e6b0f3e-metrics\") pod \"frr-k8s-8rjxh\" (UID: \"769ed86e-fb54-4e5a-a315-2cc85e6b0f3e\") " pod="metallb-system/frr-k8s-8rjxh" Jan 26 12:54:34 crc kubenswrapper[4881]: I0126 12:54:34.735732 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/769ed86e-fb54-4e5a-a315-2cc85e6b0f3e-frr-startup\") pod \"frr-k8s-8rjxh\" (UID: \"769ed86e-fb54-4e5a-a315-2cc85e6b0f3e\") " pod="metallb-system/frr-k8s-8rjxh" Jan 26 12:54:34 crc kubenswrapper[4881]: I0126 12:54:34.740010 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/03597099-e9a6-4f59-9f54-700638dcf570-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-gmnh8\" (UID: \"03597099-e9a6-4f59-9f54-700638dcf570\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-gmnh8" Jan 26 12:54:34 crc kubenswrapper[4881]: I0126 12:54:34.745996 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/769ed86e-fb54-4e5a-a315-2cc85e6b0f3e-metrics-certs\") pod \"frr-k8s-8rjxh\" (UID: \"769ed86e-fb54-4e5a-a315-2cc85e6b0f3e\") " pod="metallb-system/frr-k8s-8rjxh" Jan 26 12:54:34 crc kubenswrapper[4881]: I0126 12:54:34.752414 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rhrnw\" (UniqueName: \"kubernetes.io/projected/03597099-e9a6-4f59-9f54-700638dcf570-kube-api-access-rhrnw\") pod \"frr-k8s-webhook-server-7df86c4f6c-gmnh8\" (UID: \"03597099-e9a6-4f59-9f54-700638dcf570\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-gmnh8" Jan 26 12:54:34 crc kubenswrapper[4881]: I0126 12:54:34.753488 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjsnl\" (UniqueName: 
\"kubernetes.io/projected/769ed86e-fb54-4e5a-a315-2cc85e6b0f3e-kube-api-access-gjsnl\") pod \"frr-k8s-8rjxh\" (UID: \"769ed86e-fb54-4e5a-a315-2cc85e6b0f3e\") " pod="metallb-system/frr-k8s-8rjxh" Jan 26 12:54:34 crc kubenswrapper[4881]: I0126 12:54:34.797183 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-8rjxh" Jan 26 12:54:34 crc kubenswrapper[4881]: I0126 12:54:34.804845 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-gmnh8" Jan 26 12:54:34 crc kubenswrapper[4881]: I0126 12:54:34.836682 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/0323a529-06f7-4ee1-ac63-e9226b67ae3a-memberlist\") pod \"speaker-kg9bx\" (UID: \"0323a529-06f7-4ee1-ac63-e9226b67ae3a\") " pod="metallb-system/speaker-kg9bx" Jan 26 12:54:34 crc kubenswrapper[4881]: I0126 12:54:34.836748 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/901f2a44-aecd-4a72-8802-b24d3bb902af-metrics-certs\") pod \"controller-6968d8fdc4-8vg98\" (UID: \"901f2a44-aecd-4a72-8802-b24d3bb902af\") " pod="metallb-system/controller-6968d8fdc4-8vg98" Jan 26 12:54:34 crc kubenswrapper[4881]: I0126 12:54:34.836782 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9f6w\" (UniqueName: \"kubernetes.io/projected/0323a529-06f7-4ee1-ac63-e9226b67ae3a-kube-api-access-d9f6w\") pod \"speaker-kg9bx\" (UID: \"0323a529-06f7-4ee1-ac63-e9226b67ae3a\") " pod="metallb-system/speaker-kg9bx" Jan 26 12:54:34 crc kubenswrapper[4881]: I0126 12:54:34.836824 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/901f2a44-aecd-4a72-8802-b24d3bb902af-cert\") pod \"controller-6968d8fdc4-8vg98\" (UID: \"901f2a44-aecd-4a72-8802-b24d3bb902af\") " pod="metallb-system/controller-6968d8fdc4-8vg98" Jan 26 12:54:34 crc kubenswrapper[4881]: I0126 12:54:34.836849 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/0323a529-06f7-4ee1-ac63-e9226b67ae3a-metallb-excludel2\") pod \"speaker-kg9bx\" (UID: \"0323a529-06f7-4ee1-ac63-e9226b67ae3a\") " pod="metallb-system/speaker-kg9bx" Jan 26 12:54:34 crc kubenswrapper[4881]: I0126 12:54:34.836870 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0323a529-06f7-4ee1-ac63-e9226b67ae3a-metrics-certs\") pod \"speaker-kg9bx\" (UID: \"0323a529-06f7-4ee1-ac63-e9226b67ae3a\") " pod="metallb-system/speaker-kg9bx" Jan 26 12:54:34 crc kubenswrapper[4881]: I0126 12:54:34.836904 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zpttn\" (UniqueName: \"kubernetes.io/projected/901f2a44-aecd-4a72-8802-b24d3bb902af-kube-api-access-zpttn\") pod \"controller-6968d8fdc4-8vg98\" (UID: \"901f2a44-aecd-4a72-8802-b24d3bb902af\") " pod="metallb-system/controller-6968d8fdc4-8vg98" Jan 26 12:54:34 crc kubenswrapper[4881]: E0126 12:54:34.837024 4881 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 26 12:54:34 crc kubenswrapper[4881]: E0126 12:54:34.837189 4881 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/0323a529-06f7-4ee1-ac63-e9226b67ae3a-memberlist podName:0323a529-06f7-4ee1-ac63-e9226b67ae3a nodeName:}" failed. No retries permitted until 2026-01-26 12:54:35.33714911 +0000 UTC m=+1147.816459196 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/0323a529-06f7-4ee1-ac63-e9226b67ae3a-memberlist") pod "speaker-kg9bx" (UID: "0323a529-06f7-4ee1-ac63-e9226b67ae3a") : secret "metallb-memberlist" not found Jan 26 12:54:34 crc kubenswrapper[4881]: E0126 12:54:34.837033 4881 secret.go:188] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found Jan 26 12:54:34 crc kubenswrapper[4881]: E0126 12:54:34.837666 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/901f2a44-aecd-4a72-8802-b24d3bb902af-metrics-certs podName:901f2a44-aecd-4a72-8802-b24d3bb902af nodeName:}" failed. No retries permitted until 2026-01-26 12:54:35.337644851 +0000 UTC m=+1147.816954967 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/901f2a44-aecd-4a72-8802-b24d3bb902af-metrics-certs") pod "controller-6968d8fdc4-8vg98" (UID: "901f2a44-aecd-4a72-8802-b24d3bb902af") : secret "controller-certs-secret" not found Jan 26 12:54:34 crc kubenswrapper[4881]: I0126 12:54:34.838034 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/0323a529-06f7-4ee1-ac63-e9226b67ae3a-metallb-excludel2\") pod \"speaker-kg9bx\" (UID: \"0323a529-06f7-4ee1-ac63-e9226b67ae3a\") " pod="metallb-system/speaker-kg9bx" Jan 26 12:54:34 crc kubenswrapper[4881]: I0126 12:54:34.841224 4881 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 26 12:54:34 crc kubenswrapper[4881]: I0126 12:54:34.841493 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0323a529-06f7-4ee1-ac63-e9226b67ae3a-metrics-certs\") pod \"speaker-kg9bx\" (UID: \"0323a529-06f7-4ee1-ac63-e9226b67ae3a\") " pod="metallb-system/speaker-kg9bx" Jan 26 12:54:34 crc kubenswrapper[4881]: I0126 12:54:34.852443 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/901f2a44-aecd-4a72-8802-b24d3bb902af-cert\") pod \"controller-6968d8fdc4-8vg98\" (UID: \"901f2a44-aecd-4a72-8802-b24d3bb902af\") " pod="metallb-system/controller-6968d8fdc4-8vg98" Jan 26 12:54:34 crc kubenswrapper[4881]: I0126 12:54:34.861380 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zpttn\" (UniqueName: \"kubernetes.io/projected/901f2a44-aecd-4a72-8802-b24d3bb902af-kube-api-access-zpttn\") pod \"controller-6968d8fdc4-8vg98\" (UID: \"901f2a44-aecd-4a72-8802-b24d3bb902af\") " pod="metallb-system/controller-6968d8fdc4-8vg98" Jan 26 12:54:34 crc kubenswrapper[4881]: I0126 12:54:34.865959 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9f6w\" (UniqueName: \"kubernetes.io/projected/0323a529-06f7-4ee1-ac63-e9226b67ae3a-kube-api-access-d9f6w\") pod \"speaker-kg9bx\" (UID: \"0323a529-06f7-4ee1-ac63-e9226b67ae3a\") " pod="metallb-system/speaker-kg9bx" Jan 26 12:54:35 crc kubenswrapper[4881]: I0126 12:54:35.224597 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-gmnh8"] Jan 26 12:54:35 crc 
kubenswrapper[4881]: W0126 12:54:35.229606 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod03597099_e9a6_4f59_9f54_700638dcf570.slice/crio-d2a7e8867421540771344e73a36c427ea72ca582662ff95ef589ea1e99fd84e2 WatchSource:0}: Error finding container d2a7e8867421540771344e73a36c427ea72ca582662ff95ef589ea1e99fd84e2: Status 404 returned error can't find the container with id d2a7e8867421540771344e73a36c427ea72ca582662ff95ef589ea1e99fd84e2 Jan 26 12:54:35 crc kubenswrapper[4881]: I0126 12:54:35.343709 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/0323a529-06f7-4ee1-ac63-e9226b67ae3a-memberlist\") pod \"speaker-kg9bx\" (UID: \"0323a529-06f7-4ee1-ac63-e9226b67ae3a\") " pod="metallb-system/speaker-kg9bx" Jan 26 12:54:35 crc kubenswrapper[4881]: I0126 12:54:35.343764 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/901f2a44-aecd-4a72-8802-b24d3bb902af-metrics-certs\") pod \"controller-6968d8fdc4-8vg98\" (UID: \"901f2a44-aecd-4a72-8802-b24d3bb902af\") " pod="metallb-system/controller-6968d8fdc4-8vg98" Jan 26 12:54:35 crc kubenswrapper[4881]: E0126 12:54:35.343877 4881 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 26 12:54:35 crc kubenswrapper[4881]: E0126 12:54:35.343945 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0323a529-06f7-4ee1-ac63-e9226b67ae3a-memberlist podName:0323a529-06f7-4ee1-ac63-e9226b67ae3a nodeName:}" failed. No retries permitted until 2026-01-26 12:54:36.343927529 +0000 UTC m=+1148.823237555 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/0323a529-06f7-4ee1-ac63-e9226b67ae3a-memberlist") pod "speaker-kg9bx" (UID: "0323a529-06f7-4ee1-ac63-e9226b67ae3a") : secret "metallb-memberlist" not found Jan 26 12:54:35 crc kubenswrapper[4881]: I0126 12:54:35.347949 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/901f2a44-aecd-4a72-8802-b24d3bb902af-metrics-certs\") pod \"controller-6968d8fdc4-8vg98\" (UID: \"901f2a44-aecd-4a72-8802-b24d3bb902af\") " pod="metallb-system/controller-6968d8fdc4-8vg98" Jan 26 12:54:35 crc kubenswrapper[4881]: I0126 12:54:35.466397 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-8rjxh" event={"ID":"769ed86e-fb54-4e5a-a315-2cc85e6b0f3e","Type":"ContainerStarted","Data":"b07b294e74b32bbc75f7d9ef5ba9f205de98b8e2877e5b1874aa83d8567a0e81"} Jan 26 12:54:35 crc kubenswrapper[4881]: I0126 12:54:35.468153 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-gmnh8" event={"ID":"03597099-e9a6-4f59-9f54-700638dcf570","Type":"ContainerStarted","Data":"d2a7e8867421540771344e73a36c427ea72ca582662ff95ef589ea1e99fd84e2"} Jan 26 12:54:35 crc kubenswrapper[4881]: I0126 12:54:35.549322 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-8vg98" Jan 26 12:54:36 crc kubenswrapper[4881]: I0126 12:54:36.095068 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-8vg98"] Jan 26 12:54:36 crc kubenswrapper[4881]: W0126 12:54:36.101119 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod901f2a44_aecd_4a72_8802_b24d3bb902af.slice/crio-af1bd529062d9a4ed2abf8d794035a4287a8a63cd58714e1d3980d1c711912a8 WatchSource:0}: Error finding container af1bd529062d9a4ed2abf8d794035a4287a8a63cd58714e1d3980d1c711912a8: Status 404 returned error can't find the container with id af1bd529062d9a4ed2abf8d794035a4287a8a63cd58714e1d3980d1c711912a8 Jan 26 12:54:36 crc kubenswrapper[4881]: I0126 12:54:36.358109 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/0323a529-06f7-4ee1-ac63-e9226b67ae3a-memberlist\") pod \"speaker-kg9bx\" (UID: \"0323a529-06f7-4ee1-ac63-e9226b67ae3a\") " pod="metallb-system/speaker-kg9bx" Jan 26 12:54:36 crc kubenswrapper[4881]: I0126 12:54:36.363336 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/0323a529-06f7-4ee1-ac63-e9226b67ae3a-memberlist\") pod \"speaker-kg9bx\" (UID: \"0323a529-06f7-4ee1-ac63-e9226b67ae3a\") " pod="metallb-system/speaker-kg9bx" Jan 26 12:54:36 crc kubenswrapper[4881]: I0126 12:54:36.431870 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-kg9bx" Jan 26 12:54:36 crc kubenswrapper[4881]: W0126 12:54:36.454740 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0323a529_06f7_4ee1_ac63_e9226b67ae3a.slice/crio-b1df157af5b4caadaa4f24e97c32f21bc8bb0be2d6bdaf9e7cb1ca274df1f3b8 WatchSource:0}: Error finding container b1df157af5b4caadaa4f24e97c32f21bc8bb0be2d6bdaf9e7cb1ca274df1f3b8: Status 404 returned error can't find the container with id b1df157af5b4caadaa4f24e97c32f21bc8bb0be2d6bdaf9e7cb1ca274df1f3b8 Jan 26 12:54:36 crc kubenswrapper[4881]: I0126 12:54:36.477747 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-8vg98" event={"ID":"901f2a44-aecd-4a72-8802-b24d3bb902af","Type":"ContainerStarted","Data":"dbd9c9db669eb808f405736f5e516885e01c72a1b357a19ccd24105fc361df8c"} Jan 26 12:54:36 crc kubenswrapper[4881]: I0126 12:54:36.477798 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-8vg98" event={"ID":"901f2a44-aecd-4a72-8802-b24d3bb902af","Type":"ContainerStarted","Data":"7d8e5cb60a7c60273748e2e56a81f941b15a39630935e82333e8102e76064725"} Jan 26 12:54:36 crc kubenswrapper[4881]: I0126 12:54:36.477813 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-8vg98" event={"ID":"901f2a44-aecd-4a72-8802-b24d3bb902af","Type":"ContainerStarted","Data":"af1bd529062d9a4ed2abf8d794035a4287a8a63cd58714e1d3980d1c711912a8"} Jan 26 12:54:36 crc kubenswrapper[4881]: I0126 12:54:36.477924 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-8vg98" Jan 26 12:54:36 crc kubenswrapper[4881]: I0126 12:54:36.479107 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-kg9bx" 
event={"ID":"0323a529-06f7-4ee1-ac63-e9226b67ae3a","Type":"ContainerStarted","Data":"b1df157af5b4caadaa4f24e97c32f21bc8bb0be2d6bdaf9e7cb1ca274df1f3b8"} Jan 26 12:54:36 crc kubenswrapper[4881]: I0126 12:54:36.493591 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-8vg98" podStartSLOduration=2.493573065 podStartE2EDuration="2.493573065s" podCreationTimestamp="2026-01-26 12:54:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:54:36.493104434 +0000 UTC m=+1148.972414460" watchObservedRunningTime="2026-01-26 12:54:36.493573065 +0000 UTC m=+1148.972883101" Jan 26 12:54:37 crc kubenswrapper[4881]: I0126 12:54:37.494273 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-kg9bx" event={"ID":"0323a529-06f7-4ee1-ac63-e9226b67ae3a","Type":"ContainerStarted","Data":"fb7afca5e554058cc9aca9fc91979b96dd63a14a3a738b088933ec85673d0eb0"} Jan 26 12:54:37 crc kubenswrapper[4881]: I0126 12:54:37.494306 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-kg9bx" event={"ID":"0323a529-06f7-4ee1-ac63-e9226b67ae3a","Type":"ContainerStarted","Data":"cd121cf059d720ac76ccf671d5258732359fddc49462f54352251b82c2dfb565"} Jan 26 12:54:37 crc kubenswrapper[4881]: I0126 12:54:37.494327 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-kg9bx" Jan 26 12:54:37 crc kubenswrapper[4881]: I0126 12:54:37.513735 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-kg9bx" podStartSLOduration=3.513715424 podStartE2EDuration="3.513715424s" podCreationTimestamp="2026-01-26 12:54:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:54:37.509558783 +0000 UTC m=+1149.988868809" watchObservedRunningTime="2026-01-26 12:54:37.513715424 +0000 UTC m=+1149.993025450" Jan 26 12:54:43 crc kubenswrapper[4881]: I0126 12:54:43.558428 4881 generic.go:334] "Generic (PLEG): container finished" podID="769ed86e-fb54-4e5a-a315-2cc85e6b0f3e" containerID="7551d900853699e034daf74157ccfa29b0fb840755bcd0b8dbaabc2d7b48ea5b" exitCode=0 Jan 26 12:54:43 crc kubenswrapper[4881]: I0126 12:54:43.558913 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-8rjxh" event={"ID":"769ed86e-fb54-4e5a-a315-2cc85e6b0f3e","Type":"ContainerDied","Data":"7551d900853699e034daf74157ccfa29b0fb840755bcd0b8dbaabc2d7b48ea5b"} Jan 26 12:54:43 crc kubenswrapper[4881]: I0126 12:54:43.572655 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-gmnh8" event={"ID":"03597099-e9a6-4f59-9f54-700638dcf570","Type":"ContainerStarted","Data":"be12fbbf38058ea42eeb3fe6e97073d7577924174e962a861e9aeffaca754cf3"} Jan 26 12:54:43 crc kubenswrapper[4881]: I0126 12:54:43.639849 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-gmnh8" podStartSLOduration=2.340312689 podStartE2EDuration="9.639822355s" podCreationTimestamp="2026-01-26 12:54:34 +0000 UTC" firstStartedPulling="2026-01-26 12:54:35.232126732 +0000 UTC m=+1147.711436758" lastFinishedPulling="2026-01-26 12:54:42.531636388 +0000 UTC m=+1155.010946424" observedRunningTime="2026-01-26 12:54:43.626377086 +0000 UTC m=+1156.105687122" watchObservedRunningTime="2026-01-26 
12:54:43.639822355 +0000 UTC m=+1156.119132421" Jan 26 12:54:44 crc kubenswrapper[4881]: I0126 12:54:44.584776 4881 generic.go:334] "Generic (PLEG): container finished" podID="769ed86e-fb54-4e5a-a315-2cc85e6b0f3e" containerID="bb79db52381461deec3b08ed25a2cc4100320788bda758418772580624496194" exitCode=0 Jan 26 12:54:44 crc kubenswrapper[4881]: I0126 12:54:44.584896 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-8rjxh" event={"ID":"769ed86e-fb54-4e5a-a315-2cc85e6b0f3e","Type":"ContainerDied","Data":"bb79db52381461deec3b08ed25a2cc4100320788bda758418772580624496194"} Jan 26 12:54:44 crc kubenswrapper[4881]: I0126 12:54:44.585347 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-gmnh8" Jan 26 12:54:45 crc kubenswrapper[4881]: I0126 12:54:45.596585 4881 generic.go:334] "Generic (PLEG): container finished" podID="769ed86e-fb54-4e5a-a315-2cc85e6b0f3e" containerID="87419a327f841b0cba309cfa036113159ad2bb0d7551338a7351a4b21a6d0ddb" exitCode=0 Jan 26 12:54:45 crc kubenswrapper[4881]: I0126 12:54:45.596694 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-8rjxh" event={"ID":"769ed86e-fb54-4e5a-a315-2cc85e6b0f3e","Type":"ContainerDied","Data":"87419a327f841b0cba309cfa036113159ad2bb0d7551338a7351a4b21a6d0ddb"} Jan 26 12:54:46 crc kubenswrapper[4881]: I0126 12:54:46.435689 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-kg9bx" Jan 26 12:54:46 crc kubenswrapper[4881]: I0126 12:54:46.606642 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-8rjxh" event={"ID":"769ed86e-fb54-4e5a-a315-2cc85e6b0f3e","Type":"ContainerStarted","Data":"0b46bb3fd764cda17f1fd1718fc879f5bd98b5541c98becb2aece8ec72e6e71d"} Jan 26 12:54:46 crc kubenswrapper[4881]: I0126 12:54:46.606683 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-8rjxh" event={"ID":"769ed86e-fb54-4e5a-a315-2cc85e6b0f3e","Type":"ContainerStarted","Data":"d9cc8836df137f3ed889766e9130682b50106c7083bcf6e7de86325ce3fb1d19"} Jan 26 12:54:46 crc kubenswrapper[4881]: I0126 12:54:46.606696 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-8rjxh" event={"ID":"769ed86e-fb54-4e5a-a315-2cc85e6b0f3e","Type":"ContainerStarted","Data":"0b72be504716aba36a051afd6b766196f03856767f2c7e9c91268a7e666a5ffa"} Jan 26 12:54:46 crc kubenswrapper[4881]: I0126 12:54:46.606708 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-8rjxh" event={"ID":"769ed86e-fb54-4e5a-a315-2cc85e6b0f3e","Type":"ContainerStarted","Data":"143fea0973603311ae1eba162d9c4e4afa81a2b132648bc96e696de3e686b995"} Jan 26 12:54:47 crc kubenswrapper[4881]: I0126 12:54:47.621255 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-8rjxh" event={"ID":"769ed86e-fb54-4e5a-a315-2cc85e6b0f3e","Type":"ContainerStarted","Data":"01a5326a45b3b60505af71ad4f55e5dfc77d9507d56663e67e42081b26f818b5"} Jan 26 12:54:47 crc kubenswrapper[4881]: I0126 12:54:47.621733 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-8rjxh" Jan 26 12:54:47 crc kubenswrapper[4881]: I0126 12:54:47.621763 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-8rjxh" event={"ID":"769ed86e-fb54-4e5a-a315-2cc85e6b0f3e","Type":"ContainerStarted","Data":"77cd1cc7d6a62273a6382bed2f1aaab2d615364758d9d503c446ea7b62756151"} Jan 26 12:54:47 crc 
kubenswrapper[4881]: I0126 12:54:47.659380 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-8rjxh" podStartSLOduration=6.166609523 podStartE2EDuration="13.659355191s" podCreationTimestamp="2026-01-26 12:54:34 +0000 UTC" firstStartedPulling="2026-01-26 12:54:35.034247097 +0000 UTC m=+1147.513557123" lastFinishedPulling="2026-01-26 12:54:42.526992735 +0000 UTC m=+1155.006302791" observedRunningTime="2026-01-26 12:54:47.655342103 +0000 UTC m=+1160.134652169" watchObservedRunningTime="2026-01-26 12:54:47.659355191 +0000 UTC m=+1160.138665247" Jan 26 12:54:49 crc kubenswrapper[4881]: I0126 12:54:49.798338 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-8rjxh" Jan 26 12:54:49 crc kubenswrapper[4881]: I0126 12:54:49.838291 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-8rjxh" Jan 26 12:54:53 crc kubenswrapper[4881]: I0126 12:54:53.093885 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-wmbzq"] Jan 26 12:54:53 crc kubenswrapper[4881]: I0126 12:54:53.095889 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-wmbzq" Jan 26 12:54:53 crc kubenswrapper[4881]: I0126 12:54:53.101039 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 26 12:54:53 crc kubenswrapper[4881]: I0126 12:54:53.101274 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-jrz98" Jan 26 12:54:53 crc kubenswrapper[4881]: I0126 12:54:53.102088 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 26 12:54:53 crc kubenswrapper[4881]: I0126 12:54:53.109614 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-wmbzq"] Jan 26 12:54:53 crc kubenswrapper[4881]: I0126 12:54:53.220177 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8bj9\" (UniqueName: \"kubernetes.io/projected/51923b46-00ba-4a5e-984d-b1f8febec058-kube-api-access-q8bj9\") pod \"openstack-operator-index-wmbzq\" (UID: \"51923b46-00ba-4a5e-984d-b1f8febec058\") " pod="openstack-operators/openstack-operator-index-wmbzq" Jan 26 12:54:53 crc kubenswrapper[4881]: I0126 12:54:53.321194 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8bj9\" (UniqueName: \"kubernetes.io/projected/51923b46-00ba-4a5e-984d-b1f8febec058-kube-api-access-q8bj9\") pod \"openstack-operator-index-wmbzq\" (UID: \"51923b46-00ba-4a5e-984d-b1f8febec058\") " pod="openstack-operators/openstack-operator-index-wmbzq" Jan 26 12:54:53 crc kubenswrapper[4881]: I0126 12:54:53.348468 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8bj9\" (UniqueName: \"kubernetes.io/projected/51923b46-00ba-4a5e-984d-b1f8febec058-kube-api-access-q8bj9\") pod \"openstack-operator-index-wmbzq\" (UID: \"51923b46-00ba-4a5e-984d-b1f8febec058\") " pod="openstack-operators/openstack-operator-index-wmbzq" Jan 26 12:54:53 crc kubenswrapper[4881]: I0126 12:54:53.433215 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-wmbzq" Jan 26 12:54:53 crc kubenswrapper[4881]: I0126 12:54:53.933451 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-wmbzq"] Jan 26 12:54:53 crc kubenswrapper[4881]: I0126 12:54:53.947657 4881 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 12:54:54 crc kubenswrapper[4881]: I0126 12:54:54.678927 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-wmbzq" event={"ID":"51923b46-00ba-4a5e-984d-b1f8febec058","Type":"ContainerStarted","Data":"8bf6870b527fbdc1b0f66206cc0256a397f2227038ca5839fdfefc8c3d5e6972"} Jan 26 12:54:54 crc kubenswrapper[4881]: I0126 12:54:54.810886 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-gmnh8" Jan 26 12:54:55 crc kubenswrapper[4881]: I0126 12:54:55.554096 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-8vg98" Jan 26 12:54:56 crc kubenswrapper[4881]: I0126 12:54:56.696980 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-wmbzq" event={"ID":"51923b46-00ba-4a5e-984d-b1f8febec058","Type":"ContainerStarted","Data":"137644be2e37126fe7e4f3bb36d7593c2687f153af0d9835e4c95eb1ff846a89"} Jan 26 12:54:56 crc kubenswrapper[4881]: I0126 12:54:56.718265 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-wmbzq" podStartSLOduration=1.76989759 podStartE2EDuration="3.718243305s" podCreationTimestamp="2026-01-26 12:54:53 +0000 UTC" firstStartedPulling="2026-01-26 12:54:53.947392491 +0000 UTC m=+1166.426702527" lastFinishedPulling="2026-01-26 12:54:55.895738176 +0000 UTC m=+1168.375048242" observedRunningTime="2026-01-26 12:54:56.70861779 +0000 UTC m=+1169.187927866" watchObservedRunningTime="2026-01-26 12:54:56.718243305 +0000 UTC m=+1169.197553331" Jan 26 12:55:03 crc kubenswrapper[4881]: I0126 12:55:03.433426 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-wmbzq" Jan 26 12:55:03 crc kubenswrapper[4881]: I0126 12:55:03.434386 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-wmbzq" Jan 26 12:55:03 crc kubenswrapper[4881]: I0126 12:55:03.484289 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-wmbzq" Jan 26 12:55:03 crc kubenswrapper[4881]: I0126 12:55:03.802796 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-wmbzq" Jan 26 12:55:04 crc kubenswrapper[4881]: I0126 12:55:04.805155 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-8rjxh" Jan 26 12:55:06 crc kubenswrapper[4881]: I0126 12:55:06.360398 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ad70b87966c87781d6eb9196beaf5dd7adef9a3f0a61b9f42ea9b97c18rzk2b"] Jan 26 12:55:06 crc kubenswrapper[4881]: I0126 12:55:06.364744 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ad70b87966c87781d6eb9196beaf5dd7adef9a3f0a61b9f42ea9b97c18rzk2b" Jan 26 12:55:06 crc kubenswrapper[4881]: I0126 12:55:06.370068 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-dz245" Jan 26 12:55:06 crc kubenswrapper[4881]: I0126 12:55:06.405100 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ad70b87966c87781d6eb9196beaf5dd7adef9a3f0a61b9f42ea9b97c18rzk2b"] Jan 26 12:55:06 crc kubenswrapper[4881]: I0126 12:55:06.415441 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8r85t\" (UniqueName: \"kubernetes.io/projected/ad3804a0-2b66-4f69-a8f4-6f8b27abea8f-kube-api-access-8r85t\") pod \"ad70b87966c87781d6eb9196beaf5dd7adef9a3f0a61b9f42ea9b97c18rzk2b\" (UID: \"ad3804a0-2b66-4f69-a8f4-6f8b27abea8f\") " pod="openstack-operators/ad70b87966c87781d6eb9196beaf5dd7adef9a3f0a61b9f42ea9b97c18rzk2b" Jan 26 12:55:06 crc kubenswrapper[4881]: I0126 12:55:06.415643 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ad3804a0-2b66-4f69-a8f4-6f8b27abea8f-util\") pod \"ad70b87966c87781d6eb9196beaf5dd7adef9a3f0a61b9f42ea9b97c18rzk2b\" (UID: \"ad3804a0-2b66-4f69-a8f4-6f8b27abea8f\") " pod="openstack-operators/ad70b87966c87781d6eb9196beaf5dd7adef9a3f0a61b9f42ea9b97c18rzk2b" Jan 26 12:55:06 crc kubenswrapper[4881]: I0126 12:55:06.415705 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ad3804a0-2b66-4f69-a8f4-6f8b27abea8f-bundle\") pod \"ad70b87966c87781d6eb9196beaf5dd7adef9a3f0a61b9f42ea9b97c18rzk2b\" (UID: \"ad3804a0-2b66-4f69-a8f4-6f8b27abea8f\") " pod="openstack-operators/ad70b87966c87781d6eb9196beaf5dd7adef9a3f0a61b9f42ea9b97c18rzk2b" Jan 26 12:55:06 crc kubenswrapper[4881]: I0126 12:55:06.517444 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ad3804a0-2b66-4f69-a8f4-6f8b27abea8f-util\") pod \"ad70b87966c87781d6eb9196beaf5dd7adef9a3f0a61b9f42ea9b97c18rzk2b\" (UID: \"ad3804a0-2b66-4f69-a8f4-6f8b27abea8f\") " pod="openstack-operators/ad70b87966c87781d6eb9196beaf5dd7adef9a3f0a61b9f42ea9b97c18rzk2b" Jan 26 12:55:06 crc kubenswrapper[4881]: I0126 12:55:06.517504 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ad3804a0-2b66-4f69-a8f4-6f8b27abea8f-bundle\") pod \"ad70b87966c87781d6eb9196beaf5dd7adef9a3f0a61b9f42ea9b97c18rzk2b\" (UID: \"ad3804a0-2b66-4f69-a8f4-6f8b27abea8f\") " pod="openstack-operators/ad70b87966c87781d6eb9196beaf5dd7adef9a3f0a61b9f42ea9b97c18rzk2b" Jan 26 12:55:06 crc kubenswrapper[4881]: I0126 12:55:06.517597 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8r85t\" (UniqueName: \"kubernetes.io/projected/ad3804a0-2b66-4f69-a8f4-6f8b27abea8f-kube-api-access-8r85t\") pod \"ad70b87966c87781d6eb9196beaf5dd7adef9a3f0a61b9f42ea9b97c18rzk2b\" (UID: \"ad3804a0-2b66-4f69-a8f4-6f8b27abea8f\") " pod="openstack-operators/ad70b87966c87781d6eb9196beaf5dd7adef9a3f0a61b9f42ea9b97c18rzk2b" Jan 26 12:55:06 crc kubenswrapper[4881]: I0126 12:55:06.518232 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/ad3804a0-2b66-4f69-a8f4-6f8b27abea8f-util\") pod \"ad70b87966c87781d6eb9196beaf5dd7adef9a3f0a61b9f42ea9b97c18rzk2b\" (UID: \"ad3804a0-2b66-4f69-a8f4-6f8b27abea8f\") " pod="openstack-operators/ad70b87966c87781d6eb9196beaf5dd7adef9a3f0a61b9f42ea9b97c18rzk2b" Jan 26 12:55:06 crc kubenswrapper[4881]: I0126 12:55:06.518251 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ad3804a0-2b66-4f69-a8f4-6f8b27abea8f-bundle\") pod \"ad70b87966c87781d6eb9196beaf5dd7adef9a3f0a61b9f42ea9b97c18rzk2b\" (UID: \"ad3804a0-2b66-4f69-a8f4-6f8b27abea8f\") " pod="openstack-operators/ad70b87966c87781d6eb9196beaf5dd7adef9a3f0a61b9f42ea9b97c18rzk2b" Jan 26 12:55:06 crc kubenswrapper[4881]: I0126 12:55:06.534629 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8r85t\" (UniqueName: \"kubernetes.io/projected/ad3804a0-2b66-4f69-a8f4-6f8b27abea8f-kube-api-access-8r85t\") pod \"ad70b87966c87781d6eb9196beaf5dd7adef9a3f0a61b9f42ea9b97c18rzk2b\" (UID: \"ad3804a0-2b66-4f69-a8f4-6f8b27abea8f\") " pod="openstack-operators/ad70b87966c87781d6eb9196beaf5dd7adef9a3f0a61b9f42ea9b97c18rzk2b" Jan 26 12:55:06 crc kubenswrapper[4881]: I0126 12:55:06.706324 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ad70b87966c87781d6eb9196beaf5dd7adef9a3f0a61b9f42ea9b97c18rzk2b" Jan 26 12:55:07 crc kubenswrapper[4881]: I0126 12:55:07.194796 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ad70b87966c87781d6eb9196beaf5dd7adef9a3f0a61b9f42ea9b97c18rzk2b"] Jan 26 12:55:07 crc kubenswrapper[4881]: W0126 12:55:07.203104 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podad3804a0_2b66_4f69_a8f4_6f8b27abea8f.slice/crio-b9cd8e71055b23d206332d972a72913f2bddf28fae9d3f91198851b092ec3bf0 WatchSource:0}: Error finding container b9cd8e71055b23d206332d972a72913f2bddf28fae9d3f91198851b092ec3bf0: Status 404 returned error can't find the container with id b9cd8e71055b23d206332d972a72913f2bddf28fae9d3f91198851b092ec3bf0 Jan 26 12:55:07 crc kubenswrapper[4881]: I0126 12:55:07.787732 4881 generic.go:334] "Generic (PLEG): container finished" podID="ad3804a0-2b66-4f69-a8f4-6f8b27abea8f" containerID="1354a08f950de1b75bb74e64f0fa51603a81c137ccaa620604356bbdb01067c8" exitCode=0 Jan 26 12:55:07 crc kubenswrapper[4881]: I0126 12:55:07.788096 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ad70b87966c87781d6eb9196beaf5dd7adef9a3f0a61b9f42ea9b97c18rzk2b" event={"ID":"ad3804a0-2b66-4f69-a8f4-6f8b27abea8f","Type":"ContainerDied","Data":"1354a08f950de1b75bb74e64f0fa51603a81c137ccaa620604356bbdb01067c8"} Jan 26 12:55:07 crc kubenswrapper[4881]: I0126 12:55:07.788147 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ad70b87966c87781d6eb9196beaf5dd7adef9a3f0a61b9f42ea9b97c18rzk2b" event={"ID":"ad3804a0-2b66-4f69-a8f4-6f8b27abea8f","Type":"ContainerStarted","Data":"b9cd8e71055b23d206332d972a72913f2bddf28fae9d3f91198851b092ec3bf0"} Jan 26 12:55:08 crc kubenswrapper[4881]: I0126 12:55:08.802657 4881 generic.go:334] "Generic (PLEG): container finished" podID="ad3804a0-2b66-4f69-a8f4-6f8b27abea8f" containerID="6a63d3f39061a8eec9f5ff43331ca3c88e4c79957689bf0ec54d9425ecacc8cd" exitCode=0 Jan 26 12:55:08 crc kubenswrapper[4881]: I0126 12:55:08.802847 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/ad70b87966c87781d6eb9196beaf5dd7adef9a3f0a61b9f42ea9b97c18rzk2b" event={"ID":"ad3804a0-2b66-4f69-a8f4-6f8b27abea8f","Type":"ContainerDied","Data":"6a63d3f39061a8eec9f5ff43331ca3c88e4c79957689bf0ec54d9425ecacc8cd"} Jan 26 12:55:09 crc kubenswrapper[4881]: I0126 12:55:09.812576 4881 generic.go:334] "Generic (PLEG): container finished" podID="ad3804a0-2b66-4f69-a8f4-6f8b27abea8f" containerID="f587ceb5a85b532dbfcff1265eee0cd212e9c178f2468c6446816b07ff2939a8" exitCode=0 Jan 26 12:55:09 crc kubenswrapper[4881]: I0126 12:55:09.812659 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ad70b87966c87781d6eb9196beaf5dd7adef9a3f0a61b9f42ea9b97c18rzk2b" event={"ID":"ad3804a0-2b66-4f69-a8f4-6f8b27abea8f","Type":"ContainerDied","Data":"f587ceb5a85b532dbfcff1265eee0cd212e9c178f2468c6446816b07ff2939a8"} Jan 26 12:55:11 crc kubenswrapper[4881]: I0126 12:55:11.203093 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ad70b87966c87781d6eb9196beaf5dd7adef9a3f0a61b9f42ea9b97c18rzk2b" Jan 26 12:55:11 crc kubenswrapper[4881]: I0126 12:55:11.292737 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8r85t\" (UniqueName: \"kubernetes.io/projected/ad3804a0-2b66-4f69-a8f4-6f8b27abea8f-kube-api-access-8r85t\") pod \"ad3804a0-2b66-4f69-a8f4-6f8b27abea8f\" (UID: \"ad3804a0-2b66-4f69-a8f4-6f8b27abea8f\") " Jan 26 12:55:11 crc kubenswrapper[4881]: I0126 12:55:11.292974 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ad3804a0-2b66-4f69-a8f4-6f8b27abea8f-bundle\") pod \"ad3804a0-2b66-4f69-a8f4-6f8b27abea8f\" (UID: \"ad3804a0-2b66-4f69-a8f4-6f8b27abea8f\") " Jan 26 12:55:11 crc kubenswrapper[4881]: I0126 12:55:11.293072 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ad3804a0-2b66-4f69-a8f4-6f8b27abea8f-util\") pod \"ad3804a0-2b66-4f69-a8f4-6f8b27abea8f\" (UID: \"ad3804a0-2b66-4f69-a8f4-6f8b27abea8f\") " Jan 26 12:55:11 crc kubenswrapper[4881]: I0126 12:55:11.294349 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad3804a0-2b66-4f69-a8f4-6f8b27abea8f-bundle" (OuterVolumeSpecName: "bundle") pod "ad3804a0-2b66-4f69-a8f4-6f8b27abea8f" (UID: "ad3804a0-2b66-4f69-a8f4-6f8b27abea8f"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 12:55:11 crc kubenswrapper[4881]: I0126 12:55:11.302662 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad3804a0-2b66-4f69-a8f4-6f8b27abea8f-kube-api-access-8r85t" (OuterVolumeSpecName: "kube-api-access-8r85t") pod "ad3804a0-2b66-4f69-a8f4-6f8b27abea8f" (UID: "ad3804a0-2b66-4f69-a8f4-6f8b27abea8f"). InnerVolumeSpecName "kube-api-access-8r85t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:55:11 crc kubenswrapper[4881]: I0126 12:55:11.340253 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad3804a0-2b66-4f69-a8f4-6f8b27abea8f-util" (OuterVolumeSpecName: "util") pod "ad3804a0-2b66-4f69-a8f4-6f8b27abea8f" (UID: "ad3804a0-2b66-4f69-a8f4-6f8b27abea8f"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 12:55:11 crc kubenswrapper[4881]: I0126 12:55:11.395166 4881 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ad3804a0-2b66-4f69-a8f4-6f8b27abea8f-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 12:55:11 crc kubenswrapper[4881]: I0126 12:55:11.395814 4881 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ad3804a0-2b66-4f69-a8f4-6f8b27abea8f-util\") on node \"crc\" DevicePath \"\"" Jan 26 12:55:11 crc kubenswrapper[4881]: I0126 12:55:11.395979 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8r85t\" (UniqueName: \"kubernetes.io/projected/ad3804a0-2b66-4f69-a8f4-6f8b27abea8f-kube-api-access-8r85t\") on node \"crc\" DevicePath \"\"" Jan 26 12:55:11 crc kubenswrapper[4881]: I0126 12:55:11.833116 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ad70b87966c87781d6eb9196beaf5dd7adef9a3f0a61b9f42ea9b97c18rzk2b" event={"ID":"ad3804a0-2b66-4f69-a8f4-6f8b27abea8f","Type":"ContainerDied","Data":"b9cd8e71055b23d206332d972a72913f2bddf28fae9d3f91198851b092ec3bf0"} Jan 26 12:55:11 crc kubenswrapper[4881]: I0126 12:55:11.833172 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b9cd8e71055b23d206332d972a72913f2bddf28fae9d3f91198851b092ec3bf0" Jan 26 12:55:11 crc kubenswrapper[4881]: I0126 12:55:11.833178 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ad70b87966c87781d6eb9196beaf5dd7adef9a3f0a61b9f42ea9b97c18rzk2b" Jan 26 12:55:18 crc kubenswrapper[4881]: I0126 12:55:18.876304 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-6fbc4c9d5c-k7d5p"] Jan 26 12:55:18 crc kubenswrapper[4881]: E0126 12:55:18.877358 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad3804a0-2b66-4f69-a8f4-6f8b27abea8f" containerName="pull" Jan 26 12:55:18 crc kubenswrapper[4881]: I0126 12:55:18.877374 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad3804a0-2b66-4f69-a8f4-6f8b27abea8f" containerName="pull" Jan 26 12:55:18 crc kubenswrapper[4881]: E0126 12:55:18.877396 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad3804a0-2b66-4f69-a8f4-6f8b27abea8f" containerName="extract" Jan 26 12:55:18 crc kubenswrapper[4881]: I0126 12:55:18.877404 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad3804a0-2b66-4f69-a8f4-6f8b27abea8f" containerName="extract" Jan 26 12:55:18 crc kubenswrapper[4881]: E0126 12:55:18.877422 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad3804a0-2b66-4f69-a8f4-6f8b27abea8f" containerName="util" Jan 26 12:55:18 crc kubenswrapper[4881]: I0126 12:55:18.877431 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad3804a0-2b66-4f69-a8f4-6f8b27abea8f" containerName="util" Jan 26 12:55:18 crc kubenswrapper[4881]: I0126 12:55:18.877590 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad3804a0-2b66-4f69-a8f4-6f8b27abea8f" containerName="extract" Jan 26 12:55:18 crc kubenswrapper[4881]: I0126 12:55:18.878127 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-6fbc4c9d5c-k7d5p" Jan 26 12:55:18 crc kubenswrapper[4881]: I0126 12:55:18.881194 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-lsszx" Jan 26 12:55:18 crc kubenswrapper[4881]: I0126 12:55:18.938929 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4m2pj\" (UniqueName: \"kubernetes.io/projected/ba9ac6c1-1e58-4306-b8fd-c56dca9f4ec4-kube-api-access-4m2pj\") pod \"openstack-operator-controller-init-6fbc4c9d5c-k7d5p\" (UID: \"ba9ac6c1-1e58-4306-b8fd-c56dca9f4ec4\") " pod="openstack-operators/openstack-operator-controller-init-6fbc4c9d5c-k7d5p" Jan 26 12:55:18 crc kubenswrapper[4881]: I0126 12:55:18.957547 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-6fbc4c9d5c-k7d5p"] Jan 26 12:55:19 crc kubenswrapper[4881]: I0126 12:55:19.040674 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4m2pj\" (UniqueName: \"kubernetes.io/projected/ba9ac6c1-1e58-4306-b8fd-c56dca9f4ec4-kube-api-access-4m2pj\") pod \"openstack-operator-controller-init-6fbc4c9d5c-k7d5p\" (UID: \"ba9ac6c1-1e58-4306-b8fd-c56dca9f4ec4\") " pod="openstack-operators/openstack-operator-controller-init-6fbc4c9d5c-k7d5p" Jan 26 12:55:19 crc kubenswrapper[4881]: I0126 12:55:19.058197 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4m2pj\" (UniqueName: \"kubernetes.io/projected/ba9ac6c1-1e58-4306-b8fd-c56dca9f4ec4-kube-api-access-4m2pj\") pod \"openstack-operator-controller-init-6fbc4c9d5c-k7d5p\" (UID: \"ba9ac6c1-1e58-4306-b8fd-c56dca9f4ec4\") " pod="openstack-operators/openstack-operator-controller-init-6fbc4c9d5c-k7d5p" Jan 26 12:55:19 crc kubenswrapper[4881]: I0126 12:55:19.197100 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-6fbc4c9d5c-k7d5p" Jan 26 12:55:19 crc kubenswrapper[4881]: I0126 12:55:19.381536 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-6fbc4c9d5c-k7d5p"] Jan 26 12:55:19 crc kubenswrapper[4881]: I0126 12:55:19.900048 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-6fbc4c9d5c-k7d5p" event={"ID":"ba9ac6c1-1e58-4306-b8fd-c56dca9f4ec4","Type":"ContainerStarted","Data":"4cb1292f81748df96bf2e5a69c533d86229d9f59be6e62d26f98bdb0a4bb2ee2"} Jan 26 12:55:23 crc kubenswrapper[4881]: I0126 12:55:23.934083 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-6fbc4c9d5c-k7d5p" event={"ID":"ba9ac6c1-1e58-4306-b8fd-c56dca9f4ec4","Type":"ContainerStarted","Data":"ac2228d6e4788bef1532e7930410d320444a80630aad9a1c84ff8db4514cf76b"} Jan 26 12:55:23 crc kubenswrapper[4881]: I0126 12:55:23.934785 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-6fbc4c9d5c-k7d5p" Jan 26 12:55:23 crc kubenswrapper[4881]: I0126 12:55:23.966755 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-6fbc4c9d5c-k7d5p" podStartSLOduration=2.065543377 podStartE2EDuration="5.966735067s" podCreationTimestamp="2026-01-26 12:55:18 +0000 UTC" firstStartedPulling="2026-01-26 12:55:19.385243196 +0000 UTC m=+1191.864553232" lastFinishedPulling="2026-01-26 12:55:23.286434886 +0000 UTC m=+1195.765744922" observedRunningTime="2026-01-26 12:55:23.965959418 +0000 UTC m=+1196.445269454" watchObservedRunningTime="2026-01-26 12:55:23.966735067 +0000 UTC m=+1196.446045123" Jan 26 12:55:29 crc kubenswrapper[4881]: I0126 12:55:29.199170 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-6fbc4c9d5c-k7d5p" Jan 26 12:55:51 crc kubenswrapper[4881]: I0126 12:55:51.425540 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-7478f7dbf9-w2n48"] Jan 26 12:55:51 crc kubenswrapper[4881]: I0126 12:55:51.426852 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-w2n48" Jan 26 12:55:51 crc kubenswrapper[4881]: I0126 12:55:51.429404 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-lgljg"] Jan 26 12:55:51 crc kubenswrapper[4881]: I0126 12:55:51.429487 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-tw8dd" Jan 26 12:55:51 crc kubenswrapper[4881]: I0126 12:55:51.430464 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-lgljg" Jan 26 12:55:51 crc kubenswrapper[4881]: I0126 12:55:51.433109 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-ps98b" Jan 26 12:55:51 crc kubenswrapper[4881]: I0126 12:55:51.438865 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7f86f8796f-hs8xc"] Jan 26 12:55:51 crc kubenswrapper[4881]: I0126 12:55:51.439699 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-hs8xc" Jan 26 12:55:51 crc kubenswrapper[4881]: I0126 12:55:51.443244 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-fw5jj" Jan 26 12:55:51 crc kubenswrapper[4881]: I0126 12:55:51.448023 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-lgljg"] Jan 26 12:55:51 crc kubenswrapper[4881]: I0126 12:55:51.461864 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-sgnd2"] Jan 26 12:55:51 crc kubenswrapper[4881]: I0126 12:55:51.468386 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-sgnd2" Jan 26 12:55:51 crc kubenswrapper[4881]: I0126 12:55:51.469876 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-nznr5" Jan 26 12:55:51 crc kubenswrapper[4881]: I0126 12:55:51.479883 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-sgnd2"] Jan 26 12:55:51 crc kubenswrapper[4881]: I0126 12:55:51.491495 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-v95fq"] Jan 26 12:55:51 crc kubenswrapper[4881]: I0126 12:55:51.492717 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-v95fq" Jan 26 12:55:51 crc kubenswrapper[4881]: I0126 12:55:51.498337 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-d7rcb" Jan 26 12:55:51 crc kubenswrapper[4881]: I0126 12:55:51.501906 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-rww6v"] Jan 26 12:55:51 crc kubenswrapper[4881]: I0126 12:55:51.502638 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-rww6v" Jan 26 12:55:51 crc kubenswrapper[4881]: I0126 12:55:51.508501 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-wsqjt" Jan 26 12:55:51 crc kubenswrapper[4881]: I0126 12:55:51.531832 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-v95fq"] Jan 26 12:55:51 crc kubenswrapper[4881]: I0126 12:55:51.542753 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-694cf4f878-wkhcm"] Jan 26 12:55:51 crc kubenswrapper[4881]: I0126 12:55:51.543560 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-wkhcm" Jan 26 12:55:51 crc kubenswrapper[4881]: I0126 12:55:51.545638 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-zk5bt" Jan 26 12:55:51 crc kubenswrapper[4881]: I0126 12:55:51.548247 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Jan 26 12:55:51 crc kubenswrapper[4881]: I0126 12:55:51.549210 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdvx2\" (UniqueName: \"kubernetes.io/projected/c5cecd8b-813f-4bde-be28-371c54bcdfb9-kube-api-access-gdvx2\") pod \"designate-operator-controller-manager-b45d7bf98-lgljg\" (UID: \"c5cecd8b-813f-4bde-be28-371c54bcdfb9\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-lgljg" Jan 26 12:55:51 crc kubenswrapper[4881]: I0126 12:55:51.549262 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28z54\" (UniqueName: \"kubernetes.io/projected/4508aa9d-2a89-4976-bd36-dc918900371e-kube-api-access-28z54\") pod \"glance-operator-controller-manager-78fdd796fd-sgnd2\" (UID: \"4508aa9d-2a89-4976-bd36-dc918900371e\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-sgnd2" Jan 26 12:55:51 crc kubenswrapper[4881]: I0126 12:55:51.549295 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trwxd\" (UniqueName: \"kubernetes.io/projected/d5375dff-af5c-4de8-b52b-acf18edc4fb2-kube-api-access-trwxd\") pod \"cinder-operator-controller-manager-7478f7dbf9-w2n48\" (UID: \"d5375dff-af5c-4de8-b52b-acf18edc4fb2\") " pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-w2n48" Jan 26 12:55:51 crc kubenswrapper[4881]: I0126 12:55:51.549319 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvgp5\" (UniqueName: \"kubernetes.io/projected/e8b8ff3a-c099-4192-b061-33ff69fd2884-kube-api-access-kvgp5\") pod \"barbican-operator-controller-manager-7f86f8796f-hs8xc\" (UID: \"e8b8ff3a-c099-4192-b061-33ff69fd2884\") " pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-hs8xc" Jan 26 12:55:51 crc kubenswrapper[4881]: I0126 12:55:51.569413 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-7478f7dbf9-w2n48"] Jan 26 12:55:51 crc kubenswrapper[4881]: I0126 12:55:51.573932 4881 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack-operators/ironic-operator-controller-manager-598f7747c9-flv4v"] Jan 26 12:55:51 crc kubenswrapper[4881]: I0126 12:55:51.574768 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-flv4v" Jan 26 12:55:51 crc kubenswrapper[4881]: I0126 12:55:51.609752 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-p64mr" Jan 26 12:55:51 crc kubenswrapper[4881]: I0126 12:55:51.611533 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-694cf4f878-wkhcm"] Jan 26 12:55:51 crc kubenswrapper[4881]: I0126 12:55:51.632380 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-rww6v"] Jan 26 12:55:51 crc kubenswrapper[4881]: I0126 12:55:51.652013 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tk4jl\" (UniqueName: \"kubernetes.io/projected/d998c88b-6b01-4e5f-bbab-a5aaee1a945b-kube-api-access-tk4jl\") pod \"horizon-operator-controller-manager-77d5c5b54f-rww6v\" (UID: \"d998c88b-6b01-4e5f-bbab-a5aaee1a945b\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-rww6v" Jan 26 12:55:51 crc kubenswrapper[4881]: I0126 12:55:51.652262 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kvgp5\" (UniqueName: \"kubernetes.io/projected/e8b8ff3a-c099-4192-b061-33ff69fd2884-kube-api-access-kvgp5\") pod \"barbican-operator-controller-manager-7f86f8796f-hs8xc\" (UID: \"e8b8ff3a-c099-4192-b061-33ff69fd2884\") " pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-hs8xc" Jan 26 12:55:51 crc kubenswrapper[4881]: I0126 12:55:51.652429 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgfq6\" (UniqueName: \"kubernetes.io/projected/0e17a034-e3c9-434a-838f-8bfae6d010dd-kube-api-access-kgfq6\") pod \"heat-operator-controller-manager-594c8c9d5d-v95fq\" (UID: \"0e17a034-e3c9-434a-838f-8bfae6d010dd\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-v95fq" Jan 26 12:55:51 crc kubenswrapper[4881]: I0126 12:55:51.652779 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gdvx2\" (UniqueName: \"kubernetes.io/projected/c5cecd8b-813f-4bde-be28-371c54bcdfb9-kube-api-access-gdvx2\") pod \"designate-operator-controller-manager-b45d7bf98-lgljg\" (UID: \"c5cecd8b-813f-4bde-be28-371c54bcdfb9\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-lgljg" Jan 26 12:55:51 crc kubenswrapper[4881]: I0126 12:55:51.652831 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brpd6\" (UniqueName: \"kubernetes.io/projected/cbafad55-0cc5-42d6-b721-b1f4e158251f-kube-api-access-brpd6\") pod \"ironic-operator-controller-manager-598f7747c9-flv4v\" (UID: \"cbafad55-0cc5-42d6-b721-b1f4e158251f\") " pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-flv4v" Jan 26 12:55:51 crc kubenswrapper[4881]: I0126 12:55:51.652863 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28z54\" (UniqueName: \"kubernetes.io/projected/4508aa9d-2a89-4976-bd36-dc918900371e-kube-api-access-28z54\") pod 
\"glance-operator-controller-manager-78fdd796fd-sgnd2\" (UID: \"4508aa9d-2a89-4976-bd36-dc918900371e\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-sgnd2" Jan 26 12:55:51 crc kubenswrapper[4881]: I0126 12:55:51.652894 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hg2p5\" (UniqueName: \"kubernetes.io/projected/517d3e74-cfe4-4e5e-96b0-0780042b0dbd-kube-api-access-hg2p5\") pod \"infra-operator-controller-manager-694cf4f878-wkhcm\" (UID: \"517d3e74-cfe4-4e5e-96b0-0780042b0dbd\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-wkhcm" Jan 26 12:55:51 crc kubenswrapper[4881]: I0126 12:55:51.652916 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/517d3e74-cfe4-4e5e-96b0-0780042b0dbd-cert\") pod \"infra-operator-controller-manager-694cf4f878-wkhcm\" (UID: \"517d3e74-cfe4-4e5e-96b0-0780042b0dbd\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-wkhcm" Jan 26 12:55:51 crc kubenswrapper[4881]: I0126 12:55:51.653506 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-trwxd\" (UniqueName: \"kubernetes.io/projected/d5375dff-af5c-4de8-b52b-acf18edc4fb2-kube-api-access-trwxd\") pod \"cinder-operator-controller-manager-7478f7dbf9-w2n48\" (UID: \"d5375dff-af5c-4de8-b52b-acf18edc4fb2\") " pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-w2n48" Jan 26 12:55:51 crc kubenswrapper[4881]: I0126 12:55:51.672623 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-598f7747c9-flv4v"] Jan 26 12:55:51 crc kubenswrapper[4881]: I0126 12:55:51.678334 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-66d48"] Jan 26 12:55:51 crc kubenswrapper[4881]: I0126 12:55:51.679226 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-66d48" Jan 26 12:55:51 crc kubenswrapper[4881]: I0126 12:55:51.681227 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gdvx2\" (UniqueName: \"kubernetes.io/projected/c5cecd8b-813f-4bde-be28-371c54bcdfb9-kube-api-access-gdvx2\") pod \"designate-operator-controller-manager-b45d7bf98-lgljg\" (UID: \"c5cecd8b-813f-4bde-be28-371c54bcdfb9\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-lgljg" Jan 26 12:55:51 crc kubenswrapper[4881]: I0126 12:55:51.681407 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-frbf8" Jan 26 12:55:51 crc kubenswrapper[4881]: I0126 12:55:51.685309 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-trwxd\" (UniqueName: \"kubernetes.io/projected/d5375dff-af5c-4de8-b52b-acf18edc4fb2-kube-api-access-trwxd\") pod \"cinder-operator-controller-manager-7478f7dbf9-w2n48\" (UID: \"d5375dff-af5c-4de8-b52b-acf18edc4fb2\") " pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-w2n48" Jan 26 12:55:51 crc kubenswrapper[4881]: I0126 12:55:51.685580 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7f86f8796f-hs8xc"] Jan 26 12:55:51 crc kubenswrapper[4881]: I0126 12:55:51.687962 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kvgp5\" (UniqueName: \"kubernetes.io/projected/e8b8ff3a-c099-4192-b061-33ff69fd2884-kube-api-access-kvgp5\") pod \"barbican-operator-controller-manager-7f86f8796f-hs8xc\" (UID: \"e8b8ff3a-c099-4192-b061-33ff69fd2884\") " pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-hs8xc" Jan 26 12:55:51 crc kubenswrapper[4881]: I0126 12:55:51.689075 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-28z54\" (UniqueName: \"kubernetes.io/projected/4508aa9d-2a89-4976-bd36-dc918900371e-kube-api-access-28z54\") pod \"glance-operator-controller-manager-78fdd796fd-sgnd2\" (UID: \"4508aa9d-2a89-4976-bd36-dc918900371e\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-sgnd2" Jan 26 12:55:51 crc kubenswrapper[4881]: I0126 12:55:51.695475 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-sc6f8"] Jan 26 12:55:51 crc kubenswrapper[4881]: I0126 12:55:51.696297 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-sc6f8" Jan 26 12:55:51 crc kubenswrapper[4881]: I0126 12:55:51.707550 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-66d48"] Jan 26 12:55:51 crc kubenswrapper[4881]: I0126 12:55:51.714251 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-sc6f8"] Jan 26 12:55:51 crc kubenswrapper[4881]: I0126 12:55:51.714474 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-d7tfg" Jan 26 12:55:51 crc kubenswrapper[4881]: I0126 12:55:51.721934 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-zkhss"] Jan 26 12:55:51 crc kubenswrapper[4881]: I0126 12:55:51.722922 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-zkhss" Jan 26 12:55:51 crc kubenswrapper[4881]: I0126 12:55:51.724933 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-sptw4" Jan 26 12:55:51 crc kubenswrapper[4881]: I0126 12:55:51.729423 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-zkhss"] Jan 26 12:55:51 crc kubenswrapper[4881]: I0126 12:55:51.735875 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78d58447c5-m8vjc"] Jan 26 12:55:51 crc kubenswrapper[4881]: I0126 12:55:51.736783 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-m8vjc" Jan 26 12:55:51 crc kubenswrapper[4881]: I0126 12:55:51.740207 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-tx7kn" Jan 26 12:55:51 crc kubenswrapper[4881]: I0126 12:55:51.744211 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-w2n48" Jan 26 12:55:51 crc kubenswrapper[4881]: I0126 12:55:51.754838 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kgfq6\" (UniqueName: \"kubernetes.io/projected/0e17a034-e3c9-434a-838f-8bfae6d010dd-kube-api-access-kgfq6\") pod \"heat-operator-controller-manager-594c8c9d5d-v95fq\" (UID: \"0e17a034-e3c9-434a-838f-8bfae6d010dd\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-v95fq" Jan 26 12:55:51 crc kubenswrapper[4881]: I0126 12:55:51.754958 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnrb7\" (UniqueName: \"kubernetes.io/projected/78a91159-fead-4133-98e4-5dd587f6b274-kube-api-access-nnrb7\") pod \"manila-operator-controller-manager-78c6999f6f-sc6f8\" (UID: \"78a91159-fead-4133-98e4-5dd587f6b274\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-sc6f8" Jan 26 12:55:51 crc kubenswrapper[4881]: I0126 12:55:51.754986 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-brpd6\" (UniqueName: \"kubernetes.io/projected/cbafad55-0cc5-42d6-b721-b1f4e158251f-kube-api-access-brpd6\") pod \"ironic-operator-controller-manager-598f7747c9-flv4v\" (UID: \"cbafad55-0cc5-42d6-b721-b1f4e158251f\") " pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-flv4v" Jan 26 12:55:51 crc kubenswrapper[4881]: I0126 12:55:51.755043 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hg2p5\" (UniqueName: \"kubernetes.io/projected/517d3e74-cfe4-4e5e-96b0-0780042b0dbd-kube-api-access-hg2p5\") pod \"infra-operator-controller-manager-694cf4f878-wkhcm\" (UID: \"517d3e74-cfe4-4e5e-96b0-0780042b0dbd\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-wkhcm" Jan 26 12:55:51 crc kubenswrapper[4881]: I0126 12:55:51.755069 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/517d3e74-cfe4-4e5e-96b0-0780042b0dbd-cert\") pod \"infra-operator-controller-manager-694cf4f878-wkhcm\" (UID: \"517d3e74-cfe4-4e5e-96b0-0780042b0dbd\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-wkhcm" Jan 26 12:55:51 crc kubenswrapper[4881]: E0126 12:55:51.755205 4881 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 26 12:55:51 crc kubenswrapper[4881]: I0126 12:55:51.755249 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgdgz\" (UniqueName: \"kubernetes.io/projected/b6807e2b-25b9-4802-8086-2c6eab9ff308-kube-api-access-kgdgz\") pod \"keystone-operator-controller-manager-b8b6d4659-66d48\" (UID: \"b6807e2b-25b9-4802-8086-2c6eab9ff308\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-66d48" Jan 26 12:55:51 crc kubenswrapper[4881]: E0126 12:55:51.755263 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/517d3e74-cfe4-4e5e-96b0-0780042b0dbd-cert podName:517d3e74-cfe4-4e5e-96b0-0780042b0dbd nodeName:}" failed. No retries permitted until 2026-01-26 12:55:52.255247238 +0000 UTC m=+1224.734557254 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/517d3e74-cfe4-4e5e-96b0-0780042b0dbd-cert") pod "infra-operator-controller-manager-694cf4f878-wkhcm" (UID: "517d3e74-cfe4-4e5e-96b0-0780042b0dbd") : secret "infra-operator-webhook-server-cert" not found Jan 26 12:55:51 crc kubenswrapper[4881]: I0126 12:55:51.755282 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tk4jl\" (UniqueName: \"kubernetes.io/projected/d998c88b-6b01-4e5f-bbab-a5aaee1a945b-kube-api-access-tk4jl\") pod \"horizon-operator-controller-manager-77d5c5b54f-rww6v\" (UID: \"d998c88b-6b01-4e5f-bbab-a5aaee1a945b\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-rww6v" Jan 26 12:55:51 crc kubenswrapper[4881]: I0126 12:55:51.762318 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78d58447c5-m8vjc"] Jan 26 12:55:51 crc kubenswrapper[4881]: I0126 12:55:51.762704 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-lgljg" Jan 26 12:55:51 crc kubenswrapper[4881]: I0126 12:55:51.770829 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tk4jl\" (UniqueName: \"kubernetes.io/projected/d998c88b-6b01-4e5f-bbab-a5aaee1a945b-kube-api-access-tk4jl\") pod \"horizon-operator-controller-manager-77d5c5b54f-rww6v\" (UID: \"d998c88b-6b01-4e5f-bbab-a5aaee1a945b\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-rww6v" Jan 26 12:55:51 crc kubenswrapper[4881]: I0126 12:55:51.773338 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hg2p5\" (UniqueName: \"kubernetes.io/projected/517d3e74-cfe4-4e5e-96b0-0780042b0dbd-kube-api-access-hg2p5\") pod \"infra-operator-controller-manager-694cf4f878-wkhcm\" (UID: \"517d3e74-cfe4-4e5e-96b0-0780042b0dbd\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-wkhcm" Jan 26 12:55:51 crc kubenswrapper[4881]: I0126 12:55:51.776707 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-7bdb645866-r7dwj"] Jan 26 12:55:51 crc kubenswrapper[4881]: I0126 12:55:51.779401 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-r7dwj" Jan 26 12:55:51 crc kubenswrapper[4881]: I0126 12:55:51.780090 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-brpd6\" (UniqueName: \"kubernetes.io/projected/cbafad55-0cc5-42d6-b721-b1f4e158251f-kube-api-access-brpd6\") pod \"ironic-operator-controller-manager-598f7747c9-flv4v\" (UID: \"cbafad55-0cc5-42d6-b721-b1f4e158251f\") " pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-flv4v" Jan 26 12:55:51 crc kubenswrapper[4881]: I0126 12:55:51.782245 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-hs8xc" Jan 26 12:55:51 crc kubenswrapper[4881]: I0126 12:55:51.782658 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-kshdx" Jan 26 12:55:51 crc kubenswrapper[4881]: I0126 12:55:51.804819 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-7bdb645866-r7dwj"] Jan 26 12:55:51 crc kubenswrapper[4881]: I0126 12:55:51.805633 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-sgnd2" Jan 26 12:55:51 crc kubenswrapper[4881]: I0126 12:55:51.810560 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kgfq6\" (UniqueName: \"kubernetes.io/projected/0e17a034-e3c9-434a-838f-8bfae6d010dd-kube-api-access-kgfq6\") pod \"heat-operator-controller-manager-594c8c9d5d-v95fq\" (UID: \"0e17a034-e3c9-434a-838f-8bfae6d010dd\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-v95fq" Jan 26 12:55:51 crc kubenswrapper[4881]: I0126 12:55:51.824645 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-v95fq" Jan 26 12:55:51 crc kubenswrapper[4881]: I0126 12:55:51.831315 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5f4cd88d46-r2p67"] Jan 26 12:55:51 crc kubenswrapper[4881]: I0126 12:55:51.832308 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-r2p67" Jan 26 12:55:51 crc kubenswrapper[4881]: I0126 12:55:51.834604 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-rww6v" Jan 26 12:55:51 crc kubenswrapper[4881]: I0126 12:55:51.836033 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-jndsz" Jan 26 12:55:51 crc kubenswrapper[4881]: I0126 12:55:51.856715 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nnrb7\" (UniqueName: \"kubernetes.io/projected/78a91159-fead-4133-98e4-5dd587f6b274-kube-api-access-nnrb7\") pod \"manila-operator-controller-manager-78c6999f6f-sc6f8\" (UID: \"78a91159-fead-4133-98e4-5dd587f6b274\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-sc6f8" Jan 26 12:55:51 crc kubenswrapper[4881]: I0126 12:55:51.856830 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlk2k\" (UniqueName: \"kubernetes.io/projected/d808c58e-a8df-4cbd-aee6-d87edd677e94-kube-api-access-mlk2k\") pod \"nova-operator-controller-manager-7bdb645866-r7dwj\" (UID: \"d808c58e-a8df-4cbd-aee6-d87edd677e94\") " pod="openstack-operators/nova-operator-controller-manager-7bdb645866-r7dwj" Jan 26 12:55:51 crc kubenswrapper[4881]: I0126 12:55:51.856882 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kgdgz\" (UniqueName: \"kubernetes.io/projected/b6807e2b-25b9-4802-8086-2c6eab9ff308-kube-api-access-kgdgz\") pod \"keystone-operator-controller-manager-b8b6d4659-66d48\" (UID: \"b6807e2b-25b9-4802-8086-2c6eab9ff308\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-66d48" Jan 26 12:55:51 crc kubenswrapper[4881]: I0126 12:55:51.856906 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zf9qf\" (UniqueName: \"kubernetes.io/projected/97b268cc-1863-494c-a47b-da0c52f76d39-kube-api-access-zf9qf\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-zkhss\" (UID: \"97b268cc-1863-494c-a47b-da0c52f76d39\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-zkhss" Jan 26 12:55:51 crc kubenswrapper[4881]: I0126 12:55:51.856928 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfwdf\" (UniqueName: \"kubernetes.io/projected/76b071ae-05bc-4142-9004-e5528d00c5cc-kube-api-access-wfwdf\") pod \"neutron-operator-controller-manager-78d58447c5-m8vjc\" (UID: \"76b071ae-05bc-4142-9004-e5528d00c5cc\") " pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-m8vjc" Jan 26 12:55:51 crc kubenswrapper[4881]: I0126 12:55:51.881509 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5f4cd88d46-r2p67"] Jan 26 12:55:51 crc kubenswrapper[4881]: I0126 12:55:51.896010 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854jj499"] Jan 26 12:55:51 crc kubenswrapper[4881]: I0126 12:55:51.901382 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nnrb7\" (UniqueName: \"kubernetes.io/projected/78a91159-fead-4133-98e4-5dd587f6b274-kube-api-access-nnrb7\") pod \"manila-operator-controller-manager-78c6999f6f-sc6f8\" (UID: \"78a91159-fead-4133-98e4-5dd587f6b274\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-sc6f8" Jan 26 12:55:51 crc 
kubenswrapper[4881]: I0126 12:55:51.901967 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kgdgz\" (UniqueName: \"kubernetes.io/projected/b6807e2b-25b9-4802-8086-2c6eab9ff308-kube-api-access-kgdgz\") pod \"keystone-operator-controller-manager-b8b6d4659-66d48\" (UID: \"b6807e2b-25b9-4802-8086-2c6eab9ff308\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-66d48" Jan 26 12:55:51 crc kubenswrapper[4881]: I0126 12:55:51.906984 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854jj499" Jan 26 12:55:51 crc kubenswrapper[4881]: I0126 12:55:51.916636 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Jan 26 12:55:51 crc kubenswrapper[4881]: I0126 12:55:51.916735 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-7kx2w" Jan 26 12:55:51 crc kubenswrapper[4881]: I0126 12:55:51.925039 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-flv4v" Jan 26 12:55:51 crc kubenswrapper[4881]: I0126 12:55:51.937706 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854jj499"] Jan 26 12:55:51 crc kubenswrapper[4881]: I0126 12:55:51.949321 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-6f75f45d54-pz264"] Jan 26 12:55:51 crc kubenswrapper[4881]: I0126 12:55:51.950180 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-pz264" Jan 26 12:55:51 crc kubenswrapper[4881]: I0126 12:55:51.957727 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-7fvj7" Jan 26 12:55:51 crc kubenswrapper[4881]: I0126 12:55:51.958743 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zf9qf\" (UniqueName: \"kubernetes.io/projected/97b268cc-1863-494c-a47b-da0c52f76d39-kube-api-access-zf9qf\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-zkhss\" (UID: \"97b268cc-1863-494c-a47b-da0c52f76d39\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-zkhss" Jan 26 12:55:51 crc kubenswrapper[4881]: I0126 12:55:51.958774 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wfwdf\" (UniqueName: \"kubernetes.io/projected/76b071ae-05bc-4142-9004-e5528d00c5cc-kube-api-access-wfwdf\") pod \"neutron-operator-controller-manager-78d58447c5-m8vjc\" (UID: \"76b071ae-05bc-4142-9004-e5528d00c5cc\") " pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-m8vjc" Jan 26 12:55:51 crc kubenswrapper[4881]: I0126 12:55:51.958843 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gg8g2\" (UniqueName: \"kubernetes.io/projected/3451b01c-ed54-49be-ab3a-d8150976d2ec-kube-api-access-gg8g2\") pod \"octavia-operator-controller-manager-5f4cd88d46-r2p67\" (UID: \"3451b01c-ed54-49be-ab3a-d8150976d2ec\") " pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-r2p67" Jan 26 12:55:51 crc kubenswrapper[4881]: I0126 12:55:51.958881 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mlk2k\" (UniqueName: \"kubernetes.io/projected/d808c58e-a8df-4cbd-aee6-d87edd677e94-kube-api-access-mlk2k\") pod \"nova-operator-controller-manager-7bdb645866-r7dwj\" (UID: \"d808c58e-a8df-4cbd-aee6-d87edd677e94\") " pod="openstack-operators/nova-operator-controller-manager-7bdb645866-r7dwj" Jan 26 12:55:51 crc kubenswrapper[4881]: I0126 12:55:51.965440 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-6f75f45d54-pz264"] Jan 26 12:55:51 crc kubenswrapper[4881]: I0126 12:55:51.976116 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-79d5ccc684-6wtsp"] Jan 26 12:55:51 crc kubenswrapper[4881]: I0126 12:55:51.978209 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-6wtsp" Jan 26 12:55:51 crc kubenswrapper[4881]: I0126 12:55:51.986202 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mlk2k\" (UniqueName: \"kubernetes.io/projected/d808c58e-a8df-4cbd-aee6-d87edd677e94-kube-api-access-mlk2k\") pod \"nova-operator-controller-manager-7bdb645866-r7dwj\" (UID: \"d808c58e-a8df-4cbd-aee6-d87edd677e94\") " pod="openstack-operators/nova-operator-controller-manager-7bdb645866-r7dwj" Jan 26 12:55:51 crc kubenswrapper[4881]: I0126 12:55:51.986644 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-c9zzj" Jan 26 12:55:51 crc kubenswrapper[4881]: I0126 12:55:51.994055 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wfwdf\" (UniqueName: \"kubernetes.io/projected/76b071ae-05bc-4142-9004-e5528d00c5cc-kube-api-access-wfwdf\") pod \"neutron-operator-controller-manager-78d58447c5-m8vjc\" (UID: \"76b071ae-05bc-4142-9004-e5528d00c5cc\") " pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-m8vjc" Jan 26 12:55:51 crc kubenswrapper[4881]: I0126 12:55:51.994878 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zf9qf\" (UniqueName: \"kubernetes.io/projected/97b268cc-1863-494c-a47b-da0c52f76d39-kube-api-access-zf9qf\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-zkhss\" (UID: \"97b268cc-1863-494c-a47b-da0c52f76d39\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-zkhss" Jan 26 12:55:52 crc kubenswrapper[4881]: I0126 12:55:52.005880 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-79d5ccc684-6wtsp"] Jan 26 12:55:52 crc kubenswrapper[4881]: I0126 12:55:52.022302 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-zxw9s"] Jan 26 12:55:52 crc kubenswrapper[4881]: I0126 12:55:52.023074 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-zxw9s" Jan 26 12:55:52 crc kubenswrapper[4881]: I0126 12:55:52.027509 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-j6sm4" Jan 26 12:55:52 crc kubenswrapper[4881]: I0126 12:55:52.039984 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-66d48" Jan 26 12:55:52 crc kubenswrapper[4881]: I0126 12:55:52.050481 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-sc6f8" Jan 26 12:55:52 crc kubenswrapper[4881]: I0126 12:55:52.061149 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzm8g\" (UniqueName: \"kubernetes.io/projected/de6b2c73-a5db-4333-91e1-7722f0ba1127-kube-api-access-nzm8g\") pod \"placement-operator-controller-manager-79d5ccc684-6wtsp\" (UID: \"de6b2c73-a5db-4333-91e1-7722f0ba1127\") " pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-6wtsp" Jan 26 12:55:52 crc kubenswrapper[4881]: I0126 12:55:52.061229 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdz9v\" (UniqueName: \"kubernetes.io/projected/5b1abb90-faa0-4b72-9d20-f84ddf952245-kube-api-access-xdz9v\") pod \"ovn-operator-controller-manager-6f75f45d54-pz264\" (UID: \"5b1abb90-faa0-4b72-9d20-f84ddf952245\") " pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-pz264" Jan 26 12:55:52 crc kubenswrapper[4881]: I0126 12:55:52.061270 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8cc0e35b-757a-46fc-bc17-f586426c9b82-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854jj499\" (UID: \"8cc0e35b-757a-46fc-bc17-f586426c9b82\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854jj499" Jan 26 12:55:52 crc kubenswrapper[4881]: I0126 12:55:52.061295 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fl44m\" (UniqueName: \"kubernetes.io/projected/8cc0e35b-757a-46fc-bc17-f586426c9b82-kube-api-access-fl44m\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854jj499\" (UID: \"8cc0e35b-757a-46fc-bc17-f586426c9b82\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854jj499" Jan 26 12:55:52 crc kubenswrapper[4881]: I0126 12:55:52.061333 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gg8g2\" (UniqueName: \"kubernetes.io/projected/3451b01c-ed54-49be-ab3a-d8150976d2ec-kube-api-access-gg8g2\") pod \"octavia-operator-controller-manager-5f4cd88d46-r2p67\" (UID: \"3451b01c-ed54-49be-ab3a-d8150976d2ec\") " pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-r2p67" Jan 26 12:55:52 crc kubenswrapper[4881]: I0126 12:55:52.070168 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-zxw9s"] Jan 26 12:55:52 crc kubenswrapper[4881]: I0126 12:55:52.076495 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gg8g2\" (UniqueName: \"kubernetes.io/projected/3451b01c-ed54-49be-ab3a-d8150976d2ec-kube-api-access-gg8g2\") pod \"octavia-operator-controller-manager-5f4cd88d46-r2p67\" (UID: \"3451b01c-ed54-49be-ab3a-d8150976d2ec\") " pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-r2p67" Jan 26 12:55:52 crc kubenswrapper[4881]: I0126 12:55:52.103822 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-85cd9769bb-sqqcs"] Jan 26 12:55:52 crc kubenswrapper[4881]: I0126 12:55:52.106333 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-85cd9769bb-sqqcs"] Jan 26 12:55:52 crc 
kubenswrapper[4881]: I0126 12:55:52.106410 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-sqqcs" Jan 26 12:55:52 crc kubenswrapper[4881]: I0126 12:55:52.124792 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-98h4q" Jan 26 12:55:52 crc kubenswrapper[4881]: I0126 12:55:52.130669 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-zkhss" Jan 26 12:55:52 crc kubenswrapper[4881]: I0126 12:55:52.137459 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-m8vjc" Jan 26 12:55:52 crc kubenswrapper[4881]: I0126 12:55:52.149896 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-ff2c4"] Jan 26 12:55:52 crc kubenswrapper[4881]: I0126 12:55:52.152104 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-ff2c4" Jan 26 12:55:52 crc kubenswrapper[4881]: I0126 12:55:52.157527 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-2xg2h" Jan 26 12:55:52 crc kubenswrapper[4881]: I0126 12:55:52.163376 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nzm8g\" (UniqueName: \"kubernetes.io/projected/de6b2c73-a5db-4333-91e1-7722f0ba1127-kube-api-access-nzm8g\") pod \"placement-operator-controller-manager-79d5ccc684-6wtsp\" (UID: \"de6b2c73-a5db-4333-91e1-7722f0ba1127\") " pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-6wtsp" Jan 26 12:55:52 crc kubenswrapper[4881]: I0126 12:55:52.163439 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xdz9v\" (UniqueName: \"kubernetes.io/projected/5b1abb90-faa0-4b72-9d20-f84ddf952245-kube-api-access-xdz9v\") pod \"ovn-operator-controller-manager-6f75f45d54-pz264\" (UID: \"5b1abb90-faa0-4b72-9d20-f84ddf952245\") " pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-pz264" Jan 26 12:55:52 crc kubenswrapper[4881]: I0126 12:55:52.163555 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8cc0e35b-757a-46fc-bc17-f586426c9b82-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854jj499\" (UID: \"8cc0e35b-757a-46fc-bc17-f586426c9b82\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854jj499" Jan 26 12:55:52 crc kubenswrapper[4881]: I0126 12:55:52.163594 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fl44m\" (UniqueName: \"kubernetes.io/projected/8cc0e35b-757a-46fc-bc17-f586426c9b82-kube-api-access-fl44m\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854jj499\" (UID: \"8cc0e35b-757a-46fc-bc17-f586426c9b82\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854jj499" Jan 26 12:55:52 crc kubenswrapper[4881]: I0126 12:55:52.163625 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62zjb\" (UniqueName: 
\"kubernetes.io/projected/0a7aea9c-0f85-45d1-9c90-e06acb42f500-kube-api-access-62zjb\") pod \"swift-operator-controller-manager-547cbdb99f-zxw9s\" (UID: \"0a7aea9c-0f85-45d1-9c90-e06acb42f500\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-zxw9s" Jan 26 12:55:52 crc kubenswrapper[4881]: E0126 12:55:52.164028 4881 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 12:55:52 crc kubenswrapper[4881]: E0126 12:55:52.164121 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8cc0e35b-757a-46fc-bc17-f586426c9b82-cert podName:8cc0e35b-757a-46fc-bc17-f586426c9b82 nodeName:}" failed. No retries permitted until 2026-01-26 12:55:52.664104069 +0000 UTC m=+1225.143414095 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8cc0e35b-757a-46fc-bc17-f586426c9b82-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854jj499" (UID: "8cc0e35b-757a-46fc-bc17-f586426c9b82") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 12:55:52 crc kubenswrapper[4881]: I0126 12:55:52.167777 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-ff2c4"] Jan 26 12:55:52 crc kubenswrapper[4881]: I0126 12:55:52.187301 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-r2p67" Jan 26 12:55:52 crc kubenswrapper[4881]: I0126 12:55:52.207692 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-r7dwj" Jan 26 12:55:52 crc kubenswrapper[4881]: I0126 12:55:52.227041 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xdz9v\" (UniqueName: \"kubernetes.io/projected/5b1abb90-faa0-4b72-9d20-f84ddf952245-kube-api-access-xdz9v\") pod \"ovn-operator-controller-manager-6f75f45d54-pz264\" (UID: \"5b1abb90-faa0-4b72-9d20-f84ddf952245\") " pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-pz264" Jan 26 12:55:52 crc kubenswrapper[4881]: I0126 12:55:52.239508 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fl44m\" (UniqueName: \"kubernetes.io/projected/8cc0e35b-757a-46fc-bc17-f586426c9b82-kube-api-access-fl44m\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854jj499\" (UID: \"8cc0e35b-757a-46fc-bc17-f586426c9b82\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854jj499" Jan 26 12:55:52 crc kubenswrapper[4881]: I0126 12:55:52.243616 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nzm8g\" (UniqueName: \"kubernetes.io/projected/de6b2c73-a5db-4333-91e1-7722f0ba1127-kube-api-access-nzm8g\") pod \"placement-operator-controller-manager-79d5ccc684-6wtsp\" (UID: \"de6b2c73-a5db-4333-91e1-7722f0ba1127\") " pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-6wtsp" Jan 26 12:55:52 crc kubenswrapper[4881]: I0126 12:55:52.265214 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/517d3e74-cfe4-4e5e-96b0-0780042b0dbd-cert\") pod \"infra-operator-controller-manager-694cf4f878-wkhcm\" (UID: \"517d3e74-cfe4-4e5e-96b0-0780042b0dbd\") " 
pod="openstack-operators/infra-operator-controller-manager-694cf4f878-wkhcm" Jan 26 12:55:52 crc kubenswrapper[4881]: I0126 12:55:52.265316 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5pj7\" (UniqueName: \"kubernetes.io/projected/0591b1a9-0d5f-4f0a-beca-9ed62627012e-kube-api-access-k5pj7\") pod \"test-operator-controller-manager-69797bbcbd-ff2c4\" (UID: \"0591b1a9-0d5f-4f0a-beca-9ed62627012e\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-ff2c4" Jan 26 12:55:52 crc kubenswrapper[4881]: I0126 12:55:52.265338 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9slzh\" (UniqueName: \"kubernetes.io/projected/973ffd61-1f3c-4e2f-9315-dae216499f96-kube-api-access-9slzh\") pod \"telemetry-operator-controller-manager-85cd9769bb-sqqcs\" (UID: \"973ffd61-1f3c-4e2f-9315-dae216499f96\") " pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-sqqcs" Jan 26 12:55:52 crc kubenswrapper[4881]: I0126 12:55:52.265359 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62zjb\" (UniqueName: \"kubernetes.io/projected/0a7aea9c-0f85-45d1-9c90-e06acb42f500-kube-api-access-62zjb\") pod \"swift-operator-controller-manager-547cbdb99f-zxw9s\" (UID: \"0a7aea9c-0f85-45d1-9c90-e06acb42f500\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-zxw9s" Jan 26 12:55:52 crc kubenswrapper[4881]: E0126 12:55:52.265720 4881 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 26 12:55:52 crc kubenswrapper[4881]: E0126 12:55:52.265758 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/517d3e74-cfe4-4e5e-96b0-0780042b0dbd-cert podName:517d3e74-cfe4-4e5e-96b0-0780042b0dbd nodeName:}" failed. No retries permitted until 2026-01-26 12:55:53.265743287 +0000 UTC m=+1225.745053313 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/517d3e74-cfe4-4e5e-96b0-0780042b0dbd-cert") pod "infra-operator-controller-manager-694cf4f878-wkhcm" (UID: "517d3e74-cfe4-4e5e-96b0-0780042b0dbd") : secret "infra-operator-webhook-server-cert" not found Jan 26 12:55:52 crc kubenswrapper[4881]: I0126 12:55:52.265850 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-pz264" Jan 26 12:55:52 crc kubenswrapper[4881]: I0126 12:55:52.279968 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5784f86c76-zbvz9"] Jan 26 12:55:52 crc kubenswrapper[4881]: I0126 12:55:52.281989 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5784f86c76-zbvz9" Jan 26 12:55:52 crc kubenswrapper[4881]: W0126 12:55:52.282615 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd5375dff_af5c_4de8_b52b_acf18edc4fb2.slice/crio-a50de3f40b63661877177102802fc9ab8849e85a9ed99d062abdfb5e96956532 WatchSource:0}: Error finding container a50de3f40b63661877177102802fc9ab8849e85a9ed99d062abdfb5e96956532: Status 404 returned error can't find the container with id a50de3f40b63661877177102802fc9ab8849e85a9ed99d062abdfb5e96956532 Jan 26 12:55:52 crc kubenswrapper[4881]: I0126 12:55:52.289351 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-pkb95" Jan 26 12:55:52 crc kubenswrapper[4881]: I0126 12:55:52.289800 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62zjb\" (UniqueName: \"kubernetes.io/projected/0a7aea9c-0f85-45d1-9c90-e06acb42f500-kube-api-access-62zjb\") pod \"swift-operator-controller-manager-547cbdb99f-zxw9s\" (UID: \"0a7aea9c-0f85-45d1-9c90-e06acb42f500\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-zxw9s" Jan 26 12:55:52 crc kubenswrapper[4881]: I0126 12:55:52.303159 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5784f86c76-zbvz9"] Jan 26 12:55:52 crc kubenswrapper[4881]: I0126 12:55:52.323583 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-649ccf9654-zlvc6"] Jan 26 12:55:52 crc kubenswrapper[4881]: I0126 12:55:52.324449 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-6wtsp" Jan 26 12:55:52 crc kubenswrapper[4881]: I0126 12:55:52.325749 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-649ccf9654-zlvc6" Jan 26 12:55:52 crc kubenswrapper[4881]: I0126 12:55:52.329214 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-mdmhf" Jan 26 12:55:52 crc kubenswrapper[4881]: I0126 12:55:52.329462 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Jan 26 12:55:52 crc kubenswrapper[4881]: I0126 12:55:52.329539 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 26 12:55:52 crc kubenswrapper[4881]: I0126 12:55:52.334187 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-649ccf9654-zlvc6"] Jan 26 12:55:52 crc kubenswrapper[4881]: I0126 12:55:52.351288 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-zxw9s" Jan 26 12:55:52 crc kubenswrapper[4881]: I0126 12:55:52.364072 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vgjn4"] Jan 26 12:55:52 crc kubenswrapper[4881]: I0126 12:55:52.366204 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vgjn4" Jan 26 12:55:52 crc kubenswrapper[4881]: I0126 12:55:52.369187 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rqfl\" (UniqueName: \"kubernetes.io/projected/ab3681e4-6e5f-4f8d-909d-8d7801366f54-kube-api-access-2rqfl\") pod \"watcher-operator-controller-manager-5784f86c76-zbvz9\" (UID: \"ab3681e4-6e5f-4f8d-909d-8d7801366f54\") " pod="openstack-operators/watcher-operator-controller-manager-5784f86c76-zbvz9" Jan 26 12:55:52 crc kubenswrapper[4881]: I0126 12:55:52.369299 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5pj7\" (UniqueName: \"kubernetes.io/projected/0591b1a9-0d5f-4f0a-beca-9ed62627012e-kube-api-access-k5pj7\") pod \"test-operator-controller-manager-69797bbcbd-ff2c4\" (UID: \"0591b1a9-0d5f-4f0a-beca-9ed62627012e\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-ff2c4" Jan 26 12:55:52 crc kubenswrapper[4881]: I0126 12:55:52.369323 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9slzh\" (UniqueName: \"kubernetes.io/projected/973ffd61-1f3c-4e2f-9315-dae216499f96-kube-api-access-9slzh\") pod \"telemetry-operator-controller-manager-85cd9769bb-sqqcs\" (UID: \"973ffd61-1f3c-4e2f-9315-dae216499f96\") " pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-sqqcs" Jan 26 12:55:52 crc kubenswrapper[4881]: I0126 12:55:52.373400 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-b78fn" Jan 26 12:55:52 crc kubenswrapper[4881]: I0126 12:55:52.426494 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9slzh\" (UniqueName: \"kubernetes.io/projected/973ffd61-1f3c-4e2f-9315-dae216499f96-kube-api-access-9slzh\") pod \"telemetry-operator-controller-manager-85cd9769bb-sqqcs\" (UID: \"973ffd61-1f3c-4e2f-9315-dae216499f96\") " pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-sqqcs" Jan 26 12:55:52 crc kubenswrapper[4881]: I0126 12:55:52.427048 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vgjn4"] Jan 26 12:55:52 crc kubenswrapper[4881]: I0126 12:55:52.429432 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5pj7\" (UniqueName: \"kubernetes.io/projected/0591b1a9-0d5f-4f0a-beca-9ed62627012e-kube-api-access-k5pj7\") pod \"test-operator-controller-manager-69797bbcbd-ff2c4\" (UID: \"0591b1a9-0d5f-4f0a-beca-9ed62627012e\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-ff2c4" Jan 26 12:55:52 crc kubenswrapper[4881]: I0126 12:55:52.460866 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-7478f7dbf9-w2n48"] Jan 26 12:55:52 crc kubenswrapper[4881]: I0126 12:55:52.467275 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-sqqcs" Jan 26 12:55:52 crc kubenswrapper[4881]: I0126 12:55:52.471206 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/74d53f54-a284-45f0-ae81-5c25d2c5cbe1-webhook-certs\") pod \"openstack-operator-controller-manager-649ccf9654-zlvc6\" (UID: \"74d53f54-a284-45f0-ae81-5c25d2c5cbe1\") " pod="openstack-operators/openstack-operator-controller-manager-649ccf9654-zlvc6" Jan 26 12:55:52 crc kubenswrapper[4881]: I0126 12:55:52.471344 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjsgz\" (UniqueName: \"kubernetes.io/projected/a5f220e0-8c4f-4915-b0d0-cb85cc7f7850-kube-api-access-hjsgz\") pod \"rabbitmq-cluster-operator-manager-668c99d594-vgjn4\" (UID: \"a5f220e0-8c4f-4915-b0d0-cb85cc7f7850\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vgjn4" Jan 26 12:55:52 crc kubenswrapper[4881]: I0126 12:55:52.471403 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/74d53f54-a284-45f0-ae81-5c25d2c5cbe1-metrics-certs\") pod \"openstack-operator-controller-manager-649ccf9654-zlvc6\" (UID: \"74d53f54-a284-45f0-ae81-5c25d2c5cbe1\") " pod="openstack-operators/openstack-operator-controller-manager-649ccf9654-zlvc6" Jan 26 12:55:52 crc kubenswrapper[4881]: I0126 12:55:52.471424 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgqrc\" (UniqueName: \"kubernetes.io/projected/74d53f54-a284-45f0-ae81-5c25d2c5cbe1-kube-api-access-kgqrc\") pod \"openstack-operator-controller-manager-649ccf9654-zlvc6\" (UID: \"74d53f54-a284-45f0-ae81-5c25d2c5cbe1\") " pod="openstack-operators/openstack-operator-controller-manager-649ccf9654-zlvc6" Jan 26 12:55:52 crc kubenswrapper[4881]: I0126 12:55:52.471446 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rqfl\" (UniqueName: \"kubernetes.io/projected/ab3681e4-6e5f-4f8d-909d-8d7801366f54-kube-api-access-2rqfl\") pod \"watcher-operator-controller-manager-5784f86c76-zbvz9\" (UID: \"ab3681e4-6e5f-4f8d-909d-8d7801366f54\") " pod="openstack-operators/watcher-operator-controller-manager-5784f86c76-zbvz9" Jan 26 12:55:52 crc kubenswrapper[4881]: I0126 12:55:52.487589 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-lgljg"] Jan 26 12:55:52 crc kubenswrapper[4881]: I0126 12:55:52.514082 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rqfl\" (UniqueName: \"kubernetes.io/projected/ab3681e4-6e5f-4f8d-909d-8d7801366f54-kube-api-access-2rqfl\") pod \"watcher-operator-controller-manager-5784f86c76-zbvz9\" (UID: \"ab3681e4-6e5f-4f8d-909d-8d7801366f54\") " pod="openstack-operators/watcher-operator-controller-manager-5784f86c76-zbvz9" Jan 26 12:55:52 crc kubenswrapper[4881]: I0126 12:55:52.524791 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-ff2c4" Jan 26 12:55:52 crc kubenswrapper[4881]: I0126 12:55:52.537460 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-v95fq"] Jan 26 12:55:52 crc kubenswrapper[4881]: I0126 12:55:52.549506 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-sgnd2"] Jan 26 12:55:52 crc kubenswrapper[4881]: I0126 12:55:52.575165 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hjsgz\" (UniqueName: \"kubernetes.io/projected/a5f220e0-8c4f-4915-b0d0-cb85cc7f7850-kube-api-access-hjsgz\") pod \"rabbitmq-cluster-operator-manager-668c99d594-vgjn4\" (UID: \"a5f220e0-8c4f-4915-b0d0-cb85cc7f7850\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vgjn4" Jan 26 12:55:52 crc kubenswrapper[4881]: I0126 12:55:52.575563 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/74d53f54-a284-45f0-ae81-5c25d2c5cbe1-metrics-certs\") pod \"openstack-operator-controller-manager-649ccf9654-zlvc6\" (UID: \"74d53f54-a284-45f0-ae81-5c25d2c5cbe1\") " pod="openstack-operators/openstack-operator-controller-manager-649ccf9654-zlvc6" Jan 26 12:55:52 crc kubenswrapper[4881]: I0126 12:55:52.575588 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kgqrc\" (UniqueName: \"kubernetes.io/projected/74d53f54-a284-45f0-ae81-5c25d2c5cbe1-kube-api-access-kgqrc\") pod \"openstack-operator-controller-manager-649ccf9654-zlvc6\" (UID: \"74d53f54-a284-45f0-ae81-5c25d2c5cbe1\") " pod="openstack-operators/openstack-operator-controller-manager-649ccf9654-zlvc6" Jan 26 12:55:52 crc kubenswrapper[4881]: I0126 12:55:52.575653 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/74d53f54-a284-45f0-ae81-5c25d2c5cbe1-webhook-certs\") pod \"openstack-operator-controller-manager-649ccf9654-zlvc6\" (UID: \"74d53f54-a284-45f0-ae81-5c25d2c5cbe1\") " pod="openstack-operators/openstack-operator-controller-manager-649ccf9654-zlvc6" Jan 26 12:55:52 crc kubenswrapper[4881]: E0126 12:55:52.575743 4881 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 26 12:55:52 crc kubenswrapper[4881]: E0126 12:55:52.575785 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/74d53f54-a284-45f0-ae81-5c25d2c5cbe1-webhook-certs podName:74d53f54-a284-45f0-ae81-5c25d2c5cbe1 nodeName:}" failed. No retries permitted until 2026-01-26 12:55:53.075771738 +0000 UTC m=+1225.555081764 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/74d53f54-a284-45f0-ae81-5c25d2c5cbe1-webhook-certs") pod "openstack-operator-controller-manager-649ccf9654-zlvc6" (UID: "74d53f54-a284-45f0-ae81-5c25d2c5cbe1") : secret "webhook-server-cert" not found Jan 26 12:55:52 crc kubenswrapper[4881]: E0126 12:55:52.575915 4881 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 26 12:55:52 crc kubenswrapper[4881]: E0126 12:55:52.575940 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/74d53f54-a284-45f0-ae81-5c25d2c5cbe1-metrics-certs podName:74d53f54-a284-45f0-ae81-5c25d2c5cbe1 nodeName:}" failed. No retries permitted until 2026-01-26 12:55:53.075933822 +0000 UTC m=+1225.555243838 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/74d53f54-a284-45f0-ae81-5c25d2c5cbe1-metrics-certs") pod "openstack-operator-controller-manager-649ccf9654-zlvc6" (UID: "74d53f54-a284-45f0-ae81-5c25d2c5cbe1") : secret "metrics-server-cert" not found Jan 26 12:55:52 crc kubenswrapper[4881]: I0126 12:55:52.584686 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7f86f8796f-hs8xc"] Jan 26 12:55:52 crc kubenswrapper[4881]: I0126 12:55:52.607382 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjsgz\" (UniqueName: \"kubernetes.io/projected/a5f220e0-8c4f-4915-b0d0-cb85cc7f7850-kube-api-access-hjsgz\") pod \"rabbitmq-cluster-operator-manager-668c99d594-vgjn4\" (UID: \"a5f220e0-8c4f-4915-b0d0-cb85cc7f7850\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vgjn4" Jan 26 12:55:52 crc kubenswrapper[4881]: I0126 12:55:52.610487 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kgqrc\" (UniqueName: \"kubernetes.io/projected/74d53f54-a284-45f0-ae81-5c25d2c5cbe1-kube-api-access-kgqrc\") pod \"openstack-operator-controller-manager-649ccf9654-zlvc6\" (UID: \"74d53f54-a284-45f0-ae81-5c25d2c5cbe1\") " pod="openstack-operators/openstack-operator-controller-manager-649ccf9654-zlvc6" Jan 26 12:55:52 crc kubenswrapper[4881]: I0126 12:55:52.630896 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5784f86c76-zbvz9" Jan 26 12:55:52 crc kubenswrapper[4881]: I0126 12:55:52.678421 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8cc0e35b-757a-46fc-bc17-f586426c9b82-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854jj499\" (UID: \"8cc0e35b-757a-46fc-bc17-f586426c9b82\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854jj499" Jan 26 12:55:52 crc kubenswrapper[4881]: E0126 12:55:52.678590 4881 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 12:55:52 crc kubenswrapper[4881]: E0126 12:55:52.678636 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8cc0e35b-757a-46fc-bc17-f586426c9b82-cert podName:8cc0e35b-757a-46fc-bc17-f586426c9b82 nodeName:}" failed. No retries permitted until 2026-01-26 12:55:53.678620586 +0000 UTC m=+1226.157930602 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8cc0e35b-757a-46fc-bc17-f586426c9b82-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854jj499" (UID: "8cc0e35b-757a-46fc-bc17-f586426c9b82") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 12:55:52 crc kubenswrapper[4881]: I0126 12:55:52.853897 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vgjn4" Jan 26 12:55:52 crc kubenswrapper[4881]: I0126 12:55:52.892700 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-rww6v"] Jan 26 12:55:52 crc kubenswrapper[4881]: W0126 12:55:52.908679 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd998c88b_6b01_4e5f_bbab_a5aaee1a945b.slice/crio-a3c7427d8c4ac65e6ac363a21336b82fc34e8f08f86167a5c2742a31750dda51 WatchSource:0}: Error finding container a3c7427d8c4ac65e6ac363a21336b82fc34e8f08f86167a5c2742a31750dda51: Status 404 returned error can't find the container with id a3c7427d8c4ac65e6ac363a21336b82fc34e8f08f86167a5c2742a31750dda51 Jan 26 12:55:52 crc kubenswrapper[4881]: W0126 12:55:52.949014 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb6807e2b_25b9_4802_8086_2c6eab9ff308.slice/crio-f464671b157c856fa80d9d9d52043e52ace686518c6b5159c483bba615b0b270 WatchSource:0}: Error finding container f464671b157c856fa80d9d9d52043e52ace686518c6b5159c483bba615b0b270: Status 404 returned error can't find the container with id f464671b157c856fa80d9d9d52043e52ace686518c6b5159c483bba615b0b270 Jan 26 12:55:52 crc kubenswrapper[4881]: I0126 12:55:52.949317 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-66d48"] Jan 26 12:55:52 crc kubenswrapper[4881]: I0126 12:55:52.980354 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-sc6f8"] Jan 26 12:55:52 crc kubenswrapper[4881]: I0126 12:55:52.992879 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-598f7747c9-flv4v"] Jan 26 12:55:53 crc kubenswrapper[4881]: I0126 12:55:53.093334 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/74d53f54-a284-45f0-ae81-5c25d2c5cbe1-metrics-certs\") pod \"openstack-operator-controller-manager-649ccf9654-zlvc6\" (UID: \"74d53f54-a284-45f0-ae81-5c25d2c5cbe1\") " pod="openstack-operators/openstack-operator-controller-manager-649ccf9654-zlvc6" Jan 26 12:55:53 crc kubenswrapper[4881]: I0126 12:55:53.093433 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/74d53f54-a284-45f0-ae81-5c25d2c5cbe1-webhook-certs\") pod \"openstack-operator-controller-manager-649ccf9654-zlvc6\" (UID: \"74d53f54-a284-45f0-ae81-5c25d2c5cbe1\") " pod="openstack-operators/openstack-operator-controller-manager-649ccf9654-zlvc6" Jan 26 12:55:53 crc kubenswrapper[4881]: E0126 12:55:53.093483 4881 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 26 12:55:53 crc kubenswrapper[4881]: E0126 12:55:53.093556 4881 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/74d53f54-a284-45f0-ae81-5c25d2c5cbe1-metrics-certs podName:74d53f54-a284-45f0-ae81-5c25d2c5cbe1 nodeName:}" failed. No retries permitted until 2026-01-26 12:55:54.093540626 +0000 UTC m=+1226.572850652 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/74d53f54-a284-45f0-ae81-5c25d2c5cbe1-metrics-certs") pod "openstack-operator-controller-manager-649ccf9654-zlvc6" (UID: "74d53f54-a284-45f0-ae81-5c25d2c5cbe1") : secret "metrics-server-cert" not found Jan 26 12:55:53 crc kubenswrapper[4881]: E0126 12:55:53.093601 4881 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 26 12:55:53 crc kubenswrapper[4881]: E0126 12:55:53.093653 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/74d53f54-a284-45f0-ae81-5c25d2c5cbe1-webhook-certs podName:74d53f54-a284-45f0-ae81-5c25d2c5cbe1 nodeName:}" failed. No retries permitted until 2026-01-26 12:55:54.093635728 +0000 UTC m=+1226.572945754 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/74d53f54-a284-45f0-ae81-5c25d2c5cbe1-webhook-certs") pod "openstack-operator-controller-manager-649ccf9654-zlvc6" (UID: "74d53f54-a284-45f0-ae81-5c25d2c5cbe1") : secret "webhook-server-cert" not found Jan 26 12:55:53 crc kubenswrapper[4881]: I0126 12:55:53.138038 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5f4cd88d46-r2p67"] Jan 26 12:55:53 crc kubenswrapper[4881]: I0126 12:55:53.164970 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-zkhss"] Jan 26 12:55:53 crc kubenswrapper[4881]: I0126 12:55:53.191288 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78d58447c5-m8vjc"] Jan 26 12:55:53 crc kubenswrapper[4881]: I0126 12:55:53.195358 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-w2n48" event={"ID":"d5375dff-af5c-4de8-b52b-acf18edc4fb2","Type":"ContainerStarted","Data":"a50de3f40b63661877177102802fc9ab8849e85a9ed99d062abdfb5e96956532"} Jan 26 12:55:53 crc kubenswrapper[4881]: I0126 12:55:53.212927 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-v95fq" event={"ID":"0e17a034-e3c9-434a-838f-8bfae6d010dd","Type":"ContainerStarted","Data":"2d19f4bd18de3bc0d52513252b144c13799ec49064937d9068a1f8bcabd82d42"} Jan 26 12:55:53 crc kubenswrapper[4881]: I0126 12:55:53.230817 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-lgljg" event={"ID":"c5cecd8b-813f-4bde-be28-371c54bcdfb9","Type":"ContainerStarted","Data":"1bc58a52cc9425e92646ca46a6b38c8e9fcdb478c8f121bb1e53837b2bce4bd9"} Jan 26 12:55:53 crc kubenswrapper[4881]: I0126 12:55:53.243748 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-sc6f8" event={"ID":"78a91159-fead-4133-98e4-5dd587f6b274","Type":"ContainerStarted","Data":"668a4faa80659b7592b41d47d2826c529bc02d6d39561d4aa015113824bfe63f"} Jan 26 12:55:53 crc kubenswrapper[4881]: I0126 12:55:53.248730 4881 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack-operators/placement-operator-controller-manager-79d5ccc684-6wtsp"] Jan 26 12:55:53 crc kubenswrapper[4881]: I0126 12:55:53.257038 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-rww6v" event={"ID":"d998c88b-6b01-4e5f-bbab-a5aaee1a945b","Type":"ContainerStarted","Data":"a3c7427d8c4ac65e6ac363a21336b82fc34e8f08f86167a5c2742a31750dda51"} Jan 26 12:55:53 crc kubenswrapper[4881]: I0126 12:55:53.283853 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-sgnd2" event={"ID":"4508aa9d-2a89-4976-bd36-dc918900371e","Type":"ContainerStarted","Data":"7990857802acdc7fa0298fc59526051e78aec9f03b50eaa5725441dbd3db2973"} Jan 26 12:55:53 crc kubenswrapper[4881]: I0126 12:55:53.310651 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-zxw9s"] Jan 26 12:55:53 crc kubenswrapper[4881]: I0126 12:55:53.310731 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-hs8xc" event={"ID":"e8b8ff3a-c099-4192-b061-33ff69fd2884","Type":"ContainerStarted","Data":"2a3ea7fbc97664064f18639db257f9fb34fea78615ea201bd63fd2569a8e3ede"} Jan 26 12:55:53 crc kubenswrapper[4881]: I0126 12:55:53.316311 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/517d3e74-cfe4-4e5e-96b0-0780042b0dbd-cert\") pod \"infra-operator-controller-manager-694cf4f878-wkhcm\" (UID: \"517d3e74-cfe4-4e5e-96b0-0780042b0dbd\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-wkhcm" Jan 26 12:55:53 crc kubenswrapper[4881]: E0126 12:55:53.316606 4881 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 26 12:55:53 crc kubenswrapper[4881]: E0126 12:55:53.316662 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/517d3e74-cfe4-4e5e-96b0-0780042b0dbd-cert podName:517d3e74-cfe4-4e5e-96b0-0780042b0dbd nodeName:}" failed. No retries permitted until 2026-01-26 12:55:55.316646106 +0000 UTC m=+1227.795956132 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/517d3e74-cfe4-4e5e-96b0-0780042b0dbd-cert") pod "infra-operator-controller-manager-694cf4f878-wkhcm" (UID: "517d3e74-cfe4-4e5e-96b0-0780042b0dbd") : secret "infra-operator-webhook-server-cert" not found Jan 26 12:55:53 crc kubenswrapper[4881]: I0126 12:55:53.329726 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-r2p67" event={"ID":"3451b01c-ed54-49be-ab3a-d8150976d2ec","Type":"ContainerStarted","Data":"171536fe46a2cfe8c5f9de438a84c3dba59a9e035bcc28570a82d58fbd7c7ae2"} Jan 26 12:55:53 crc kubenswrapper[4881]: I0126 12:55:53.337225 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-85cd9769bb-sqqcs"] Jan 26 12:55:53 crc kubenswrapper[4881]: I0126 12:55:53.339924 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-66d48" event={"ID":"b6807e2b-25b9-4802-8086-2c6eab9ff308","Type":"ContainerStarted","Data":"f464671b157c856fa80d9d9d52043e52ace686518c6b5159c483bba615b0b270"} Jan 26 12:55:53 crc kubenswrapper[4881]: I0126 12:55:53.350355 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-flv4v" event={"ID":"cbafad55-0cc5-42d6-b721-b1f4e158251f","Type":"ContainerStarted","Data":"42199c1b98a290de8235b0d71931d886d65921470bdf095dc8ef047b1d03f1a2"} Jan 26 12:55:53 crc kubenswrapper[4881]: E0126 12:55:53.364884 4881 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:e02722d7581bfe1c5fc13e2fa6811d8665102ba86635c77547abf6b933cde127,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9slzh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-85cd9769bb-sqqcs_openstack-operators(973ffd61-1f3c-4e2f-9315-dae216499f96): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 26 12:55:53 crc kubenswrapper[4881]: E0126 12:55:53.365019 4881 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:fa46fc14710961e6b4a76a3522dca3aa3cfa71436c7cf7ade533d3712822f327,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xdz9v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-6f75f45d54-pz264_openstack-operators(5b1abb90-faa0-4b72-9d20-f84ddf952245): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 26 12:55:53 crc 
kubenswrapper[4881]: E0126 12:55:53.366339 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-pz264" podUID="5b1abb90-faa0-4b72-9d20-f84ddf952245" Jan 26 12:55:53 crc kubenswrapper[4881]: E0126 12:55:53.366410 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-sqqcs" podUID="973ffd61-1f3c-4e2f-9315-dae216499f96" Jan 26 12:55:53 crc kubenswrapper[4881]: I0126 12:55:53.383213 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-6f75f45d54-pz264"] Jan 26 12:55:53 crc kubenswrapper[4881]: I0126 12:55:53.425208 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-7bdb645866-r7dwj"] Jan 26 12:55:53 crc kubenswrapper[4881]: I0126 12:55:53.434128 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-ff2c4"] Jan 26 12:55:53 crc kubenswrapper[4881]: E0126 12:55:53.441470 4881 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-k5pj7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-69797bbcbd-ff2c4_openstack-operators(0591b1a9-0d5f-4f0a-beca-9ed62627012e): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 26 12:55:53 crc kubenswrapper[4881]: E0126 12:55:53.443448 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-ff2c4" podUID="0591b1a9-0d5f-4f0a-beca-9ed62627012e" Jan 26 12:55:53 crc kubenswrapper[4881]: I0126 12:55:53.527660 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5784f86c76-zbvz9"] Jan 26 12:55:53 crc kubenswrapper[4881]: I0126 12:55:53.551481 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vgjn4"] Jan 26 12:55:53 crc kubenswrapper[4881]: W0126 12:55:53.566480 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda5f220e0_8c4f_4915_b0d0_cb85cc7f7850.slice/crio-047b7da1a2c1a7d628e883b13a81eb027d86e2c40d63d213d7f51c3e26d29b52 WatchSource:0}: Error finding container 047b7da1a2c1a7d628e883b13a81eb027d86e2c40d63d213d7f51c3e26d29b52: Status 404 returned error can't find the container with id 047b7da1a2c1a7d628e883b13a81eb027d86e2c40d63d213d7f51c3e26d29b52 Jan 26 12:55:53 crc kubenswrapper[4881]: E0126 12:55:53.568323 4881 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hjsgz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-vgjn4_openstack-operators(a5f220e0-8c4f-4915-b0d0-cb85cc7f7850): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 26 12:55:53 crc kubenswrapper[4881]: E0126 12:55:53.569972 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vgjn4" podUID="a5f220e0-8c4f-4915-b0d0-cb85cc7f7850" Jan 26 12:55:53 crc kubenswrapper[4881]: I0126 12:55:53.720607 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8cc0e35b-757a-46fc-bc17-f586426c9b82-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854jj499\" (UID: \"8cc0e35b-757a-46fc-bc17-f586426c9b82\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854jj499" Jan 26 12:55:53 crc kubenswrapper[4881]: E0126 12:55:53.720803 4881 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 12:55:53 crc kubenswrapper[4881]: E0126 12:55:53.720881 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8cc0e35b-757a-46fc-bc17-f586426c9b82-cert podName:8cc0e35b-757a-46fc-bc17-f586426c9b82 nodeName:}" failed. No retries permitted until 2026-01-26 12:55:55.720862394 +0000 UTC m=+1228.200172420 (durationBeforeRetry 2s). 
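
Annotation: the ErrImagePull: "pull QPS exceeded" failures above are the kubelet's own client-side throttle, not a registry error. Image pulls are rate-limited by the KubeletConfiguration fields registryPullQPS and registryBurst (defaults 5 and 10), and the mass start of operator pods here exhausts the burst. Below is a sketch of the same token-bucket idea using client-go's flowcontrol package; it illustrates the throttling behaviour only and is not the kubelet's actual pull path.

    package main

    import (
        "fmt"

        "k8s.io/client-go/util/flowcontrol"
    )

    func main() {
        // Token bucket with the kubelet defaults: registryPullQPS=5, registryBurst=10.
        limiter := flowcontrol.NewTokenBucketRateLimiter(5, 10)
        for i := 1; i <= 15; i++ {
            if limiter.TryAccept() {
                fmt.Printf("pull %d: allowed\n", i)
            } else {
                // The case the kubelet reports as ErrImagePull: "pull QPS exceeded".
                fmt.Printf("pull %d: pull QPS exceeded\n", i)
            }
        }
    }
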
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8cc0e35b-757a-46fc-bc17-f586426c9b82-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854jj499" (UID: "8cc0e35b-757a-46fc-bc17-f586426c9b82") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 12:55:54 crc kubenswrapper[4881]: I0126 12:55:54.137242 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/74d53f54-a284-45f0-ae81-5c25d2c5cbe1-webhook-certs\") pod \"openstack-operator-controller-manager-649ccf9654-zlvc6\" (UID: \"74d53f54-a284-45f0-ae81-5c25d2c5cbe1\") " pod="openstack-operators/openstack-operator-controller-manager-649ccf9654-zlvc6" Jan 26 12:55:54 crc kubenswrapper[4881]: E0126 12:55:54.137327 4881 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 26 12:55:54 crc kubenswrapper[4881]: I0126 12:55:54.137357 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/74d53f54-a284-45f0-ae81-5c25d2c5cbe1-metrics-certs\") pod \"openstack-operator-controller-manager-649ccf9654-zlvc6\" (UID: \"74d53f54-a284-45f0-ae81-5c25d2c5cbe1\") " pod="openstack-operators/openstack-operator-controller-manager-649ccf9654-zlvc6" Jan 26 12:55:54 crc kubenswrapper[4881]: E0126 12:55:54.137379 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/74d53f54-a284-45f0-ae81-5c25d2c5cbe1-webhook-certs podName:74d53f54-a284-45f0-ae81-5c25d2c5cbe1 nodeName:}" failed. No retries permitted until 2026-01-26 12:55:56.137365021 +0000 UTC m=+1228.616675037 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/74d53f54-a284-45f0-ae81-5c25d2c5cbe1-webhook-certs") pod "openstack-operator-controller-manager-649ccf9654-zlvc6" (UID: "74d53f54-a284-45f0-ae81-5c25d2c5cbe1") : secret "webhook-server-cert" not found Jan 26 12:55:54 crc kubenswrapper[4881]: E0126 12:55:54.137498 4881 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 26 12:55:54 crc kubenswrapper[4881]: E0126 12:55:54.137581 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/74d53f54-a284-45f0-ae81-5c25d2c5cbe1-metrics-certs podName:74d53f54-a284-45f0-ae81-5c25d2c5cbe1 nodeName:}" failed. No retries permitted until 2026-01-26 12:55:56.137564116 +0000 UTC m=+1228.616874142 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/74d53f54-a284-45f0-ae81-5c25d2c5cbe1-metrics-certs") pod "openstack-operator-controller-manager-649ccf9654-zlvc6" (UID: "74d53f54-a284-45f0-ae81-5c25d2c5cbe1") : secret "metrics-server-cert" not found Jan 26 12:55:54 crc kubenswrapper[4881]: I0126 12:55:54.357616 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-ff2c4" event={"ID":"0591b1a9-0d5f-4f0a-beca-9ed62627012e","Type":"ContainerStarted","Data":"3e6c2b1c3cca3e570c1ca5d2a3f52e76365b21ed17928b020ee53d832ff6781e"} Jan 26 12:55:54 crc kubenswrapper[4881]: E0126 12:55:54.359164 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d\\\"\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-ff2c4" podUID="0591b1a9-0d5f-4f0a-beca-9ed62627012e" Jan 26 12:55:54 crc kubenswrapper[4881]: I0126 12:55:54.360417 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-zxw9s" event={"ID":"0a7aea9c-0f85-45d1-9c90-e06acb42f500","Type":"ContainerStarted","Data":"d4fcb028cfa218088597a104976930f1397e8fc56ef50e759a7bcbae6828ffc9"} Jan 26 12:55:54 crc kubenswrapper[4881]: I0126 12:55:54.361301 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-pz264" event={"ID":"5b1abb90-faa0-4b72-9d20-f84ddf952245","Type":"ContainerStarted","Data":"0b8fe614d673eff7c89b502d7b2381323c0beb6d2e0744d786178ae9c3d9f9c7"} Jan 26 12:55:54 crc kubenswrapper[4881]: I0126 12:55:54.366833 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vgjn4" event={"ID":"a5f220e0-8c4f-4915-b0d0-cb85cc7f7850","Type":"ContainerStarted","Data":"047b7da1a2c1a7d628e883b13a81eb027d86e2c40d63d213d7f51c3e26d29b52"} Jan 26 12:55:54 crc kubenswrapper[4881]: E0126 12:55:54.373208 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:fa46fc14710961e6b4a76a3522dca3aa3cfa71436c7cf7ade533d3712822f327\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-pz264" podUID="5b1abb90-faa0-4b72-9d20-f84ddf952245" Jan 26 12:55:54 crc kubenswrapper[4881]: I0126 12:55:54.384026 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-6wtsp" event={"ID":"de6b2c73-a5db-4333-91e1-7722f0ba1127","Type":"ContainerStarted","Data":"5b18a634f9853c6b276210f6591b3c7202827ff2c12d6e1da34bb9086acdaaf1"} Jan 26 12:55:54 crc kubenswrapper[4881]: E0126 12:55:54.387105 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vgjn4" podUID="a5f220e0-8c4f-4915-b0d0-cb85cc7f7850" Jan 26 12:55:54 crc kubenswrapper[4881]: I0126 12:55:54.394248 4881 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-sqqcs" event={"ID":"973ffd61-1f3c-4e2f-9315-dae216499f96","Type":"ContainerStarted","Data":"24266a7e74f3074c2be679855b97f3b2ad085dcc633d5a6a76a56fc0646a709c"} Jan 26 12:55:54 crc kubenswrapper[4881]: I0126 12:55:54.396613 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-zkhss" event={"ID":"97b268cc-1863-494c-a47b-da0c52f76d39","Type":"ContainerStarted","Data":"aa6f420bf30fe90c112a16ce7fb29d1bf9e8e4bf797f9b851abb46b5a6c1b847"} Jan 26 12:55:54 crc kubenswrapper[4881]: I0126 12:55:54.401553 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5784f86c76-zbvz9" event={"ID":"ab3681e4-6e5f-4f8d-909d-8d7801366f54","Type":"ContainerStarted","Data":"50829eb607e2b54f81cd0a862742ae455bfdf687c6ebeab45b07d7e47deddf17"} Jan 26 12:55:54 crc kubenswrapper[4881]: E0126 12:55:54.405891 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:e02722d7581bfe1c5fc13e2fa6811d8665102ba86635c77547abf6b933cde127\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-sqqcs" podUID="973ffd61-1f3c-4e2f-9315-dae216499f96" Jan 26 12:55:54 crc kubenswrapper[4881]: I0126 12:55:54.411496 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-m8vjc" event={"ID":"76b071ae-05bc-4142-9004-e5528d00c5cc","Type":"ContainerStarted","Data":"47e252d03f47742528ee100fde64ddda1fd63bd49b9b001db79cec1f80e01079"} Jan 26 12:55:54 crc kubenswrapper[4881]: I0126 12:55:54.424788 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-r7dwj" event={"ID":"d808c58e-a8df-4cbd-aee6-d87edd677e94","Type":"ContainerStarted","Data":"47b9b091136846f48b87fa2b1387cbde2d72b4490aee87e5d8ca362fd14c78c5"} Jan 26 12:55:54 crc kubenswrapper[4881]: I0126 12:55:54.788936 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fwlbz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 12:55:54 crc kubenswrapper[4881]: I0126 12:55:54.788997 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 12:55:55 crc kubenswrapper[4881]: I0126 12:55:55.353043 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/517d3e74-cfe4-4e5e-96b0-0780042b0dbd-cert\") pod \"infra-operator-controller-manager-694cf4f878-wkhcm\" (UID: \"517d3e74-cfe4-4e5e-96b0-0780042b0dbd\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-wkhcm" Jan 26 12:55:55 crc kubenswrapper[4881]: E0126 12:55:55.353416 4881 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 26 12:55:55 crc kubenswrapper[4881]: E0126 
12:55:55.353460 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/517d3e74-cfe4-4e5e-96b0-0780042b0dbd-cert podName:517d3e74-cfe4-4e5e-96b0-0780042b0dbd nodeName:}" failed. No retries permitted until 2026-01-26 12:55:59.353447689 +0000 UTC m=+1231.832757715 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/517d3e74-cfe4-4e5e-96b0-0780042b0dbd-cert") pod "infra-operator-controller-manager-694cf4f878-wkhcm" (UID: "517d3e74-cfe4-4e5e-96b0-0780042b0dbd") : secret "infra-operator-webhook-server-cert" not found Jan 26 12:55:55 crc kubenswrapper[4881]: E0126 12:55:55.441433 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vgjn4" podUID="a5f220e0-8c4f-4915-b0d0-cb85cc7f7850" Jan 26 12:55:55 crc kubenswrapper[4881]: E0126 12:55:55.441852 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:fa46fc14710961e6b4a76a3522dca3aa3cfa71436c7cf7ade533d3712822f327\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-pz264" podUID="5b1abb90-faa0-4b72-9d20-f84ddf952245" Jan 26 12:55:55 crc kubenswrapper[4881]: E0126 12:55:55.441948 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d\\\"\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-ff2c4" podUID="0591b1a9-0d5f-4f0a-beca-9ed62627012e" Jan 26 12:55:55 crc kubenswrapper[4881]: E0126 12:55:55.441993 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:e02722d7581bfe1c5fc13e2fa6811d8665102ba86635c77547abf6b933cde127\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-sqqcs" podUID="973ffd61-1f3c-4e2f-9315-dae216499f96" Jan 26 12:55:55 crc kubenswrapper[4881]: I0126 12:55:55.761157 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8cc0e35b-757a-46fc-bc17-f586426c9b82-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854jj499\" (UID: \"8cc0e35b-757a-46fc-bc17-f586426c9b82\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854jj499" Jan 26 12:55:55 crc kubenswrapper[4881]: E0126 12:55:55.761332 4881 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 12:55:55 crc kubenswrapper[4881]: E0126 12:55:55.761571 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8cc0e35b-757a-46fc-bc17-f586426c9b82-cert podName:8cc0e35b-757a-46fc-bc17-f586426c9b82 nodeName:}" failed. No retries permitted until 2026-01-26 12:55:59.761553181 +0000 UTC m=+1232.240863207 (durationBeforeRetry 4s). 
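
Annotation: reading the durationBeforeRetry values in sequence (500ms, 1s, 2s, 4s, then 8s below) shows the volume manager's per-operation exponential backoff: each failed MountVolume.SetUp doubles the wait before the next attempt. The progression can be reproduced with apimachinery's wait.Backoff, as in the sketch below; the kubelet itself uses an equivalent implementation inside nestedpendingoperations.go.

    package main

    import (
        "fmt"
        "time"

        "k8s.io/apimachinery/pkg/util/wait"
    )

    func main() {
        // The log's durationBeforeRetry values double from 500ms.
        b := wait.Backoff{Duration: 500 * time.Millisecond, Factor: 2, Steps: 6}
        for i := 0; i < 6; i++ {
            fmt.Println(b.Step()) // prints 500ms, 1s, 2s, 4s, 8s, 16s
        }
    }
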
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8cc0e35b-757a-46fc-bc17-f586426c9b82-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854jj499" (UID: "8cc0e35b-757a-46fc-bc17-f586426c9b82") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 12:55:56 crc kubenswrapper[4881]: I0126 12:55:56.170183 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/74d53f54-a284-45f0-ae81-5c25d2c5cbe1-metrics-certs\") pod \"openstack-operator-controller-manager-649ccf9654-zlvc6\" (UID: \"74d53f54-a284-45f0-ae81-5c25d2c5cbe1\") " pod="openstack-operators/openstack-operator-controller-manager-649ccf9654-zlvc6" Jan 26 12:55:56 crc kubenswrapper[4881]: I0126 12:55:56.170313 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/74d53f54-a284-45f0-ae81-5c25d2c5cbe1-webhook-certs\") pod \"openstack-operator-controller-manager-649ccf9654-zlvc6\" (UID: \"74d53f54-a284-45f0-ae81-5c25d2c5cbe1\") " pod="openstack-operators/openstack-operator-controller-manager-649ccf9654-zlvc6" Jan 26 12:55:56 crc kubenswrapper[4881]: E0126 12:55:56.172135 4881 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 26 12:55:56 crc kubenswrapper[4881]: E0126 12:55:56.172182 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/74d53f54-a284-45f0-ae81-5c25d2c5cbe1-metrics-certs podName:74d53f54-a284-45f0-ae81-5c25d2c5cbe1 nodeName:}" failed. No retries permitted until 2026-01-26 12:56:00.172169465 +0000 UTC m=+1232.651479481 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/74d53f54-a284-45f0-ae81-5c25d2c5cbe1-metrics-certs") pod "openstack-operator-controller-manager-649ccf9654-zlvc6" (UID: "74d53f54-a284-45f0-ae81-5c25d2c5cbe1") : secret "metrics-server-cert" not found Jan 26 12:55:56 crc kubenswrapper[4881]: E0126 12:55:56.172185 4881 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 26 12:55:56 crc kubenswrapper[4881]: E0126 12:55:56.172247 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/74d53f54-a284-45f0-ae81-5c25d2c5cbe1-webhook-certs podName:74d53f54-a284-45f0-ae81-5c25d2c5cbe1 nodeName:}" failed. No retries permitted until 2026-01-26 12:56:00.172226216 +0000 UTC m=+1232.651536312 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/74d53f54-a284-45f0-ae81-5c25d2c5cbe1-webhook-certs") pod "openstack-operator-controller-manager-649ccf9654-zlvc6" (UID: "74d53f54-a284-45f0-ae81-5c25d2c5cbe1") : secret "webhook-server-cert" not found Jan 26 12:55:59 crc kubenswrapper[4881]: I0126 12:55:59.416235 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/517d3e74-cfe4-4e5e-96b0-0780042b0dbd-cert\") pod \"infra-operator-controller-manager-694cf4f878-wkhcm\" (UID: \"517d3e74-cfe4-4e5e-96b0-0780042b0dbd\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-wkhcm" Jan 26 12:55:59 crc kubenswrapper[4881]: E0126 12:55:59.416397 4881 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 26 12:55:59 crc kubenswrapper[4881]: E0126 12:55:59.416983 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/517d3e74-cfe4-4e5e-96b0-0780042b0dbd-cert podName:517d3e74-cfe4-4e5e-96b0-0780042b0dbd nodeName:}" failed. No retries permitted until 2026-01-26 12:56:07.416965568 +0000 UTC m=+1239.896275594 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/517d3e74-cfe4-4e5e-96b0-0780042b0dbd-cert") pod "infra-operator-controller-manager-694cf4f878-wkhcm" (UID: "517d3e74-cfe4-4e5e-96b0-0780042b0dbd") : secret "infra-operator-webhook-server-cert" not found Jan 26 12:55:59 crc kubenswrapper[4881]: I0126 12:55:59.822367 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8cc0e35b-757a-46fc-bc17-f586426c9b82-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854jj499\" (UID: \"8cc0e35b-757a-46fc-bc17-f586426c9b82\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854jj499" Jan 26 12:55:59 crc kubenswrapper[4881]: E0126 12:55:59.822480 4881 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 12:55:59 crc kubenswrapper[4881]: E0126 12:55:59.822554 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8cc0e35b-757a-46fc-bc17-f586426c9b82-cert podName:8cc0e35b-757a-46fc-bc17-f586426c9b82 nodeName:}" failed. No retries permitted until 2026-01-26 12:56:07.822536919 +0000 UTC m=+1240.301846955 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8cc0e35b-757a-46fc-bc17-f586426c9b82-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854jj499" (UID: "8cc0e35b-757a-46fc-bc17-f586426c9b82") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 12:56:00 crc kubenswrapper[4881]: I0126 12:56:00.228785 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/74d53f54-a284-45f0-ae81-5c25d2c5cbe1-webhook-certs\") pod \"openstack-operator-controller-manager-649ccf9654-zlvc6\" (UID: \"74d53f54-a284-45f0-ae81-5c25d2c5cbe1\") " pod="openstack-operators/openstack-operator-controller-manager-649ccf9654-zlvc6" Jan 26 12:56:00 crc kubenswrapper[4881]: I0126 12:56:00.228905 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/74d53f54-a284-45f0-ae81-5c25d2c5cbe1-metrics-certs\") pod \"openstack-operator-controller-manager-649ccf9654-zlvc6\" (UID: \"74d53f54-a284-45f0-ae81-5c25d2c5cbe1\") " pod="openstack-operators/openstack-operator-controller-manager-649ccf9654-zlvc6" Jan 26 12:56:00 crc kubenswrapper[4881]: E0126 12:56:00.228999 4881 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 26 12:56:00 crc kubenswrapper[4881]: E0126 12:56:00.229097 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/74d53f54-a284-45f0-ae81-5c25d2c5cbe1-webhook-certs podName:74d53f54-a284-45f0-ae81-5c25d2c5cbe1 nodeName:}" failed. No retries permitted until 2026-01-26 12:56:08.229072603 +0000 UTC m=+1240.708382639 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/74d53f54-a284-45f0-ae81-5c25d2c5cbe1-webhook-certs") pod "openstack-operator-controller-manager-649ccf9654-zlvc6" (UID: "74d53f54-a284-45f0-ae81-5c25d2c5cbe1") : secret "webhook-server-cert" not found Jan 26 12:56:00 crc kubenswrapper[4881]: E0126 12:56:00.229096 4881 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 26 12:56:00 crc kubenswrapper[4881]: E0126 12:56:00.229193 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/74d53f54-a284-45f0-ae81-5c25d2c5cbe1-metrics-certs podName:74d53f54-a284-45f0-ae81-5c25d2c5cbe1 nodeName:}" failed. No retries permitted until 2026-01-26 12:56:08.229164395 +0000 UTC m=+1240.708474471 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/74d53f54-a284-45f0-ae81-5c25d2c5cbe1-metrics-certs") pod "openstack-operator-controller-manager-649ccf9654-zlvc6" (UID: "74d53f54-a284-45f0-ae81-5c25d2c5cbe1") : secret "metrics-server-cert" not found Jan 26 12:56:05 crc kubenswrapper[4881]: E0126 12:56:05.902149 4881 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:8bee4480babd6fd8f686e0ba52a304acb6ffb90f09c7c57e7f5df5f7658836d8" Jan 26 12:56:05 crc kubenswrapper[4881]: E0126 12:56:05.902876 4881 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:8bee4480babd6fd8f686e0ba52a304acb6ffb90f09c7c57e7f5df5f7658836d8,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nnrb7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-78c6999f6f-sc6f8_openstack-operators(78a91159-fead-4133-98e4-5dd587f6b274): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 12:56:05 crc kubenswrapper[4881]: E0126 12:56:05.904110 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-sc6f8" 
podUID="78a91159-fead-4133-98e4-5dd587f6b274" Jan 26 12:56:06 crc kubenswrapper[4881]: E0126 12:56:06.562823 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:8bee4480babd6fd8f686e0ba52a304acb6ffb90f09c7c57e7f5df5f7658836d8\\\"\"" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-sc6f8" podUID="78a91159-fead-4133-98e4-5dd587f6b274" Jan 26 12:56:06 crc kubenswrapper[4881]: E0126 12:56:06.775588 4881 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ironic-operator@sha256:4d55bd6418df3f63f4d3fe47bebf3f5498a520b3e14af98fe16c85ef9fd54d5e" Jan 26 12:56:06 crc kubenswrapper[4881]: E0126 12:56:06.775819 4881 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ironic-operator@sha256:4d55bd6418df3f63f4d3fe47bebf3f5498a520b3e14af98fe16c85ef9fd54d5e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-brpd6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-598f7747c9-flv4v_openstack-operators(cbafad55-0cc5-42d6-b721-b1f4e158251f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 12:56:06 crc kubenswrapper[4881]: E0126 12:56:06.777076 4881 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-flv4v" podUID="cbafad55-0cc5-42d6-b721-b1f4e158251f" Jan 26 12:56:07 crc kubenswrapper[4881]: I0126 12:56:07.458561 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/517d3e74-cfe4-4e5e-96b0-0780042b0dbd-cert\") pod \"infra-operator-controller-manager-694cf4f878-wkhcm\" (UID: \"517d3e74-cfe4-4e5e-96b0-0780042b0dbd\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-wkhcm" Jan 26 12:56:07 crc kubenswrapper[4881]: I0126 12:56:07.473148 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/517d3e74-cfe4-4e5e-96b0-0780042b0dbd-cert\") pod \"infra-operator-controller-manager-694cf4f878-wkhcm\" (UID: \"517d3e74-cfe4-4e5e-96b0-0780042b0dbd\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-wkhcm" Jan 26 12:56:07 crc kubenswrapper[4881]: E0126 12:56:07.570440 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ironic-operator@sha256:4d55bd6418df3f63f4d3fe47bebf3f5498a520b3e14af98fe16c85ef9fd54d5e\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-flv4v" podUID="cbafad55-0cc5-42d6-b721-b1f4e158251f" Jan 26 12:56:07 crc kubenswrapper[4881]: I0126 12:56:07.770122 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-wkhcm" Jan 26 12:56:07 crc kubenswrapper[4881]: I0126 12:56:07.864027 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8cc0e35b-757a-46fc-bc17-f586426c9b82-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854jj499\" (UID: \"8cc0e35b-757a-46fc-bc17-f586426c9b82\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854jj499" Jan 26 12:56:07 crc kubenswrapper[4881]: I0126 12:56:07.867214 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8cc0e35b-757a-46fc-bc17-f586426c9b82-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854jj499\" (UID: \"8cc0e35b-757a-46fc-bc17-f586426c9b82\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854jj499" Jan 26 12:56:08 crc kubenswrapper[4881]: I0126 12:56:08.132038 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854jj499" Jan 26 12:56:08 crc kubenswrapper[4881]: I0126 12:56:08.269969 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/74d53f54-a284-45f0-ae81-5c25d2c5cbe1-metrics-certs\") pod \"openstack-operator-controller-manager-649ccf9654-zlvc6\" (UID: \"74d53f54-a284-45f0-ae81-5c25d2c5cbe1\") " pod="openstack-operators/openstack-operator-controller-manager-649ccf9654-zlvc6" Jan 26 12:56:08 crc kubenswrapper[4881]: I0126 12:56:08.270143 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/74d53f54-a284-45f0-ae81-5c25d2c5cbe1-webhook-certs\") pod \"openstack-operator-controller-manager-649ccf9654-zlvc6\" (UID: \"74d53f54-a284-45f0-ae81-5c25d2c5cbe1\") " pod="openstack-operators/openstack-operator-controller-manager-649ccf9654-zlvc6" Jan 26 12:56:08 crc kubenswrapper[4881]: I0126 12:56:08.275392 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/74d53f54-a284-45f0-ae81-5c25d2c5cbe1-webhook-certs\") pod \"openstack-operator-controller-manager-649ccf9654-zlvc6\" (UID: \"74d53f54-a284-45f0-ae81-5c25d2c5cbe1\") " pod="openstack-operators/openstack-operator-controller-manager-649ccf9654-zlvc6" Jan 26 12:56:08 crc kubenswrapper[4881]: I0126 12:56:08.275471 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/74d53f54-a284-45f0-ae81-5c25d2c5cbe1-metrics-certs\") pod \"openstack-operator-controller-manager-649ccf9654-zlvc6\" (UID: \"74d53f54-a284-45f0-ae81-5c25d2c5cbe1\") " pod="openstack-operators/openstack-operator-controller-manager-649ccf9654-zlvc6" Jan 26 12:56:08 crc kubenswrapper[4881]: I0126 12:56:08.415883 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-649ccf9654-zlvc6" Jan 26 12:56:10 crc kubenswrapper[4881]: E0126 12:56:10.062754 4881 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/horizon-operator@sha256:3311e627bcb860d9443592a2c67078417318c9eb77d8ef4d07f9aa7027d46822" Jan 26 12:56:10 crc kubenswrapper[4881]: E0126 12:56:10.062960 4881 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:3311e627bcb860d9443592a2c67078417318c9eb77d8ef4d07f9aa7027d46822,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tk4jl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-77d5c5b54f-rww6v_openstack-operators(d998c88b-6b01-4e5f-bbab-a5aaee1a945b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 12:56:10 crc kubenswrapper[4881]: E0126 12:56:10.064221 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-rww6v" podUID="d998c88b-6b01-4e5f-bbab-a5aaee1a945b" Jan 26 12:56:10 crc kubenswrapper[4881]: E0126 12:56:10.593158 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/horizon-operator@sha256:3311e627bcb860d9443592a2c67078417318c9eb77d8ef4d07f9aa7027d46822\\\"\"" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-rww6v" podUID="d998c88b-6b01-4e5f-bbab-a5aaee1a945b" Jan 26 12:56:10 crc kubenswrapper[4881]: E0126 12:56:10.909760 4881 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:8abfbec47f0119a6c22c61a0ff80a4b1c6c14439a327bc75d4c529c5d8f59658" Jan 26 12:56:10 crc kubenswrapper[4881]: E0126 12:56:10.909987 4881 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:8abfbec47f0119a6c22c61a0ff80a4b1c6c14439a327bc75d4c529c5d8f59658,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mlk2k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-7bdb645866-r7dwj_openstack-operators(d808c58e-a8df-4cbd-aee6-d87edd677e94): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 12:56:10 crc kubenswrapper[4881]: E0126 12:56:10.911248 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/nova-operator-controller-manager-7bdb645866-r7dwj" podUID="d808c58e-a8df-4cbd-aee6-d87edd677e94" Jan 26 12:56:11 crc kubenswrapper[4881]: E0126 12:56:11.506953 4881 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922" Jan 26 12:56:11 crc kubenswrapper[4881]: E0126 12:56:11.507132 4881 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-62zjb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-547cbdb99f-zxw9s_openstack-operators(0a7aea9c-0f85-45d1-9c90-e06acb42f500): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 12:56:11 crc kubenswrapper[4881]: E0126 12:56:11.510027 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-zxw9s" podUID="0a7aea9c-0f85-45d1-9c90-e06acb42f500" Jan 26 12:56:11 crc kubenswrapper[4881]: E0126 12:56:11.597362 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922\\\"\"" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-zxw9s" podUID="0a7aea9c-0f85-45d1-9c90-e06acb42f500" Jan 26 12:56:11 crc kubenswrapper[4881]: E0126 12:56:11.597845 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:8abfbec47f0119a6c22c61a0ff80a4b1c6c14439a327bc75d4c529c5d8f59658\\\"\"" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-r7dwj" podUID="d808c58e-a8df-4cbd-aee6-d87edd677e94" Jan 26 12:56:12 crc kubenswrapper[4881]: E0126 12:56:12.237886 4881 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349" Jan 26 12:56:12 crc kubenswrapper[4881]: E0126 12:56:12.238095 4881 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kgdgz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
keystone-operator-controller-manager-b8b6d4659-66d48_openstack-operators(b6807e2b-25b9-4802-8086-2c6eab9ff308): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 12:56:12 crc kubenswrapper[4881]: E0126 12:56:12.239299 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-66d48" podUID="b6807e2b-25b9-4802-8086-2c6eab9ff308" Jan 26 12:56:12 crc kubenswrapper[4881]: E0126 12:56:12.606141 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-66d48" podUID="b6807e2b-25b9-4802-8086-2c6eab9ff308" Jan 26 12:56:19 crc kubenswrapper[4881]: I0126 12:56:19.838013 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854jj499"] Jan 26 12:56:22 crc kubenswrapper[4881]: E0126 12:56:22.866972 4881 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/telemetry-operator@sha256:e02722d7581bfe1c5fc13e2fa6811d8665102ba86635c77547abf6b933cde127" Jan 26 12:56:22 crc kubenswrapper[4881]: E0126 12:56:22.868102 4881 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:e02722d7581bfe1c5fc13e2fa6811d8665102ba86635c77547abf6b933cde127,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9slzh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-85cd9769bb-sqqcs_openstack-operators(973ffd61-1f3c-4e2f-9315-dae216499f96): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 12:56:22 crc kubenswrapper[4881]: E0126 12:56:22.869673 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-sqqcs" podUID="973ffd61-1f3c-4e2f-9315-dae216499f96" Jan 26 12:56:23 crc kubenswrapper[4881]: I0126 12:56:23.233672 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-694cf4f878-wkhcm"] Jan 26 12:56:23 crc kubenswrapper[4881]: I0126 12:56:23.318927 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-649ccf9654-zlvc6"] Jan 26 12:56:23 crc kubenswrapper[4881]: E0126 12:56:23.611380 4881 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Jan 26 12:56:23 crc kubenswrapper[4881]: E0126 12:56:23.611531 4881 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hjsgz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-vgjn4_openstack-operators(a5f220e0-8c4f-4915-b0d0-cb85cc7f7850): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 12:56:23 crc kubenswrapper[4881]: E0126 12:56:23.614207 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vgjn4" podUID="a5f220e0-8c4f-4915-b0d0-cb85cc7f7850" Jan 26 12:56:23 crc kubenswrapper[4881]: W0126 12:56:23.623430 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod517d3e74_cfe4_4e5e_96b0_0780042b0dbd.slice/crio-9d1e7d987e97aa7b6b4980b643de1ac520ef80ef1e1b990b655309c5f615e10c WatchSource:0}: Error finding container 9d1e7d987e97aa7b6b4980b643de1ac520ef80ef1e1b990b655309c5f615e10c: Status 404 returned error can't find the container with id 9d1e7d987e97aa7b6b4980b643de1ac520ef80ef1e1b990b655309c5f615e10c Jan 26 12:56:23 crc kubenswrapper[4881]: I0126 12:56:23.692225 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-649ccf9654-zlvc6" event={"ID":"74d53f54-a284-45f0-ae81-5c25d2c5cbe1","Type":"ContainerStarted","Data":"5013f7b73d7cf4459af1e38dfcece75692a24c390e95f04395a54b6e2da22030"} Jan 26 12:56:23 crc kubenswrapper[4881]: I0126 12:56:23.693956 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-wkhcm" event={"ID":"517d3e74-cfe4-4e5e-96b0-0780042b0dbd","Type":"ContainerStarted","Data":"9d1e7d987e97aa7b6b4980b643de1ac520ef80ef1e1b990b655309c5f615e10c"} Jan 26 12:56:23 crc kubenswrapper[4881]: I0126 12:56:23.696625 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854jj499" event={"ID":"8cc0e35b-757a-46fc-bc17-f586426c9b82","Type":"ContainerStarted","Data":"35525a36c625cd32544c2ff783d79ae5abaf8639761885207e31dc8ee1abeba5"} Jan 26 12:56:24 crc kubenswrapper[4881]: I0126 12:56:24.703742 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-649ccf9654-zlvc6" event={"ID":"74d53f54-a284-45f0-ae81-5c25d2c5cbe1","Type":"ContainerStarted","Data":"2550f4c756518b767657f952590f518b18c4a83a1260da1e4d81d3699de8b37e"} Jan 26 12:56:24 crc kubenswrapper[4881]: I0126 12:56:24.704002 4881 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-649ccf9654-zlvc6" Jan 26 12:56:24 crc kubenswrapper[4881]: I0126 12:56:24.710973 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-r2p67" event={"ID":"3451b01c-ed54-49be-ab3a-d8150976d2ec","Type":"ContainerStarted","Data":"40b9186dcf0a68b0f0b33c25aed960708eba0001ea617c3b0e01925bc95da4d7"} Jan 26 12:56:24 crc kubenswrapper[4881]: I0126 12:56:24.711074 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-r2p67" Jan 26 12:56:24 crc kubenswrapper[4881]: I0126 12:56:24.717158 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-6wtsp" event={"ID":"de6b2c73-a5db-4333-91e1-7722f0ba1127","Type":"ContainerStarted","Data":"83f13c523a4a2ab8f599dbbd16677fe76197e7f520c12d2bce89f7a2134461ed"} Jan 26 12:56:24 crc kubenswrapper[4881]: I0126 12:56:24.717289 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-6wtsp" Jan 26 12:56:24 crc kubenswrapper[4881]: I0126 12:56:24.722831 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-zkhss" event={"ID":"97b268cc-1863-494c-a47b-da0c52f76d39","Type":"ContainerStarted","Data":"d776b361e90a4a6eb04eaeb306261cde63ac6b1e92c7bc9fa3a60b7773dca368"} Jan 26 12:56:24 crc kubenswrapper[4881]: I0126 12:56:24.722957 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-zkhss" Jan 26 12:56:24 crc kubenswrapper[4881]: I0126 12:56:24.729547 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5784f86c76-zbvz9" event={"ID":"ab3681e4-6e5f-4f8d-909d-8d7801366f54","Type":"ContainerStarted","Data":"665c19419b77ff23ffa860c4785c87300dabc22c6820b5fa150161936bb6a551"} Jan 26 12:56:24 crc kubenswrapper[4881]: I0126 12:56:24.729672 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-5784f86c76-zbvz9" Jan 26 12:56:24 crc kubenswrapper[4881]: I0126 12:56:24.734994 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-w2n48" event={"ID":"d5375dff-af5c-4de8-b52b-acf18edc4fb2","Type":"ContainerStarted","Data":"4d59fd6ab4145233f5e68645c93fe350377156b613ba6121f1d6d583c95493da"} Jan 26 12:56:24 crc kubenswrapper[4881]: I0126 12:56:24.735458 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-w2n48" Jan 26 12:56:24 crc kubenswrapper[4881]: I0126 12:56:24.745651 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-649ccf9654-zlvc6" podStartSLOduration=32.74563788 podStartE2EDuration="32.74563788s" podCreationTimestamp="2026-01-26 12:55:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:56:24.738895965 +0000 UTC m=+1257.218205991" watchObservedRunningTime="2026-01-26 12:56:24.74563788 +0000 UTC m=+1257.224947896" Jan 26 12:56:24 crc 
kubenswrapper[4881]: I0126 12:56:24.747680 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-flv4v" event={"ID":"cbafad55-0cc5-42d6-b721-b1f4e158251f","Type":"ContainerStarted","Data":"5b4cad13904e1bd3935cd19c7eee2dd8ff9c9134c7a6c48a65728309f7fd4d8b"} Jan 26 12:56:24 crc kubenswrapper[4881]: I0126 12:56:24.748219 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-flv4v" Jan 26 12:56:24 crc kubenswrapper[4881]: I0126 12:56:24.755887 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-sgnd2" event={"ID":"4508aa9d-2a89-4976-bd36-dc918900371e","Type":"ContainerStarted","Data":"6f42dff7b46eaf0b16c06b5b29f1ceded12a0b5a09e1ec4f7b6f83b5bf1e2c8a"} Jan 26 12:56:24 crc kubenswrapper[4881]: I0126 12:56:24.756491 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-sgnd2" Jan 26 12:56:24 crc kubenswrapper[4881]: I0126 12:56:24.774241 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-zkhss" podStartSLOduration=7.607819514 podStartE2EDuration="33.774226667s" podCreationTimestamp="2026-01-26 12:55:51 +0000 UTC" firstStartedPulling="2026-01-26 12:55:53.185494908 +0000 UTC m=+1225.664804924" lastFinishedPulling="2026-01-26 12:56:19.351902011 +0000 UTC m=+1251.831212077" observedRunningTime="2026-01-26 12:56:24.768709552 +0000 UTC m=+1257.248019578" watchObservedRunningTime="2026-01-26 12:56:24.774226667 +0000 UTC m=+1257.253536693" Jan 26 12:56:24 crc kubenswrapper[4881]: I0126 12:56:24.782987 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-lgljg" event={"ID":"c5cecd8b-813f-4bde-be28-371c54bcdfb9","Type":"ContainerStarted","Data":"d6cce20c4c411679cbffd68257b4089128a21831fa57bda54078491c18a0b9aa"} Jan 26 12:56:24 crc kubenswrapper[4881]: I0126 12:56:24.783622 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-lgljg" Jan 26 12:56:24 crc kubenswrapper[4881]: I0126 12:56:24.789952 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fwlbz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 12:56:24 crc kubenswrapper[4881]: I0126 12:56:24.789993 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 12:56:24 crc kubenswrapper[4881]: I0126 12:56:24.797712 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-pz264" event={"ID":"5b1abb90-faa0-4b72-9d20-f84ddf952245","Type":"ContainerStarted","Data":"16f63998c2f0f4d49df1cc045d1c973c0c0194f5628464c7710dd830b78067e6"} Jan 26 12:56:24 crc kubenswrapper[4881]: I0126 12:56:24.798330 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-pz264" Jan 26 12:56:24 crc kubenswrapper[4881]: I0126 12:56:24.826651 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-rww6v" event={"ID":"d998c88b-6b01-4e5f-bbab-a5aaee1a945b","Type":"ContainerStarted","Data":"dcccd9d27f670925dcf32ed53a07b6d051587984e39070e37d459a4e8885429e"} Jan 26 12:56:24 crc kubenswrapper[4881]: I0126 12:56:24.827237 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-rww6v" Jan 26 12:56:24 crc kubenswrapper[4881]: I0126 12:56:24.874489 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-hs8xc" event={"ID":"e8b8ff3a-c099-4192-b061-33ff69fd2884","Type":"ContainerStarted","Data":"8840d502f7ba272d4c0705835170124b31aed9c0a4a87c007f15e7558a885f94"} Jan 26 12:56:24 crc kubenswrapper[4881]: I0126 12:56:24.874548 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-hs8xc" Jan 26 12:56:24 crc kubenswrapper[4881]: I0126 12:56:24.875123 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-6wtsp" podStartSLOduration=14.870519321 podStartE2EDuration="33.875113997s" podCreationTimestamp="2026-01-26 12:55:51 +0000 UTC" firstStartedPulling="2026-01-26 12:55:53.284801329 +0000 UTC m=+1225.764111355" lastFinishedPulling="2026-01-26 12:56:12.289395985 +0000 UTC m=+1244.768706031" observedRunningTime="2026-01-26 12:56:24.874724138 +0000 UTC m=+1257.354034164" watchObservedRunningTime="2026-01-26 12:56:24.875113997 +0000 UTC m=+1257.354424023" Jan 26 12:56:24 crc kubenswrapper[4881]: I0126 12:56:24.881560 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-w2n48" podStartSLOduration=13.887873319 podStartE2EDuration="33.881535954s" podCreationTimestamp="2026-01-26 12:55:51 +0000 UTC" firstStartedPulling="2026-01-26 12:55:52.297290027 +0000 UTC m=+1224.776600053" lastFinishedPulling="2026-01-26 12:56:12.290952662 +0000 UTC m=+1244.770262688" observedRunningTime="2026-01-26 12:56:24.834953239 +0000 UTC m=+1257.314263265" watchObservedRunningTime="2026-01-26 12:56:24.881535954 +0000 UTC m=+1257.360845980" Jan 26 12:56:24 crc kubenswrapper[4881]: I0126 12:56:24.895742 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-v95fq" event={"ID":"0e17a034-e3c9-434a-838f-8bfae6d010dd","Type":"ContainerStarted","Data":"d64daf8ab3a6750a23ee356b76eccd1450597dcbc1c35641f28bad7c1ae21cce"} Jan 26 12:56:24 crc kubenswrapper[4881]: I0126 12:56:24.895881 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-v95fq" Jan 26 12:56:24 crc kubenswrapper[4881]: I0126 12:56:24.906318 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-m8vjc" event={"ID":"76b071ae-05bc-4142-9004-e5528d00c5cc","Type":"ContainerStarted","Data":"45620bb8f9d5bd349248ad8e3b1abd3908e97e02246a1de46888f4127bcc4cd7"} Jan 26 12:56:24 crc kubenswrapper[4881]: I0126 12:56:24.906931 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-m8vjc" Jan 26 12:56:24 crc kubenswrapper[4881]: I0126 12:56:24.925224 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-5784f86c76-zbvz9" podStartSLOduration=7.111828225 podStartE2EDuration="32.925207129s" podCreationTimestamp="2026-01-26 12:55:52 +0000 UTC" firstStartedPulling="2026-01-26 12:55:53.541218673 +0000 UTC m=+1226.020528699" lastFinishedPulling="2026-01-26 12:56:19.354597527 +0000 UTC m=+1251.833907603" observedRunningTime="2026-01-26 12:56:24.922925464 +0000 UTC m=+1257.402235480" watchObservedRunningTime="2026-01-26 12:56:24.925207129 +0000 UTC m=+1257.404517155" Jan 26 12:56:24 crc kubenswrapper[4881]: I0126 12:56:24.926757 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-sc6f8" event={"ID":"78a91159-fead-4133-98e4-5dd587f6b274","Type":"ContainerStarted","Data":"a64f868acda7b011ffc0ce654976c7ba4a879fb8416a445029ba0715657098ab"} Jan 26 12:56:24 crc kubenswrapper[4881]: I0126 12:56:24.927371 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-sc6f8" Jan 26 12:56:24 crc kubenswrapper[4881]: I0126 12:56:24.935362 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-ff2c4" event={"ID":"0591b1a9-0d5f-4f0a-beca-9ed62627012e","Type":"ContainerStarted","Data":"68d8bf6959df4755307b3f5f9a956831f9d8beabdd551afd3ca7d7287917149f"} Jan 26 12:56:24 crc kubenswrapper[4881]: I0126 12:56:24.936134 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-ff2c4" Jan 26 12:56:24 crc kubenswrapper[4881]: I0126 12:56:24.940333 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-r2p67" podStartSLOduration=7.73481466 podStartE2EDuration="33.940320778s" podCreationTimestamp="2026-01-26 12:55:51 +0000 UTC" firstStartedPulling="2026-01-26 12:55:53.146408394 +0000 UTC m=+1225.625718420" lastFinishedPulling="2026-01-26 12:56:19.351914482 +0000 UTC m=+1251.831224538" observedRunningTime="2026-01-26 12:56:24.936414733 +0000 UTC m=+1257.415724759" watchObservedRunningTime="2026-01-26 12:56:24.940320778 +0000 UTC m=+1257.419630804" Jan 26 12:56:24 crc kubenswrapper[4881]: I0126 12:56:24.956220 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-m8vjc" podStartSLOduration=14.894648101 podStartE2EDuration="33.956203825s" podCreationTimestamp="2026-01-26 12:55:51 +0000 UTC" firstStartedPulling="2026-01-26 12:55:53.227860681 +0000 UTC m=+1225.707170707" lastFinishedPulling="2026-01-26 12:56:12.289416405 +0000 UTC m=+1244.768726431" observedRunningTime="2026-01-26 12:56:24.955701003 +0000 UTC m=+1257.435011029" watchObservedRunningTime="2026-01-26 12:56:24.956203825 +0000 UTC m=+1257.435513851" Jan 26 12:56:24 crc kubenswrapper[4881]: I0126 12:56:24.977848 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-sc6f8" podStartSLOduration=3.20621906 podStartE2EDuration="33.977830013s" podCreationTimestamp="2026-01-26 12:55:51 +0000 UTC" firstStartedPulling="2026-01-26 
12:55:53.003344885 +0000 UTC m=+1225.482654911" lastFinishedPulling="2026-01-26 12:56:23.774955838 +0000 UTC m=+1256.254265864" observedRunningTime="2026-01-26 12:56:24.975002474 +0000 UTC m=+1257.454312510" watchObservedRunningTime="2026-01-26 12:56:24.977830013 +0000 UTC m=+1257.457140039" Jan 26 12:56:25 crc kubenswrapper[4881]: I0126 12:56:25.023910 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-rww6v" podStartSLOduration=3.162030913 podStartE2EDuration="34.023894096s" podCreationTimestamp="2026-01-26 12:55:51 +0000 UTC" firstStartedPulling="2026-01-26 12:55:52.913096155 +0000 UTC m=+1225.392406181" lastFinishedPulling="2026-01-26 12:56:23.774959328 +0000 UTC m=+1256.254269364" observedRunningTime="2026-01-26 12:56:25.017955591 +0000 UTC m=+1257.497265617" watchObservedRunningTime="2026-01-26 12:56:25.023894096 +0000 UTC m=+1257.503204122" Jan 26 12:56:25 crc kubenswrapper[4881]: I0126 12:56:25.048403 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-pz264" podStartSLOduration=3.7532569799999997 podStartE2EDuration="34.048386903s" podCreationTimestamp="2026-01-26 12:55:51 +0000 UTC" firstStartedPulling="2026-01-26 12:55:53.364947424 +0000 UTC m=+1225.844257450" lastFinishedPulling="2026-01-26 12:56:23.660077337 +0000 UTC m=+1256.139387373" observedRunningTime="2026-01-26 12:56:25.043945535 +0000 UTC m=+1257.523255561" watchObservedRunningTime="2026-01-26 12:56:25.048386903 +0000 UTC m=+1257.527696929" Jan 26 12:56:25 crc kubenswrapper[4881]: I0126 12:56:25.081231 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-sgnd2" podStartSLOduration=7.3609262730000005 podStartE2EDuration="34.081212215s" podCreationTimestamp="2026-01-26 12:55:51 +0000 UTC" firstStartedPulling="2026-01-26 12:55:52.631534168 +0000 UTC m=+1225.110844194" lastFinishedPulling="2026-01-26 12:56:19.35182006 +0000 UTC m=+1251.831130136" observedRunningTime="2026-01-26 12:56:25.06629413 +0000 UTC m=+1257.545604156" watchObservedRunningTime="2026-01-26 12:56:25.081212215 +0000 UTC m=+1257.560522241" Jan 26 12:56:25 crc kubenswrapper[4881]: I0126 12:56:25.157157 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-hs8xc" podStartSLOduration=7.441050417 podStartE2EDuration="34.157138996s" podCreationTimestamp="2026-01-26 12:55:51 +0000 UTC" firstStartedPulling="2026-01-26 12:55:52.636836467 +0000 UTC m=+1225.116146493" lastFinishedPulling="2026-01-26 12:56:19.352925006 +0000 UTC m=+1251.832235072" observedRunningTime="2026-01-26 12:56:25.11794137 +0000 UTC m=+1257.597251396" watchObservedRunningTime="2026-01-26 12:56:25.157138996 +0000 UTC m=+1257.636449022" Jan 26 12:56:25 crc kubenswrapper[4881]: I0126 12:56:25.165631 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-lgljg" podStartSLOduration=7.16723291 podStartE2EDuration="34.165614072s" podCreationTimestamp="2026-01-26 12:55:51 +0000 UTC" firstStartedPulling="2026-01-26 12:55:52.355960078 +0000 UTC m=+1224.835270104" lastFinishedPulling="2026-01-26 12:56:19.35434121 +0000 UTC m=+1251.833651266" observedRunningTime="2026-01-26 12:56:25.153686321 +0000 UTC m=+1257.632996347" 
watchObservedRunningTime="2026-01-26 12:56:25.165614072 +0000 UTC m=+1257.644924098" Jan 26 12:56:25 crc kubenswrapper[4881]: I0126 12:56:25.179113 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-ff2c4" podStartSLOduration=4.027718795 podStartE2EDuration="34.179098852s" podCreationTimestamp="2026-01-26 12:55:51 +0000 UTC" firstStartedPulling="2026-01-26 12:55:53.441354058 +0000 UTC m=+1225.920664084" lastFinishedPulling="2026-01-26 12:56:23.592734115 +0000 UTC m=+1256.072044141" observedRunningTime="2026-01-26 12:56:25.177850961 +0000 UTC m=+1257.657160987" watchObservedRunningTime="2026-01-26 12:56:25.179098852 +0000 UTC m=+1257.658408878" Jan 26 12:56:25 crc kubenswrapper[4881]: I0126 12:56:25.201315 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-flv4v" podStartSLOduration=3.372448634 podStartE2EDuration="34.201299892s" podCreationTimestamp="2026-01-26 12:55:51 +0000 UTC" firstStartedPulling="2026-01-26 12:55:53.003392677 +0000 UTC m=+1225.482702703" lastFinishedPulling="2026-01-26 12:56:23.832243935 +0000 UTC m=+1256.311553961" observedRunningTime="2026-01-26 12:56:25.197852239 +0000 UTC m=+1257.677162265" watchObservedRunningTime="2026-01-26 12:56:25.201299892 +0000 UTC m=+1257.680609918" Jan 26 12:56:25 crc kubenswrapper[4881]: I0126 12:56:25.220357 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-v95fq" podStartSLOduration=14.552364183 podStartE2EDuration="34.220338987s" podCreationTimestamp="2026-01-26 12:55:51 +0000 UTC" firstStartedPulling="2026-01-26 12:55:52.621442941 +0000 UTC m=+1225.100752967" lastFinishedPulling="2026-01-26 12:56:12.289417755 +0000 UTC m=+1244.768727771" observedRunningTime="2026-01-26 12:56:25.214054574 +0000 UTC m=+1257.693364600" watchObservedRunningTime="2026-01-26 12:56:25.220338987 +0000 UTC m=+1257.699649013" Jan 26 12:56:25 crc kubenswrapper[4881]: I0126 12:56:25.942140 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-66d48" event={"ID":"b6807e2b-25b9-4802-8086-2c6eab9ff308","Type":"ContainerStarted","Data":"2a1abbc09160bea7ec591fa6ffda4e29e3e07ac05c111d0cf8df22517871abbb"} Jan 26 12:56:25 crc kubenswrapper[4881]: I0126 12:56:25.961619 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-66d48" podStartSLOduration=3.37141244 podStartE2EDuration="34.961597164s" podCreationTimestamp="2026-01-26 12:55:51 +0000 UTC" firstStartedPulling="2026-01-26 12:55:52.951224815 +0000 UTC m=+1225.430534841" lastFinishedPulling="2026-01-26 12:56:24.541409539 +0000 UTC m=+1257.020719565" observedRunningTime="2026-01-26 12:56:25.960022766 +0000 UTC m=+1258.439332792" watchObservedRunningTime="2026-01-26 12:56:25.961597164 +0000 UTC m=+1258.440907200" Jan 26 12:56:28 crc kubenswrapper[4881]: I0126 12:56:28.972094 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-zxw9s" event={"ID":"0a7aea9c-0f85-45d1-9c90-e06acb42f500","Type":"ContainerStarted","Data":"86087a8eda877edde246fadf7ce1471e1b7ec41cb2828a4ba8155ea99d7a243a"} Jan 26 12:56:28 crc kubenswrapper[4881]: I0126 12:56:28.974925 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-zxw9s" Jan 26 12:56:28 crc kubenswrapper[4881]: I0126 12:56:28.975795 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-r7dwj" event={"ID":"d808c58e-a8df-4cbd-aee6-d87edd677e94","Type":"ContainerStarted","Data":"aa2c699b6bd1be66afb33761af05bdd173509f5b37ffbde1dc9dbf01e6754675"} Jan 26 12:56:28 crc kubenswrapper[4881]: I0126 12:56:28.976024 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-r7dwj" Jan 26 12:56:28 crc kubenswrapper[4881]: I0126 12:56:28.978240 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-wkhcm" event={"ID":"517d3e74-cfe4-4e5e-96b0-0780042b0dbd","Type":"ContainerStarted","Data":"73f9913299e85588a9b4d52b156ef34da2427973280e16c6310a15850c0d8ecc"} Jan 26 12:56:28 crc kubenswrapper[4881]: I0126 12:56:28.978362 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-wkhcm" Jan 26 12:56:28 crc kubenswrapper[4881]: I0126 12:56:28.979701 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854jj499" event={"ID":"8cc0e35b-757a-46fc-bc17-f586426c9b82","Type":"ContainerStarted","Data":"4b167ea4955d3560158c7f5e91d7c04258f1bd9d8e5e67f0b964e9835fb905b9"} Jan 26 12:56:28 crc kubenswrapper[4881]: I0126 12:56:28.979859 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854jj499" Jan 26 12:56:28 crc kubenswrapper[4881]: I0126 12:56:28.994395 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-zxw9s" podStartSLOduration=3.591363664 podStartE2EDuration="37.994361176s" podCreationTimestamp="2026-01-26 12:55:51 +0000 UTC" firstStartedPulling="2026-01-26 12:55:53.364483863 +0000 UTC m=+1225.843793889" lastFinishedPulling="2026-01-26 12:56:27.767481345 +0000 UTC m=+1260.246791401" observedRunningTime="2026-01-26 12:56:28.989654281 +0000 UTC m=+1261.468964347" watchObservedRunningTime="2026-01-26 12:56:28.994361176 +0000 UTC m=+1261.473671242" Jan 26 12:56:29 crc kubenswrapper[4881]: I0126 12:56:29.052847 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854jj499" podStartSLOduration=33.181123844 podStartE2EDuration="38.052826571s" podCreationTimestamp="2026-01-26 12:55:51 +0000 UTC" firstStartedPulling="2026-01-26 12:56:22.896758062 +0000 UTC m=+1255.376068088" lastFinishedPulling="2026-01-26 12:56:27.768460789 +0000 UTC m=+1260.247770815" observedRunningTime="2026-01-26 12:56:29.040976443 +0000 UTC m=+1261.520286479" watchObservedRunningTime="2026-01-26 12:56:29.052826571 +0000 UTC m=+1261.532136607" Jan 26 12:56:29 crc kubenswrapper[4881]: I0126 12:56:29.073885 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-r7dwj" podStartSLOduration=3.576688085 podStartE2EDuration="38.073864054s" podCreationTimestamp="2026-01-26 12:55:51 +0000 UTC" firstStartedPulling="2026-01-26 12:55:53.364163245 +0000 UTC m=+1225.843473271" lastFinishedPulling="2026-01-26 
12:56:27.861339174 +0000 UTC m=+1260.340649240" observedRunningTime="2026-01-26 12:56:29.069489448 +0000 UTC m=+1261.548799484" watchObservedRunningTime="2026-01-26 12:56:29.073864054 +0000 UTC m=+1261.553174090" Jan 26 12:56:29 crc kubenswrapper[4881]: I0126 12:56:29.094620 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-wkhcm" podStartSLOduration=33.982538808 podStartE2EDuration="38.09460288s" podCreationTimestamp="2026-01-26 12:55:51 +0000 UTC" firstStartedPulling="2026-01-26 12:56:23.656826027 +0000 UTC m=+1256.136136063" lastFinishedPulling="2026-01-26 12:56:27.768890099 +0000 UTC m=+1260.248200135" observedRunningTime="2026-01-26 12:56:29.093000821 +0000 UTC m=+1261.572310887" watchObservedRunningTime="2026-01-26 12:56:29.09460288 +0000 UTC m=+1261.573912906" Jan 26 12:56:31 crc kubenswrapper[4881]: I0126 12:56:31.747224 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-w2n48" Jan 26 12:56:31 crc kubenswrapper[4881]: I0126 12:56:31.770810 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-lgljg" Jan 26 12:56:31 crc kubenswrapper[4881]: I0126 12:56:31.784044 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-hs8xc" Jan 26 12:56:31 crc kubenswrapper[4881]: I0126 12:56:31.813237 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-sgnd2" Jan 26 12:56:31 crc kubenswrapper[4881]: I0126 12:56:31.828418 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-v95fq" Jan 26 12:56:31 crc kubenswrapper[4881]: I0126 12:56:31.838144 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-rww6v" Jan 26 12:56:31 crc kubenswrapper[4881]: I0126 12:56:31.929981 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-flv4v" Jan 26 12:56:32 crc kubenswrapper[4881]: I0126 12:56:32.040567 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-66d48" Jan 26 12:56:32 crc kubenswrapper[4881]: I0126 12:56:32.042126 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-66d48" Jan 26 12:56:32 crc kubenswrapper[4881]: I0126 12:56:32.053006 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-sc6f8" Jan 26 12:56:32 crc kubenswrapper[4881]: I0126 12:56:32.134175 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-zkhss" Jan 26 12:56:32 crc kubenswrapper[4881]: I0126 12:56:32.141418 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-m8vjc" Jan 26 12:56:32 crc kubenswrapper[4881]: I0126 12:56:32.210945 4881 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-r2p67" Jan 26 12:56:32 crc kubenswrapper[4881]: I0126 12:56:32.268356 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-pz264" Jan 26 12:56:32 crc kubenswrapper[4881]: I0126 12:56:32.326963 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-6wtsp" Jan 26 12:56:32 crc kubenswrapper[4881]: I0126 12:56:32.528603 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-ff2c4" Jan 26 12:56:32 crc kubenswrapper[4881]: I0126 12:56:32.634138 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-5784f86c76-zbvz9" Jan 26 12:56:37 crc kubenswrapper[4881]: I0126 12:56:37.780919 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-wkhcm" Jan 26 12:56:38 crc kubenswrapper[4881]: E0126 12:56:38.092229 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:e02722d7581bfe1c5fc13e2fa6811d8665102ba86635c77547abf6b933cde127\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-sqqcs" podUID="973ffd61-1f3c-4e2f-9315-dae216499f96" Jan 26 12:56:38 crc kubenswrapper[4881]: E0126 12:56:38.092723 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vgjn4" podUID="a5f220e0-8c4f-4915-b0d0-cb85cc7f7850" Jan 26 12:56:38 crc kubenswrapper[4881]: I0126 12:56:38.139992 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854jj499" Jan 26 12:56:38 crc kubenswrapper[4881]: I0126 12:56:38.425879 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-649ccf9654-zlvc6" Jan 26 12:56:42 crc kubenswrapper[4881]: I0126 12:56:42.211973 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-r7dwj" Jan 26 12:56:42 crc kubenswrapper[4881]: I0126 12:56:42.355273 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-zxw9s" Jan 26 12:56:52 crc kubenswrapper[4881]: I0126 12:56:52.173198 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vgjn4" event={"ID":"a5f220e0-8c4f-4915-b0d0-cb85cc7f7850","Type":"ContainerStarted","Data":"0d0114db6acb1d8c6a0e5a542b9d60c0118b71b39b2d1b5b347625a3891373bb"} Jan 26 12:56:52 crc kubenswrapper[4881]: I0126 12:56:52.175056 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-sqqcs" event={"ID":"973ffd61-1f3c-4e2f-9315-dae216499f96","Type":"ContainerStarted","Data":"67463c0fd04c47649935e65966eef4c904d577f64919542665a5baa6e9ee5714"} Jan 26 12:56:52 crc kubenswrapper[4881]: I0126 12:56:52.175280 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-sqqcs" Jan 26 12:56:52 crc kubenswrapper[4881]: I0126 12:56:52.193543 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vgjn4" podStartSLOduration=2.162556637 podStartE2EDuration="1m0.193529185s" podCreationTimestamp="2026-01-26 12:55:52 +0000 UTC" firstStartedPulling="2026-01-26 12:55:53.568185701 +0000 UTC m=+1226.047495727" lastFinishedPulling="2026-01-26 12:56:51.599158209 +0000 UTC m=+1284.078468275" observedRunningTime="2026-01-26 12:56:52.189424385 +0000 UTC m=+1284.668734411" watchObservedRunningTime="2026-01-26 12:56:52.193529185 +0000 UTC m=+1284.672839211" Jan 26 12:56:52 crc kubenswrapper[4881]: I0126 12:56:52.212264 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-sqqcs" podStartSLOduration=2.838322978 podStartE2EDuration="1m1.212236091s" podCreationTimestamp="2026-01-26 12:55:51 +0000 UTC" firstStartedPulling="2026-01-26 12:55:53.364746179 +0000 UTC m=+1225.844056205" lastFinishedPulling="2026-01-26 12:56:51.738659252 +0000 UTC m=+1284.217969318" observedRunningTime="2026-01-26 12:56:52.21096206 +0000 UTC m=+1284.690272126" watchObservedRunningTime="2026-01-26 12:56:52.212236091 +0000 UTC m=+1284.691546157" Jan 26 12:56:54 crc kubenswrapper[4881]: I0126 12:56:54.789288 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fwlbz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 12:56:54 crc kubenswrapper[4881]: I0126 12:56:54.789775 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 12:56:54 crc kubenswrapper[4881]: I0126 12:56:54.789837 4881 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" Jan 26 12:56:54 crc kubenswrapper[4881]: I0126 12:56:54.790652 4881 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"765ce5b5bd8e274c5be74ed55cfedb20d522de35e2e3c381e48dd1db3daac475"} pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 12:56:54 crc kubenswrapper[4881]: I0126 12:56:54.790707 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" containerName="machine-config-daemon" containerID="cri-o://765ce5b5bd8e274c5be74ed55cfedb20d522de35e2e3c381e48dd1db3daac475" gracePeriod=600 Jan 
26 12:56:55 crc kubenswrapper[4881]: E0126 12:56:55.990044 4881 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podebe5a5ae_b195_4f35_bcee_d9fe6d27dd19.slice/crio-conmon-765ce5b5bd8e274c5be74ed55cfedb20d522de35e2e3c381e48dd1db3daac475.scope\": RecentStats: unable to find data in memory cache]" Jan 26 12:56:56 crc kubenswrapper[4881]: I0126 12:56:56.205142 4881 generic.go:334] "Generic (PLEG): container finished" podID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" containerID="765ce5b5bd8e274c5be74ed55cfedb20d522de35e2e3c381e48dd1db3daac475" exitCode=0 Jan 26 12:56:56 crc kubenswrapper[4881]: I0126 12:56:56.205212 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" event={"ID":"ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19","Type":"ContainerDied","Data":"765ce5b5bd8e274c5be74ed55cfedb20d522de35e2e3c381e48dd1db3daac475"} Jan 26 12:56:56 crc kubenswrapper[4881]: I0126 12:56:56.206062 4881 scope.go:117] "RemoveContainer" containerID="4632219e9eec673e5473a279c2fc2f5646ce521828ec90d160527916fe6cbc92" Jan 26 12:56:57 crc kubenswrapper[4881]: I0126 12:56:57.213839 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" event={"ID":"ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19","Type":"ContainerStarted","Data":"7c058347a35682b737f4fed8273f3335b15404e9e17abbca4140d7f0cbd3f241"} Jan 26 12:57:02 crc kubenswrapper[4881]: I0126 12:57:02.471320 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-sqqcs" Jan 26 12:57:21 crc kubenswrapper[4881]: I0126 12:57:21.762139 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-9cd4f5bf5-8zm5z"] Jan 26 12:57:21 crc kubenswrapper[4881]: I0126 12:57:21.765556 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9cd4f5bf5-8zm5z" Jan 26 12:57:21 crc kubenswrapper[4881]: I0126 12:57:21.767681 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Jan 26 12:57:21 crc kubenswrapper[4881]: I0126 12:57:21.768116 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Jan 26 12:57:21 crc kubenswrapper[4881]: I0126 12:57:21.769548 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Jan 26 12:57:21 crc kubenswrapper[4881]: I0126 12:57:21.769697 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-xgfzk" Jan 26 12:57:21 crc kubenswrapper[4881]: I0126 12:57:21.788145 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-9cd4f5bf5-8zm5z"] Jan 26 12:57:21 crc kubenswrapper[4881]: I0126 12:57:21.858665 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6dd95798b9-bdwrb"] Jan 26 12:57:21 crc kubenswrapper[4881]: I0126 12:57:21.859706 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6dd95798b9-bdwrb" Jan 26 12:57:21 crc kubenswrapper[4881]: I0126 12:57:21.862063 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Jan 26 12:57:21 crc kubenswrapper[4881]: I0126 12:57:21.877092 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6dd95798b9-bdwrb"] Jan 26 12:57:21 crc kubenswrapper[4881]: I0126 12:57:21.947754 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f5777be6-3c36-45f2-9caa-eb1ad9da50ce-dns-svc\") pod \"dnsmasq-dns-6dd95798b9-bdwrb\" (UID: \"f5777be6-3c36-45f2-9caa-eb1ad9da50ce\") " pod="openstack/dnsmasq-dns-6dd95798b9-bdwrb" Jan 26 12:57:21 crc kubenswrapper[4881]: I0126 12:57:21.947834 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wswsg\" (UniqueName: \"kubernetes.io/projected/a3bc3df9-3d47-4305-99fe-494c5533b700-kube-api-access-wswsg\") pod \"dnsmasq-dns-9cd4f5bf5-8zm5z\" (UID: \"a3bc3df9-3d47-4305-99fe-494c5533b700\") " pod="openstack/dnsmasq-dns-9cd4f5bf5-8zm5z" Jan 26 12:57:21 crc kubenswrapper[4881]: I0126 12:57:21.947902 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3bc3df9-3d47-4305-99fe-494c5533b700-config\") pod \"dnsmasq-dns-9cd4f5bf5-8zm5z\" (UID: \"a3bc3df9-3d47-4305-99fe-494c5533b700\") " pod="openstack/dnsmasq-dns-9cd4f5bf5-8zm5z" Jan 26 12:57:21 crc kubenswrapper[4881]: I0126 12:57:21.947943 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lj6w2\" (UniqueName: \"kubernetes.io/projected/f5777be6-3c36-45f2-9caa-eb1ad9da50ce-kube-api-access-lj6w2\") pod \"dnsmasq-dns-6dd95798b9-bdwrb\" (UID: \"f5777be6-3c36-45f2-9caa-eb1ad9da50ce\") " pod="openstack/dnsmasq-dns-6dd95798b9-bdwrb" Jan 26 12:57:21 crc kubenswrapper[4881]: I0126 12:57:21.947969 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5777be6-3c36-45f2-9caa-eb1ad9da50ce-config\") pod \"dnsmasq-dns-6dd95798b9-bdwrb\" (UID: \"f5777be6-3c36-45f2-9caa-eb1ad9da50ce\") " pod="openstack/dnsmasq-dns-6dd95798b9-bdwrb" Jan 26 12:57:22 crc kubenswrapper[4881]: I0126 12:57:22.048836 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5777be6-3c36-45f2-9caa-eb1ad9da50ce-config\") pod \"dnsmasq-dns-6dd95798b9-bdwrb\" (UID: \"f5777be6-3c36-45f2-9caa-eb1ad9da50ce\") " pod="openstack/dnsmasq-dns-6dd95798b9-bdwrb" Jan 26 12:57:22 crc kubenswrapper[4881]: I0126 12:57:22.048920 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f5777be6-3c36-45f2-9caa-eb1ad9da50ce-dns-svc\") pod \"dnsmasq-dns-6dd95798b9-bdwrb\" (UID: \"f5777be6-3c36-45f2-9caa-eb1ad9da50ce\") " pod="openstack/dnsmasq-dns-6dd95798b9-bdwrb" Jan 26 12:57:22 crc kubenswrapper[4881]: I0126 12:57:22.048975 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wswsg\" (UniqueName: \"kubernetes.io/projected/a3bc3df9-3d47-4305-99fe-494c5533b700-kube-api-access-wswsg\") pod \"dnsmasq-dns-9cd4f5bf5-8zm5z\" (UID: \"a3bc3df9-3d47-4305-99fe-494c5533b700\") " 
pod="openstack/dnsmasq-dns-9cd4f5bf5-8zm5z" Jan 26 12:57:22 crc kubenswrapper[4881]: I0126 12:57:22.049007 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3bc3df9-3d47-4305-99fe-494c5533b700-config\") pod \"dnsmasq-dns-9cd4f5bf5-8zm5z\" (UID: \"a3bc3df9-3d47-4305-99fe-494c5533b700\") " pod="openstack/dnsmasq-dns-9cd4f5bf5-8zm5z" Jan 26 12:57:22 crc kubenswrapper[4881]: I0126 12:57:22.049036 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lj6w2\" (UniqueName: \"kubernetes.io/projected/f5777be6-3c36-45f2-9caa-eb1ad9da50ce-kube-api-access-lj6w2\") pod \"dnsmasq-dns-6dd95798b9-bdwrb\" (UID: \"f5777be6-3c36-45f2-9caa-eb1ad9da50ce\") " pod="openstack/dnsmasq-dns-6dd95798b9-bdwrb" Jan 26 12:57:22 crc kubenswrapper[4881]: I0126 12:57:22.049733 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3bc3df9-3d47-4305-99fe-494c5533b700-config\") pod \"dnsmasq-dns-9cd4f5bf5-8zm5z\" (UID: \"a3bc3df9-3d47-4305-99fe-494c5533b700\") " pod="openstack/dnsmasq-dns-9cd4f5bf5-8zm5z" Jan 26 12:57:22 crc kubenswrapper[4881]: I0126 12:57:22.050110 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5777be6-3c36-45f2-9caa-eb1ad9da50ce-config\") pod \"dnsmasq-dns-6dd95798b9-bdwrb\" (UID: \"f5777be6-3c36-45f2-9caa-eb1ad9da50ce\") " pod="openstack/dnsmasq-dns-6dd95798b9-bdwrb" Jan 26 12:57:22 crc kubenswrapper[4881]: I0126 12:57:22.050229 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f5777be6-3c36-45f2-9caa-eb1ad9da50ce-dns-svc\") pod \"dnsmasq-dns-6dd95798b9-bdwrb\" (UID: \"f5777be6-3c36-45f2-9caa-eb1ad9da50ce\") " pod="openstack/dnsmasq-dns-6dd95798b9-bdwrb" Jan 26 12:57:22 crc kubenswrapper[4881]: I0126 12:57:22.066026 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lj6w2\" (UniqueName: \"kubernetes.io/projected/f5777be6-3c36-45f2-9caa-eb1ad9da50ce-kube-api-access-lj6w2\") pod \"dnsmasq-dns-6dd95798b9-bdwrb\" (UID: \"f5777be6-3c36-45f2-9caa-eb1ad9da50ce\") " pod="openstack/dnsmasq-dns-6dd95798b9-bdwrb" Jan 26 12:57:22 crc kubenswrapper[4881]: I0126 12:57:22.066042 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wswsg\" (UniqueName: \"kubernetes.io/projected/a3bc3df9-3d47-4305-99fe-494c5533b700-kube-api-access-wswsg\") pod \"dnsmasq-dns-9cd4f5bf5-8zm5z\" (UID: \"a3bc3df9-3d47-4305-99fe-494c5533b700\") " pod="openstack/dnsmasq-dns-9cd4f5bf5-8zm5z" Jan 26 12:57:22 crc kubenswrapper[4881]: I0126 12:57:22.130272 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9cd4f5bf5-8zm5z" Jan 26 12:57:22 crc kubenswrapper[4881]: I0126 12:57:22.178021 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6dd95798b9-bdwrb" Jan 26 12:57:22 crc kubenswrapper[4881]: I0126 12:57:22.606475 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-9cd4f5bf5-8zm5z"] Jan 26 12:57:22 crc kubenswrapper[4881]: I0126 12:57:22.692485 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6dd95798b9-bdwrb"] Jan 26 12:57:22 crc kubenswrapper[4881]: W0126 12:57:22.698123 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf5777be6_3c36_45f2_9caa_eb1ad9da50ce.slice/crio-2b886e3a967bedaba982f8b5500ac0d1e2935f44972affea4b10827f3e60c158 WatchSource:0}: Error finding container 2b886e3a967bedaba982f8b5500ac0d1e2935f44972affea4b10827f3e60c158: Status 404 returned error can't find the container with id 2b886e3a967bedaba982f8b5500ac0d1e2935f44972affea4b10827f3e60c158 Jan 26 12:57:23 crc kubenswrapper[4881]: I0126 12:57:23.442118 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9cd4f5bf5-8zm5z" event={"ID":"a3bc3df9-3d47-4305-99fe-494c5533b700","Type":"ContainerStarted","Data":"155ac39d629b63a77649bb5543574ee605bbd67f1e1987dce6a3b5f1b6b810af"} Jan 26 12:57:23 crc kubenswrapper[4881]: I0126 12:57:23.443160 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6dd95798b9-bdwrb" event={"ID":"f5777be6-3c36-45f2-9caa-eb1ad9da50ce","Type":"ContainerStarted","Data":"2b886e3a967bedaba982f8b5500ac0d1e2935f44972affea4b10827f3e60c158"} Jan 26 12:57:25 crc kubenswrapper[4881]: I0126 12:57:25.400124 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6dd95798b9-bdwrb"] Jan 26 12:57:25 crc kubenswrapper[4881]: I0126 12:57:25.426342 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-64594fd94f-vmx2w"] Jan 26 12:57:25 crc kubenswrapper[4881]: I0126 12:57:25.427728 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-64594fd94f-vmx2w" Jan 26 12:57:25 crc kubenswrapper[4881]: I0126 12:57:25.447085 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-64594fd94f-vmx2w"] Jan 26 12:57:25 crc kubenswrapper[4881]: I0126 12:57:25.597160 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98f227bf-5140-4aac-b539-58c2a1fca65d-config\") pod \"dnsmasq-dns-64594fd94f-vmx2w\" (UID: \"98f227bf-5140-4aac-b539-58c2a1fca65d\") " pod="openstack/dnsmasq-dns-64594fd94f-vmx2w" Jan 26 12:57:25 crc kubenswrapper[4881]: I0126 12:57:25.597254 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2czwj\" (UniqueName: \"kubernetes.io/projected/98f227bf-5140-4aac-b539-58c2a1fca65d-kube-api-access-2czwj\") pod \"dnsmasq-dns-64594fd94f-vmx2w\" (UID: \"98f227bf-5140-4aac-b539-58c2a1fca65d\") " pod="openstack/dnsmasq-dns-64594fd94f-vmx2w" Jan 26 12:57:25 crc kubenswrapper[4881]: I0126 12:57:25.597290 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/98f227bf-5140-4aac-b539-58c2a1fca65d-dns-svc\") pod \"dnsmasq-dns-64594fd94f-vmx2w\" (UID: \"98f227bf-5140-4aac-b539-58c2a1fca65d\") " pod="openstack/dnsmasq-dns-64594fd94f-vmx2w" Jan 26 12:57:25 crc kubenswrapper[4881]: I0126 12:57:25.698344 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98f227bf-5140-4aac-b539-58c2a1fca65d-config\") pod \"dnsmasq-dns-64594fd94f-vmx2w\" (UID: \"98f227bf-5140-4aac-b539-58c2a1fca65d\") " pod="openstack/dnsmasq-dns-64594fd94f-vmx2w" Jan 26 12:57:25 crc kubenswrapper[4881]: I0126 12:57:25.698460 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2czwj\" (UniqueName: \"kubernetes.io/projected/98f227bf-5140-4aac-b539-58c2a1fca65d-kube-api-access-2czwj\") pod \"dnsmasq-dns-64594fd94f-vmx2w\" (UID: \"98f227bf-5140-4aac-b539-58c2a1fca65d\") " pod="openstack/dnsmasq-dns-64594fd94f-vmx2w" Jan 26 12:57:25 crc kubenswrapper[4881]: I0126 12:57:25.698503 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/98f227bf-5140-4aac-b539-58c2a1fca65d-dns-svc\") pod \"dnsmasq-dns-64594fd94f-vmx2w\" (UID: \"98f227bf-5140-4aac-b539-58c2a1fca65d\") " pod="openstack/dnsmasq-dns-64594fd94f-vmx2w" Jan 26 12:57:25 crc kubenswrapper[4881]: I0126 12:57:25.699447 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/98f227bf-5140-4aac-b539-58c2a1fca65d-dns-svc\") pod \"dnsmasq-dns-64594fd94f-vmx2w\" (UID: \"98f227bf-5140-4aac-b539-58c2a1fca65d\") " pod="openstack/dnsmasq-dns-64594fd94f-vmx2w" Jan 26 12:57:25 crc kubenswrapper[4881]: I0126 12:57:25.700017 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98f227bf-5140-4aac-b539-58c2a1fca65d-config\") pod \"dnsmasq-dns-64594fd94f-vmx2w\" (UID: \"98f227bf-5140-4aac-b539-58c2a1fca65d\") " pod="openstack/dnsmasq-dns-64594fd94f-vmx2w" Jan 26 12:57:25 crc kubenswrapper[4881]: I0126 12:57:25.713745 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-9cd4f5bf5-8zm5z"] Jan 26 12:57:25 crc kubenswrapper[4881]: I0126 12:57:25.739475 
4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2czwj\" (UniqueName: \"kubernetes.io/projected/98f227bf-5140-4aac-b539-58c2a1fca65d-kube-api-access-2czwj\") pod \"dnsmasq-dns-64594fd94f-vmx2w\" (UID: \"98f227bf-5140-4aac-b539-58c2a1fca65d\") " pod="openstack/dnsmasq-dns-64594fd94f-vmx2w" Jan 26 12:57:25 crc kubenswrapper[4881]: I0126 12:57:25.742776 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7d56d856cf-t8gtc"] Jan 26 12:57:25 crc kubenswrapper[4881]: I0126 12:57:25.743873 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d56d856cf-t8gtc" Jan 26 12:57:25 crc kubenswrapper[4881]: I0126 12:57:25.749130 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-64594fd94f-vmx2w" Jan 26 12:57:25 crc kubenswrapper[4881]: I0126 12:57:25.757665 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d56d856cf-t8gtc"] Jan 26 12:57:25 crc kubenswrapper[4881]: I0126 12:57:25.900866 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a01decda-1236-44fa-a384-7fdedc2cc279-dns-svc\") pod \"dnsmasq-dns-7d56d856cf-t8gtc\" (UID: \"a01decda-1236-44fa-a384-7fdedc2cc279\") " pod="openstack/dnsmasq-dns-7d56d856cf-t8gtc" Jan 26 12:57:25 crc kubenswrapper[4881]: I0126 12:57:25.901007 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47zmx\" (UniqueName: \"kubernetes.io/projected/a01decda-1236-44fa-a384-7fdedc2cc279-kube-api-access-47zmx\") pod \"dnsmasq-dns-7d56d856cf-t8gtc\" (UID: \"a01decda-1236-44fa-a384-7fdedc2cc279\") " pod="openstack/dnsmasq-dns-7d56d856cf-t8gtc" Jan 26 12:57:25 crc kubenswrapper[4881]: I0126 12:57:25.901314 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a01decda-1236-44fa-a384-7fdedc2cc279-config\") pod \"dnsmasq-dns-7d56d856cf-t8gtc\" (UID: \"a01decda-1236-44fa-a384-7fdedc2cc279\") " pod="openstack/dnsmasq-dns-7d56d856cf-t8gtc" Jan 26 12:57:26 crc kubenswrapper[4881]: I0126 12:57:26.005285 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a01decda-1236-44fa-a384-7fdedc2cc279-dns-svc\") pod \"dnsmasq-dns-7d56d856cf-t8gtc\" (UID: \"a01decda-1236-44fa-a384-7fdedc2cc279\") " pod="openstack/dnsmasq-dns-7d56d856cf-t8gtc" Jan 26 12:57:26 crc kubenswrapper[4881]: I0126 12:57:26.005359 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47zmx\" (UniqueName: \"kubernetes.io/projected/a01decda-1236-44fa-a384-7fdedc2cc279-kube-api-access-47zmx\") pod \"dnsmasq-dns-7d56d856cf-t8gtc\" (UID: \"a01decda-1236-44fa-a384-7fdedc2cc279\") " pod="openstack/dnsmasq-dns-7d56d856cf-t8gtc" Jan 26 12:57:26 crc kubenswrapper[4881]: I0126 12:57:26.005418 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a01decda-1236-44fa-a384-7fdedc2cc279-config\") pod \"dnsmasq-dns-7d56d856cf-t8gtc\" (UID: \"a01decda-1236-44fa-a384-7fdedc2cc279\") " pod="openstack/dnsmasq-dns-7d56d856cf-t8gtc" Jan 26 12:57:26 crc kubenswrapper[4881]: I0126 12:57:26.006131 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/a01decda-1236-44fa-a384-7fdedc2cc279-dns-svc\") pod \"dnsmasq-dns-7d56d856cf-t8gtc\" (UID: \"a01decda-1236-44fa-a384-7fdedc2cc279\") " pod="openstack/dnsmasq-dns-7d56d856cf-t8gtc" Jan 26 12:57:26 crc kubenswrapper[4881]: I0126 12:57:26.010498 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a01decda-1236-44fa-a384-7fdedc2cc279-config\") pod \"dnsmasq-dns-7d56d856cf-t8gtc\" (UID: \"a01decda-1236-44fa-a384-7fdedc2cc279\") " pod="openstack/dnsmasq-dns-7d56d856cf-t8gtc" Jan 26 12:57:26 crc kubenswrapper[4881]: I0126 12:57:26.024131 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-64594fd94f-vmx2w"] Jan 26 12:57:26 crc kubenswrapper[4881]: I0126 12:57:26.027529 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47zmx\" (UniqueName: \"kubernetes.io/projected/a01decda-1236-44fa-a384-7fdedc2cc279-kube-api-access-47zmx\") pod \"dnsmasq-dns-7d56d856cf-t8gtc\" (UID: \"a01decda-1236-44fa-a384-7fdedc2cc279\") " pod="openstack/dnsmasq-dns-7d56d856cf-t8gtc" Jan 26 12:57:26 crc kubenswrapper[4881]: I0126 12:57:26.046577 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57467f675c-tz7zs"] Jan 26 12:57:26 crc kubenswrapper[4881]: I0126 12:57:26.047659 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57467f675c-tz7zs" Jan 26 12:57:26 crc kubenswrapper[4881]: I0126 12:57:26.061878 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d56d856cf-t8gtc" Jan 26 12:57:26 crc kubenswrapper[4881]: I0126 12:57:26.064543 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57467f675c-tz7zs"] Jan 26 12:57:26 crc kubenswrapper[4881]: I0126 12:57:26.209039 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd528ad5-ee88-4a39-b948-6364fa84fbe9-config\") pod \"dnsmasq-dns-57467f675c-tz7zs\" (UID: \"fd528ad5-ee88-4a39-b948-6364fa84fbe9\") " pod="openstack/dnsmasq-dns-57467f675c-tz7zs" Jan 26 12:57:26 crc kubenswrapper[4881]: I0126 12:57:26.209181 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fd528ad5-ee88-4a39-b948-6364fa84fbe9-dns-svc\") pod \"dnsmasq-dns-57467f675c-tz7zs\" (UID: \"fd528ad5-ee88-4a39-b948-6364fa84fbe9\") " pod="openstack/dnsmasq-dns-57467f675c-tz7zs" Jan 26 12:57:26 crc kubenswrapper[4881]: I0126 12:57:26.209252 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gms7m\" (UniqueName: \"kubernetes.io/projected/fd528ad5-ee88-4a39-b948-6364fa84fbe9-kube-api-access-gms7m\") pod \"dnsmasq-dns-57467f675c-tz7zs\" (UID: \"fd528ad5-ee88-4a39-b948-6364fa84fbe9\") " pod="openstack/dnsmasq-dns-57467f675c-tz7zs" Jan 26 12:57:26 crc kubenswrapper[4881]: I0126 12:57:26.310987 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gms7m\" (UniqueName: \"kubernetes.io/projected/fd528ad5-ee88-4a39-b948-6364fa84fbe9-kube-api-access-gms7m\") pod \"dnsmasq-dns-57467f675c-tz7zs\" (UID: \"fd528ad5-ee88-4a39-b948-6364fa84fbe9\") " pod="openstack/dnsmasq-dns-57467f675c-tz7zs" Jan 26 12:57:26 crc kubenswrapper[4881]: I0126 12:57:26.311063 4881 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd528ad5-ee88-4a39-b948-6364fa84fbe9-config\") pod \"dnsmasq-dns-57467f675c-tz7zs\" (UID: \"fd528ad5-ee88-4a39-b948-6364fa84fbe9\") " pod="openstack/dnsmasq-dns-57467f675c-tz7zs" Jan 26 12:57:26 crc kubenswrapper[4881]: I0126 12:57:26.311184 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fd528ad5-ee88-4a39-b948-6364fa84fbe9-dns-svc\") pod \"dnsmasq-dns-57467f675c-tz7zs\" (UID: \"fd528ad5-ee88-4a39-b948-6364fa84fbe9\") " pod="openstack/dnsmasq-dns-57467f675c-tz7zs" Jan 26 12:57:26 crc kubenswrapper[4881]: I0126 12:57:26.312101 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fd528ad5-ee88-4a39-b948-6364fa84fbe9-dns-svc\") pod \"dnsmasq-dns-57467f675c-tz7zs\" (UID: \"fd528ad5-ee88-4a39-b948-6364fa84fbe9\") " pod="openstack/dnsmasq-dns-57467f675c-tz7zs" Jan 26 12:57:26 crc kubenswrapper[4881]: I0126 12:57:26.312180 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd528ad5-ee88-4a39-b948-6364fa84fbe9-config\") pod \"dnsmasq-dns-57467f675c-tz7zs\" (UID: \"fd528ad5-ee88-4a39-b948-6364fa84fbe9\") " pod="openstack/dnsmasq-dns-57467f675c-tz7zs" Jan 26 12:57:26 crc kubenswrapper[4881]: I0126 12:57:26.333393 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gms7m\" (UniqueName: \"kubernetes.io/projected/fd528ad5-ee88-4a39-b948-6364fa84fbe9-kube-api-access-gms7m\") pod \"dnsmasq-dns-57467f675c-tz7zs\" (UID: \"fd528ad5-ee88-4a39-b948-6364fa84fbe9\") " pod="openstack/dnsmasq-dns-57467f675c-tz7zs" Jan 26 12:57:26 crc kubenswrapper[4881]: I0126 12:57:26.373811 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57467f675c-tz7zs" Jan 26 12:57:26 crc kubenswrapper[4881]: I0126 12:57:26.567177 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 26 12:57:26 crc kubenswrapper[4881]: I0126 12:57:26.573416 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 26 12:57:26 crc kubenswrapper[4881]: I0126 12:57:26.574993 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 26 12:57:26 crc kubenswrapper[4881]: I0126 12:57:26.576236 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 26 12:57:26 crc kubenswrapper[4881]: I0126 12:57:26.576265 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 26 12:57:26 crc kubenswrapper[4881]: I0126 12:57:26.576417 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 26 12:57:26 crc kubenswrapper[4881]: I0126 12:57:26.576441 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 26 12:57:26 crc kubenswrapper[4881]: I0126 12:57:26.576648 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 26 12:57:26 crc kubenswrapper[4881]: I0126 12:57:26.576709 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 26 12:57:26 crc kubenswrapper[4881]: I0126 12:57:26.604864 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-q7rk9" Jan 26 12:57:26 crc kubenswrapper[4881]: I0126 12:57:26.715400 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jglfb\" (UniqueName: \"kubernetes.io/projected/a455dd78-e351-449c-903a-5c0e0c50faf5-kube-api-access-jglfb\") pod \"rabbitmq-server-0\" (UID: \"a455dd78-e351-449c-903a-5c0e0c50faf5\") " pod="openstack/rabbitmq-server-0" Jan 26 12:57:26 crc kubenswrapper[4881]: I0126 12:57:26.715448 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a455dd78-e351-449c-903a-5c0e0c50faf5-pod-info\") pod \"rabbitmq-server-0\" (UID: \"a455dd78-e351-449c-903a-5c0e0c50faf5\") " pod="openstack/rabbitmq-server-0" Jan 26 12:57:26 crc kubenswrapper[4881]: I0126 12:57:26.715476 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a455dd78-e351-449c-903a-5c0e0c50faf5-server-conf\") pod \"rabbitmq-server-0\" (UID: \"a455dd78-e351-449c-903a-5c0e0c50faf5\") " pod="openstack/rabbitmq-server-0" Jan 26 12:57:26 crc kubenswrapper[4881]: I0126 12:57:26.715560 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a455dd78-e351-449c-903a-5c0e0c50faf5-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"a455dd78-e351-449c-903a-5c0e0c50faf5\") " pod="openstack/rabbitmq-server-0" Jan 26 12:57:26 crc kubenswrapper[4881]: I0126 12:57:26.715603 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"a455dd78-e351-449c-903a-5c0e0c50faf5\") " pod="openstack/rabbitmq-server-0" Jan 26 12:57:26 crc kubenswrapper[4881]: I0126 12:57:26.715634 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/a455dd78-e351-449c-903a-5c0e0c50faf5-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"a455dd78-e351-449c-903a-5c0e0c50faf5\") " pod="openstack/rabbitmq-server-0" Jan 26 12:57:26 crc kubenswrapper[4881]: I0126 12:57:26.715666 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a455dd78-e351-449c-903a-5c0e0c50faf5-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"a455dd78-e351-449c-903a-5c0e0c50faf5\") " pod="openstack/rabbitmq-server-0" Jan 26 12:57:26 crc kubenswrapper[4881]: I0126 12:57:26.715689 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a455dd78-e351-449c-903a-5c0e0c50faf5-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"a455dd78-e351-449c-903a-5c0e0c50faf5\") " pod="openstack/rabbitmq-server-0" Jan 26 12:57:26 crc kubenswrapper[4881]: I0126 12:57:26.715711 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a455dd78-e351-449c-903a-5c0e0c50faf5-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"a455dd78-e351-449c-903a-5c0e0c50faf5\") " pod="openstack/rabbitmq-server-0" Jan 26 12:57:26 crc kubenswrapper[4881]: I0126 12:57:26.715732 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a455dd78-e351-449c-903a-5c0e0c50faf5-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"a455dd78-e351-449c-903a-5c0e0c50faf5\") " pod="openstack/rabbitmq-server-0" Jan 26 12:57:26 crc kubenswrapper[4881]: I0126 12:57:26.715770 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a455dd78-e351-449c-903a-5c0e0c50faf5-config-data\") pod \"rabbitmq-server-0\" (UID: \"a455dd78-e351-449c-903a-5c0e0c50faf5\") " pod="openstack/rabbitmq-server-0" Jan 26 12:57:26 crc kubenswrapper[4881]: I0126 12:57:26.817487 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a455dd78-e351-449c-903a-5c0e0c50faf5-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"a455dd78-e351-449c-903a-5c0e0c50faf5\") " pod="openstack/rabbitmq-server-0" Jan 26 12:57:26 crc kubenswrapper[4881]: I0126 12:57:26.817546 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a455dd78-e351-449c-903a-5c0e0c50faf5-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"a455dd78-e351-449c-903a-5c0e0c50faf5\") " pod="openstack/rabbitmq-server-0" Jan 26 12:57:26 crc kubenswrapper[4881]: I0126 12:57:26.817585 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a455dd78-e351-449c-903a-5c0e0c50faf5-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"a455dd78-e351-449c-903a-5c0e0c50faf5\") " pod="openstack/rabbitmq-server-0" Jan 26 12:57:26 crc kubenswrapper[4881]: I0126 12:57:26.817611 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a455dd78-e351-449c-903a-5c0e0c50faf5-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: 
\"a455dd78-e351-449c-903a-5c0e0c50faf5\") " pod="openstack/rabbitmq-server-0" Jan 26 12:57:26 crc kubenswrapper[4881]: I0126 12:57:26.817653 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a455dd78-e351-449c-903a-5c0e0c50faf5-config-data\") pod \"rabbitmq-server-0\" (UID: \"a455dd78-e351-449c-903a-5c0e0c50faf5\") " pod="openstack/rabbitmq-server-0" Jan 26 12:57:26 crc kubenswrapper[4881]: I0126 12:57:26.817675 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jglfb\" (UniqueName: \"kubernetes.io/projected/a455dd78-e351-449c-903a-5c0e0c50faf5-kube-api-access-jglfb\") pod \"rabbitmq-server-0\" (UID: \"a455dd78-e351-449c-903a-5c0e0c50faf5\") " pod="openstack/rabbitmq-server-0" Jan 26 12:57:26 crc kubenswrapper[4881]: I0126 12:57:26.817692 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a455dd78-e351-449c-903a-5c0e0c50faf5-pod-info\") pod \"rabbitmq-server-0\" (UID: \"a455dd78-e351-449c-903a-5c0e0c50faf5\") " pod="openstack/rabbitmq-server-0" Jan 26 12:57:26 crc kubenswrapper[4881]: I0126 12:57:26.817711 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a455dd78-e351-449c-903a-5c0e0c50faf5-server-conf\") pod \"rabbitmq-server-0\" (UID: \"a455dd78-e351-449c-903a-5c0e0c50faf5\") " pod="openstack/rabbitmq-server-0" Jan 26 12:57:26 crc kubenswrapper[4881]: I0126 12:57:26.817729 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a455dd78-e351-449c-903a-5c0e0c50faf5-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"a455dd78-e351-449c-903a-5c0e0c50faf5\") " pod="openstack/rabbitmq-server-0" Jan 26 12:57:26 crc kubenswrapper[4881]: I0126 12:57:26.817751 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"a455dd78-e351-449c-903a-5c0e0c50faf5\") " pod="openstack/rabbitmq-server-0" Jan 26 12:57:26 crc kubenswrapper[4881]: I0126 12:57:26.817774 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a455dd78-e351-449c-903a-5c0e0c50faf5-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"a455dd78-e351-449c-903a-5c0e0c50faf5\") " pod="openstack/rabbitmq-server-0" Jan 26 12:57:26 crc kubenswrapper[4881]: I0126 12:57:26.818251 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a455dd78-e351-449c-903a-5c0e0c50faf5-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"a455dd78-e351-449c-903a-5c0e0c50faf5\") " pod="openstack/rabbitmq-server-0" Jan 26 12:57:26 crc kubenswrapper[4881]: I0126 12:57:26.818679 4881 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"a455dd78-e351-449c-903a-5c0e0c50faf5\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/rabbitmq-server-0" Jan 26 12:57:26 crc kubenswrapper[4881]: I0126 12:57:26.819741 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/a455dd78-e351-449c-903a-5c0e0c50faf5-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"a455dd78-e351-449c-903a-5c0e0c50faf5\") " pod="openstack/rabbitmq-server-0" Jan 26 12:57:26 crc kubenswrapper[4881]: I0126 12:57:26.820223 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a455dd78-e351-449c-903a-5c0e0c50faf5-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"a455dd78-e351-449c-903a-5c0e0c50faf5\") " pod="openstack/rabbitmq-server-0" Jan 26 12:57:26 crc kubenswrapper[4881]: I0126 12:57:26.821997 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a455dd78-e351-449c-903a-5c0e0c50faf5-config-data\") pod \"rabbitmq-server-0\" (UID: \"a455dd78-e351-449c-903a-5c0e0c50faf5\") " pod="openstack/rabbitmq-server-0" Jan 26 12:57:26 crc kubenswrapper[4881]: I0126 12:57:26.823207 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a455dd78-e351-449c-903a-5c0e0c50faf5-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"a455dd78-e351-449c-903a-5c0e0c50faf5\") " pod="openstack/rabbitmq-server-0" Jan 26 12:57:26 crc kubenswrapper[4881]: I0126 12:57:26.824633 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a455dd78-e351-449c-903a-5c0e0c50faf5-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"a455dd78-e351-449c-903a-5c0e0c50faf5\") " pod="openstack/rabbitmq-server-0" Jan 26 12:57:26 crc kubenswrapper[4881]: I0126 12:57:26.824905 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a455dd78-e351-449c-903a-5c0e0c50faf5-server-conf\") pod \"rabbitmq-server-0\" (UID: \"a455dd78-e351-449c-903a-5c0e0c50faf5\") " pod="openstack/rabbitmq-server-0" Jan 26 12:57:26 crc kubenswrapper[4881]: I0126 12:57:26.826007 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a455dd78-e351-449c-903a-5c0e0c50faf5-pod-info\") pod \"rabbitmq-server-0\" (UID: \"a455dd78-e351-449c-903a-5c0e0c50faf5\") " pod="openstack/rabbitmq-server-0" Jan 26 12:57:26 crc kubenswrapper[4881]: I0126 12:57:26.831957 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a455dd78-e351-449c-903a-5c0e0c50faf5-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"a455dd78-e351-449c-903a-5c0e0c50faf5\") " pod="openstack/rabbitmq-server-0" Jan 26 12:57:26 crc kubenswrapper[4881]: I0126 12:57:26.848147 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jglfb\" (UniqueName: \"kubernetes.io/projected/a455dd78-e351-449c-903a-5c0e0c50faf5-kube-api-access-jglfb\") pod \"rabbitmq-server-0\" (UID: \"a455dd78-e351-449c-903a-5c0e0c50faf5\") " pod="openstack/rabbitmq-server-0" Jan 26 12:57:26 crc kubenswrapper[4881]: I0126 12:57:26.867953 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 26 12:57:26 crc kubenswrapper[4881]: I0126 12:57:26.870054 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 26 12:57:26 crc kubenswrapper[4881]: I0126 12:57:26.874813 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 26 12:57:26 crc kubenswrapper[4881]: I0126 12:57:26.875648 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"a455dd78-e351-449c-903a-5c0e0c50faf5\") " pod="openstack/rabbitmq-server-0" Jan 26 12:57:26 crc kubenswrapper[4881]: I0126 12:57:26.875798 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-r8cwx" Jan 26 12:57:26 crc kubenswrapper[4881]: I0126 12:57:26.875993 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 26 12:57:26 crc kubenswrapper[4881]: I0126 12:57:26.876045 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 26 12:57:26 crc kubenswrapper[4881]: I0126 12:57:26.876154 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 26 12:57:26 crc kubenswrapper[4881]: I0126 12:57:26.876245 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 26 12:57:26 crc kubenswrapper[4881]: I0126 12:57:26.876244 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 26 12:57:26 crc kubenswrapper[4881]: I0126 12:57:26.876338 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 26 12:57:26 crc kubenswrapper[4881]: I0126 12:57:26.933209 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 26 12:57:27 crc kubenswrapper[4881]: I0126 12:57:27.020781 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/fb687a6e-7e1f-4697-8ab1-88ad03dd2951-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"fb687a6e-7e1f-4697-8ab1-88ad03dd2951\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 12:57:27 crc kubenswrapper[4881]: I0126 12:57:27.020840 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/fb687a6e-7e1f-4697-8ab1-88ad03dd2951-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"fb687a6e-7e1f-4697-8ab1-88ad03dd2951\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 12:57:27 crc kubenswrapper[4881]: I0126 12:57:27.020879 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"fb687a6e-7e1f-4697-8ab1-88ad03dd2951\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 12:57:27 crc kubenswrapper[4881]: I0126 12:57:27.020899 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/fb687a6e-7e1f-4697-8ab1-88ad03dd2951-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"fb687a6e-7e1f-4697-8ab1-88ad03dd2951\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 12:57:27 crc kubenswrapper[4881]: I0126 12:57:27.021060 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/fb687a6e-7e1f-4697-8ab1-88ad03dd2951-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"fb687a6e-7e1f-4697-8ab1-88ad03dd2951\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 12:57:27 crc kubenswrapper[4881]: I0126 12:57:27.021183 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/fb687a6e-7e1f-4697-8ab1-88ad03dd2951-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"fb687a6e-7e1f-4697-8ab1-88ad03dd2951\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 12:57:27 crc kubenswrapper[4881]: I0126 12:57:27.021227 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/fb687a6e-7e1f-4697-8ab1-88ad03dd2951-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"fb687a6e-7e1f-4697-8ab1-88ad03dd2951\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 12:57:27 crc kubenswrapper[4881]: I0126 12:57:27.021390 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/fb687a6e-7e1f-4697-8ab1-88ad03dd2951-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"fb687a6e-7e1f-4697-8ab1-88ad03dd2951\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 12:57:27 crc kubenswrapper[4881]: I0126 12:57:27.021457 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b22mj\" (UniqueName: \"kubernetes.io/projected/fb687a6e-7e1f-4697-8ab1-88ad03dd2951-kube-api-access-b22mj\") pod \"rabbitmq-cell1-server-0\" 
(UID: \"fb687a6e-7e1f-4697-8ab1-88ad03dd2951\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 12:57:27 crc kubenswrapper[4881]: I0126 12:57:27.021605 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/fb687a6e-7e1f-4697-8ab1-88ad03dd2951-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"fb687a6e-7e1f-4697-8ab1-88ad03dd2951\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 12:57:27 crc kubenswrapper[4881]: I0126 12:57:27.021657 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fb687a6e-7e1f-4697-8ab1-88ad03dd2951-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"fb687a6e-7e1f-4697-8ab1-88ad03dd2951\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 12:57:27 crc kubenswrapper[4881]: I0126 12:57:27.123293 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/fb687a6e-7e1f-4697-8ab1-88ad03dd2951-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"fb687a6e-7e1f-4697-8ab1-88ad03dd2951\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 12:57:27 crc kubenswrapper[4881]: I0126 12:57:27.123350 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fb687a6e-7e1f-4697-8ab1-88ad03dd2951-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"fb687a6e-7e1f-4697-8ab1-88ad03dd2951\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 12:57:27 crc kubenswrapper[4881]: I0126 12:57:27.123389 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/fb687a6e-7e1f-4697-8ab1-88ad03dd2951-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"fb687a6e-7e1f-4697-8ab1-88ad03dd2951\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 12:57:27 crc kubenswrapper[4881]: I0126 12:57:27.123409 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/fb687a6e-7e1f-4697-8ab1-88ad03dd2951-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"fb687a6e-7e1f-4697-8ab1-88ad03dd2951\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 12:57:27 crc kubenswrapper[4881]: I0126 12:57:27.123451 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"fb687a6e-7e1f-4697-8ab1-88ad03dd2951\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 12:57:27 crc kubenswrapper[4881]: I0126 12:57:27.123487 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/fb687a6e-7e1f-4697-8ab1-88ad03dd2951-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"fb687a6e-7e1f-4697-8ab1-88ad03dd2951\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 12:57:27 crc kubenswrapper[4881]: I0126 12:57:27.123526 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/fb687a6e-7e1f-4697-8ab1-88ad03dd2951-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"fb687a6e-7e1f-4697-8ab1-88ad03dd2951\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 12:57:27 crc kubenswrapper[4881]: I0126 
12:57:27.123552 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/fb687a6e-7e1f-4697-8ab1-88ad03dd2951-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"fb687a6e-7e1f-4697-8ab1-88ad03dd2951\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 12:57:27 crc kubenswrapper[4881]: I0126 12:57:27.123571 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/fb687a6e-7e1f-4697-8ab1-88ad03dd2951-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"fb687a6e-7e1f-4697-8ab1-88ad03dd2951\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 12:57:27 crc kubenswrapper[4881]: I0126 12:57:27.123599 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/fb687a6e-7e1f-4697-8ab1-88ad03dd2951-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"fb687a6e-7e1f-4697-8ab1-88ad03dd2951\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 12:57:27 crc kubenswrapper[4881]: I0126 12:57:27.123619 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b22mj\" (UniqueName: \"kubernetes.io/projected/fb687a6e-7e1f-4697-8ab1-88ad03dd2951-kube-api-access-b22mj\") pod \"rabbitmq-cell1-server-0\" (UID: \"fb687a6e-7e1f-4697-8ab1-88ad03dd2951\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 12:57:27 crc kubenswrapper[4881]: I0126 12:57:27.124050 4881 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"fb687a6e-7e1f-4697-8ab1-88ad03dd2951\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/rabbitmq-cell1-server-0" Jan 26 12:57:27 crc kubenswrapper[4881]: I0126 12:57:27.124667 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fb687a6e-7e1f-4697-8ab1-88ad03dd2951-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"fb687a6e-7e1f-4697-8ab1-88ad03dd2951\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 12:57:27 crc kubenswrapper[4881]: I0126 12:57:27.125073 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/fb687a6e-7e1f-4697-8ab1-88ad03dd2951-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"fb687a6e-7e1f-4697-8ab1-88ad03dd2951\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 12:57:27 crc kubenswrapper[4881]: I0126 12:57:27.126249 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/fb687a6e-7e1f-4697-8ab1-88ad03dd2951-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"fb687a6e-7e1f-4697-8ab1-88ad03dd2951\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 12:57:27 crc kubenswrapper[4881]: I0126 12:57:27.126478 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/fb687a6e-7e1f-4697-8ab1-88ad03dd2951-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"fb687a6e-7e1f-4697-8ab1-88ad03dd2951\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 12:57:27 crc kubenswrapper[4881]: I0126 12:57:27.127338 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/fb687a6e-7e1f-4697-8ab1-88ad03dd2951-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"fb687a6e-7e1f-4697-8ab1-88ad03dd2951\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 12:57:27 crc kubenswrapper[4881]: I0126 12:57:27.128039 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/fb687a6e-7e1f-4697-8ab1-88ad03dd2951-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"fb687a6e-7e1f-4697-8ab1-88ad03dd2951\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 12:57:27 crc kubenswrapper[4881]: I0126 12:57:27.128143 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/fb687a6e-7e1f-4697-8ab1-88ad03dd2951-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"fb687a6e-7e1f-4697-8ab1-88ad03dd2951\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 12:57:27 crc kubenswrapper[4881]: I0126 12:57:27.129385 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/fb687a6e-7e1f-4697-8ab1-88ad03dd2951-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"fb687a6e-7e1f-4697-8ab1-88ad03dd2951\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 12:57:27 crc kubenswrapper[4881]: I0126 12:57:27.145563 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"fb687a6e-7e1f-4697-8ab1-88ad03dd2951\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 12:57:27 crc kubenswrapper[4881]: I0126 12:57:27.145601 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b22mj\" (UniqueName: \"kubernetes.io/projected/fb687a6e-7e1f-4697-8ab1-88ad03dd2951-kube-api-access-b22mj\") pod \"rabbitmq-cell1-server-0\" (UID: \"fb687a6e-7e1f-4697-8ab1-88ad03dd2951\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 12:57:27 crc kubenswrapper[4881]: I0126 12:57:27.146388 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/fb687a6e-7e1f-4697-8ab1-88ad03dd2951-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"fb687a6e-7e1f-4697-8ab1-88ad03dd2951\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 12:57:27 crc kubenswrapper[4881]: I0126 12:57:27.162164 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-notifications-server-0"] Jan 26 12:57:27 crc kubenswrapper[4881]: I0126 12:57:27.163471 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-notifications-server-0" Jan 26 12:57:27 crc kubenswrapper[4881]: I0126 12:57:27.166758 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-notifications-config-data" Jan 26 12:57:27 crc kubenswrapper[4881]: I0126 12:57:27.166809 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-notifications-plugins-conf" Jan 26 12:57:27 crc kubenswrapper[4881]: I0126 12:57:27.166940 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-notifications-default-user" Jan 26 12:57:27 crc kubenswrapper[4881]: I0126 12:57:27.166950 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-notifications-server-conf" Jan 26 12:57:27 crc kubenswrapper[4881]: I0126 12:57:27.166837 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-notifications-server-dockercfg-p4n5b" Jan 26 12:57:27 crc kubenswrapper[4881]: I0126 12:57:27.167124 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-notifications-erlang-cookie" Jan 26 12:57:27 crc kubenswrapper[4881]: I0126 12:57:27.175663 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-notifications-svc" Jan 26 12:57:27 crc kubenswrapper[4881]: I0126 12:57:27.180097 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-notifications-server-0"] Jan 26 12:57:27 crc kubenswrapper[4881]: I0126 12:57:27.227123 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 26 12:57:27 crc kubenswrapper[4881]: I0126 12:57:27.326145 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ab9a358b-8713-4790-a9c4-97b89efcc88f-pod-info\") pod \"rabbitmq-notifications-server-0\" (UID: \"ab9a358b-8713-4790-a9c4-97b89efcc88f\") " pod="openstack/rabbitmq-notifications-server-0" Jan 26 12:57:27 crc kubenswrapper[4881]: I0126 12:57:27.326210 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64nml\" (UniqueName: \"kubernetes.io/projected/ab9a358b-8713-4790-a9c4-97b89efcc88f-kube-api-access-64nml\") pod \"rabbitmq-notifications-server-0\" (UID: \"ab9a358b-8713-4790-a9c4-97b89efcc88f\") " pod="openstack/rabbitmq-notifications-server-0" Jan 26 12:57:27 crc kubenswrapper[4881]: I0126 12:57:27.326241 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ab9a358b-8713-4790-a9c4-97b89efcc88f-plugins-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"ab9a358b-8713-4790-a9c4-97b89efcc88f\") " pod="openstack/rabbitmq-notifications-server-0" Jan 26 12:57:27 crc kubenswrapper[4881]: I0126 12:57:27.326270 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-notifications-server-0\" (UID: \"ab9a358b-8713-4790-a9c4-97b89efcc88f\") " pod="openstack/rabbitmq-notifications-server-0" Jan 26 12:57:27 crc kubenswrapper[4881]: I0126 12:57:27.326297 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/ab9a358b-8713-4790-a9c4-97b89efcc88f-rabbitmq-tls\") pod \"rabbitmq-notifications-server-0\" (UID: \"ab9a358b-8713-4790-a9c4-97b89efcc88f\") " pod="openstack/rabbitmq-notifications-server-0" Jan 26 12:57:27 crc kubenswrapper[4881]: I0126 12:57:27.326332 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ab9a358b-8713-4790-a9c4-97b89efcc88f-rabbitmq-plugins\") pod \"rabbitmq-notifications-server-0\" (UID: \"ab9a358b-8713-4790-a9c4-97b89efcc88f\") " pod="openstack/rabbitmq-notifications-server-0" Jan 26 12:57:27 crc kubenswrapper[4881]: I0126 12:57:27.326369 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ab9a358b-8713-4790-a9c4-97b89efcc88f-config-data\") pod \"rabbitmq-notifications-server-0\" (UID: \"ab9a358b-8713-4790-a9c4-97b89efcc88f\") " pod="openstack/rabbitmq-notifications-server-0" Jan 26 12:57:27 crc kubenswrapper[4881]: I0126 12:57:27.326402 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ab9a358b-8713-4790-a9c4-97b89efcc88f-rabbitmq-confd\") pod \"rabbitmq-notifications-server-0\" (UID: \"ab9a358b-8713-4790-a9c4-97b89efcc88f\") " pod="openstack/rabbitmq-notifications-server-0" Jan 26 12:57:27 crc kubenswrapper[4881]: I0126 12:57:27.326447 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ab9a358b-8713-4790-a9c4-97b89efcc88f-rabbitmq-erlang-cookie\") pod \"rabbitmq-notifications-server-0\" (UID: \"ab9a358b-8713-4790-a9c4-97b89efcc88f\") " pod="openstack/rabbitmq-notifications-server-0" Jan 26 12:57:27 crc kubenswrapper[4881]: I0126 12:57:27.326474 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ab9a358b-8713-4790-a9c4-97b89efcc88f-server-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"ab9a358b-8713-4790-a9c4-97b89efcc88f\") " pod="openstack/rabbitmq-notifications-server-0" Jan 26 12:57:27 crc kubenswrapper[4881]: I0126 12:57:27.326503 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ab9a358b-8713-4790-a9c4-97b89efcc88f-erlang-cookie-secret\") pod \"rabbitmq-notifications-server-0\" (UID: \"ab9a358b-8713-4790-a9c4-97b89efcc88f\") " pod="openstack/rabbitmq-notifications-server-0" Jan 26 12:57:27 crc kubenswrapper[4881]: I0126 12:57:27.428487 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ab9a358b-8713-4790-a9c4-97b89efcc88f-config-data\") pod \"rabbitmq-notifications-server-0\" (UID: \"ab9a358b-8713-4790-a9c4-97b89efcc88f\") " pod="openstack/rabbitmq-notifications-server-0" Jan 26 12:57:27 crc kubenswrapper[4881]: I0126 12:57:27.429312 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ab9a358b-8713-4790-a9c4-97b89efcc88f-rabbitmq-confd\") pod \"rabbitmq-notifications-server-0\" (UID: \"ab9a358b-8713-4790-a9c4-97b89efcc88f\") " pod="openstack/rabbitmq-notifications-server-0" Jan 26 12:57:27 crc kubenswrapper[4881]: I0126 
12:57:27.429928 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ab9a358b-8713-4790-a9c4-97b89efcc88f-rabbitmq-erlang-cookie\") pod \"rabbitmq-notifications-server-0\" (UID: \"ab9a358b-8713-4790-a9c4-97b89efcc88f\") " pod="openstack/rabbitmq-notifications-server-0" Jan 26 12:57:27 crc kubenswrapper[4881]: I0126 12:57:27.429958 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ab9a358b-8713-4790-a9c4-97b89efcc88f-server-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"ab9a358b-8713-4790-a9c4-97b89efcc88f\") " pod="openstack/rabbitmq-notifications-server-0" Jan 26 12:57:27 crc kubenswrapper[4881]: I0126 12:57:27.429984 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ab9a358b-8713-4790-a9c4-97b89efcc88f-erlang-cookie-secret\") pod \"rabbitmq-notifications-server-0\" (UID: \"ab9a358b-8713-4790-a9c4-97b89efcc88f\") " pod="openstack/rabbitmq-notifications-server-0" Jan 26 12:57:27 crc kubenswrapper[4881]: I0126 12:57:27.430004 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ab9a358b-8713-4790-a9c4-97b89efcc88f-pod-info\") pod \"rabbitmq-notifications-server-0\" (UID: \"ab9a358b-8713-4790-a9c4-97b89efcc88f\") " pod="openstack/rabbitmq-notifications-server-0" Jan 26 12:57:27 crc kubenswrapper[4881]: I0126 12:57:27.430027 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64nml\" (UniqueName: \"kubernetes.io/projected/ab9a358b-8713-4790-a9c4-97b89efcc88f-kube-api-access-64nml\") pod \"rabbitmq-notifications-server-0\" (UID: \"ab9a358b-8713-4790-a9c4-97b89efcc88f\") " pod="openstack/rabbitmq-notifications-server-0" Jan 26 12:57:27 crc kubenswrapper[4881]: I0126 12:57:27.430051 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ab9a358b-8713-4790-a9c4-97b89efcc88f-plugins-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"ab9a358b-8713-4790-a9c4-97b89efcc88f\") " pod="openstack/rabbitmq-notifications-server-0" Jan 26 12:57:27 crc kubenswrapper[4881]: I0126 12:57:27.430071 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-notifications-server-0\" (UID: \"ab9a358b-8713-4790-a9c4-97b89efcc88f\") " pod="openstack/rabbitmq-notifications-server-0" Jan 26 12:57:27 crc kubenswrapper[4881]: I0126 12:57:27.430091 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ab9a358b-8713-4790-a9c4-97b89efcc88f-rabbitmq-tls\") pod \"rabbitmq-notifications-server-0\" (UID: \"ab9a358b-8713-4790-a9c4-97b89efcc88f\") " pod="openstack/rabbitmq-notifications-server-0" Jan 26 12:57:27 crc kubenswrapper[4881]: I0126 12:57:27.430154 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ab9a358b-8713-4790-a9c4-97b89efcc88f-rabbitmq-plugins\") pod \"rabbitmq-notifications-server-0\" (UID: \"ab9a358b-8713-4790-a9c4-97b89efcc88f\") " pod="openstack/rabbitmq-notifications-server-0" Jan 26 12:57:27 crc kubenswrapper[4881]: I0126 
12:57:27.430417 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ab9a358b-8713-4790-a9c4-97b89efcc88f-rabbitmq-plugins\") pod \"rabbitmq-notifications-server-0\" (UID: \"ab9a358b-8713-4790-a9c4-97b89efcc88f\") " pod="openstack/rabbitmq-notifications-server-0" Jan 26 12:57:27 crc kubenswrapper[4881]: I0126 12:57:27.429254 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ab9a358b-8713-4790-a9c4-97b89efcc88f-config-data\") pod \"rabbitmq-notifications-server-0\" (UID: \"ab9a358b-8713-4790-a9c4-97b89efcc88f\") " pod="openstack/rabbitmq-notifications-server-0" Jan 26 12:57:27 crc kubenswrapper[4881]: I0126 12:57:27.430654 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ab9a358b-8713-4790-a9c4-97b89efcc88f-rabbitmq-erlang-cookie\") pod \"rabbitmq-notifications-server-0\" (UID: \"ab9a358b-8713-4790-a9c4-97b89efcc88f\") " pod="openstack/rabbitmq-notifications-server-0" Jan 26 12:57:27 crc kubenswrapper[4881]: I0126 12:57:27.431479 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ab9a358b-8713-4790-a9c4-97b89efcc88f-server-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"ab9a358b-8713-4790-a9c4-97b89efcc88f\") " pod="openstack/rabbitmq-notifications-server-0" Jan 26 12:57:27 crc kubenswrapper[4881]: I0126 12:57:27.433438 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ab9a358b-8713-4790-a9c4-97b89efcc88f-plugins-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"ab9a358b-8713-4790-a9c4-97b89efcc88f\") " pod="openstack/rabbitmq-notifications-server-0" Jan 26 12:57:27 crc kubenswrapper[4881]: I0126 12:57:27.434024 4881 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-notifications-server-0\" (UID: \"ab9a358b-8713-4790-a9c4-97b89efcc88f\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/rabbitmq-notifications-server-0" Jan 26 12:57:27 crc kubenswrapper[4881]: I0126 12:57:27.434101 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ab9a358b-8713-4790-a9c4-97b89efcc88f-rabbitmq-confd\") pod \"rabbitmq-notifications-server-0\" (UID: \"ab9a358b-8713-4790-a9c4-97b89efcc88f\") " pod="openstack/rabbitmq-notifications-server-0" Jan 26 12:57:27 crc kubenswrapper[4881]: I0126 12:57:27.438999 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ab9a358b-8713-4790-a9c4-97b89efcc88f-rabbitmq-tls\") pod \"rabbitmq-notifications-server-0\" (UID: \"ab9a358b-8713-4790-a9c4-97b89efcc88f\") " pod="openstack/rabbitmq-notifications-server-0" Jan 26 12:57:27 crc kubenswrapper[4881]: I0126 12:57:27.441747 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ab9a358b-8713-4790-a9c4-97b89efcc88f-pod-info\") pod \"rabbitmq-notifications-server-0\" (UID: \"ab9a358b-8713-4790-a9c4-97b89efcc88f\") " pod="openstack/rabbitmq-notifications-server-0" Jan 26 12:57:27 crc kubenswrapper[4881]: I0126 12:57:27.442111 4881 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ab9a358b-8713-4790-a9c4-97b89efcc88f-erlang-cookie-secret\") pod \"rabbitmq-notifications-server-0\" (UID: \"ab9a358b-8713-4790-a9c4-97b89efcc88f\") " pod="openstack/rabbitmq-notifications-server-0" Jan 26 12:57:27 crc kubenswrapper[4881]: I0126 12:57:27.454269 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64nml\" (UniqueName: \"kubernetes.io/projected/ab9a358b-8713-4790-a9c4-97b89efcc88f-kube-api-access-64nml\") pod \"rabbitmq-notifications-server-0\" (UID: \"ab9a358b-8713-4790-a9c4-97b89efcc88f\") " pod="openstack/rabbitmq-notifications-server-0" Jan 26 12:57:27 crc kubenswrapper[4881]: I0126 12:57:27.460393 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-notifications-server-0\" (UID: \"ab9a358b-8713-4790-a9c4-97b89efcc88f\") " pod="openstack/rabbitmq-notifications-server-0" Jan 26 12:57:27 crc kubenswrapper[4881]: I0126 12:57:27.520731 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-notifications-server-0" Jan 26 12:57:28 crc kubenswrapper[4881]: I0126 12:57:28.535207 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Jan 26 12:57:28 crc kubenswrapper[4881]: I0126 12:57:28.536570 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 26 12:57:28 crc kubenswrapper[4881]: I0126 12:57:28.539248 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-946w4" Jan 26 12:57:28 crc kubenswrapper[4881]: I0126 12:57:28.544470 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Jan 26 12:57:28 crc kubenswrapper[4881]: I0126 12:57:28.545236 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Jan 26 12:57:28 crc kubenswrapper[4881]: I0126 12:57:28.545708 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Jan 26 12:57:28 crc kubenswrapper[4881]: I0126 12:57:28.552156 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Jan 26 12:57:28 crc kubenswrapper[4881]: I0126 12:57:28.559433 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 26 12:57:28 crc kubenswrapper[4881]: I0126 12:57:28.647298 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/32ed51d8-b401-412f-925e-0cff27777e55-operator-scripts\") pod \"openstack-galera-0\" (UID: \"32ed51d8-b401-412f-925e-0cff27777e55\") " pod="openstack/openstack-galera-0" Jan 26 12:57:28 crc kubenswrapper[4881]: I0126 12:57:28.647605 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/32ed51d8-b401-412f-925e-0cff27777e55-config-data-generated\") pod \"openstack-galera-0\" (UID: \"32ed51d8-b401-412f-925e-0cff27777e55\") " pod="openstack/openstack-galera-0" Jan 26 12:57:28 crc kubenswrapper[4881]: I0126 12:57:28.647626 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-rcmvm\" (UniqueName: \"kubernetes.io/projected/32ed51d8-b401-412f-925e-0cff27777e55-kube-api-access-rcmvm\") pod \"openstack-galera-0\" (UID: \"32ed51d8-b401-412f-925e-0cff27777e55\") " pod="openstack/openstack-galera-0" Jan 26 12:57:28 crc kubenswrapper[4881]: I0126 12:57:28.647647 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/32ed51d8-b401-412f-925e-0cff27777e55-kolla-config\") pod \"openstack-galera-0\" (UID: \"32ed51d8-b401-412f-925e-0cff27777e55\") " pod="openstack/openstack-galera-0" Jan 26 12:57:28 crc kubenswrapper[4881]: I0126 12:57:28.647661 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/32ed51d8-b401-412f-925e-0cff27777e55-config-data-default\") pod \"openstack-galera-0\" (UID: \"32ed51d8-b401-412f-925e-0cff27777e55\") " pod="openstack/openstack-galera-0" Jan 26 12:57:28 crc kubenswrapper[4881]: I0126 12:57:28.647694 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/32ed51d8-b401-412f-925e-0cff27777e55-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"32ed51d8-b401-412f-925e-0cff27777e55\") " pod="openstack/openstack-galera-0" Jan 26 12:57:28 crc kubenswrapper[4881]: I0126 12:57:28.647729 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-galera-0\" (UID: \"32ed51d8-b401-412f-925e-0cff27777e55\") " pod="openstack/openstack-galera-0" Jan 26 12:57:28 crc kubenswrapper[4881]: I0126 12:57:28.647929 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32ed51d8-b401-412f-925e-0cff27777e55-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"32ed51d8-b401-412f-925e-0cff27777e55\") " pod="openstack/openstack-galera-0" Jan 26 12:57:28 crc kubenswrapper[4881]: I0126 12:57:28.749262 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/32ed51d8-b401-412f-925e-0cff27777e55-operator-scripts\") pod \"openstack-galera-0\" (UID: \"32ed51d8-b401-412f-925e-0cff27777e55\") " pod="openstack/openstack-galera-0" Jan 26 12:57:28 crc kubenswrapper[4881]: I0126 12:57:28.749336 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/32ed51d8-b401-412f-925e-0cff27777e55-config-data-generated\") pod \"openstack-galera-0\" (UID: \"32ed51d8-b401-412f-925e-0cff27777e55\") " pod="openstack/openstack-galera-0" Jan 26 12:57:28 crc kubenswrapper[4881]: I0126 12:57:28.749355 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rcmvm\" (UniqueName: \"kubernetes.io/projected/32ed51d8-b401-412f-925e-0cff27777e55-kube-api-access-rcmvm\") pod \"openstack-galera-0\" (UID: \"32ed51d8-b401-412f-925e-0cff27777e55\") " pod="openstack/openstack-galera-0" Jan 26 12:57:28 crc kubenswrapper[4881]: I0126 12:57:28.749375 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: 
\"kubernetes.io/configmap/32ed51d8-b401-412f-925e-0cff27777e55-kolla-config\") pod \"openstack-galera-0\" (UID: \"32ed51d8-b401-412f-925e-0cff27777e55\") " pod="openstack/openstack-galera-0" Jan 26 12:57:28 crc kubenswrapper[4881]: I0126 12:57:28.749391 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/32ed51d8-b401-412f-925e-0cff27777e55-config-data-default\") pod \"openstack-galera-0\" (UID: \"32ed51d8-b401-412f-925e-0cff27777e55\") " pod="openstack/openstack-galera-0" Jan 26 12:57:28 crc kubenswrapper[4881]: I0126 12:57:28.749433 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/32ed51d8-b401-412f-925e-0cff27777e55-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"32ed51d8-b401-412f-925e-0cff27777e55\") " pod="openstack/openstack-galera-0" Jan 26 12:57:28 crc kubenswrapper[4881]: I0126 12:57:28.749455 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-galera-0\" (UID: \"32ed51d8-b401-412f-925e-0cff27777e55\") " pod="openstack/openstack-galera-0" Jan 26 12:57:28 crc kubenswrapper[4881]: I0126 12:57:28.749492 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32ed51d8-b401-412f-925e-0cff27777e55-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"32ed51d8-b401-412f-925e-0cff27777e55\") " pod="openstack/openstack-galera-0" Jan 26 12:57:28 crc kubenswrapper[4881]: I0126 12:57:28.749815 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/32ed51d8-b401-412f-925e-0cff27777e55-config-data-generated\") pod \"openstack-galera-0\" (UID: \"32ed51d8-b401-412f-925e-0cff27777e55\") " pod="openstack/openstack-galera-0" Jan 26 12:57:28 crc kubenswrapper[4881]: I0126 12:57:28.750157 4881 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-galera-0\" (UID: \"32ed51d8-b401-412f-925e-0cff27777e55\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/openstack-galera-0" Jan 26 12:57:28 crc kubenswrapper[4881]: I0126 12:57:28.750572 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/32ed51d8-b401-412f-925e-0cff27777e55-config-data-default\") pod \"openstack-galera-0\" (UID: \"32ed51d8-b401-412f-925e-0cff27777e55\") " pod="openstack/openstack-galera-0" Jan 26 12:57:28 crc kubenswrapper[4881]: I0126 12:57:28.750740 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/32ed51d8-b401-412f-925e-0cff27777e55-kolla-config\") pod \"openstack-galera-0\" (UID: \"32ed51d8-b401-412f-925e-0cff27777e55\") " pod="openstack/openstack-galera-0" Jan 26 12:57:28 crc kubenswrapper[4881]: I0126 12:57:28.750876 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/32ed51d8-b401-412f-925e-0cff27777e55-operator-scripts\") pod \"openstack-galera-0\" (UID: \"32ed51d8-b401-412f-925e-0cff27777e55\") " pod="openstack/openstack-galera-0" Jan 26 12:57:28 crc kubenswrapper[4881]: 
I0126 12:57:28.755771 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/32ed51d8-b401-412f-925e-0cff27777e55-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"32ed51d8-b401-412f-925e-0cff27777e55\") " pod="openstack/openstack-galera-0" Jan 26 12:57:28 crc kubenswrapper[4881]: I0126 12:57:28.756417 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32ed51d8-b401-412f-925e-0cff27777e55-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"32ed51d8-b401-412f-925e-0cff27777e55\") " pod="openstack/openstack-galera-0" Jan 26 12:57:28 crc kubenswrapper[4881]: I0126 12:57:28.767167 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rcmvm\" (UniqueName: \"kubernetes.io/projected/32ed51d8-b401-412f-925e-0cff27777e55-kube-api-access-rcmvm\") pod \"openstack-galera-0\" (UID: \"32ed51d8-b401-412f-925e-0cff27777e55\") " pod="openstack/openstack-galera-0" Jan 26 12:57:28 crc kubenswrapper[4881]: I0126 12:57:28.782697 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-galera-0\" (UID: \"32ed51d8-b401-412f-925e-0cff27777e55\") " pod="openstack/openstack-galera-0" Jan 26 12:57:28 crc kubenswrapper[4881]: I0126 12:57:28.859025 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 26 12:57:29 crc kubenswrapper[4881]: I0126 12:57:29.929601 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 26 12:57:29 crc kubenswrapper[4881]: I0126 12:57:29.930929 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 26 12:57:29 crc kubenswrapper[4881]: I0126 12:57:29.934955 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-jj6d6" Jan 26 12:57:29 crc kubenswrapper[4881]: I0126 12:57:29.935938 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Jan 26 12:57:29 crc kubenswrapper[4881]: I0126 12:57:29.935994 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Jan 26 12:57:29 crc kubenswrapper[4881]: I0126 12:57:29.937170 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Jan 26 12:57:29 crc kubenswrapper[4881]: I0126 12:57:29.941680 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 26 12:57:30 crc kubenswrapper[4881]: I0126 12:57:30.077253 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-cell1-galera-0\" (UID: \"c8b6753b-d929-47d6-84ec-b72094efad83\") " pod="openstack/openstack-cell1-galera-0" Jan 26 12:57:30 crc kubenswrapper[4881]: I0126 12:57:30.077315 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/c8b6753b-d929-47d6-84ec-b72094efad83-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"c8b6753b-d929-47d6-84ec-b72094efad83\") " pod="openstack/openstack-cell1-galera-0" Jan 26 12:57:30 crc kubenswrapper[4881]: I0126 12:57:30.077339 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c8b6753b-d929-47d6-84ec-b72094efad83-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"c8b6753b-d929-47d6-84ec-b72094efad83\") " pod="openstack/openstack-cell1-galera-0" Jan 26 12:57:30 crc kubenswrapper[4881]: I0126 12:57:30.077356 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c8b6753b-d929-47d6-84ec-b72094efad83-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"c8b6753b-d929-47d6-84ec-b72094efad83\") " pod="openstack/openstack-cell1-galera-0" Jan 26 12:57:30 crc kubenswrapper[4881]: I0126 12:57:30.077374 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8b6753b-d929-47d6-84ec-b72094efad83-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"c8b6753b-d929-47d6-84ec-b72094efad83\") " pod="openstack/openstack-cell1-galera-0" Jan 26 12:57:30 crc kubenswrapper[4881]: I0126 12:57:30.077504 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/c8b6753b-d929-47d6-84ec-b72094efad83-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"c8b6753b-d929-47d6-84ec-b72094efad83\") " pod="openstack/openstack-cell1-galera-0" Jan 26 12:57:30 crc kubenswrapper[4881]: I0126 12:57:30.077634 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c8b6753b-d929-47d6-84ec-b72094efad83-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"c8b6753b-d929-47d6-84ec-b72094efad83\") " pod="openstack/openstack-cell1-galera-0" Jan 26 12:57:30 crc kubenswrapper[4881]: I0126 12:57:30.077716 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2tztd\" (UniqueName: \"kubernetes.io/projected/c8b6753b-d929-47d6-84ec-b72094efad83-kube-api-access-2tztd\") pod \"openstack-cell1-galera-0\" (UID: \"c8b6753b-d929-47d6-84ec-b72094efad83\") " pod="openstack/openstack-cell1-galera-0" Jan 26 12:57:30 crc kubenswrapper[4881]: I0126 12:57:30.179213 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8b6753b-d929-47d6-84ec-b72094efad83-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"c8b6753b-d929-47d6-84ec-b72094efad83\") " pod="openstack/openstack-cell1-galera-0" Jan 26 12:57:30 crc kubenswrapper[4881]: I0126 12:57:30.179273 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2tztd\" (UniqueName: \"kubernetes.io/projected/c8b6753b-d929-47d6-84ec-b72094efad83-kube-api-access-2tztd\") pod \"openstack-cell1-galera-0\" (UID: \"c8b6753b-d929-47d6-84ec-b72094efad83\") " pod="openstack/openstack-cell1-galera-0" Jan 26 12:57:30 crc kubenswrapper[4881]: I0126 12:57:30.179296 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-cell1-galera-0\" (UID: \"c8b6753b-d929-47d6-84ec-b72094efad83\") " pod="openstack/openstack-cell1-galera-0" Jan 26 12:57:30 crc kubenswrapper[4881]: I0126 12:57:30.179314 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/c8b6753b-d929-47d6-84ec-b72094efad83-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"c8b6753b-d929-47d6-84ec-b72094efad83\") " pod="openstack/openstack-cell1-galera-0" Jan 26 12:57:30 crc kubenswrapper[4881]: I0126 12:57:30.179331 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c8b6753b-d929-47d6-84ec-b72094efad83-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"c8b6753b-d929-47d6-84ec-b72094efad83\") " pod="openstack/openstack-cell1-galera-0" Jan 26 12:57:30 crc kubenswrapper[4881]: I0126 12:57:30.179353 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c8b6753b-d929-47d6-84ec-b72094efad83-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"c8b6753b-d929-47d6-84ec-b72094efad83\") " pod="openstack/openstack-cell1-galera-0" Jan 26 12:57:30 crc kubenswrapper[4881]: I0126 12:57:30.179369 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8b6753b-d929-47d6-84ec-b72094efad83-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"c8b6753b-d929-47d6-84ec-b72094efad83\") " pod="openstack/openstack-cell1-galera-0" Jan 26 12:57:30 crc kubenswrapper[4881]: I0126 12:57:30.179452 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: 
\"kubernetes.io/empty-dir/c8b6753b-d929-47d6-84ec-b72094efad83-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"c8b6753b-d929-47d6-84ec-b72094efad83\") " pod="openstack/openstack-cell1-galera-0" Jan 26 12:57:30 crc kubenswrapper[4881]: I0126 12:57:30.179850 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/c8b6753b-d929-47d6-84ec-b72094efad83-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"c8b6753b-d929-47d6-84ec-b72094efad83\") " pod="openstack/openstack-cell1-galera-0" Jan 26 12:57:30 crc kubenswrapper[4881]: I0126 12:57:30.180931 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/c8b6753b-d929-47d6-84ec-b72094efad83-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"c8b6753b-d929-47d6-84ec-b72094efad83\") " pod="openstack/openstack-cell1-galera-0" Jan 26 12:57:30 crc kubenswrapper[4881]: I0126 12:57:30.181376 4881 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-cell1-galera-0\" (UID: \"c8b6753b-d929-47d6-84ec-b72094efad83\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/openstack-cell1-galera-0" Jan 26 12:57:30 crc kubenswrapper[4881]: I0126 12:57:30.183134 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c8b6753b-d929-47d6-84ec-b72094efad83-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"c8b6753b-d929-47d6-84ec-b72094efad83\") " pod="openstack/openstack-cell1-galera-0" Jan 26 12:57:30 crc kubenswrapper[4881]: I0126 12:57:30.184217 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c8b6753b-d929-47d6-84ec-b72094efad83-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"c8b6753b-d929-47d6-84ec-b72094efad83\") " pod="openstack/openstack-cell1-galera-0" Jan 26 12:57:30 crc kubenswrapper[4881]: I0126 12:57:30.210270 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8b6753b-d929-47d6-84ec-b72094efad83-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"c8b6753b-d929-47d6-84ec-b72094efad83\") " pod="openstack/openstack-cell1-galera-0" Jan 26 12:57:30 crc kubenswrapper[4881]: I0126 12:57:30.211103 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8b6753b-d929-47d6-84ec-b72094efad83-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"c8b6753b-d929-47d6-84ec-b72094efad83\") " pod="openstack/openstack-cell1-galera-0" Jan 26 12:57:30 crc kubenswrapper[4881]: I0126 12:57:30.217485 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2tztd\" (UniqueName: \"kubernetes.io/projected/c8b6753b-d929-47d6-84ec-b72094efad83-kube-api-access-2tztd\") pod \"openstack-cell1-galera-0\" (UID: \"c8b6753b-d929-47d6-84ec-b72094efad83\") " pod="openstack/openstack-cell1-galera-0" Jan 26 12:57:30 crc kubenswrapper[4881]: I0126 12:57:30.236726 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-cell1-galera-0\" (UID: 
\"c8b6753b-d929-47d6-84ec-b72094efad83\") " pod="openstack/openstack-cell1-galera-0" Jan 26 12:57:30 crc kubenswrapper[4881]: I0126 12:57:30.262345 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 26 12:57:30 crc kubenswrapper[4881]: I0126 12:57:30.271729 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Jan 26 12:57:30 crc kubenswrapper[4881]: I0126 12:57:30.272724 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 26 12:57:30 crc kubenswrapper[4881]: I0126 12:57:30.275665 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Jan 26 12:57:30 crc kubenswrapper[4881]: I0126 12:57:30.275724 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Jan 26 12:57:30 crc kubenswrapper[4881]: I0126 12:57:30.278793 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-w769j" Jan 26 12:57:30 crc kubenswrapper[4881]: I0126 12:57:30.286390 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 26 12:57:30 crc kubenswrapper[4881]: I0126 12:57:30.382476 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6421b4d-2505-47c4-899d-7f7bd2113cf8-memcached-tls-certs\") pod \"memcached-0\" (UID: \"a6421b4d-2505-47c4-899d-7f7bd2113cf8\") " pod="openstack/memcached-0" Jan 26 12:57:30 crc kubenswrapper[4881]: I0126 12:57:30.382566 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6421b4d-2505-47c4-899d-7f7bd2113cf8-combined-ca-bundle\") pod \"memcached-0\" (UID: \"a6421b4d-2505-47c4-899d-7f7bd2113cf8\") " pod="openstack/memcached-0" Jan 26 12:57:30 crc kubenswrapper[4881]: I0126 12:57:30.382712 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a6421b4d-2505-47c4-899d-7f7bd2113cf8-config-data\") pod \"memcached-0\" (UID: \"a6421b4d-2505-47c4-899d-7f7bd2113cf8\") " pod="openstack/memcached-0" Jan 26 12:57:30 crc kubenswrapper[4881]: I0126 12:57:30.382832 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7tzq\" (UniqueName: \"kubernetes.io/projected/a6421b4d-2505-47c4-899d-7f7bd2113cf8-kube-api-access-h7tzq\") pod \"memcached-0\" (UID: \"a6421b4d-2505-47c4-899d-7f7bd2113cf8\") " pod="openstack/memcached-0" Jan 26 12:57:30 crc kubenswrapper[4881]: I0126 12:57:30.382926 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a6421b4d-2505-47c4-899d-7f7bd2113cf8-kolla-config\") pod \"memcached-0\" (UID: \"a6421b4d-2505-47c4-899d-7f7bd2113cf8\") " pod="openstack/memcached-0" Jan 26 12:57:30 crc kubenswrapper[4881]: I0126 12:57:30.484348 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a6421b4d-2505-47c4-899d-7f7bd2113cf8-kolla-config\") pod \"memcached-0\" (UID: \"a6421b4d-2505-47c4-899d-7f7bd2113cf8\") " pod="openstack/memcached-0" Jan 26 12:57:30 crc kubenswrapper[4881]: I0126 12:57:30.484443 4881 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6421b4d-2505-47c4-899d-7f7bd2113cf8-memcached-tls-certs\") pod \"memcached-0\" (UID: \"a6421b4d-2505-47c4-899d-7f7bd2113cf8\") " pod="openstack/memcached-0" Jan 26 12:57:30 crc kubenswrapper[4881]: I0126 12:57:30.484481 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6421b4d-2505-47c4-899d-7f7bd2113cf8-combined-ca-bundle\") pod \"memcached-0\" (UID: \"a6421b4d-2505-47c4-899d-7f7bd2113cf8\") " pod="openstack/memcached-0" Jan 26 12:57:30 crc kubenswrapper[4881]: I0126 12:57:30.484589 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a6421b4d-2505-47c4-899d-7f7bd2113cf8-config-data\") pod \"memcached-0\" (UID: \"a6421b4d-2505-47c4-899d-7f7bd2113cf8\") " pod="openstack/memcached-0" Jan 26 12:57:30 crc kubenswrapper[4881]: I0126 12:57:30.484636 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h7tzq\" (UniqueName: \"kubernetes.io/projected/a6421b4d-2505-47c4-899d-7f7bd2113cf8-kube-api-access-h7tzq\") pod \"memcached-0\" (UID: \"a6421b4d-2505-47c4-899d-7f7bd2113cf8\") " pod="openstack/memcached-0" Jan 26 12:57:30 crc kubenswrapper[4881]: I0126 12:57:30.485214 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a6421b4d-2505-47c4-899d-7f7bd2113cf8-kolla-config\") pod \"memcached-0\" (UID: \"a6421b4d-2505-47c4-899d-7f7bd2113cf8\") " pod="openstack/memcached-0" Jan 26 12:57:30 crc kubenswrapper[4881]: I0126 12:57:30.485784 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a6421b4d-2505-47c4-899d-7f7bd2113cf8-config-data\") pod \"memcached-0\" (UID: \"a6421b4d-2505-47c4-899d-7f7bd2113cf8\") " pod="openstack/memcached-0" Jan 26 12:57:30 crc kubenswrapper[4881]: I0126 12:57:30.488065 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6421b4d-2505-47c4-899d-7f7bd2113cf8-combined-ca-bundle\") pod \"memcached-0\" (UID: \"a6421b4d-2505-47c4-899d-7f7bd2113cf8\") " pod="openstack/memcached-0" Jan 26 12:57:30 crc kubenswrapper[4881]: I0126 12:57:30.492976 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6421b4d-2505-47c4-899d-7f7bd2113cf8-memcached-tls-certs\") pod \"memcached-0\" (UID: \"a6421b4d-2505-47c4-899d-7f7bd2113cf8\") " pod="openstack/memcached-0" Jan 26 12:57:30 crc kubenswrapper[4881]: I0126 12:57:30.502303 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h7tzq\" (UniqueName: \"kubernetes.io/projected/a6421b4d-2505-47c4-899d-7f7bd2113cf8-kube-api-access-h7tzq\") pod \"memcached-0\" (UID: \"a6421b4d-2505-47c4-899d-7f7bd2113cf8\") " pod="openstack/memcached-0" Jan 26 12:57:30 crc kubenswrapper[4881]: I0126 12:57:30.621555 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Jan 26 12:57:32 crc kubenswrapper[4881]: I0126 12:57:32.258261 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 26 12:57:32 crc kubenswrapper[4881]: I0126 12:57:32.259224 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 26 12:57:32 crc kubenswrapper[4881]: I0126 12:57:32.261446 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-g4pl6" Jan 26 12:57:32 crc kubenswrapper[4881]: I0126 12:57:32.270954 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 26 12:57:32 crc kubenswrapper[4881]: I0126 12:57:32.311293 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzqq4\" (UniqueName: \"kubernetes.io/projected/95617a83-815e-4e5d-9b7e-4d3bec591ed8-kube-api-access-gzqq4\") pod \"kube-state-metrics-0\" (UID: \"95617a83-815e-4e5d-9b7e-4d3bec591ed8\") " pod="openstack/kube-state-metrics-0" Jan 26 12:57:32 crc kubenswrapper[4881]: I0126 12:57:32.413072 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzqq4\" (UniqueName: \"kubernetes.io/projected/95617a83-815e-4e5d-9b7e-4d3bec591ed8-kube-api-access-gzqq4\") pod \"kube-state-metrics-0\" (UID: \"95617a83-815e-4e5d-9b7e-4d3bec591ed8\") " pod="openstack/kube-state-metrics-0" Jan 26 12:57:32 crc kubenswrapper[4881]: I0126 12:57:32.447525 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzqq4\" (UniqueName: \"kubernetes.io/projected/95617a83-815e-4e5d-9b7e-4d3bec591ed8-kube-api-access-gzqq4\") pod \"kube-state-metrics-0\" (UID: \"95617a83-815e-4e5d-9b7e-4d3bec591ed8\") " pod="openstack/kube-state-metrics-0" Jan 26 12:57:32 crc kubenswrapper[4881]: I0126 12:57:32.587099 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 26 12:57:33 crc kubenswrapper[4881]: I0126 12:57:33.650815 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 26 12:57:33 crc kubenswrapper[4881]: I0126 12:57:33.677371 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 26 12:57:33 crc kubenswrapper[4881]: I0126 12:57:33.684699 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Jan 26 12:57:33 crc kubenswrapper[4881]: I0126 12:57:33.684933 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Jan 26 12:57:33 crc kubenswrapper[4881]: I0126 12:57:33.685539 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Jan 26 12:57:33 crc kubenswrapper[4881]: I0126 12:57:33.685673 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Jan 26 12:57:33 crc kubenswrapper[4881]: I0126 12:57:33.687815 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Jan 26 12:57:33 crc kubenswrapper[4881]: I0126 12:57:33.688225 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Jan 26 12:57:33 crc kubenswrapper[4881]: I0126 12:57:33.693639 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-2qlwl" Jan 26 12:57:33 crc kubenswrapper[4881]: I0126 12:57:33.696765 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Jan 26 12:57:33 crc kubenswrapper[4881]: I0126 12:57:33.702353 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 26 12:57:33 crc kubenswrapper[4881]: I0126 12:57:33.735750 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1067dd91-d79f-4165-8c6e-e3309dff7d26-config\") pod \"prometheus-metric-storage-0\" (UID: \"1067dd91-d79f-4165-8c6e-e3309dff7d26\") " pod="openstack/prometheus-metric-storage-0" Jan 26 12:57:33 crc kubenswrapper[4881]: I0126 12:57:33.735893 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5djds\" (UniqueName: \"kubernetes.io/projected/1067dd91-d79f-4165-8c6e-e3309dff7d26-kube-api-access-5djds\") pod \"prometheus-metric-storage-0\" (UID: \"1067dd91-d79f-4165-8c6e-e3309dff7d26\") " pod="openstack/prometheus-metric-storage-0" Jan 26 12:57:33 crc kubenswrapper[4881]: I0126 12:57:33.735944 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-dce83a91-fa62-48d4-9e19-6ff80a628aa4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dce83a91-fa62-48d4-9e19-6ff80a628aa4\") pod \"prometheus-metric-storage-0\" (UID: \"1067dd91-d79f-4165-8c6e-e3309dff7d26\") " pod="openstack/prometheus-metric-storage-0" Jan 26 12:57:33 crc kubenswrapper[4881]: I0126 12:57:33.735988 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/1067dd91-d79f-4165-8c6e-e3309dff7d26-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"1067dd91-d79f-4165-8c6e-e3309dff7d26\") " pod="openstack/prometheus-metric-storage-0" Jan 26 12:57:33 crc kubenswrapper[4881]: I0126 12:57:33.736057 4881 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/1067dd91-d79f-4165-8c6e-e3309dff7d26-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"1067dd91-d79f-4165-8c6e-e3309dff7d26\") " pod="openstack/prometheus-metric-storage-0" Jan 26 12:57:33 crc kubenswrapper[4881]: I0126 12:57:33.736099 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/1067dd91-d79f-4165-8c6e-e3309dff7d26-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"1067dd91-d79f-4165-8c6e-e3309dff7d26\") " pod="openstack/prometheus-metric-storage-0" Jan 26 12:57:33 crc kubenswrapper[4881]: I0126 12:57:33.736186 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/1067dd91-d79f-4165-8c6e-e3309dff7d26-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"1067dd91-d79f-4165-8c6e-e3309dff7d26\") " pod="openstack/prometheus-metric-storage-0" Jan 26 12:57:33 crc kubenswrapper[4881]: I0126 12:57:33.736227 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/1067dd91-d79f-4165-8c6e-e3309dff7d26-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"1067dd91-d79f-4165-8c6e-e3309dff7d26\") " pod="openstack/prometheus-metric-storage-0" Jan 26 12:57:33 crc kubenswrapper[4881]: I0126 12:57:33.736671 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/1067dd91-d79f-4165-8c6e-e3309dff7d26-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"1067dd91-d79f-4165-8c6e-e3309dff7d26\") " pod="openstack/prometheus-metric-storage-0" Jan 26 12:57:33 crc kubenswrapper[4881]: I0126 12:57:33.736752 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/1067dd91-d79f-4165-8c6e-e3309dff7d26-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"1067dd91-d79f-4165-8c6e-e3309dff7d26\") " pod="openstack/prometheus-metric-storage-0" Jan 26 12:57:33 crc kubenswrapper[4881]: I0126 12:57:33.837915 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1067dd91-d79f-4165-8c6e-e3309dff7d26-config\") pod \"prometheus-metric-storage-0\" (UID: \"1067dd91-d79f-4165-8c6e-e3309dff7d26\") " pod="openstack/prometheus-metric-storage-0" Jan 26 12:57:33 crc kubenswrapper[4881]: I0126 12:57:33.837955 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5djds\" (UniqueName: \"kubernetes.io/projected/1067dd91-d79f-4165-8c6e-e3309dff7d26-kube-api-access-5djds\") pod \"prometheus-metric-storage-0\" (UID: \"1067dd91-d79f-4165-8c6e-e3309dff7d26\") " pod="openstack/prometheus-metric-storage-0" Jan 26 12:57:33 crc kubenswrapper[4881]: I0126 12:57:33.837973 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-dce83a91-fa62-48d4-9e19-6ff80a628aa4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dce83a91-fa62-48d4-9e19-6ff80a628aa4\") pod \"prometheus-metric-storage-0\" (UID: 
\"1067dd91-d79f-4165-8c6e-e3309dff7d26\") " pod="openstack/prometheus-metric-storage-0" Jan 26 12:57:33 crc kubenswrapper[4881]: I0126 12:57:33.837998 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/1067dd91-d79f-4165-8c6e-e3309dff7d26-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"1067dd91-d79f-4165-8c6e-e3309dff7d26\") " pod="openstack/prometheus-metric-storage-0" Jan 26 12:57:33 crc kubenswrapper[4881]: I0126 12:57:33.838035 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/1067dd91-d79f-4165-8c6e-e3309dff7d26-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"1067dd91-d79f-4165-8c6e-e3309dff7d26\") " pod="openstack/prometheus-metric-storage-0" Jan 26 12:57:33 crc kubenswrapper[4881]: I0126 12:57:33.838097 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/1067dd91-d79f-4165-8c6e-e3309dff7d26-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"1067dd91-d79f-4165-8c6e-e3309dff7d26\") " pod="openstack/prometheus-metric-storage-0" Jan 26 12:57:33 crc kubenswrapper[4881]: I0126 12:57:33.838209 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/1067dd91-d79f-4165-8c6e-e3309dff7d26-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"1067dd91-d79f-4165-8c6e-e3309dff7d26\") " pod="openstack/prometheus-metric-storage-0" Jan 26 12:57:33 crc kubenswrapper[4881]: I0126 12:57:33.838232 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/1067dd91-d79f-4165-8c6e-e3309dff7d26-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"1067dd91-d79f-4165-8c6e-e3309dff7d26\") " pod="openstack/prometheus-metric-storage-0" Jan 26 12:57:33 crc kubenswrapper[4881]: I0126 12:57:33.838271 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/1067dd91-d79f-4165-8c6e-e3309dff7d26-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"1067dd91-d79f-4165-8c6e-e3309dff7d26\") " pod="openstack/prometheus-metric-storage-0" Jan 26 12:57:33 crc kubenswrapper[4881]: I0126 12:57:33.838296 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/1067dd91-d79f-4165-8c6e-e3309dff7d26-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"1067dd91-d79f-4165-8c6e-e3309dff7d26\") " pod="openstack/prometheus-metric-storage-0" Jan 26 12:57:33 crc kubenswrapper[4881]: I0126 12:57:33.839096 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/1067dd91-d79f-4165-8c6e-e3309dff7d26-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"1067dd91-d79f-4165-8c6e-e3309dff7d26\") " pod="openstack/prometheus-metric-storage-0" Jan 26 12:57:33 crc kubenswrapper[4881]: I0126 12:57:33.839579 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" 
(UniqueName: \"kubernetes.io/configmap/1067dd91-d79f-4165-8c6e-e3309dff7d26-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"1067dd91-d79f-4165-8c6e-e3309dff7d26\") " pod="openstack/prometheus-metric-storage-0" Jan 26 12:57:33 crc kubenswrapper[4881]: I0126 12:57:33.840282 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/1067dd91-d79f-4165-8c6e-e3309dff7d26-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"1067dd91-d79f-4165-8c6e-e3309dff7d26\") " pod="openstack/prometheus-metric-storage-0" Jan 26 12:57:33 crc kubenswrapper[4881]: I0126 12:57:33.840857 4881 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 26 12:57:33 crc kubenswrapper[4881]: I0126 12:57:33.840885 4881 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-dce83a91-fa62-48d4-9e19-6ff80a628aa4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dce83a91-fa62-48d4-9e19-6ff80a628aa4\") pod \"prometheus-metric-storage-0\" (UID: \"1067dd91-d79f-4165-8c6e-e3309dff7d26\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/68d2c87ef14797ce11fba4e65263a740afb8b7e8fd7775f7168ab753beb0af09/globalmount\"" pod="openstack/prometheus-metric-storage-0" Jan 26 12:57:33 crc kubenswrapper[4881]: I0126 12:57:33.843424 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/1067dd91-d79f-4165-8c6e-e3309dff7d26-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"1067dd91-d79f-4165-8c6e-e3309dff7d26\") " pod="openstack/prometheus-metric-storage-0" Jan 26 12:57:33 crc kubenswrapper[4881]: I0126 12:57:33.844325 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/1067dd91-d79f-4165-8c6e-e3309dff7d26-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"1067dd91-d79f-4165-8c6e-e3309dff7d26\") " pod="openstack/prometheus-metric-storage-0" Jan 26 12:57:33 crc kubenswrapper[4881]: I0126 12:57:33.846215 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/1067dd91-d79f-4165-8c6e-e3309dff7d26-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"1067dd91-d79f-4165-8c6e-e3309dff7d26\") " pod="openstack/prometheus-metric-storage-0" Jan 26 12:57:33 crc kubenswrapper[4881]: I0126 12:57:33.847281 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/1067dd91-d79f-4165-8c6e-e3309dff7d26-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"1067dd91-d79f-4165-8c6e-e3309dff7d26\") " pod="openstack/prometheus-metric-storage-0" Jan 26 12:57:33 crc kubenswrapper[4881]: I0126 12:57:33.850204 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/1067dd91-d79f-4165-8c6e-e3309dff7d26-config\") pod \"prometheus-metric-storage-0\" (UID: \"1067dd91-d79f-4165-8c6e-e3309dff7d26\") " pod="openstack/prometheus-metric-storage-0" Jan 26 12:57:33 crc kubenswrapper[4881]: I0126 12:57:33.854592 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5djds\" (UniqueName: 
\"kubernetes.io/projected/1067dd91-d79f-4165-8c6e-e3309dff7d26-kube-api-access-5djds\") pod \"prometheus-metric-storage-0\" (UID: \"1067dd91-d79f-4165-8c6e-e3309dff7d26\") " pod="openstack/prometheus-metric-storage-0" Jan 26 12:57:33 crc kubenswrapper[4881]: I0126 12:57:33.871420 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-dce83a91-fa62-48d4-9e19-6ff80a628aa4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dce83a91-fa62-48d4-9e19-6ff80a628aa4\") pod \"prometheus-metric-storage-0\" (UID: \"1067dd91-d79f-4165-8c6e-e3309dff7d26\") " pod="openstack/prometheus-metric-storage-0" Jan 26 12:57:34 crc kubenswrapper[4881]: I0126 12:57:34.037084 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 26 12:57:35 crc kubenswrapper[4881]: I0126 12:57:35.944020 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-vs5xn"] Jan 26 12:57:35 crc kubenswrapper[4881]: I0126 12:57:35.945043 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-vs5xn" Jan 26 12:57:35 crc kubenswrapper[4881]: I0126 12:57:35.948248 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Jan 26 12:57:35 crc kubenswrapper[4881]: I0126 12:57:35.948738 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Jan 26 12:57:35 crc kubenswrapper[4881]: I0126 12:57:35.948889 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-hqkkf" Jan 26 12:57:35 crc kubenswrapper[4881]: I0126 12:57:35.959414 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-vs5xn"] Jan 26 12:57:35 crc kubenswrapper[4881]: I0126 12:57:35.992330 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e52cbcc1-521d-4a7d-98a6-50ab70a2f82f-var-log-ovn\") pod \"ovn-controller-vs5xn\" (UID: \"e52cbcc1-521d-4a7d-98a6-50ab70a2f82f\") " pod="openstack/ovn-controller-vs5xn" Jan 26 12:57:35 crc kubenswrapper[4881]: I0126 12:57:35.992394 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e52cbcc1-521d-4a7d-98a6-50ab70a2f82f-scripts\") pod \"ovn-controller-vs5xn\" (UID: \"e52cbcc1-521d-4a7d-98a6-50ab70a2f82f\") " pod="openstack/ovn-controller-vs5xn" Jan 26 12:57:35 crc kubenswrapper[4881]: I0126 12:57:35.992425 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8867\" (UniqueName: \"kubernetes.io/projected/e52cbcc1-521d-4a7d-98a6-50ab70a2f82f-kube-api-access-m8867\") pod \"ovn-controller-vs5xn\" (UID: \"e52cbcc1-521d-4a7d-98a6-50ab70a2f82f\") " pod="openstack/ovn-controller-vs5xn" Jan 26 12:57:35 crc kubenswrapper[4881]: I0126 12:57:35.992446 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e52cbcc1-521d-4a7d-98a6-50ab70a2f82f-var-run-ovn\") pod \"ovn-controller-vs5xn\" (UID: \"e52cbcc1-521d-4a7d-98a6-50ab70a2f82f\") " pod="openstack/ovn-controller-vs5xn" Jan 26 12:57:35 crc kubenswrapper[4881]: I0126 12:57:35.992464 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/e52cbcc1-521d-4a7d-98a6-50ab70a2f82f-ovn-controller-tls-certs\") pod \"ovn-controller-vs5xn\" (UID: \"e52cbcc1-521d-4a7d-98a6-50ab70a2f82f\") " pod="openstack/ovn-controller-vs5xn" Jan 26 12:57:35 crc kubenswrapper[4881]: I0126 12:57:35.992479 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e52cbcc1-521d-4a7d-98a6-50ab70a2f82f-combined-ca-bundle\") pod \"ovn-controller-vs5xn\" (UID: \"e52cbcc1-521d-4a7d-98a6-50ab70a2f82f\") " pod="openstack/ovn-controller-vs5xn" Jan 26 12:57:35 crc kubenswrapper[4881]: I0126 12:57:35.992507 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e52cbcc1-521d-4a7d-98a6-50ab70a2f82f-var-run\") pod \"ovn-controller-vs5xn\" (UID: \"e52cbcc1-521d-4a7d-98a6-50ab70a2f82f\") " pod="openstack/ovn-controller-vs5xn" Jan 26 12:57:35 crc kubenswrapper[4881]: I0126 12:57:35.995772 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-rrqqp"] Jan 26 12:57:35 crc kubenswrapper[4881]: I0126 12:57:35.997281 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-rrqqp" Jan 26 12:57:36 crc kubenswrapper[4881]: I0126 12:57:36.036995 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-rrqqp"] Jan 26 12:57:36 crc kubenswrapper[4881]: I0126 12:57:36.093773 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e52cbcc1-521d-4a7d-98a6-50ab70a2f82f-scripts\") pod \"ovn-controller-vs5xn\" (UID: \"e52cbcc1-521d-4a7d-98a6-50ab70a2f82f\") " pod="openstack/ovn-controller-vs5xn" Jan 26 12:57:36 crc kubenswrapper[4881]: I0126 12:57:36.093818 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/31e8c456-b53d-456e-a8d1-69f26e0602ad-var-run\") pod \"ovn-controller-ovs-rrqqp\" (UID: \"31e8c456-b53d-456e-a8d1-69f26e0602ad\") " pod="openstack/ovn-controller-ovs-rrqqp" Jan 26 12:57:36 crc kubenswrapper[4881]: I0126 12:57:36.093840 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m8867\" (UniqueName: \"kubernetes.io/projected/e52cbcc1-521d-4a7d-98a6-50ab70a2f82f-kube-api-access-m8867\") pod \"ovn-controller-vs5xn\" (UID: \"e52cbcc1-521d-4a7d-98a6-50ab70a2f82f\") " pod="openstack/ovn-controller-vs5xn" Jan 26 12:57:36 crc kubenswrapper[4881]: I0126 12:57:36.093855 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e52cbcc1-521d-4a7d-98a6-50ab70a2f82f-var-run-ovn\") pod \"ovn-controller-vs5xn\" (UID: \"e52cbcc1-521d-4a7d-98a6-50ab70a2f82f\") " pod="openstack/ovn-controller-vs5xn" Jan 26 12:57:36 crc kubenswrapper[4881]: I0126 12:57:36.093876 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/e52cbcc1-521d-4a7d-98a6-50ab70a2f82f-ovn-controller-tls-certs\") pod \"ovn-controller-vs5xn\" (UID: \"e52cbcc1-521d-4a7d-98a6-50ab70a2f82f\") " pod="openstack/ovn-controller-vs5xn" Jan 26 12:57:36 crc kubenswrapper[4881]: I0126 12:57:36.093891 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e52cbcc1-521d-4a7d-98a6-50ab70a2f82f-combined-ca-bundle\") pod \"ovn-controller-vs5xn\" (UID: \"e52cbcc1-521d-4a7d-98a6-50ab70a2f82f\") " pod="openstack/ovn-controller-vs5xn" Jan 26 12:57:36 crc kubenswrapper[4881]: I0126 12:57:36.093912 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/31e8c456-b53d-456e-a8d1-69f26e0602ad-etc-ovs\") pod \"ovn-controller-ovs-rrqqp\" (UID: \"31e8c456-b53d-456e-a8d1-69f26e0602ad\") " pod="openstack/ovn-controller-ovs-rrqqp" Jan 26 12:57:36 crc kubenswrapper[4881]: I0126 12:57:36.093933 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e52cbcc1-521d-4a7d-98a6-50ab70a2f82f-var-run\") pod \"ovn-controller-vs5xn\" (UID: \"e52cbcc1-521d-4a7d-98a6-50ab70a2f82f\") " pod="openstack/ovn-controller-vs5xn" Jan 26 12:57:36 crc kubenswrapper[4881]: I0126 12:57:36.093950 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/31e8c456-b53d-456e-a8d1-69f26e0602ad-var-log\") pod \"ovn-controller-ovs-rrqqp\" (UID: \"31e8c456-b53d-456e-a8d1-69f26e0602ad\") " pod="openstack/ovn-controller-ovs-rrqqp" Jan 26 12:57:36 crc kubenswrapper[4881]: I0126 12:57:36.093993 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/31e8c456-b53d-456e-a8d1-69f26e0602ad-var-lib\") pod \"ovn-controller-ovs-rrqqp\" (UID: \"31e8c456-b53d-456e-a8d1-69f26e0602ad\") " pod="openstack/ovn-controller-ovs-rrqqp" Jan 26 12:57:36 crc kubenswrapper[4881]: I0126 12:57:36.094012 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4b42b\" (UniqueName: \"kubernetes.io/projected/31e8c456-b53d-456e-a8d1-69f26e0602ad-kube-api-access-4b42b\") pod \"ovn-controller-ovs-rrqqp\" (UID: \"31e8c456-b53d-456e-a8d1-69f26e0602ad\") " pod="openstack/ovn-controller-ovs-rrqqp" Jan 26 12:57:36 crc kubenswrapper[4881]: I0126 12:57:36.094032 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/31e8c456-b53d-456e-a8d1-69f26e0602ad-scripts\") pod \"ovn-controller-ovs-rrqqp\" (UID: \"31e8c456-b53d-456e-a8d1-69f26e0602ad\") " pod="openstack/ovn-controller-ovs-rrqqp" Jan 26 12:57:36 crc kubenswrapper[4881]: I0126 12:57:36.094067 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e52cbcc1-521d-4a7d-98a6-50ab70a2f82f-var-log-ovn\") pod \"ovn-controller-vs5xn\" (UID: \"e52cbcc1-521d-4a7d-98a6-50ab70a2f82f\") " pod="openstack/ovn-controller-vs5xn" Jan 26 12:57:36 crc kubenswrapper[4881]: I0126 12:57:36.094526 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e52cbcc1-521d-4a7d-98a6-50ab70a2f82f-var-log-ovn\") pod \"ovn-controller-vs5xn\" (UID: \"e52cbcc1-521d-4a7d-98a6-50ab70a2f82f\") " pod="openstack/ovn-controller-vs5xn" Jan 26 12:57:36 crc kubenswrapper[4881]: I0126 12:57:36.095066 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e52cbcc1-521d-4a7d-98a6-50ab70a2f82f-var-run-ovn\") pod \"ovn-controller-vs5xn\" (UID: 
\"e52cbcc1-521d-4a7d-98a6-50ab70a2f82f\") " pod="openstack/ovn-controller-vs5xn" Jan 26 12:57:36 crc kubenswrapper[4881]: I0126 12:57:36.095849 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e52cbcc1-521d-4a7d-98a6-50ab70a2f82f-var-run\") pod \"ovn-controller-vs5xn\" (UID: \"e52cbcc1-521d-4a7d-98a6-50ab70a2f82f\") " pod="openstack/ovn-controller-vs5xn" Jan 26 12:57:36 crc kubenswrapper[4881]: I0126 12:57:36.096388 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e52cbcc1-521d-4a7d-98a6-50ab70a2f82f-scripts\") pod \"ovn-controller-vs5xn\" (UID: \"e52cbcc1-521d-4a7d-98a6-50ab70a2f82f\") " pod="openstack/ovn-controller-vs5xn" Jan 26 12:57:36 crc kubenswrapper[4881]: I0126 12:57:36.099742 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e52cbcc1-521d-4a7d-98a6-50ab70a2f82f-combined-ca-bundle\") pod \"ovn-controller-vs5xn\" (UID: \"e52cbcc1-521d-4a7d-98a6-50ab70a2f82f\") " pod="openstack/ovn-controller-vs5xn" Jan 26 12:57:36 crc kubenswrapper[4881]: I0126 12:57:36.100441 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/e52cbcc1-521d-4a7d-98a6-50ab70a2f82f-ovn-controller-tls-certs\") pod \"ovn-controller-vs5xn\" (UID: \"e52cbcc1-521d-4a7d-98a6-50ab70a2f82f\") " pod="openstack/ovn-controller-vs5xn" Jan 26 12:57:36 crc kubenswrapper[4881]: I0126 12:57:36.110270 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m8867\" (UniqueName: \"kubernetes.io/projected/e52cbcc1-521d-4a7d-98a6-50ab70a2f82f-kube-api-access-m8867\") pod \"ovn-controller-vs5xn\" (UID: \"e52cbcc1-521d-4a7d-98a6-50ab70a2f82f\") " pod="openstack/ovn-controller-vs5xn" Jan 26 12:57:36 crc kubenswrapper[4881]: I0126 12:57:36.195017 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/31e8c456-b53d-456e-a8d1-69f26e0602ad-etc-ovs\") pod \"ovn-controller-ovs-rrqqp\" (UID: \"31e8c456-b53d-456e-a8d1-69f26e0602ad\") " pod="openstack/ovn-controller-ovs-rrqqp" Jan 26 12:57:36 crc kubenswrapper[4881]: I0126 12:57:36.195080 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/31e8c456-b53d-456e-a8d1-69f26e0602ad-var-log\") pod \"ovn-controller-ovs-rrqqp\" (UID: \"31e8c456-b53d-456e-a8d1-69f26e0602ad\") " pod="openstack/ovn-controller-ovs-rrqqp" Jan 26 12:57:36 crc kubenswrapper[4881]: I0126 12:57:36.195138 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/31e8c456-b53d-456e-a8d1-69f26e0602ad-var-lib\") pod \"ovn-controller-ovs-rrqqp\" (UID: \"31e8c456-b53d-456e-a8d1-69f26e0602ad\") " pod="openstack/ovn-controller-ovs-rrqqp" Jan 26 12:57:36 crc kubenswrapper[4881]: I0126 12:57:36.195159 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4b42b\" (UniqueName: \"kubernetes.io/projected/31e8c456-b53d-456e-a8d1-69f26e0602ad-kube-api-access-4b42b\") pod \"ovn-controller-ovs-rrqqp\" (UID: \"31e8c456-b53d-456e-a8d1-69f26e0602ad\") " pod="openstack/ovn-controller-ovs-rrqqp" Jan 26 12:57:36 crc kubenswrapper[4881]: I0126 12:57:36.195178 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/configmap/31e8c456-b53d-456e-a8d1-69f26e0602ad-scripts\") pod \"ovn-controller-ovs-rrqqp\" (UID: \"31e8c456-b53d-456e-a8d1-69f26e0602ad\") " pod="openstack/ovn-controller-ovs-rrqqp" Jan 26 12:57:36 crc kubenswrapper[4881]: I0126 12:57:36.195244 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/31e8c456-b53d-456e-a8d1-69f26e0602ad-var-run\") pod \"ovn-controller-ovs-rrqqp\" (UID: \"31e8c456-b53d-456e-a8d1-69f26e0602ad\") " pod="openstack/ovn-controller-ovs-rrqqp" Jan 26 12:57:36 crc kubenswrapper[4881]: I0126 12:57:36.195341 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/31e8c456-b53d-456e-a8d1-69f26e0602ad-var-run\") pod \"ovn-controller-ovs-rrqqp\" (UID: \"31e8c456-b53d-456e-a8d1-69f26e0602ad\") " pod="openstack/ovn-controller-ovs-rrqqp" Jan 26 12:57:36 crc kubenswrapper[4881]: I0126 12:57:36.195501 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/31e8c456-b53d-456e-a8d1-69f26e0602ad-etc-ovs\") pod \"ovn-controller-ovs-rrqqp\" (UID: \"31e8c456-b53d-456e-a8d1-69f26e0602ad\") " pod="openstack/ovn-controller-ovs-rrqqp" Jan 26 12:57:36 crc kubenswrapper[4881]: I0126 12:57:36.195858 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/31e8c456-b53d-456e-a8d1-69f26e0602ad-var-log\") pod \"ovn-controller-ovs-rrqqp\" (UID: \"31e8c456-b53d-456e-a8d1-69f26e0602ad\") " pod="openstack/ovn-controller-ovs-rrqqp" Jan 26 12:57:36 crc kubenswrapper[4881]: I0126 12:57:36.196771 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/31e8c456-b53d-456e-a8d1-69f26e0602ad-var-lib\") pod \"ovn-controller-ovs-rrqqp\" (UID: \"31e8c456-b53d-456e-a8d1-69f26e0602ad\") " pod="openstack/ovn-controller-ovs-rrqqp" Jan 26 12:57:36 crc kubenswrapper[4881]: I0126 12:57:36.198531 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/31e8c456-b53d-456e-a8d1-69f26e0602ad-scripts\") pod \"ovn-controller-ovs-rrqqp\" (UID: \"31e8c456-b53d-456e-a8d1-69f26e0602ad\") " pod="openstack/ovn-controller-ovs-rrqqp" Jan 26 12:57:36 crc kubenswrapper[4881]: I0126 12:57:36.216229 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4b42b\" (UniqueName: \"kubernetes.io/projected/31e8c456-b53d-456e-a8d1-69f26e0602ad-kube-api-access-4b42b\") pod \"ovn-controller-ovs-rrqqp\" (UID: \"31e8c456-b53d-456e-a8d1-69f26e0602ad\") " pod="openstack/ovn-controller-ovs-rrqqp" Jan 26 12:57:36 crc kubenswrapper[4881]: I0126 12:57:36.263824 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-vs5xn" Jan 26 12:57:36 crc kubenswrapper[4881]: I0126 12:57:36.337133 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-rrqqp" Jan 26 12:57:36 crc kubenswrapper[4881]: I0126 12:57:36.835463 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 26 12:57:36 crc kubenswrapper[4881]: I0126 12:57:36.837385 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 26 12:57:36 crc kubenswrapper[4881]: I0126 12:57:36.839864 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Jan 26 12:57:36 crc kubenswrapper[4881]: I0126 12:57:36.840035 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Jan 26 12:57:36 crc kubenswrapper[4881]: I0126 12:57:36.840264 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Jan 26 12:57:36 crc kubenswrapper[4881]: I0126 12:57:36.840555 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Jan 26 12:57:36 crc kubenswrapper[4881]: I0126 12:57:36.840833 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-mrrsv" Jan 26 12:57:36 crc kubenswrapper[4881]: I0126 12:57:36.845436 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 26 12:57:37 crc kubenswrapper[4881]: I0126 12:57:37.007704 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/406b36b2-d29f-4224-8bc5-9cfd6f057a48-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"406b36b2-d29f-4224-8bc5-9cfd6f057a48\") " pod="openstack/ovsdbserver-nb-0" Jan 26 12:57:37 crc kubenswrapper[4881]: I0126 12:57:37.007803 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vg94j\" (UniqueName: \"kubernetes.io/projected/406b36b2-d29f-4224-8bc5-9cfd6f057a48-kube-api-access-vg94j\") pod \"ovsdbserver-nb-0\" (UID: \"406b36b2-d29f-4224-8bc5-9cfd6f057a48\") " pod="openstack/ovsdbserver-nb-0" Jan 26 12:57:37 crc kubenswrapper[4881]: I0126 12:57:37.007895 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/406b36b2-d29f-4224-8bc5-9cfd6f057a48-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"406b36b2-d29f-4224-8bc5-9cfd6f057a48\") " pod="openstack/ovsdbserver-nb-0" Jan 26 12:57:37 crc kubenswrapper[4881]: I0126 12:57:37.007935 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/406b36b2-d29f-4224-8bc5-9cfd6f057a48-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"406b36b2-d29f-4224-8bc5-9cfd6f057a48\") " pod="openstack/ovsdbserver-nb-0" Jan 26 12:57:37 crc kubenswrapper[4881]: I0126 12:57:37.007983 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/406b36b2-d29f-4224-8bc5-9cfd6f057a48-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"406b36b2-d29f-4224-8bc5-9cfd6f057a48\") " pod="openstack/ovsdbserver-nb-0" Jan 26 12:57:37 crc kubenswrapper[4881]: I0126 12:57:37.008040 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/406b36b2-d29f-4224-8bc5-9cfd6f057a48-config\") pod \"ovsdbserver-nb-0\" (UID: \"406b36b2-d29f-4224-8bc5-9cfd6f057a48\") " pod="openstack/ovsdbserver-nb-0" Jan 26 12:57:37 crc kubenswrapper[4881]: I0126 12:57:37.008078 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"ovsdbserver-nb-0\" (UID: \"406b36b2-d29f-4224-8bc5-9cfd6f057a48\") " pod="openstack/ovsdbserver-nb-0" Jan 26 12:57:37 crc kubenswrapper[4881]: I0126 12:57:37.008121 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/406b36b2-d29f-4224-8bc5-9cfd6f057a48-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"406b36b2-d29f-4224-8bc5-9cfd6f057a48\") " pod="openstack/ovsdbserver-nb-0" Jan 26 12:57:37 crc kubenswrapper[4881]: I0126 12:57:37.111824 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/406b36b2-d29f-4224-8bc5-9cfd6f057a48-config\") pod \"ovsdbserver-nb-0\" (UID: \"406b36b2-d29f-4224-8bc5-9cfd6f057a48\") " pod="openstack/ovsdbserver-nb-0" Jan 26 12:57:37 crc kubenswrapper[4881]: I0126 12:57:37.111892 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"ovsdbserver-nb-0\" (UID: \"406b36b2-d29f-4224-8bc5-9cfd6f057a48\") " pod="openstack/ovsdbserver-nb-0" Jan 26 12:57:37 crc kubenswrapper[4881]: I0126 12:57:37.111938 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/406b36b2-d29f-4224-8bc5-9cfd6f057a48-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"406b36b2-d29f-4224-8bc5-9cfd6f057a48\") " pod="openstack/ovsdbserver-nb-0" Jan 26 12:57:37 crc kubenswrapper[4881]: I0126 12:57:37.112048 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/406b36b2-d29f-4224-8bc5-9cfd6f057a48-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"406b36b2-d29f-4224-8bc5-9cfd6f057a48\") " pod="openstack/ovsdbserver-nb-0" Jan 26 12:57:37 crc kubenswrapper[4881]: I0126 12:57:37.112101 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vg94j\" (UniqueName: \"kubernetes.io/projected/406b36b2-d29f-4224-8bc5-9cfd6f057a48-kube-api-access-vg94j\") pod \"ovsdbserver-nb-0\" (UID: \"406b36b2-d29f-4224-8bc5-9cfd6f057a48\") " pod="openstack/ovsdbserver-nb-0" Jan 26 12:57:37 crc kubenswrapper[4881]: I0126 12:57:37.112179 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/406b36b2-d29f-4224-8bc5-9cfd6f057a48-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"406b36b2-d29f-4224-8bc5-9cfd6f057a48\") " pod="openstack/ovsdbserver-nb-0" Jan 26 12:57:37 crc kubenswrapper[4881]: I0126 12:57:37.112662 4881 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"ovsdbserver-nb-0\" (UID: \"406b36b2-d29f-4224-8bc5-9cfd6f057a48\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/ovsdbserver-nb-0" Jan 26 12:57:37 crc kubenswrapper[4881]: I0126 12:57:37.113117 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/406b36b2-d29f-4224-8bc5-9cfd6f057a48-config\") pod \"ovsdbserver-nb-0\" (UID: \"406b36b2-d29f-4224-8bc5-9cfd6f057a48\") " pod="openstack/ovsdbserver-nb-0" Jan 26 12:57:37 crc kubenswrapper[4881]: I0126 12:57:37.114119 4881 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/406b36b2-d29f-4224-8bc5-9cfd6f057a48-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"406b36b2-d29f-4224-8bc5-9cfd6f057a48\") " pod="openstack/ovsdbserver-nb-0" Jan 26 12:57:37 crc kubenswrapper[4881]: I0126 12:57:37.112211 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/406b36b2-d29f-4224-8bc5-9cfd6f057a48-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"406b36b2-d29f-4224-8bc5-9cfd6f057a48\") " pod="openstack/ovsdbserver-nb-0" Jan 26 12:57:37 crc kubenswrapper[4881]: I0126 12:57:37.115686 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/406b36b2-d29f-4224-8bc5-9cfd6f057a48-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"406b36b2-d29f-4224-8bc5-9cfd6f057a48\") " pod="openstack/ovsdbserver-nb-0" Jan 26 12:57:37 crc kubenswrapper[4881]: I0126 12:57:37.117663 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/406b36b2-d29f-4224-8bc5-9cfd6f057a48-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"406b36b2-d29f-4224-8bc5-9cfd6f057a48\") " pod="openstack/ovsdbserver-nb-0" Jan 26 12:57:37 crc kubenswrapper[4881]: I0126 12:57:37.122188 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/406b36b2-d29f-4224-8bc5-9cfd6f057a48-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"406b36b2-d29f-4224-8bc5-9cfd6f057a48\") " pod="openstack/ovsdbserver-nb-0" Jan 26 12:57:37 crc kubenswrapper[4881]: I0126 12:57:37.123545 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/406b36b2-d29f-4224-8bc5-9cfd6f057a48-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"406b36b2-d29f-4224-8bc5-9cfd6f057a48\") " pod="openstack/ovsdbserver-nb-0" Jan 26 12:57:37 crc kubenswrapper[4881]: I0126 12:57:37.128505 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/406b36b2-d29f-4224-8bc5-9cfd6f057a48-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"406b36b2-d29f-4224-8bc5-9cfd6f057a48\") " pod="openstack/ovsdbserver-nb-0" Jan 26 12:57:37 crc kubenswrapper[4881]: I0126 12:57:37.129223 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vg94j\" (UniqueName: \"kubernetes.io/projected/406b36b2-d29f-4224-8bc5-9cfd6f057a48-kube-api-access-vg94j\") pod \"ovsdbserver-nb-0\" (UID: \"406b36b2-d29f-4224-8bc5-9cfd6f057a48\") " pod="openstack/ovsdbserver-nb-0" Jan 26 12:57:37 crc kubenswrapper[4881]: I0126 12:57:37.149341 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"ovsdbserver-nb-0\" (UID: \"406b36b2-d29f-4224-8bc5-9cfd6f057a48\") " pod="openstack/ovsdbserver-nb-0" Jan 26 12:57:37 crc kubenswrapper[4881]: I0126 12:57:37.171030 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 26 12:57:40 crc kubenswrapper[4881]: I0126 12:57:40.183446 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 26 12:57:40 crc kubenswrapper[4881]: I0126 12:57:40.187705 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 26 12:57:40 crc kubenswrapper[4881]: I0126 12:57:40.197102 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Jan 26 12:57:40 crc kubenswrapper[4881]: I0126 12:57:40.197729 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Jan 26 12:57:40 crc kubenswrapper[4881]: I0126 12:57:40.197777 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Jan 26 12:57:40 crc kubenswrapper[4881]: I0126 12:57:40.198149 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-2c8lw" Jan 26 12:57:40 crc kubenswrapper[4881]: I0126 12:57:40.219389 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 26 12:57:40 crc kubenswrapper[4881]: I0126 12:57:40.280128 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a7a612cf-2de0-4fc3-bd7e-9ddf16da6ca7-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"a7a612cf-2de0-4fc3-bd7e-9ddf16da6ca7\") " pod="openstack/ovsdbserver-sb-0" Jan 26 12:57:40 crc kubenswrapper[4881]: I0126 12:57:40.280191 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a7a612cf-2de0-4fc3-bd7e-9ddf16da6ca7-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"a7a612cf-2de0-4fc3-bd7e-9ddf16da6ca7\") " pod="openstack/ovsdbserver-sb-0" Jan 26 12:57:40 crc kubenswrapper[4881]: I0126 12:57:40.280251 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7a612cf-2de0-4fc3-bd7e-9ddf16da6ca7-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"a7a612cf-2de0-4fc3-bd7e-9ddf16da6ca7\") " pod="openstack/ovsdbserver-sb-0" Jan 26 12:57:40 crc kubenswrapper[4881]: I0126 12:57:40.280292 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-sb-0\" (UID: \"a7a612cf-2de0-4fc3-bd7e-9ddf16da6ca7\") " pod="openstack/ovsdbserver-sb-0" Jan 26 12:57:40 crc kubenswrapper[4881]: I0126 12:57:40.280333 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a7a612cf-2de0-4fc3-bd7e-9ddf16da6ca7-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"a7a612cf-2de0-4fc3-bd7e-9ddf16da6ca7\") " pod="openstack/ovsdbserver-sb-0" Jan 26 12:57:40 crc kubenswrapper[4881]: I0126 12:57:40.280361 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkw75\" (UniqueName: \"kubernetes.io/projected/a7a612cf-2de0-4fc3-bd7e-9ddf16da6ca7-kube-api-access-mkw75\") pod \"ovsdbserver-sb-0\" (UID: \"a7a612cf-2de0-4fc3-bd7e-9ddf16da6ca7\") " 
pod="openstack/ovsdbserver-sb-0" Jan 26 12:57:40 crc kubenswrapper[4881]: I0126 12:57:40.280381 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a7a612cf-2de0-4fc3-bd7e-9ddf16da6ca7-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"a7a612cf-2de0-4fc3-bd7e-9ddf16da6ca7\") " pod="openstack/ovsdbserver-sb-0" Jan 26 12:57:40 crc kubenswrapper[4881]: I0126 12:57:40.280427 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7a612cf-2de0-4fc3-bd7e-9ddf16da6ca7-config\") pod \"ovsdbserver-sb-0\" (UID: \"a7a612cf-2de0-4fc3-bd7e-9ddf16da6ca7\") " pod="openstack/ovsdbserver-sb-0" Jan 26 12:57:40 crc kubenswrapper[4881]: I0126 12:57:40.381921 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a7a612cf-2de0-4fc3-bd7e-9ddf16da6ca7-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"a7a612cf-2de0-4fc3-bd7e-9ddf16da6ca7\") " pod="openstack/ovsdbserver-sb-0" Jan 26 12:57:40 crc kubenswrapper[4881]: I0126 12:57:40.382018 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mkw75\" (UniqueName: \"kubernetes.io/projected/a7a612cf-2de0-4fc3-bd7e-9ddf16da6ca7-kube-api-access-mkw75\") pod \"ovsdbserver-sb-0\" (UID: \"a7a612cf-2de0-4fc3-bd7e-9ddf16da6ca7\") " pod="openstack/ovsdbserver-sb-0" Jan 26 12:57:40 crc kubenswrapper[4881]: I0126 12:57:40.382053 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a7a612cf-2de0-4fc3-bd7e-9ddf16da6ca7-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"a7a612cf-2de0-4fc3-bd7e-9ddf16da6ca7\") " pod="openstack/ovsdbserver-sb-0" Jan 26 12:57:40 crc kubenswrapper[4881]: I0126 12:57:40.382120 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7a612cf-2de0-4fc3-bd7e-9ddf16da6ca7-config\") pod \"ovsdbserver-sb-0\" (UID: \"a7a612cf-2de0-4fc3-bd7e-9ddf16da6ca7\") " pod="openstack/ovsdbserver-sb-0" Jan 26 12:57:40 crc kubenswrapper[4881]: I0126 12:57:40.382175 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a7a612cf-2de0-4fc3-bd7e-9ddf16da6ca7-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"a7a612cf-2de0-4fc3-bd7e-9ddf16da6ca7\") " pod="openstack/ovsdbserver-sb-0" Jan 26 12:57:40 crc kubenswrapper[4881]: I0126 12:57:40.382219 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a7a612cf-2de0-4fc3-bd7e-9ddf16da6ca7-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"a7a612cf-2de0-4fc3-bd7e-9ddf16da6ca7\") " pod="openstack/ovsdbserver-sb-0" Jan 26 12:57:40 crc kubenswrapper[4881]: I0126 12:57:40.382811 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a7a612cf-2de0-4fc3-bd7e-9ddf16da6ca7-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"a7a612cf-2de0-4fc3-bd7e-9ddf16da6ca7\") " pod="openstack/ovsdbserver-sb-0" Jan 26 12:57:40 crc kubenswrapper[4881]: I0126 12:57:40.382995 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/a7a612cf-2de0-4fc3-bd7e-9ddf16da6ca7-config\") pod \"ovsdbserver-sb-0\" (UID: \"a7a612cf-2de0-4fc3-bd7e-9ddf16da6ca7\") " pod="openstack/ovsdbserver-sb-0" Jan 26 12:57:40 crc kubenswrapper[4881]: I0126 12:57:40.383637 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7a612cf-2de0-4fc3-bd7e-9ddf16da6ca7-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"a7a612cf-2de0-4fc3-bd7e-9ddf16da6ca7\") " pod="openstack/ovsdbserver-sb-0" Jan 26 12:57:40 crc kubenswrapper[4881]: I0126 12:57:40.383677 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-sb-0\" (UID: \"a7a612cf-2de0-4fc3-bd7e-9ddf16da6ca7\") " pod="openstack/ovsdbserver-sb-0" Jan 26 12:57:40 crc kubenswrapper[4881]: I0126 12:57:40.383817 4881 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-sb-0\" (UID: \"a7a612cf-2de0-4fc3-bd7e-9ddf16da6ca7\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/ovsdbserver-sb-0" Jan 26 12:57:40 crc kubenswrapper[4881]: I0126 12:57:40.390003 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a7a612cf-2de0-4fc3-bd7e-9ddf16da6ca7-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"a7a612cf-2de0-4fc3-bd7e-9ddf16da6ca7\") " pod="openstack/ovsdbserver-sb-0" Jan 26 12:57:40 crc kubenswrapper[4881]: I0126 12:57:40.398460 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a7a612cf-2de0-4fc3-bd7e-9ddf16da6ca7-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"a7a612cf-2de0-4fc3-bd7e-9ddf16da6ca7\") " pod="openstack/ovsdbserver-sb-0" Jan 26 12:57:40 crc kubenswrapper[4881]: I0126 12:57:40.398982 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7a612cf-2de0-4fc3-bd7e-9ddf16da6ca7-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"a7a612cf-2de0-4fc3-bd7e-9ddf16da6ca7\") " pod="openstack/ovsdbserver-sb-0" Jan 26 12:57:40 crc kubenswrapper[4881]: I0126 12:57:40.399904 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a7a612cf-2de0-4fc3-bd7e-9ddf16da6ca7-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"a7a612cf-2de0-4fc3-bd7e-9ddf16da6ca7\") " pod="openstack/ovsdbserver-sb-0" Jan 26 12:57:40 crc kubenswrapper[4881]: I0126 12:57:40.404599 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mkw75\" (UniqueName: \"kubernetes.io/projected/a7a612cf-2de0-4fc3-bd7e-9ddf16da6ca7-kube-api-access-mkw75\") pod \"ovsdbserver-sb-0\" (UID: \"a7a612cf-2de0-4fc3-bd7e-9ddf16da6ca7\") " pod="openstack/ovsdbserver-sb-0" Jan 26 12:57:40 crc kubenswrapper[4881]: I0126 12:57:40.404642 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-sb-0\" (UID: \"a7a612cf-2de0-4fc3-bd7e-9ddf16da6ca7\") " pod="openstack/ovsdbserver-sb-0" Jan 26 12:57:40 crc kubenswrapper[4881]: I0126 12:57:40.512024 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 26 12:57:41 crc kubenswrapper[4881]: E0126 12:57:41.033621 4881 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.23:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Jan 26 12:57:41 crc kubenswrapper[4881]: E0126 12:57:41.034060 4881 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.23:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Jan 26 12:57:41 crc kubenswrapper[4881]: E0126 12:57:41.034232 4881 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:38.102.83.23:5001/podified-master-centos10/openstack-neutron-server:watcher_latest,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lj6w2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-6dd95798b9-bdwrb_openstack(f5777be6-3c36-45f2-9caa-eb1ad9da50ce): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 12:57:41 crc kubenswrapper[4881]: E0126 12:57:41.035418 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-6dd95798b9-bdwrb" podUID="f5777be6-3c36-45f2-9caa-eb1ad9da50ce" Jan 26 12:57:41 crc kubenswrapper[4881]: E0126 12:57:41.084725 4881 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = 
copying config: context canceled" image="38.102.83.23:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Jan 26 12:57:41 crc kubenswrapper[4881]: E0126 12:57:41.084776 4881 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.23:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Jan 26 12:57:41 crc kubenswrapper[4881]: E0126 12:57:41.084894 4881 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:38.102.83.23:5001/podified-master-centos10/openstack-neutron-server:watcher_latest,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wswsg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-9cd4f5bf5-8zm5z_openstack(a3bc3df9-3d47-4305-99fe-494c5533b700): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 12:57:41 crc kubenswrapper[4881]: E0126 12:57:41.086065 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-9cd4f5bf5-8zm5z" podUID="a3bc3df9-3d47-4305-99fe-494c5533b700" Jan 26 12:57:41 crc kubenswrapper[4881]: I0126 12:57:41.532062 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-notifications-server-0"] Jan 26 12:57:41 crc kubenswrapper[4881]: W0126 12:57:41.537065 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podab9a358b_8713_4790_a9c4_97b89efcc88f.slice/crio-ae495a302a05ec27963cbf5ee9da3e34a816c6fa466773120fdead70443c4a0e WatchSource:0}: Error finding container 
ae495a302a05ec27963cbf5ee9da3e34a816c6fa466773120fdead70443c4a0e: Status 404 returned error can't find the container with id ae495a302a05ec27963cbf5ee9da3e34a816c6fa466773120fdead70443c4a0e Jan 26 12:57:41 crc kubenswrapper[4881]: I0126 12:57:41.587965 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 26 12:57:41 crc kubenswrapper[4881]: I0126 12:57:41.614728 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-notifications-server-0" event={"ID":"ab9a358b-8713-4790-a9c4-97b89efcc88f","Type":"ContainerStarted","Data":"ae495a302a05ec27963cbf5ee9da3e34a816c6fa466773120fdead70443c4a0e"} Jan 26 12:57:41 crc kubenswrapper[4881]: W0126 12:57:41.644947 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod32ed51d8_b401_412f_925e_0cff27777e55.slice/crio-ec021634450a97459e8540a1ce3117705265a364eea1bbeafbdef27cfc07edd9 WatchSource:0}: Error finding container ec021634450a97459e8540a1ce3117705265a364eea1bbeafbdef27cfc07edd9: Status 404 returned error can't find the container with id ec021634450a97459e8540a1ce3117705265a364eea1bbeafbdef27cfc07edd9 Jan 26 12:57:41 crc kubenswrapper[4881]: I0126 12:57:41.730633 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-64594fd94f-vmx2w"] Jan 26 12:57:41 crc kubenswrapper[4881]: W0126 12:57:41.735703 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod98f227bf_5140_4aac_b539_58c2a1fca65d.slice/crio-b947f3406aa6df04364523eb6ed01a022988da99906549ce1fe5d25942fb407e WatchSource:0}: Error finding container b947f3406aa6df04364523eb6ed01a022988da99906549ce1fe5d25942fb407e: Status 404 returned error can't find the container with id b947f3406aa6df04364523eb6ed01a022988da99906549ce1fe5d25942fb407e Jan 26 12:57:41 crc kubenswrapper[4881]: I0126 12:57:41.738942 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 26 12:57:41 crc kubenswrapper[4881]: I0126 12:57:41.744926 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57467f675c-tz7zs"] Jan 26 12:57:41 crc kubenswrapper[4881]: W0126 12:57:41.745823 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda455dd78_e351_449c_903a_5c0e0c50faf5.slice/crio-2b1c76e4511c2958052ff67ec9c58e80aa17b78b3af3d374cfe24022bd108fcf WatchSource:0}: Error finding container 2b1c76e4511c2958052ff67ec9c58e80aa17b78b3af3d374cfe24022bd108fcf: Status 404 returned error can't find the container with id 2b1c76e4511c2958052ff67ec9c58e80aa17b78b3af3d374cfe24022bd108fcf Jan 26 12:57:42 crc kubenswrapper[4881]: I0126 12:57:42.185916 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 26 12:57:42 crc kubenswrapper[4881]: I0126 12:57:42.192612 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 26 12:57:42 crc kubenswrapper[4881]: I0126 12:57:42.195340 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9cd4f5bf5-8zm5z" Jan 26 12:57:42 crc kubenswrapper[4881]: I0126 12:57:42.195869 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6dd95798b9-bdwrb" Jan 26 12:57:42 crc kubenswrapper[4881]: W0126 12:57:42.199761 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod95617a83_815e_4e5d_9b7e_4d3bec591ed8.slice/crio-2a31754e82097d923b0f1f774b89d97006f210d9c80c1bd9ed829822497774af WatchSource:0}: Error finding container 2a31754e82097d923b0f1f774b89d97006f210d9c80c1bd9ed829822497774af: Status 404 returned error can't find the container with id 2a31754e82097d923b0f1f774b89d97006f210d9c80c1bd9ed829822497774af Jan 26 12:57:42 crc kubenswrapper[4881]: W0126 12:57:42.202360 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc8b6753b_d929_47d6_84ec_b72094efad83.slice/crio-ae6bdda8f09870dea25415a063500167cd9d963ba8e89bb963455053fc419096 WatchSource:0}: Error finding container ae6bdda8f09870dea25415a063500167cd9d963ba8e89bb963455053fc419096: Status 404 returned error can't find the container with id ae6bdda8f09870dea25415a063500167cd9d963ba8e89bb963455053fc419096 Jan 26 12:57:42 crc kubenswrapper[4881]: I0126 12:57:42.202978 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-vs5xn"] Jan 26 12:57:42 crc kubenswrapper[4881]: I0126 12:57:42.228902 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5777be6-3c36-45f2-9caa-eb1ad9da50ce-config\") pod \"f5777be6-3c36-45f2-9caa-eb1ad9da50ce\" (UID: \"f5777be6-3c36-45f2-9caa-eb1ad9da50ce\") " Jan 26 12:57:42 crc kubenswrapper[4881]: I0126 12:57:42.228975 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wswsg\" (UniqueName: \"kubernetes.io/projected/a3bc3df9-3d47-4305-99fe-494c5533b700-kube-api-access-wswsg\") pod \"a3bc3df9-3d47-4305-99fe-494c5533b700\" (UID: \"a3bc3df9-3d47-4305-99fe-494c5533b700\") " Jan 26 12:57:42 crc kubenswrapper[4881]: I0126 12:57:42.229051 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f5777be6-3c36-45f2-9caa-eb1ad9da50ce-dns-svc\") pod \"f5777be6-3c36-45f2-9caa-eb1ad9da50ce\" (UID: \"f5777be6-3c36-45f2-9caa-eb1ad9da50ce\") " Jan 26 12:57:42 crc kubenswrapper[4881]: I0126 12:57:42.229077 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3bc3df9-3d47-4305-99fe-494c5533b700-config\") pod \"a3bc3df9-3d47-4305-99fe-494c5533b700\" (UID: \"a3bc3df9-3d47-4305-99fe-494c5533b700\") " Jan 26 12:57:42 crc kubenswrapper[4881]: I0126 12:57:42.229138 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lj6w2\" (UniqueName: \"kubernetes.io/projected/f5777be6-3c36-45f2-9caa-eb1ad9da50ce-kube-api-access-lj6w2\") pod \"f5777be6-3c36-45f2-9caa-eb1ad9da50ce\" (UID: \"f5777be6-3c36-45f2-9caa-eb1ad9da50ce\") " Jan 26 12:57:42 crc kubenswrapper[4881]: I0126 12:57:42.229379 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5777be6-3c36-45f2-9caa-eb1ad9da50ce-config" (OuterVolumeSpecName: "config") pod "f5777be6-3c36-45f2-9caa-eb1ad9da50ce" (UID: "f5777be6-3c36-45f2-9caa-eb1ad9da50ce"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:57:42 crc kubenswrapper[4881]: I0126 12:57:42.229834 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3bc3df9-3d47-4305-99fe-494c5533b700-config" (OuterVolumeSpecName: "config") pod "a3bc3df9-3d47-4305-99fe-494c5533b700" (UID: "a3bc3df9-3d47-4305-99fe-494c5533b700"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:57:42 crc kubenswrapper[4881]: I0126 12:57:42.231021 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5777be6-3c36-45f2-9caa-eb1ad9da50ce-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f5777be6-3c36-45f2-9caa-eb1ad9da50ce" (UID: "f5777be6-3c36-45f2-9caa-eb1ad9da50ce"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:57:42 crc kubenswrapper[4881]: I0126 12:57:42.234724 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3bc3df9-3d47-4305-99fe-494c5533b700-kube-api-access-wswsg" (OuterVolumeSpecName: "kube-api-access-wswsg") pod "a3bc3df9-3d47-4305-99fe-494c5533b700" (UID: "a3bc3df9-3d47-4305-99fe-494c5533b700"). InnerVolumeSpecName "kube-api-access-wswsg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:57:42 crc kubenswrapper[4881]: I0126 12:57:42.238314 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5777be6-3c36-45f2-9caa-eb1ad9da50ce-kube-api-access-lj6w2" (OuterVolumeSpecName: "kube-api-access-lj6w2") pod "f5777be6-3c36-45f2-9caa-eb1ad9da50ce" (UID: "f5777be6-3c36-45f2-9caa-eb1ad9da50ce"). InnerVolumeSpecName "kube-api-access-lj6w2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:57:42 crc kubenswrapper[4881]: I0126 12:57:42.308691 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 26 12:57:42 crc kubenswrapper[4881]: W0126 12:57:42.314212 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfb687a6e_7e1f_4697_8ab1_88ad03dd2951.slice/crio-bbc2b54d76f963912cc62f1957efe0cfca240f59a2c4c01117f97e433c1741f3 WatchSource:0}: Error finding container bbc2b54d76f963912cc62f1957efe0cfca240f59a2c4c01117f97e433c1741f3: Status 404 returned error can't find the container with id bbc2b54d76f963912cc62f1957efe0cfca240f59a2c4c01117f97e433c1741f3 Jan 26 12:57:42 crc kubenswrapper[4881]: I0126 12:57:42.315488 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 26 12:57:42 crc kubenswrapper[4881]: I0126 12:57:42.336540 4881 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5777be6-3c36-45f2-9caa-eb1ad9da50ce-config\") on node \"crc\" DevicePath \"\"" Jan 26 12:57:42 crc kubenswrapper[4881]: I0126 12:57:42.336564 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wswsg\" (UniqueName: \"kubernetes.io/projected/a3bc3df9-3d47-4305-99fe-494c5533b700-kube-api-access-wswsg\") on node \"crc\" DevicePath \"\"" Jan 26 12:57:42 crc kubenswrapper[4881]: I0126 12:57:42.336573 4881 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f5777be6-3c36-45f2-9caa-eb1ad9da50ce-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 12:57:42 crc kubenswrapper[4881]: I0126 12:57:42.336582 4881 reconciler_common.go:293] 
"Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3bc3df9-3d47-4305-99fe-494c5533b700-config\") on node \"crc\" DevicePath \"\"" Jan 26 12:57:42 crc kubenswrapper[4881]: I0126 12:57:42.336592 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lj6w2\" (UniqueName: \"kubernetes.io/projected/f5777be6-3c36-45f2-9caa-eb1ad9da50ce-kube-api-access-lj6w2\") on node \"crc\" DevicePath \"\"" Jan 26 12:57:42 crc kubenswrapper[4881]: W0126 12:57:42.385032 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda01decda_1236_44fa_a384_7fdedc2cc279.slice/crio-c3e7d170ac0952117e92d6c90bbeda0b70098864bd9b7a788583fd9ca2a9fd5c WatchSource:0}: Error finding container c3e7d170ac0952117e92d6c90bbeda0b70098864bd9b7a788583fd9ca2a9fd5c: Status 404 returned error can't find the container with id c3e7d170ac0952117e92d6c90bbeda0b70098864bd9b7a788583fd9ca2a9fd5c Jan 26 12:57:42 crc kubenswrapper[4881]: W0126 12:57:42.386355 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1067dd91_d79f_4165_8c6e_e3309dff7d26.slice/crio-22201f66f70578e050d3fee07513ec1a7bbf8de49212e92d0e86c3adfbd3a6a2 WatchSource:0}: Error finding container 22201f66f70578e050d3fee07513ec1a7bbf8de49212e92d0e86c3adfbd3a6a2: Status 404 returned error can't find the container with id 22201f66f70578e050d3fee07513ec1a7bbf8de49212e92d0e86c3adfbd3a6a2 Jan 26 12:57:42 crc kubenswrapper[4881]: I0126 12:57:42.386556 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d56d856cf-t8gtc"] Jan 26 12:57:42 crc kubenswrapper[4881]: I0126 12:57:42.396599 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 26 12:57:42 crc kubenswrapper[4881]: I0126 12:57:42.469079 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 26 12:57:42 crc kubenswrapper[4881]: I0126 12:57:42.533768 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 26 12:57:42 crc kubenswrapper[4881]: W0126 12:57:42.548497 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod406b36b2_d29f_4224_8bc5_9cfd6f057a48.slice/crio-05f6ad1dce766e12f55e88d7fd2089c06b7d6f7aee3787f09087c22f7a1a0937 WatchSource:0}: Error finding container 05f6ad1dce766e12f55e88d7fd2089c06b7d6f7aee3787f09087c22f7a1a0937: Status 404 returned error can't find the container with id 05f6ad1dce766e12f55e88d7fd2089c06b7d6f7aee3787f09087c22f7a1a0937 Jan 26 12:57:42 crc kubenswrapper[4881]: I0126 12:57:42.625960 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d56d856cf-t8gtc" event={"ID":"a01decda-1236-44fa-a384-7fdedc2cc279","Type":"ContainerStarted","Data":"c11206f94cea685bd9985059243760b6ad2a8a2472d7f31df2bde6151d7d1784"} Jan 26 12:57:42 crc kubenswrapper[4881]: I0126 12:57:42.626002 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d56d856cf-t8gtc" event={"ID":"a01decda-1236-44fa-a384-7fdedc2cc279","Type":"ContainerStarted","Data":"c3e7d170ac0952117e92d6c90bbeda0b70098864bd9b7a788583fd9ca2a9fd5c"} Jan 26 12:57:42 crc kubenswrapper[4881]: I0126 12:57:42.628084 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6dd95798b9-bdwrb" 
event={"ID":"f5777be6-3c36-45f2-9caa-eb1ad9da50ce","Type":"ContainerDied","Data":"2b886e3a967bedaba982f8b5500ac0d1e2935f44972affea4b10827f3e60c158"} Jan 26 12:57:42 crc kubenswrapper[4881]: I0126 12:57:42.628096 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6dd95798b9-bdwrb" Jan 26 12:57:42 crc kubenswrapper[4881]: I0126 12:57:42.630594 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"a7a612cf-2de0-4fc3-bd7e-9ddf16da6ca7","Type":"ContainerStarted","Data":"53a02bd24a3e51ee2d2e661b9beb4f7790d118ccc984a38df3acd0cf55f5576f"} Jan 26 12:57:42 crc kubenswrapper[4881]: I0126 12:57:42.633132 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"c8b6753b-d929-47d6-84ec-b72094efad83","Type":"ContainerStarted","Data":"ae6bdda8f09870dea25415a063500167cd9d963ba8e89bb963455053fc419096"} Jan 26 12:57:42 crc kubenswrapper[4881]: I0126 12:57:42.634988 4881 generic.go:334] "Generic (PLEG): container finished" podID="fd528ad5-ee88-4a39-b948-6364fa84fbe9" containerID="edb6c8d58601dc83f3eaacf79c42081cd4a110b07a85393d43bae09600343456" exitCode=0 Jan 26 12:57:42 crc kubenswrapper[4881]: I0126 12:57:42.635063 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57467f675c-tz7zs" event={"ID":"fd528ad5-ee88-4a39-b948-6364fa84fbe9","Type":"ContainerDied","Data":"edb6c8d58601dc83f3eaacf79c42081cd4a110b07a85393d43bae09600343456"} Jan 26 12:57:42 crc kubenswrapper[4881]: I0126 12:57:42.635082 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57467f675c-tz7zs" event={"ID":"fd528ad5-ee88-4a39-b948-6364fa84fbe9","Type":"ContainerStarted","Data":"06a30068d8d2b70752deffa4afa6fabc01b0749dca5f49e7b2c4f92b9ef9c834"} Jan 26 12:57:42 crc kubenswrapper[4881]: I0126 12:57:42.639291 4881 generic.go:334] "Generic (PLEG): container finished" podID="98f227bf-5140-4aac-b539-58c2a1fca65d" containerID="40b639f5c47b5aece4d3e1b4d1c5cba9861f073abe9149c85799d5a79c9035b3" exitCode=0 Jan 26 12:57:42 crc kubenswrapper[4881]: I0126 12:57:42.639536 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-64594fd94f-vmx2w" event={"ID":"98f227bf-5140-4aac-b539-58c2a1fca65d","Type":"ContainerDied","Data":"40b639f5c47b5aece4d3e1b4d1c5cba9861f073abe9149c85799d5a79c9035b3"} Jan 26 12:57:42 crc kubenswrapper[4881]: I0126 12:57:42.639560 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-64594fd94f-vmx2w" event={"ID":"98f227bf-5140-4aac-b539-58c2a1fca65d","Type":"ContainerStarted","Data":"b947f3406aa6df04364523eb6ed01a022988da99906549ce1fe5d25942fb407e"} Jan 26 12:57:42 crc kubenswrapper[4881]: I0126 12:57:42.648467 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"fb687a6e-7e1f-4697-8ab1-88ad03dd2951","Type":"ContainerStarted","Data":"bbc2b54d76f963912cc62f1957efe0cfca240f59a2c4c01117f97e433c1741f3"} Jan 26 12:57:42 crc kubenswrapper[4881]: I0126 12:57:42.650767 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"a455dd78-e351-449c-903a-5c0e0c50faf5","Type":"ContainerStarted","Data":"2b1c76e4511c2958052ff67ec9c58e80aa17b78b3af3d374cfe24022bd108fcf"} Jan 26 12:57:42 crc kubenswrapper[4881]: I0126 12:57:42.654420 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" 
event={"ID":"32ed51d8-b401-412f-925e-0cff27777e55","Type":"ContainerStarted","Data":"ec021634450a97459e8540a1ce3117705265a364eea1bbeafbdef27cfc07edd9"} Jan 26 12:57:42 crc kubenswrapper[4881]: I0126 12:57:42.656219 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-vs5xn" event={"ID":"e52cbcc1-521d-4a7d-98a6-50ab70a2f82f","Type":"ContainerStarted","Data":"99b920f0d22f9c9ed97340ec34ba8e2481a6247231780d5c9decc52663d07503"} Jan 26 12:57:42 crc kubenswrapper[4881]: I0126 12:57:42.658394 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"406b36b2-d29f-4224-8bc5-9cfd6f057a48","Type":"ContainerStarted","Data":"05f6ad1dce766e12f55e88d7fd2089c06b7d6f7aee3787f09087c22f7a1a0937"} Jan 26 12:57:42 crc kubenswrapper[4881]: I0126 12:57:42.665467 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"a6421b4d-2505-47c4-899d-7f7bd2113cf8","Type":"ContainerStarted","Data":"1443be5571ee024da78f46506894ed39b8b1c235f8e0f9c6290ba05bd8504a1d"} Jan 26 12:57:42 crc kubenswrapper[4881]: I0126 12:57:42.669659 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"95617a83-815e-4e5d-9b7e-4d3bec591ed8","Type":"ContainerStarted","Data":"2a31754e82097d923b0f1f774b89d97006f210d9c80c1bd9ed829822497774af"} Jan 26 12:57:42 crc kubenswrapper[4881]: I0126 12:57:42.671056 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"1067dd91-d79f-4165-8c6e-e3309dff7d26","Type":"ContainerStarted","Data":"22201f66f70578e050d3fee07513ec1a7bbf8de49212e92d0e86c3adfbd3a6a2"} Jan 26 12:57:42 crc kubenswrapper[4881]: I0126 12:57:42.675376 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9cd4f5bf5-8zm5z" event={"ID":"a3bc3df9-3d47-4305-99fe-494c5533b700","Type":"ContainerDied","Data":"155ac39d629b63a77649bb5543574ee605bbd67f1e1987dce6a3b5f1b6b810af"} Jan 26 12:57:42 crc kubenswrapper[4881]: I0126 12:57:42.675458 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9cd4f5bf5-8zm5z" Jan 26 12:57:42 crc kubenswrapper[4881]: I0126 12:57:42.715073 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6dd95798b9-bdwrb"] Jan 26 12:57:42 crc kubenswrapper[4881]: I0126 12:57:42.729083 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6dd95798b9-bdwrb"] Jan 26 12:57:42 crc kubenswrapper[4881]: I0126 12:57:42.768537 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-9cd4f5bf5-8zm5z"] Jan 26 12:57:42 crc kubenswrapper[4881]: I0126 12:57:42.776025 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-9cd4f5bf5-8zm5z"] Jan 26 12:57:43 crc kubenswrapper[4881]: I0126 12:57:43.040614 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-rrqqp"] Jan 26 12:57:43 crc kubenswrapper[4881]: I0126 12:57:43.572940 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-64594fd94f-vmx2w" Jan 26 12:57:43 crc kubenswrapper[4881]: I0126 12:57:43.687982 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-64594fd94f-vmx2w" event={"ID":"98f227bf-5140-4aac-b539-58c2a1fca65d","Type":"ContainerDied","Data":"b947f3406aa6df04364523eb6ed01a022988da99906549ce1fe5d25942fb407e"} Jan 26 12:57:43 crc kubenswrapper[4881]: I0126 12:57:43.688013 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-64594fd94f-vmx2w" Jan 26 12:57:43 crc kubenswrapper[4881]: I0126 12:57:43.688045 4881 scope.go:117] "RemoveContainer" containerID="40b639f5c47b5aece4d3e1b4d1c5cba9861f073abe9149c85799d5a79c9035b3" Jan 26 12:57:43 crc kubenswrapper[4881]: I0126 12:57:43.695095 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57467f675c-tz7zs" event={"ID":"fd528ad5-ee88-4a39-b948-6364fa84fbe9","Type":"ContainerStarted","Data":"07cb88ca2367e29027f94d6cf970fce114174128ff7c0c4fcf90d7cae8a4c0e0"} Jan 26 12:57:43 crc kubenswrapper[4881]: I0126 12:57:43.695186 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57467f675c-tz7zs" Jan 26 12:57:43 crc kubenswrapper[4881]: I0126 12:57:43.698896 4881 generic.go:334] "Generic (PLEG): container finished" podID="a01decda-1236-44fa-a384-7fdedc2cc279" containerID="c11206f94cea685bd9985059243760b6ad2a8a2472d7f31df2bde6151d7d1784" exitCode=0 Jan 26 12:57:43 crc kubenswrapper[4881]: I0126 12:57:43.698940 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d56d856cf-t8gtc" event={"ID":"a01decda-1236-44fa-a384-7fdedc2cc279","Type":"ContainerDied","Data":"c11206f94cea685bd9985059243760b6ad2a8a2472d7f31df2bde6151d7d1784"} Jan 26 12:57:43 crc kubenswrapper[4881]: I0126 12:57:43.698967 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d56d856cf-t8gtc" event={"ID":"a01decda-1236-44fa-a384-7fdedc2cc279","Type":"ContainerStarted","Data":"6d16ab4dda4b11c446d31515c3c30d2b0eff668a46797bca053da847531133a9"} Jan 26 12:57:43 crc kubenswrapper[4881]: I0126 12:57:43.699056 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7d56d856cf-t8gtc" Jan 26 12:57:43 crc kubenswrapper[4881]: I0126 12:57:43.715053 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57467f675c-tz7zs" podStartSLOduration=17.637706389 podStartE2EDuration="17.715037315s" podCreationTimestamp="2026-01-26 12:57:26 +0000 UTC" firstStartedPulling="2026-01-26 12:57:41.743585426 +0000 UTC m=+1334.222895452" lastFinishedPulling="2026-01-26 12:57:41.820916352 +0000 UTC m=+1334.300226378" observedRunningTime="2026-01-26 12:57:43.707207424 +0000 UTC m=+1336.186517450" watchObservedRunningTime="2026-01-26 12:57:43.715037315 +0000 UTC m=+1336.194347341" Jan 26 12:57:43 crc kubenswrapper[4881]: I0126 12:57:43.724337 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7d56d856cf-t8gtc" podStartSLOduration=18.724319802 podStartE2EDuration="18.724319802s" podCreationTimestamp="2026-01-26 12:57:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:57:43.723525192 +0000 UTC m=+1336.202835218" watchObservedRunningTime="2026-01-26 12:57:43.724319802 +0000 UTC m=+1336.203629818" Jan 26 12:57:43 crc kubenswrapper[4881]: I0126 
12:57:43.772660 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98f227bf-5140-4aac-b539-58c2a1fca65d-config\") pod \"98f227bf-5140-4aac-b539-58c2a1fca65d\" (UID: \"98f227bf-5140-4aac-b539-58c2a1fca65d\") " Jan 26 12:57:43 crc kubenswrapper[4881]: I0126 12:57:43.772776 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/98f227bf-5140-4aac-b539-58c2a1fca65d-dns-svc\") pod \"98f227bf-5140-4aac-b539-58c2a1fca65d\" (UID: \"98f227bf-5140-4aac-b539-58c2a1fca65d\") " Jan 26 12:57:43 crc kubenswrapper[4881]: I0126 12:57:43.772847 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2czwj\" (UniqueName: \"kubernetes.io/projected/98f227bf-5140-4aac-b539-58c2a1fca65d-kube-api-access-2czwj\") pod \"98f227bf-5140-4aac-b539-58c2a1fca65d\" (UID: \"98f227bf-5140-4aac-b539-58c2a1fca65d\") " Jan 26 12:57:43 crc kubenswrapper[4881]: I0126 12:57:43.789576 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98f227bf-5140-4aac-b539-58c2a1fca65d-kube-api-access-2czwj" (OuterVolumeSpecName: "kube-api-access-2czwj") pod "98f227bf-5140-4aac-b539-58c2a1fca65d" (UID: "98f227bf-5140-4aac-b539-58c2a1fca65d"). InnerVolumeSpecName "kube-api-access-2czwj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:57:43 crc kubenswrapper[4881]: I0126 12:57:43.791603 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98f227bf-5140-4aac-b539-58c2a1fca65d-config" (OuterVolumeSpecName: "config") pod "98f227bf-5140-4aac-b539-58c2a1fca65d" (UID: "98f227bf-5140-4aac-b539-58c2a1fca65d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:57:43 crc kubenswrapper[4881]: I0126 12:57:43.792671 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98f227bf-5140-4aac-b539-58c2a1fca65d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "98f227bf-5140-4aac-b539-58c2a1fca65d" (UID: "98f227bf-5140-4aac-b539-58c2a1fca65d"). InnerVolumeSpecName "dns-svc". 
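
The "operationExecutor.UnmountVolume started", "UnmountVolume.TearDown succeeded", and "Volume detached" entries around this point trace the kubelet volume manager's teardown path for the deleted dnsmasq-dns-64594fd94f-vmx2w pod: the reconciler finds volumes still mounted for a pod that is gone from the desired state, invokes the plugin's TearDown, and only then reports the volume detached. Below is a minimal illustrative Go sketch of that ordering (a toy model, not kubelet source); the volume and pod identifiers are copied from the entries above.

package main

import "fmt"

// mountedVolume is a toy stand-in for an actual-state-of-world record.
type mountedVolume struct {
	outerName string // name as the pod spec references it, e.g. "dns-svc"
	innerName string // name as the volume plugin resolves it
	plugin    string // e.g. "kubernetes.io/configmap"
	podUID    string
}

// tearDown stands in for the plugin's TearDown call; the real plugins
// unmount and clean up /var/lib/kubelet/pods/<uid>/volumes/... directories.
func tearDown(v mountedVolume) error { return nil }

func main() {
	// Volumes still recorded as mounted for the deleted pod (from the log).
	actual := []mountedVolume{
		{"config", "config", "kubernetes.io/configmap", "98f227bf-5140-4aac-b539-58c2a1fca65d"},
		{"dns-svc", "dns-svc", "kubernetes.io/configmap", "98f227bf-5140-4aac-b539-58c2a1fca65d"},
		{"kube-api-access-2czwj", "kube-api-access-2czwj", "kubernetes.io/projected", "98f227bf-5140-4aac-b539-58c2a1fca65d"},
	}
	for _, v := range actual {
		fmt.Printf("UnmountVolume started for volume %q pod %q\n", v.outerName, v.podUID)
		if err := tearDown(v); err != nil {
			// A failed TearDown is retried on the next reconciler pass.
			fmt.Printf("UnmountVolume.TearDown failed for %q: %v\n", v.innerName, err)
			continue
		}
		fmt.Printf("UnmountVolume.TearDown succeeded for volume %q (OuterVolumeSpecName: %q), plugin %q\n",
			v.innerName, v.outerName, v.plugin)
		fmt.Printf("Volume detached for volume %q DevicePath %q\n", v.outerName, "")
	}
}

In these entries OuterVolumeSpecName (the pod-spec name) and InnerVolumeSpecName (the plugin-resolved name) happen to be identical, as is typical for configmap and projected volumes; only after all three volumes detach does kubelet log "Cleaned up orphaned pod volumes dir" for this pod UID (at 12:57:44 below).
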
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:57:43 crc kubenswrapper[4881]: I0126 12:57:43.875325 4881 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98f227bf-5140-4aac-b539-58c2a1fca65d-config\") on node \"crc\" DevicePath \"\"" Jan 26 12:57:43 crc kubenswrapper[4881]: I0126 12:57:43.875361 4881 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/98f227bf-5140-4aac-b539-58c2a1fca65d-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 12:57:43 crc kubenswrapper[4881]: I0126 12:57:43.875372 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2czwj\" (UniqueName: \"kubernetes.io/projected/98f227bf-5140-4aac-b539-58c2a1fca65d-kube-api-access-2czwj\") on node \"crc\" DevicePath \"\"" Jan 26 12:57:44 crc kubenswrapper[4881]: I0126 12:57:44.036073 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-64594fd94f-vmx2w"] Jan 26 12:57:44 crc kubenswrapper[4881]: I0126 12:57:44.037033 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-64594fd94f-vmx2w"] Jan 26 12:57:44 crc kubenswrapper[4881]: I0126 12:57:44.094252 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98f227bf-5140-4aac-b539-58c2a1fca65d" path="/var/lib/kubelet/pods/98f227bf-5140-4aac-b539-58c2a1fca65d/volumes" Jan 26 12:57:44 crc kubenswrapper[4881]: I0126 12:57:44.095299 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a3bc3df9-3d47-4305-99fe-494c5533b700" path="/var/lib/kubelet/pods/a3bc3df9-3d47-4305-99fe-494c5533b700/volumes" Jan 26 12:57:44 crc kubenswrapper[4881]: I0126 12:57:44.095644 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5777be6-3c36-45f2-9caa-eb1ad9da50ce" path="/var/lib/kubelet/pods/f5777be6-3c36-45f2-9caa-eb1ad9da50ce/volumes" Jan 26 12:57:45 crc kubenswrapper[4881]: W0126 12:57:45.264658 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod31e8c456_b53d_456e_a8d1_69f26e0602ad.slice/crio-384e3f9e37919b3afd554e044aef0560e93582d34472f25c3d722769a5c13a61 WatchSource:0}: Error finding container 384e3f9e37919b3afd554e044aef0560e93582d34472f25c3d722769a5c13a61: Status 404 returned error can't find the container with id 384e3f9e37919b3afd554e044aef0560e93582d34472f25c3d722769a5c13a61 Jan 26 12:57:45 crc kubenswrapper[4881]: I0126 12:57:45.717202 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-rrqqp" event={"ID":"31e8c456-b53d-456e-a8d1-69f26e0602ad","Type":"ContainerStarted","Data":"384e3f9e37919b3afd554e044aef0560e93582d34472f25c3d722769a5c13a61"} Jan 26 12:57:51 crc kubenswrapper[4881]: I0126 12:57:51.063877 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7d56d856cf-t8gtc" Jan 26 12:57:51 crc kubenswrapper[4881]: I0126 12:57:51.376607 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-57467f675c-tz7zs" Jan 26 12:57:51 crc kubenswrapper[4881]: I0126 12:57:51.446017 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d56d856cf-t8gtc"] Jan 26 12:57:51 crc kubenswrapper[4881]: I0126 12:57:51.766334 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7d56d856cf-t8gtc" podUID="a01decda-1236-44fa-a384-7fdedc2cc279" 
containerName="dnsmasq-dns" containerID="cri-o://6d16ab4dda4b11c446d31515c3c30d2b0eff668a46797bca053da847531133a9" gracePeriod=10 Jan 26 12:57:56 crc kubenswrapper[4881]: I0126 12:57:56.063559 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-7d56d856cf-t8gtc" podUID="a01decda-1236-44fa-a384-7fdedc2cc279" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.101:5353: connect: connection refused" Jan 26 12:58:01 crc kubenswrapper[4881]: E0126 12:58:01.039820 4881 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-prometheus-config-reloader-rhel9@sha256:9a2097bc5b2e02bc1703f64c452ce8fe4bc6775b732db930ff4770b76ae4653a" Jan 26 12:58:01 crc kubenswrapper[4881]: E0126 12:58:01.040800 4881 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init-config-reloader,Image:registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-prometheus-config-reloader-rhel9@sha256:9a2097bc5b2e02bc1703f64c452ce8fe4bc6775b732db930ff4770b76ae4653a,Command:[/bin/prometheus-config-reloader],Args:[--watch-interval=0 --listen-address=:8081 --config-file=/etc/prometheus/config/prometheus.yaml.gz --config-envsubst-file=/etc/prometheus/config_out/prometheus.env.yaml --watched-dir=/etc/prometheus/rules/prometheus-metric-storage-rulefiles-0 --watched-dir=/etc/prometheus/rules/prometheus-metric-storage-rulefiles-1 --watched-dir=/etc/prometheus/rules/prometheus-metric-storage-rulefiles-2],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:reloader-init,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:SHARD,Value:0,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/etc/prometheus/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-out,ReadOnly:false,MountPath:/etc/prometheus/config_out,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:prometheus-metric-storage-rulefiles-0,ReadOnly:false,MountPath:/etc/prometheus/rules/prometheus-metric-storage-rulefiles-0,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:prometheus-metric-storage-rulefiles-1,ReadOnly:false,MountPath:/etc/prometheus/rules/prometheus-metric-storage-rulefiles-1,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:prometheus-metric-storage-rulefiles-2,ReadOnly:false,MountPath:/etc/prometheus/rules/prometheus-metric-storage-rulefiles-2,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5djds,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod prometheus-metric-storage-0_openstack(1067dd91-d79f-4165-8c6e-e3309dff7d26): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 26 12:58:01 crc kubenswrapper[4881]: E0126 12:58:01.041926 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init-config-reloader\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/prometheus-metric-storage-0" podUID="1067dd91-d79f-4165-8c6e-e3309dff7d26" Jan 26 12:58:01 crc kubenswrapper[4881]: I0126 12:58:01.063507 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-7d56d856cf-t8gtc" podUID="a01decda-1236-44fa-a384-7fdedc2cc279" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.101:5353: connect: connection refused" Jan 26 12:58:01 crc kubenswrapper[4881]: E0126 12:58:01.250011 4881 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.23:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest" Jan 26 12:58:01 crc kubenswrapper[4881]: E0126 12:58:01.250062 4881 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.23:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest" Jan 26 12:58:01 crc kubenswrapper[4881]: E0126 12:58:01.250192 4881 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:ovn-controller,Image:38.102.83.23:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest,Command:[ovn-controller --pidfile unix:/run/openvswitch/db.sock --certificate=/etc/pki/tls/certs/ovndb.crt --private-key=/etc/pki/tls/private/ovndb.key --ca-cert=/etc/pki/tls/certs/ovndbca.crt],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n688h54dh559hb8h665h579h56fh65bh5cbh85h568h548h57bhfbh54ch5cfh5c5h5b6h5f8h7ch56fh6bh666h59ch5f4hf8h59fh685h64h598h587h5dq,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:var-run,ReadOnly:false,MountPath:/var/run/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-run-ovn,ReadOnly:false,MountPath:/var/run/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-log-ovn,ReadOnly:false,MountPath:/var/log/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndb.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovndb.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m8867,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/ovn_controller_liveness.sh],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/ovn_controller_readiness.sh],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/usr/share/ovn/scripts/ovn-ctl stop_controller],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_ADMIN SYS_ADMIN SYS_NICE],Drop:[],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
ovn-controller-vs5xn_openstack(e52cbcc1-521d-4a7d-98a6-50ab70a2f82f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 12:58:01 crc kubenswrapper[4881]: E0126 12:58:01.251446 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovn-controller\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovn-controller-vs5xn" podUID="e52cbcc1-521d-4a7d-98a6-50ab70a2f82f" Jan 26 12:58:01 crc kubenswrapper[4881]: I0126 12:58:01.700772 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d56d856cf-t8gtc" Jan 26 12:58:01 crc kubenswrapper[4881]: I0126 12:58:01.815800 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a01decda-1236-44fa-a384-7fdedc2cc279-config\") pod \"a01decda-1236-44fa-a384-7fdedc2cc279\" (UID: \"a01decda-1236-44fa-a384-7fdedc2cc279\") " Jan 26 12:58:01 crc kubenswrapper[4881]: I0126 12:58:01.815976 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a01decda-1236-44fa-a384-7fdedc2cc279-dns-svc\") pod \"a01decda-1236-44fa-a384-7fdedc2cc279\" (UID: \"a01decda-1236-44fa-a384-7fdedc2cc279\") " Jan 26 12:58:01 crc kubenswrapper[4881]: I0126 12:58:01.816039 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-47zmx\" (UniqueName: \"kubernetes.io/projected/a01decda-1236-44fa-a384-7fdedc2cc279-kube-api-access-47zmx\") pod \"a01decda-1236-44fa-a384-7fdedc2cc279\" (UID: \"a01decda-1236-44fa-a384-7fdedc2cc279\") " Jan 26 12:58:01 crc kubenswrapper[4881]: I0126 12:58:01.821665 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a01decda-1236-44fa-a384-7fdedc2cc279-kube-api-access-47zmx" (OuterVolumeSpecName: "kube-api-access-47zmx") pod "a01decda-1236-44fa-a384-7fdedc2cc279" (UID: "a01decda-1236-44fa-a384-7fdedc2cc279"). InnerVolumeSpecName "kube-api-access-47zmx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:58:01 crc kubenswrapper[4881]: I0126 12:58:01.866666 4881 generic.go:334] "Generic (PLEG): container finished" podID="a01decda-1236-44fa-a384-7fdedc2cc279" containerID="6d16ab4dda4b11c446d31515c3c30d2b0eff668a46797bca053da847531133a9" exitCode=0 Jan 26 12:58:01 crc kubenswrapper[4881]: I0126 12:58:01.866722 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d56d856cf-t8gtc" Jan 26 12:58:01 crc kubenswrapper[4881]: I0126 12:58:01.866754 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d56d856cf-t8gtc" event={"ID":"a01decda-1236-44fa-a384-7fdedc2cc279","Type":"ContainerDied","Data":"6d16ab4dda4b11c446d31515c3c30d2b0eff668a46797bca053da847531133a9"} Jan 26 12:58:01 crc kubenswrapper[4881]: I0126 12:58:01.866857 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d56d856cf-t8gtc" event={"ID":"a01decda-1236-44fa-a384-7fdedc2cc279","Type":"ContainerDied","Data":"c3e7d170ac0952117e92d6c90bbeda0b70098864bd9b7a788583fd9ca2a9fd5c"} Jan 26 12:58:01 crc kubenswrapper[4881]: I0126 12:58:01.866899 4881 scope.go:117] "RemoveContainer" containerID="6d16ab4dda4b11c446d31515c3c30d2b0eff668a46797bca053da847531133a9" Jan 26 12:58:01 crc kubenswrapper[4881]: E0126 12:58:01.869086 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init-config-reloader\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-prometheus-config-reloader-rhel9@sha256:9a2097bc5b2e02bc1703f64c452ce8fe4bc6775b732db930ff4770b76ae4653a\\\"\"" pod="openstack/prometheus-metric-storage-0" podUID="1067dd91-d79f-4165-8c6e-e3309dff7d26" Jan 26 12:58:01 crc kubenswrapper[4881]: E0126 12:58:01.873741 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovn-controller\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.23:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest\\\"\"" pod="openstack/ovn-controller-vs5xn" podUID="e52cbcc1-521d-4a7d-98a6-50ab70a2f82f" Jan 26 12:58:01 crc kubenswrapper[4881]: I0126 12:58:01.875347 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a01decda-1236-44fa-a384-7fdedc2cc279-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a01decda-1236-44fa-a384-7fdedc2cc279" (UID: "a01decda-1236-44fa-a384-7fdedc2cc279"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:58:01 crc kubenswrapper[4881]: I0126 12:58:01.889101 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a01decda-1236-44fa-a384-7fdedc2cc279-config" (OuterVolumeSpecName: "config") pod "a01decda-1236-44fa-a384-7fdedc2cc279" (UID: "a01decda-1236-44fa-a384-7fdedc2cc279"). InnerVolumeSpecName "config". 
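
The ErrImagePull / ImagePullBackOff pairs above show kubelet's standard image-retry path: a pull that fails (here with "context canceled", i.e. the copy was interrupted mid-transfer) first surfaces as ErrImagePull, and subsequent sync attempts report ImagePullBackOff while the exponential back-off window for that image is still open. A minimal Go sketch of the delay schedule follows; the 10-second base and 300-second cap mirror the documented upstream Kubernetes defaults and are assumptions here, not values read from this log.

package main

import (
	"fmt"
	"time"
)

func main() {
	// Assumed kubelet-style image pull back-off: doubling delay, capped.
	delay := 10 * time.Second
	const maxDelay = 300 * time.Second
	for attempt := 1; attempt <= 8; attempt++ {
		fmt.Printf("attempt %d failed: ImagePullBackOff, next pull in %s\n", attempt, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}

The back-off is tracked per image and is reset after a successful pull; in this stretch of the log the affected pods (prometheus-metric-storage-0, ovn-controller-vs5xn, kube-state-metrics-0) remain in back-off, while pods whose pulls completed, such as memcached-0 with lastFinishedPulling at 12:58:01, go on to report ContainerStarted with no Back-off entries.
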
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:58:01 crc kubenswrapper[4881]: I0126 12:58:01.917478 4881 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a01decda-1236-44fa-a384-7fdedc2cc279-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 12:58:01 crc kubenswrapper[4881]: I0126 12:58:01.917537 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-47zmx\" (UniqueName: \"kubernetes.io/projected/a01decda-1236-44fa-a384-7fdedc2cc279-kube-api-access-47zmx\") on node \"crc\" DevicePath \"\"" Jan 26 12:58:01 crc kubenswrapper[4881]: I0126 12:58:01.917554 4881 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a01decda-1236-44fa-a384-7fdedc2cc279-config\") on node \"crc\" DevicePath \"\"" Jan 26 12:58:02 crc kubenswrapper[4881]: I0126 12:58:02.189967 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d56d856cf-t8gtc"] Jan 26 12:58:02 crc kubenswrapper[4881]: I0126 12:58:02.195446 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7d56d856cf-t8gtc"] Jan 26 12:58:02 crc kubenswrapper[4881]: E0126 12:58:02.323634 4881 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Jan 26 12:58:02 crc kubenswrapper[4881]: E0126 12:58:02.323684 4881 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Jan 26 12:58:02 crc kubenswrapper[4881]: E0126 12:58:02.323813 4881 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-state-metrics,Image:registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0,Command:[],Args:[--resources=pods --namespaces=openstack],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http-metrics,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},ContainerPort{Name:telemetry,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gzqq4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-state-metrics-0_openstack(95617a83-815e-4e5d-9b7e-4d3bec591ed8): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 26 12:58:02 crc kubenswrapper[4881]: E0126 12:58:02.324920 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/kube-state-metrics-0" podUID="95617a83-815e-4e5d-9b7e-4d3bec591ed8" Jan 26 12:58:02 crc kubenswrapper[4881]: I0126 12:58:02.344469 4881 scope.go:117] "RemoveContainer" containerID="c11206f94cea685bd9985059243760b6ad2a8a2472d7f31df2bde6151d7d1784" Jan 26 12:58:02 crc kubenswrapper[4881]: I0126 12:58:02.453164 4881 scope.go:117] "RemoveContainer" containerID="6d16ab4dda4b11c446d31515c3c30d2b0eff668a46797bca053da847531133a9" Jan 26 12:58:02 crc kubenswrapper[4881]: E0126 12:58:02.454155 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d16ab4dda4b11c446d31515c3c30d2b0eff668a46797bca053da847531133a9\": container with ID starting with 6d16ab4dda4b11c446d31515c3c30d2b0eff668a46797bca053da847531133a9 not found: ID does not exist" containerID="6d16ab4dda4b11c446d31515c3c30d2b0eff668a46797bca053da847531133a9" Jan 26 12:58:02 crc kubenswrapper[4881]: I0126 12:58:02.454195 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d16ab4dda4b11c446d31515c3c30d2b0eff668a46797bca053da847531133a9"} err="failed to get container status \"6d16ab4dda4b11c446d31515c3c30d2b0eff668a46797bca053da847531133a9\": rpc error: code = NotFound desc = could not find container \"6d16ab4dda4b11c446d31515c3c30d2b0eff668a46797bca053da847531133a9\": container with ID starting with 6d16ab4dda4b11c446d31515c3c30d2b0eff668a46797bca053da847531133a9 not found: ID does not exist" Jan 26 12:58:02 crc kubenswrapper[4881]: I0126 12:58:02.454219 4881 scope.go:117] "RemoveContainer" containerID="c11206f94cea685bd9985059243760b6ad2a8a2472d7f31df2bde6151d7d1784" Jan 26 12:58:02 crc kubenswrapper[4881]: E0126 12:58:02.454690 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c11206f94cea685bd9985059243760b6ad2a8a2472d7f31df2bde6151d7d1784\": container with ID starting with c11206f94cea685bd9985059243760b6ad2a8a2472d7f31df2bde6151d7d1784 not found: ID does not exist" containerID="c11206f94cea685bd9985059243760b6ad2a8a2472d7f31df2bde6151d7d1784" Jan 26 12:58:02 
crc kubenswrapper[4881]: I0126 12:58:02.454716 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c11206f94cea685bd9985059243760b6ad2a8a2472d7f31df2bde6151d7d1784"} err="failed to get container status \"c11206f94cea685bd9985059243760b6ad2a8a2472d7f31df2bde6151d7d1784\": rpc error: code = NotFound desc = could not find container \"c11206f94cea685bd9985059243760b6ad2a8a2472d7f31df2bde6151d7d1784\": container with ID starting with c11206f94cea685bd9985059243760b6ad2a8a2472d7f31df2bde6151d7d1784 not found: ID does not exist" Jan 26 12:58:02 crc kubenswrapper[4881]: I0126 12:58:02.874150 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"32ed51d8-b401-412f-925e-0cff27777e55","Type":"ContainerStarted","Data":"e7395d49b269172cd789bc29b0d1bc4fcec20047b6dadc2a15d77fcacaa0b57e"} Jan 26 12:58:02 crc kubenswrapper[4881]: I0126 12:58:02.876108 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"406b36b2-d29f-4224-8bc5-9cfd6f057a48","Type":"ContainerStarted","Data":"2c9bef0e1342802fee6b50994f7a46fce792e9cbe2c7b1c7f0a08235a97ea81a"} Jan 26 12:58:02 crc kubenswrapper[4881]: I0126 12:58:02.877915 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-rrqqp" event={"ID":"31e8c456-b53d-456e-a8d1-69f26e0602ad","Type":"ContainerStarted","Data":"586b10eebf4ae10192587a0e79cad13887b0f5b8586f4d246a6edbc45d828211"} Jan 26 12:58:02 crc kubenswrapper[4881]: I0126 12:58:02.881265 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"a6421b4d-2505-47c4-899d-7f7bd2113cf8","Type":"ContainerStarted","Data":"f4235a328c925f8f1bb3a1012fb2465ecb4638964e0fb702bff8e5b0ba3787fa"} Jan 26 12:58:02 crc kubenswrapper[4881]: I0126 12:58:02.881415 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Jan 26 12:58:02 crc kubenswrapper[4881]: I0126 12:58:02.883439 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"a7a612cf-2de0-4fc3-bd7e-9ddf16da6ca7","Type":"ContainerStarted","Data":"1eaa64b492d4a75a6faee433cc9929bb43694f03967a0975f013654b27d4e71c"} Jan 26 12:58:02 crc kubenswrapper[4881]: I0126 12:58:02.885081 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"c8b6753b-d929-47d6-84ec-b72094efad83","Type":"ContainerStarted","Data":"68a77113267e27670ea3bcb4110c83d84a42d30acb9a05226ecbdb17308f6bdb"} Jan 26 12:58:02 crc kubenswrapper[4881]: E0126 12:58:02.886604 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0\\\"\"" pod="openstack/kube-state-metrics-0" podUID="95617a83-815e-4e5d-9b7e-4d3bec591ed8" Jan 26 12:58:02 crc kubenswrapper[4881]: I0126 12:58:02.942991 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=14.017898652 podStartE2EDuration="32.942973053s" podCreationTimestamp="2026-01-26 12:57:30 +0000 UTC" firstStartedPulling="2026-01-26 12:57:42.335328418 +0000 UTC m=+1334.814638444" lastFinishedPulling="2026-01-26 12:58:01.260402829 +0000 UTC m=+1353.739712845" observedRunningTime="2026-01-26 12:58:02.935953682 +0000 UTC m=+1355.415263698" watchObservedRunningTime="2026-01-26 12:58:02.942973053 +0000 UTC 
m=+1355.422283089" Jan 26 12:58:03 crc kubenswrapper[4881]: I0126 12:58:03.894916 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"a455dd78-e351-449c-903a-5c0e0c50faf5","Type":"ContainerStarted","Data":"7250f2723c74e8d207341ab5049c947cdf5eb7b5dafa1797f04a53d0bbac63e5"} Jan 26 12:58:03 crc kubenswrapper[4881]: I0126 12:58:03.897793 4881 generic.go:334] "Generic (PLEG): container finished" podID="31e8c456-b53d-456e-a8d1-69f26e0602ad" containerID="586b10eebf4ae10192587a0e79cad13887b0f5b8586f4d246a6edbc45d828211" exitCode=0 Jan 26 12:58:03 crc kubenswrapper[4881]: I0126 12:58:03.897846 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-rrqqp" event={"ID":"31e8c456-b53d-456e-a8d1-69f26e0602ad","Type":"ContainerDied","Data":"586b10eebf4ae10192587a0e79cad13887b0f5b8586f4d246a6edbc45d828211"} Jan 26 12:58:03 crc kubenswrapper[4881]: I0126 12:58:03.900557 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"fb687a6e-7e1f-4697-8ab1-88ad03dd2951","Type":"ContainerStarted","Data":"b51af514578b9120c05cfe232e2a4c498bd4e40f69c0be322b0907415bfed6a9"} Jan 26 12:58:04 crc kubenswrapper[4881]: I0126 12:58:04.093833 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a01decda-1236-44fa-a384-7fdedc2cc279" path="/var/lib/kubelet/pods/a01decda-1236-44fa-a384-7fdedc2cc279/volumes" Jan 26 12:58:04 crc kubenswrapper[4881]: I0126 12:58:04.916656 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-notifications-server-0" event={"ID":"ab9a358b-8713-4790-a9c4-97b89efcc88f","Type":"ContainerStarted","Data":"e54d84c98843f6145cffc470d01c69913e0d35c69c2c3b4bf1d8ca15c6170cb3"} Jan 26 12:58:04 crc kubenswrapper[4881]: I0126 12:58:04.922868 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-rrqqp" event={"ID":"31e8c456-b53d-456e-a8d1-69f26e0602ad","Type":"ContainerStarted","Data":"30a58ed38a286d742ce405cff4e78368f5b6b7e9dc4ae2c46891d088e78776cc"} Jan 26 12:58:06 crc kubenswrapper[4881]: I0126 12:58:06.942785 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"406b36b2-d29f-4224-8bc5-9cfd6f057a48","Type":"ContainerStarted","Data":"61d64bf09b7d2de47c59b25cae99f0c858024cff18738343cf46b7196b1027aa"} Jan 26 12:58:06 crc kubenswrapper[4881]: I0126 12:58:06.949982 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-rrqqp" event={"ID":"31e8c456-b53d-456e-a8d1-69f26e0602ad","Type":"ContainerStarted","Data":"624d851ae85683c47c5d51f7be60e8e78532014e5d0b91c4ea50a09c2edfcbfe"} Jan 26 12:58:06 crc kubenswrapper[4881]: I0126 12:58:06.950051 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-rrqqp" Jan 26 12:58:06 crc kubenswrapper[4881]: I0126 12:58:06.950090 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-rrqqp" Jan 26 12:58:06 crc kubenswrapper[4881]: I0126 12:58:06.952959 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"a7a612cf-2de0-4fc3-bd7e-9ddf16da6ca7","Type":"ContainerStarted","Data":"66d50d66f4e5eafb0a509a19d8c8bc53495e947201a513bd9daeb8c2b860864a"} Jan 26 12:58:06 crc kubenswrapper[4881]: I0126 12:58:06.962511 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=8.405100817 
podStartE2EDuration="31.962484289s" podCreationTimestamp="2026-01-26 12:57:35 +0000 UTC" firstStartedPulling="2026-01-26 12:57:42.552531745 +0000 UTC m=+1335.031841771" lastFinishedPulling="2026-01-26 12:58:06.109915197 +0000 UTC m=+1358.589225243" observedRunningTime="2026-01-26 12:58:06.962458208 +0000 UTC m=+1359.441768264" watchObservedRunningTime="2026-01-26 12:58:06.962484289 +0000 UTC m=+1359.441794355" Jan 26 12:58:07 crc kubenswrapper[4881]: I0126 12:58:06.998699 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-rrqqp" podStartSLOduration=15.652589334 podStartE2EDuration="31.998665931s" podCreationTimestamp="2026-01-26 12:57:35 +0000 UTC" firstStartedPulling="2026-01-26 12:57:45.268401317 +0000 UTC m=+1337.747711343" lastFinishedPulling="2026-01-26 12:58:01.614477914 +0000 UTC m=+1354.093787940" observedRunningTime="2026-01-26 12:58:06.991202619 +0000 UTC m=+1359.470512675" watchObservedRunningTime="2026-01-26 12:58:06.998665931 +0000 UTC m=+1359.477975967" Jan 26 12:58:07 crc kubenswrapper[4881]: I0126 12:58:07.022913 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=4.358440379 podStartE2EDuration="28.022882952s" podCreationTimestamp="2026-01-26 12:57:39 +0000 UTC" firstStartedPulling="2026-01-26 12:57:42.463286358 +0000 UTC m=+1334.942596384" lastFinishedPulling="2026-01-26 12:58:06.127728911 +0000 UTC m=+1358.607038957" observedRunningTime="2026-01-26 12:58:07.014103418 +0000 UTC m=+1359.493413514" watchObservedRunningTime="2026-01-26 12:58:07.022882952 +0000 UTC m=+1359.502192988" Jan 26 12:58:07 crc kubenswrapper[4881]: I0126 12:58:07.172210 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Jan 26 12:58:07 crc kubenswrapper[4881]: I0126 12:58:07.172746 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Jan 26 12:58:07 crc kubenswrapper[4881]: I0126 12:58:07.239703 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Jan 26 12:58:07 crc kubenswrapper[4881]: E0126 12:58:07.503754 4881 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod32ed51d8_b401_412f_925e_0cff27777e55.slice/crio-e7395d49b269172cd789bc29b0d1bc4fcec20047b6dadc2a15d77fcacaa0b57e.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod32ed51d8_b401_412f_925e_0cff27777e55.slice/crio-conmon-e7395d49b269172cd789bc29b0d1bc4fcec20047b6dadc2a15d77fcacaa0b57e.scope\": RecentStats: unable to find data in memory cache]" Jan 26 12:58:07 crc kubenswrapper[4881]: I0126 12:58:07.512180 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Jan 26 12:58:07 crc kubenswrapper[4881]: I0126 12:58:07.568096 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Jan 26 12:58:07 crc kubenswrapper[4881]: I0126 12:58:07.967817 4881 generic.go:334] "Generic (PLEG): container finished" podID="32ed51d8-b401-412f-925e-0cff27777e55" containerID="e7395d49b269172cd789bc29b0d1bc4fcec20047b6dadc2a15d77fcacaa0b57e" exitCode=0 Jan 26 12:58:07 crc kubenswrapper[4881]: I0126 12:58:07.967891 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/openstack-galera-0" event={"ID":"32ed51d8-b401-412f-925e-0cff27777e55","Type":"ContainerDied","Data":"e7395d49b269172cd789bc29b0d1bc4fcec20047b6dadc2a15d77fcacaa0b57e"} Jan 26 12:58:07 crc kubenswrapper[4881]: I0126 12:58:07.970810 4881 generic.go:334] "Generic (PLEG): container finished" podID="c8b6753b-d929-47d6-84ec-b72094efad83" containerID="68a77113267e27670ea3bcb4110c83d84a42d30acb9a05226ecbdb17308f6bdb" exitCode=0 Jan 26 12:58:07 crc kubenswrapper[4881]: I0126 12:58:07.971044 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"c8b6753b-d929-47d6-84ec-b72094efad83","Type":"ContainerDied","Data":"68a77113267e27670ea3bcb4110c83d84a42d30acb9a05226ecbdb17308f6bdb"} Jan 26 12:58:07 crc kubenswrapper[4881]: I0126 12:58:07.972661 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Jan 26 12:58:08 crc kubenswrapper[4881]: I0126 12:58:08.061107 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Jan 26 12:58:08 crc kubenswrapper[4881]: I0126 12:58:08.109986 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Jan 26 12:58:08 crc kubenswrapper[4881]: I0126 12:58:08.325434 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57b46657c9-ld88t"] Jan 26 12:58:08 crc kubenswrapper[4881]: E0126 12:58:08.326044 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a01decda-1236-44fa-a384-7fdedc2cc279" containerName="dnsmasq-dns" Jan 26 12:58:08 crc kubenswrapper[4881]: I0126 12:58:08.326055 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="a01decda-1236-44fa-a384-7fdedc2cc279" containerName="dnsmasq-dns" Jan 26 12:58:08 crc kubenswrapper[4881]: E0126 12:58:08.326090 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98f227bf-5140-4aac-b539-58c2a1fca65d" containerName="init" Jan 26 12:58:08 crc kubenswrapper[4881]: I0126 12:58:08.326096 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="98f227bf-5140-4aac-b539-58c2a1fca65d" containerName="init" Jan 26 12:58:08 crc kubenswrapper[4881]: E0126 12:58:08.326108 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a01decda-1236-44fa-a384-7fdedc2cc279" containerName="init" Jan 26 12:58:08 crc kubenswrapper[4881]: I0126 12:58:08.326114 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="a01decda-1236-44fa-a384-7fdedc2cc279" containerName="init" Jan 26 12:58:08 crc kubenswrapper[4881]: I0126 12:58:08.326261 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="98f227bf-5140-4aac-b539-58c2a1fca65d" containerName="init" Jan 26 12:58:08 crc kubenswrapper[4881]: I0126 12:58:08.326277 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="a01decda-1236-44fa-a384-7fdedc2cc279" containerName="dnsmasq-dns" Jan 26 12:58:08 crc kubenswrapper[4881]: I0126 12:58:08.327089 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57b46657c9-ld88t" Jan 26 12:58:08 crc kubenswrapper[4881]: I0126 12:58:08.329725 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Jan 26 12:58:08 crc kubenswrapper[4881]: I0126 12:58:08.331674 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57b46657c9-ld88t"] Jan 26 12:58:08 crc kubenswrapper[4881]: I0126 12:58:08.445267 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d3587db-61c7-4a48-bf12-f3998c659635-config\") pod \"dnsmasq-dns-57b46657c9-ld88t\" (UID: \"4d3587db-61c7-4a48-bf12-f3998c659635\") " pod="openstack/dnsmasq-dns-57b46657c9-ld88t" Jan 26 12:58:08 crc kubenswrapper[4881]: I0126 12:58:08.445329 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4d3587db-61c7-4a48-bf12-f3998c659635-dns-svc\") pod \"dnsmasq-dns-57b46657c9-ld88t\" (UID: \"4d3587db-61c7-4a48-bf12-f3998c659635\") " pod="openstack/dnsmasq-dns-57b46657c9-ld88t" Jan 26 12:58:08 crc kubenswrapper[4881]: I0126 12:58:08.445486 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgffd\" (UniqueName: \"kubernetes.io/projected/4d3587db-61c7-4a48-bf12-f3998c659635-kube-api-access-tgffd\") pod \"dnsmasq-dns-57b46657c9-ld88t\" (UID: \"4d3587db-61c7-4a48-bf12-f3998c659635\") " pod="openstack/dnsmasq-dns-57b46657c9-ld88t" Jan 26 12:58:08 crc kubenswrapper[4881]: I0126 12:58:08.445567 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4d3587db-61c7-4a48-bf12-f3998c659635-ovsdbserver-nb\") pod \"dnsmasq-dns-57b46657c9-ld88t\" (UID: \"4d3587db-61c7-4a48-bf12-f3998c659635\") " pod="openstack/dnsmasq-dns-57b46657c9-ld88t" Jan 26 12:58:08 crc kubenswrapper[4881]: I0126 12:58:08.534721 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-jh6zr"] Jan 26 12:58:08 crc kubenswrapper[4881]: I0126 12:58:08.535745 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-jh6zr" Jan 26 12:58:08 crc kubenswrapper[4881]: I0126 12:58:08.539195 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Jan 26 12:58:08 crc kubenswrapper[4881]: I0126 12:58:08.546964 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4d3587db-61c7-4a48-bf12-f3998c659635-ovsdbserver-nb\") pod \"dnsmasq-dns-57b46657c9-ld88t\" (UID: \"4d3587db-61c7-4a48-bf12-f3998c659635\") " pod="openstack/dnsmasq-dns-57b46657c9-ld88t" Jan 26 12:58:08 crc kubenswrapper[4881]: I0126 12:58:08.547053 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d3587db-61c7-4a48-bf12-f3998c659635-config\") pod \"dnsmasq-dns-57b46657c9-ld88t\" (UID: \"4d3587db-61c7-4a48-bf12-f3998c659635\") " pod="openstack/dnsmasq-dns-57b46657c9-ld88t" Jan 26 12:58:08 crc kubenswrapper[4881]: I0126 12:58:08.547099 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4d3587db-61c7-4a48-bf12-f3998c659635-dns-svc\") pod \"dnsmasq-dns-57b46657c9-ld88t\" (UID: \"4d3587db-61c7-4a48-bf12-f3998c659635\") " pod="openstack/dnsmasq-dns-57b46657c9-ld88t" Jan 26 12:58:08 crc kubenswrapper[4881]: I0126 12:58:08.547139 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tgffd\" (UniqueName: \"kubernetes.io/projected/4d3587db-61c7-4a48-bf12-f3998c659635-kube-api-access-tgffd\") pod \"dnsmasq-dns-57b46657c9-ld88t\" (UID: \"4d3587db-61c7-4a48-bf12-f3998c659635\") " pod="openstack/dnsmasq-dns-57b46657c9-ld88t" Jan 26 12:58:08 crc kubenswrapper[4881]: I0126 12:58:08.547934 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d3587db-61c7-4a48-bf12-f3998c659635-config\") pod \"dnsmasq-dns-57b46657c9-ld88t\" (UID: \"4d3587db-61c7-4a48-bf12-f3998c659635\") " pod="openstack/dnsmasq-dns-57b46657c9-ld88t" Jan 26 12:58:08 crc kubenswrapper[4881]: I0126 12:58:08.547942 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4d3587db-61c7-4a48-bf12-f3998c659635-ovsdbserver-nb\") pod \"dnsmasq-dns-57b46657c9-ld88t\" (UID: \"4d3587db-61c7-4a48-bf12-f3998c659635\") " pod="openstack/dnsmasq-dns-57b46657c9-ld88t" Jan 26 12:58:08 crc kubenswrapper[4881]: I0126 12:58:08.548000 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4d3587db-61c7-4a48-bf12-f3998c659635-dns-svc\") pod \"dnsmasq-dns-57b46657c9-ld88t\" (UID: \"4d3587db-61c7-4a48-bf12-f3998c659635\") " pod="openstack/dnsmasq-dns-57b46657c9-ld88t" Jan 26 12:58:08 crc kubenswrapper[4881]: I0126 12:58:08.551051 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-jh6zr"] Jan 26 12:58:08 crc kubenswrapper[4881]: I0126 12:58:08.598289 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tgffd\" (UniqueName: \"kubernetes.io/projected/4d3587db-61c7-4a48-bf12-f3998c659635-kube-api-access-tgffd\") pod \"dnsmasq-dns-57b46657c9-ld88t\" (UID: \"4d3587db-61c7-4a48-bf12-f3998c659635\") " pod="openstack/dnsmasq-dns-57b46657c9-ld88t" Jan 26 12:58:08 crc kubenswrapper[4881]: I0126 12:58:08.648382 4881 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/bbc286b3-4266-462b-b661-d072e9843683-ovn-rundir\") pod \"ovn-controller-metrics-jh6zr\" (UID: \"bbc286b3-4266-462b-b661-d072e9843683\") " pod="openstack/ovn-controller-metrics-jh6zr" Jan 26 12:58:08 crc kubenswrapper[4881]: I0126 12:58:08.648433 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/bbc286b3-4266-462b-b661-d072e9843683-ovs-rundir\") pod \"ovn-controller-metrics-jh6zr\" (UID: \"bbc286b3-4266-462b-b661-d072e9843683\") " pod="openstack/ovn-controller-metrics-jh6zr" Jan 26 12:58:08 crc kubenswrapper[4881]: I0126 12:58:08.648490 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bbc286b3-4266-462b-b661-d072e9843683-config\") pod \"ovn-controller-metrics-jh6zr\" (UID: \"bbc286b3-4266-462b-b661-d072e9843683\") " pod="openstack/ovn-controller-metrics-jh6zr" Jan 26 12:58:08 crc kubenswrapper[4881]: I0126 12:58:08.648722 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57b46657c9-ld88t" Jan 26 12:58:08 crc kubenswrapper[4881]: I0126 12:58:08.648917 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrwcc\" (UniqueName: \"kubernetes.io/projected/bbc286b3-4266-462b-b661-d072e9843683-kube-api-access-xrwcc\") pod \"ovn-controller-metrics-jh6zr\" (UID: \"bbc286b3-4266-462b-b661-d072e9843683\") " pod="openstack/ovn-controller-metrics-jh6zr" Jan 26 12:58:08 crc kubenswrapper[4881]: I0126 12:58:08.648995 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbc286b3-4266-462b-b661-d072e9843683-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-jh6zr\" (UID: \"bbc286b3-4266-462b-b661-d072e9843683\") " pod="openstack/ovn-controller-metrics-jh6zr" Jan 26 12:58:08 crc kubenswrapper[4881]: I0126 12:58:08.649012 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbc286b3-4266-462b-b661-d072e9843683-combined-ca-bundle\") pod \"ovn-controller-metrics-jh6zr\" (UID: \"bbc286b3-4266-462b-b661-d072e9843683\") " pod="openstack/ovn-controller-metrics-jh6zr" Jan 26 12:58:08 crc kubenswrapper[4881]: I0126 12:58:08.658608 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57b46657c9-ld88t"] Jan 26 12:58:08 crc kubenswrapper[4881]: I0126 12:58:08.668096 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Jan 26 12:58:08 crc kubenswrapper[4881]: I0126 12:58:08.677504 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Jan 26 12:58:08 crc kubenswrapper[4881]: I0126 12:58:08.680631 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5899d7d557-x6pfg"] Jan 26 12:58:08 crc kubenswrapper[4881]: I0126 12:58:08.681700 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Jan 26 12:58:08 crc kubenswrapper[4881]: I0126 12:58:08.681756 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Jan 26 12:58:08 crc kubenswrapper[4881]: I0126 12:58:08.681782 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Jan 26 12:58:08 crc kubenswrapper[4881]: I0126 12:58:08.681709 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-lsttg" Jan 26 12:58:08 crc kubenswrapper[4881]: I0126 12:58:08.681972 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5899d7d557-x6pfg" Jan 26 12:58:08 crc kubenswrapper[4881]: I0126 12:58:08.684492 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Jan 26 12:58:08 crc kubenswrapper[4881]: I0126 12:58:08.689875 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 26 12:58:08 crc kubenswrapper[4881]: I0126 12:58:08.699951 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5899d7d557-x6pfg"] Jan 26 12:58:08 crc kubenswrapper[4881]: I0126 12:58:08.751288 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddc6k\" (UniqueName: \"kubernetes.io/projected/7c9ded6c-9f0e-46af-be8e-4c30033f32df-kube-api-access-ddc6k\") pod \"dnsmasq-dns-5899d7d557-x6pfg\" (UID: \"7c9ded6c-9f0e-46af-be8e-4c30033f32df\") " pod="openstack/dnsmasq-dns-5899d7d557-x6pfg" Jan 26 12:58:08 crc kubenswrapper[4881]: I0126 12:58:08.751401 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a10988d9-411e-42a7-82bf-8ed88569d801-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"a10988d9-411e-42a7-82bf-8ed88569d801\") " pod="openstack/ovn-northd-0" Jan 26 12:58:08 crc kubenswrapper[4881]: I0126 12:58:08.751437 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/bbc286b3-4266-462b-b661-d072e9843683-ovn-rundir\") pod \"ovn-controller-metrics-jh6zr\" (UID: \"bbc286b3-4266-462b-b661-d072e9843683\") " pod="openstack/ovn-controller-metrics-jh6zr" Jan 26 12:58:08 crc kubenswrapper[4881]: I0126 12:58:08.751457 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7c9ded6c-9f0e-46af-be8e-4c30033f32df-dns-svc\") pod \"dnsmasq-dns-5899d7d557-x6pfg\" (UID: \"7c9ded6c-9f0e-46af-be8e-4c30033f32df\") " pod="openstack/dnsmasq-dns-5899d7d557-x6pfg" Jan 26 12:58:08 crc kubenswrapper[4881]: I0126 12:58:08.751715 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/bbc286b3-4266-462b-b661-d072e9843683-ovn-rundir\") pod \"ovn-controller-metrics-jh6zr\" (UID: \"bbc286b3-4266-462b-b661-d072e9843683\") " pod="openstack/ovn-controller-metrics-jh6zr" Jan 26 12:58:08 crc kubenswrapper[4881]: I0126 
12:58:08.751491 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/bbc286b3-4266-462b-b661-d072e9843683-ovs-rundir\") pod \"ovn-controller-metrics-jh6zr\" (UID: \"bbc286b3-4266-462b-b661-d072e9843683\") " pod="openstack/ovn-controller-metrics-jh6zr" Jan 26 12:58:08 crc kubenswrapper[4881]: I0126 12:58:08.751822 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c9ded6c-9f0e-46af-be8e-4c30033f32df-config\") pod \"dnsmasq-dns-5899d7d557-x6pfg\" (UID: \"7c9ded6c-9f0e-46af-be8e-4c30033f32df\") " pod="openstack/dnsmasq-dns-5899d7d557-x6pfg" Jan 26 12:58:08 crc kubenswrapper[4881]: I0126 12:58:08.751846 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/bbc286b3-4266-462b-b661-d072e9843683-ovs-rundir\") pod \"ovn-controller-metrics-jh6zr\" (UID: \"bbc286b3-4266-462b-b661-d072e9843683\") " pod="openstack/ovn-controller-metrics-jh6zr" Jan 26 12:58:08 crc kubenswrapper[4881]: I0126 12:58:08.751896 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a10988d9-411e-42a7-82bf-8ed88569d801-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"a10988d9-411e-42a7-82bf-8ed88569d801\") " pod="openstack/ovn-northd-0" Jan 26 12:58:08 crc kubenswrapper[4881]: I0126 12:58:08.752039 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7c9ded6c-9f0e-46af-be8e-4c30033f32df-ovsdbserver-sb\") pod \"dnsmasq-dns-5899d7d557-x6pfg\" (UID: \"7c9ded6c-9f0e-46af-be8e-4c30033f32df\") " pod="openstack/dnsmasq-dns-5899d7d557-x6pfg" Jan 26 12:58:08 crc kubenswrapper[4881]: I0126 12:58:08.752093 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/a10988d9-411e-42a7-82bf-8ed88569d801-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"a10988d9-411e-42a7-82bf-8ed88569d801\") " pod="openstack/ovn-northd-0" Jan 26 12:58:08 crc kubenswrapper[4881]: I0126 12:58:08.752318 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bbc286b3-4266-462b-b661-d072e9843683-config\") pod \"ovn-controller-metrics-jh6zr\" (UID: \"bbc286b3-4266-462b-b661-d072e9843683\") " pod="openstack/ovn-controller-metrics-jh6zr" Jan 26 12:58:08 crc kubenswrapper[4881]: I0126 12:58:08.753023 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bbc286b3-4266-462b-b661-d072e9843683-config\") pod \"ovn-controller-metrics-jh6zr\" (UID: \"bbc286b3-4266-462b-b661-d072e9843683\") " pod="openstack/ovn-controller-metrics-jh6zr" Jan 26 12:58:08 crc kubenswrapper[4881]: I0126 12:58:08.752358 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a10988d9-411e-42a7-82bf-8ed88569d801-scripts\") pod \"ovn-northd-0\" (UID: \"a10988d9-411e-42a7-82bf-8ed88569d801\") " pod="openstack/ovn-northd-0" Jan 26 12:58:08 crc kubenswrapper[4881]: I0126 12:58:08.753184 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7c9ded6c-9f0e-46af-be8e-4c30033f32df-ovsdbserver-nb\") pod \"dnsmasq-dns-5899d7d557-x6pfg\" (UID: \"7c9ded6c-9f0e-46af-be8e-4c30033f32df\") " pod="openstack/dnsmasq-dns-5899d7d557-x6pfg" Jan 26 12:58:08 crc kubenswrapper[4881]: I0126 12:58:08.753338 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xrwcc\" (UniqueName: \"kubernetes.io/projected/bbc286b3-4266-462b-b661-d072e9843683-kube-api-access-xrwcc\") pod \"ovn-controller-metrics-jh6zr\" (UID: \"bbc286b3-4266-462b-b661-d072e9843683\") " pod="openstack/ovn-controller-metrics-jh6zr" Jan 26 12:58:08 crc kubenswrapper[4881]: I0126 12:58:08.753362 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a10988d9-411e-42a7-82bf-8ed88569d801-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"a10988d9-411e-42a7-82bf-8ed88569d801\") " pod="openstack/ovn-northd-0" Jan 26 12:58:08 crc kubenswrapper[4881]: I0126 12:58:08.753427 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gghxl\" (UniqueName: \"kubernetes.io/projected/a10988d9-411e-42a7-82bf-8ed88569d801-kube-api-access-gghxl\") pod \"ovn-northd-0\" (UID: \"a10988d9-411e-42a7-82bf-8ed88569d801\") " pod="openstack/ovn-northd-0" Jan 26 12:58:08 crc kubenswrapper[4881]: I0126 12:58:08.753477 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbc286b3-4266-462b-b661-d072e9843683-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-jh6zr\" (UID: \"bbc286b3-4266-462b-b661-d072e9843683\") " pod="openstack/ovn-controller-metrics-jh6zr" Jan 26 12:58:08 crc kubenswrapper[4881]: I0126 12:58:08.753503 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbc286b3-4266-462b-b661-d072e9843683-combined-ca-bundle\") pod \"ovn-controller-metrics-jh6zr\" (UID: \"bbc286b3-4266-462b-b661-d072e9843683\") " pod="openstack/ovn-controller-metrics-jh6zr" Jan 26 12:58:08 crc kubenswrapper[4881]: I0126 12:58:08.753535 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a10988d9-411e-42a7-82bf-8ed88569d801-config\") pod \"ovn-northd-0\" (UID: \"a10988d9-411e-42a7-82bf-8ed88569d801\") " pod="openstack/ovn-northd-0" Jan 26 12:58:08 crc kubenswrapper[4881]: I0126 12:58:08.759180 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbc286b3-4266-462b-b661-d072e9843683-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-jh6zr\" (UID: \"bbc286b3-4266-462b-b661-d072e9843683\") " pod="openstack/ovn-controller-metrics-jh6zr" Jan 26 12:58:08 crc kubenswrapper[4881]: I0126 12:58:08.760307 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbc286b3-4266-462b-b661-d072e9843683-combined-ca-bundle\") pod \"ovn-controller-metrics-jh6zr\" (UID: \"bbc286b3-4266-462b-b661-d072e9843683\") " pod="openstack/ovn-controller-metrics-jh6zr" Jan 26 12:58:08 crc kubenswrapper[4881]: I0126 12:58:08.777215 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrwcc\" (UniqueName: 
\"kubernetes.io/projected/bbc286b3-4266-462b-b661-d072e9843683-kube-api-access-xrwcc\") pod \"ovn-controller-metrics-jh6zr\" (UID: \"bbc286b3-4266-462b-b661-d072e9843683\") " pod="openstack/ovn-controller-metrics-jh6zr" Jan 26 12:58:08 crc kubenswrapper[4881]: I0126 12:58:08.853384 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-jh6zr" Jan 26 12:58:08 crc kubenswrapper[4881]: I0126 12:58:08.854997 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a10988d9-411e-42a7-82bf-8ed88569d801-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"a10988d9-411e-42a7-82bf-8ed88569d801\") " pod="openstack/ovn-northd-0" Jan 26 12:58:08 crc kubenswrapper[4881]: I0126 12:58:08.855056 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gghxl\" (UniqueName: \"kubernetes.io/projected/a10988d9-411e-42a7-82bf-8ed88569d801-kube-api-access-gghxl\") pod \"ovn-northd-0\" (UID: \"a10988d9-411e-42a7-82bf-8ed88569d801\") " pod="openstack/ovn-northd-0" Jan 26 12:58:08 crc kubenswrapper[4881]: I0126 12:58:08.855117 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a10988d9-411e-42a7-82bf-8ed88569d801-config\") pod \"ovn-northd-0\" (UID: \"a10988d9-411e-42a7-82bf-8ed88569d801\") " pod="openstack/ovn-northd-0" Jan 26 12:58:08 crc kubenswrapper[4881]: I0126 12:58:08.855142 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ddc6k\" (UniqueName: \"kubernetes.io/projected/7c9ded6c-9f0e-46af-be8e-4c30033f32df-kube-api-access-ddc6k\") pod \"dnsmasq-dns-5899d7d557-x6pfg\" (UID: \"7c9ded6c-9f0e-46af-be8e-4c30033f32df\") " pod="openstack/dnsmasq-dns-5899d7d557-x6pfg" Jan 26 12:58:08 crc kubenswrapper[4881]: I0126 12:58:08.855189 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a10988d9-411e-42a7-82bf-8ed88569d801-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"a10988d9-411e-42a7-82bf-8ed88569d801\") " pod="openstack/ovn-northd-0" Jan 26 12:58:08 crc kubenswrapper[4881]: I0126 12:58:08.855209 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7c9ded6c-9f0e-46af-be8e-4c30033f32df-dns-svc\") pod \"dnsmasq-dns-5899d7d557-x6pfg\" (UID: \"7c9ded6c-9f0e-46af-be8e-4c30033f32df\") " pod="openstack/dnsmasq-dns-5899d7d557-x6pfg" Jan 26 12:58:08 crc kubenswrapper[4881]: I0126 12:58:08.855240 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c9ded6c-9f0e-46af-be8e-4c30033f32df-config\") pod \"dnsmasq-dns-5899d7d557-x6pfg\" (UID: \"7c9ded6c-9f0e-46af-be8e-4c30033f32df\") " pod="openstack/dnsmasq-dns-5899d7d557-x6pfg" Jan 26 12:58:08 crc kubenswrapper[4881]: I0126 12:58:08.855286 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a10988d9-411e-42a7-82bf-8ed88569d801-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"a10988d9-411e-42a7-82bf-8ed88569d801\") " pod="openstack/ovn-northd-0" Jan 26 12:58:08 crc kubenswrapper[4881]: I0126 12:58:08.855301 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/7c9ded6c-9f0e-46af-be8e-4c30033f32df-ovsdbserver-sb\") pod \"dnsmasq-dns-5899d7d557-x6pfg\" (UID: \"7c9ded6c-9f0e-46af-be8e-4c30033f32df\") " pod="openstack/dnsmasq-dns-5899d7d557-x6pfg" Jan 26 12:58:08 crc kubenswrapper[4881]: I0126 12:58:08.855320 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/a10988d9-411e-42a7-82bf-8ed88569d801-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"a10988d9-411e-42a7-82bf-8ed88569d801\") " pod="openstack/ovn-northd-0" Jan 26 12:58:08 crc kubenswrapper[4881]: I0126 12:58:08.855380 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a10988d9-411e-42a7-82bf-8ed88569d801-scripts\") pod \"ovn-northd-0\" (UID: \"a10988d9-411e-42a7-82bf-8ed88569d801\") " pod="openstack/ovn-northd-0" Jan 26 12:58:08 crc kubenswrapper[4881]: I0126 12:58:08.855398 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7c9ded6c-9f0e-46af-be8e-4c30033f32df-ovsdbserver-nb\") pod \"dnsmasq-dns-5899d7d557-x6pfg\" (UID: \"7c9ded6c-9f0e-46af-be8e-4c30033f32df\") " pod="openstack/dnsmasq-dns-5899d7d557-x6pfg" Jan 26 12:58:08 crc kubenswrapper[4881]: I0126 12:58:08.856398 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7c9ded6c-9f0e-46af-be8e-4c30033f32df-dns-svc\") pod \"dnsmasq-dns-5899d7d557-x6pfg\" (UID: \"7c9ded6c-9f0e-46af-be8e-4c30033f32df\") " pod="openstack/dnsmasq-dns-5899d7d557-x6pfg" Jan 26 12:58:08 crc kubenswrapper[4881]: I0126 12:58:08.856582 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a10988d9-411e-42a7-82bf-8ed88569d801-config\") pod \"ovn-northd-0\" (UID: \"a10988d9-411e-42a7-82bf-8ed88569d801\") " pod="openstack/ovn-northd-0" Jan 26 12:58:08 crc kubenswrapper[4881]: I0126 12:58:08.856632 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7c9ded6c-9f0e-46af-be8e-4c30033f32df-ovsdbserver-nb\") pod \"dnsmasq-dns-5899d7d557-x6pfg\" (UID: \"7c9ded6c-9f0e-46af-be8e-4c30033f32df\") " pod="openstack/dnsmasq-dns-5899d7d557-x6pfg" Jan 26 12:58:08 crc kubenswrapper[4881]: I0126 12:58:08.857042 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a10988d9-411e-42a7-82bf-8ed88569d801-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"a10988d9-411e-42a7-82bf-8ed88569d801\") " pod="openstack/ovn-northd-0" Jan 26 12:58:08 crc kubenswrapper[4881]: I0126 12:58:08.857605 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7c9ded6c-9f0e-46af-be8e-4c30033f32df-ovsdbserver-sb\") pod \"dnsmasq-dns-5899d7d557-x6pfg\" (UID: \"7c9ded6c-9f0e-46af-be8e-4c30033f32df\") " pod="openstack/dnsmasq-dns-5899d7d557-x6pfg" Jan 26 12:58:08 crc kubenswrapper[4881]: I0126 12:58:08.858297 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a10988d9-411e-42a7-82bf-8ed88569d801-scripts\") pod \"ovn-northd-0\" (UID: \"a10988d9-411e-42a7-82bf-8ed88569d801\") " pod="openstack/ovn-northd-0" Jan 26 12:58:08 crc kubenswrapper[4881]: I0126 12:58:08.858898 4881 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c9ded6c-9f0e-46af-be8e-4c30033f32df-config\") pod \"dnsmasq-dns-5899d7d557-x6pfg\" (UID: \"7c9ded6c-9f0e-46af-be8e-4c30033f32df\") " pod="openstack/dnsmasq-dns-5899d7d557-x6pfg" Jan 26 12:58:08 crc kubenswrapper[4881]: I0126 12:58:08.864170 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/a10988d9-411e-42a7-82bf-8ed88569d801-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"a10988d9-411e-42a7-82bf-8ed88569d801\") " pod="openstack/ovn-northd-0" Jan 26 12:58:08 crc kubenswrapper[4881]: I0126 12:58:08.864575 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a10988d9-411e-42a7-82bf-8ed88569d801-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"a10988d9-411e-42a7-82bf-8ed88569d801\") " pod="openstack/ovn-northd-0" Jan 26 12:58:08 crc kubenswrapper[4881]: I0126 12:58:08.866124 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a10988d9-411e-42a7-82bf-8ed88569d801-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"a10988d9-411e-42a7-82bf-8ed88569d801\") " pod="openstack/ovn-northd-0" Jan 26 12:58:08 crc kubenswrapper[4881]: I0126 12:58:08.872127 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ddc6k\" (UniqueName: \"kubernetes.io/projected/7c9ded6c-9f0e-46af-be8e-4c30033f32df-kube-api-access-ddc6k\") pod \"dnsmasq-dns-5899d7d557-x6pfg\" (UID: \"7c9ded6c-9f0e-46af-be8e-4c30033f32df\") " pod="openstack/dnsmasq-dns-5899d7d557-x6pfg" Jan 26 12:58:08 crc kubenswrapper[4881]: I0126 12:58:08.876423 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gghxl\" (UniqueName: \"kubernetes.io/projected/a10988d9-411e-42a7-82bf-8ed88569d801-kube-api-access-gghxl\") pod \"ovn-northd-0\" (UID: \"a10988d9-411e-42a7-82bf-8ed88569d801\") " pod="openstack/ovn-northd-0" Jan 26 12:58:08 crc kubenswrapper[4881]: I0126 12:58:08.988147 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"c8b6753b-d929-47d6-84ec-b72094efad83","Type":"ContainerStarted","Data":"e9390648e594dde9543a53073b3399d9beb84344f2ad1747f4a5a8dda684bfbb"} Jan 26 12:58:08 crc kubenswrapper[4881]: I0126 12:58:08.991994 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"32ed51d8-b401-412f-925e-0cff27777e55","Type":"ContainerStarted","Data":"1bd09c61e11a2c398412b27f8155d650191f5075fee819f07ad9fad19c6b994f"} Jan 26 12:58:08 crc kubenswrapper[4881]: I0126 12:58:08.998435 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 26 12:58:09 crc kubenswrapper[4881]: I0126 12:58:09.018817 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5899d7d557-x6pfg" Jan 26 12:58:09 crc kubenswrapper[4881]: I0126 12:58:09.019848 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=21.609804424 podStartE2EDuration="41.019828212s" podCreationTimestamp="2026-01-26 12:57:28 +0000 UTC" firstStartedPulling="2026-01-26 12:57:42.20463345 +0000 UTC m=+1334.683943476" lastFinishedPulling="2026-01-26 12:58:01.614657218 +0000 UTC m=+1354.093967264" observedRunningTime="2026-01-26 12:58:09.018255954 +0000 UTC m=+1361.497566000" watchObservedRunningTime="2026-01-26 12:58:09.019828212 +0000 UTC m=+1361.499138238" Jan 26 12:58:09 crc kubenswrapper[4881]: I0126 12:58:09.062496 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=22.250857639 podStartE2EDuration="42.062479392s" podCreationTimestamp="2026-01-26 12:57:27 +0000 UTC" firstStartedPulling="2026-01-26 12:57:41.661187316 +0000 UTC m=+1334.140497342" lastFinishedPulling="2026-01-26 12:58:01.472809029 +0000 UTC m=+1353.952119095" observedRunningTime="2026-01-26 12:58:09.047400855 +0000 UTC m=+1361.526710881" watchObservedRunningTime="2026-01-26 12:58:09.062479392 +0000 UTC m=+1361.541789418" Jan 26 12:58:09 crc kubenswrapper[4881]: I0126 12:58:09.140495 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-jh6zr"] Jan 26 12:58:09 crc kubenswrapper[4881]: W0126 12:58:09.143715 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbbc286b3_4266_462b_b661_d072e9843683.slice/crio-547250c6aa9ebe4272c8d9932da1ef3784aa37ffc0e3083f21272f0b5bb2994b WatchSource:0}: Error finding container 547250c6aa9ebe4272c8d9932da1ef3784aa37ffc0e3083f21272f0b5bb2994b: Status 404 returned error can't find the container with id 547250c6aa9ebe4272c8d9932da1ef3784aa37ffc0e3083f21272f0b5bb2994b Jan 26 12:58:09 crc kubenswrapper[4881]: I0126 12:58:09.158140 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57b46657c9-ld88t"] Jan 26 12:58:09 crc kubenswrapper[4881]: W0126 12:58:09.165407 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4d3587db_61c7_4a48_bf12_f3998c659635.slice/crio-23bd0f092a2a25fc7e9ffa1962c5d3c93ff94e6bbe8d51b59f377224cf3abb38 WatchSource:0}: Error finding container 23bd0f092a2a25fc7e9ffa1962c5d3c93ff94e6bbe8d51b59f377224cf3abb38: Status 404 returned error can't find the container with id 23bd0f092a2a25fc7e9ffa1962c5d3c93ff94e6bbe8d51b59f377224cf3abb38 Jan 26 12:58:09 crc kubenswrapper[4881]: I0126 12:58:09.511410 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 26 12:58:09 crc kubenswrapper[4881]: W0126 12:58:09.518661 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda10988d9_411e_42a7_82bf_8ed88569d801.slice/crio-23b04f663228e6a79d0643c3aecaaf24b47ff47d1443c144d378cf710b2b138b WatchSource:0}: Error finding container 23b04f663228e6a79d0643c3aecaaf24b47ff47d1443c144d378cf710b2b138b: Status 404 returned error can't find the container with id 23b04f663228e6a79d0643c3aecaaf24b47ff47d1443c144d378cf710b2b138b Jan 26 12:58:09 crc kubenswrapper[4881]: I0126 12:58:09.577212 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/dnsmasq-dns-5899d7d557-x6pfg"] Jan 26 12:58:10 crc kubenswrapper[4881]: I0126 12:58:10.008005 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-jh6zr" event={"ID":"bbc286b3-4266-462b-b661-d072e9843683","Type":"ContainerStarted","Data":"720ee8379ec4e1b5b4a38e178d0998e62c067fcb1413a55a3cb3e757aa5370e1"} Jan 26 12:58:10 crc kubenswrapper[4881]: I0126 12:58:10.008053 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-jh6zr" event={"ID":"bbc286b3-4266-462b-b661-d072e9843683","Type":"ContainerStarted","Data":"547250c6aa9ebe4272c8d9932da1ef3784aa37ffc0e3083f21272f0b5bb2994b"} Jan 26 12:58:10 crc kubenswrapper[4881]: I0126 12:58:10.009939 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"a10988d9-411e-42a7-82bf-8ed88569d801","Type":"ContainerStarted","Data":"23b04f663228e6a79d0643c3aecaaf24b47ff47d1443c144d378cf710b2b138b"} Jan 26 12:58:10 crc kubenswrapper[4881]: I0126 12:58:10.012136 4881 generic.go:334] "Generic (PLEG): container finished" podID="4d3587db-61c7-4a48-bf12-f3998c659635" containerID="88eeb9cd12009f7f46d56335231165cd5a402aec2ec7dc1e7aecd65042b9fc81" exitCode=0 Jan 26 12:58:10 crc kubenswrapper[4881]: I0126 12:58:10.012188 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57b46657c9-ld88t" event={"ID":"4d3587db-61c7-4a48-bf12-f3998c659635","Type":"ContainerDied","Data":"88eeb9cd12009f7f46d56335231165cd5a402aec2ec7dc1e7aecd65042b9fc81"} Jan 26 12:58:10 crc kubenswrapper[4881]: I0126 12:58:10.012208 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57b46657c9-ld88t" event={"ID":"4d3587db-61c7-4a48-bf12-f3998c659635","Type":"ContainerStarted","Data":"23bd0f092a2a25fc7e9ffa1962c5d3c93ff94e6bbe8d51b59f377224cf3abb38"} Jan 26 12:58:10 crc kubenswrapper[4881]: I0126 12:58:10.013994 4881 generic.go:334] "Generic (PLEG): container finished" podID="7c9ded6c-9f0e-46af-be8e-4c30033f32df" containerID="05fd77fea9c4ce966694805995e7a17f2d8749e95637b8ea65f10f7168d07d80" exitCode=0 Jan 26 12:58:10 crc kubenswrapper[4881]: I0126 12:58:10.014893 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5899d7d557-x6pfg" event={"ID":"7c9ded6c-9f0e-46af-be8e-4c30033f32df","Type":"ContainerDied","Data":"05fd77fea9c4ce966694805995e7a17f2d8749e95637b8ea65f10f7168d07d80"} Jan 26 12:58:10 crc kubenswrapper[4881]: I0126 12:58:10.014924 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5899d7d557-x6pfg" event={"ID":"7c9ded6c-9f0e-46af-be8e-4c30033f32df","Type":"ContainerStarted","Data":"43274829feea394341f91cbe72454b4976258a7468b56ae6151dc4b277e37e92"} Jan 26 12:58:10 crc kubenswrapper[4881]: I0126 12:58:10.030231 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-jh6zr" podStartSLOduration=2.030212683 podStartE2EDuration="2.030212683s" podCreationTimestamp="2026-01-26 12:58:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:58:10.020569068 +0000 UTC m=+1362.499879104" watchObservedRunningTime="2026-01-26 12:58:10.030212683 +0000 UTC m=+1362.509522709" Jan 26 12:58:10 crc kubenswrapper[4881]: I0126 12:58:10.262693 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Jan 26 12:58:10 crc kubenswrapper[4881]: I0126 12:58:10.263605 4881 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Jan 26 12:58:10 crc kubenswrapper[4881]: I0126 12:58:10.356548 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57b46657c9-ld88t" Jan 26 12:58:10 crc kubenswrapper[4881]: I0126 12:58:10.499846 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4d3587db-61c7-4a48-bf12-f3998c659635-ovsdbserver-nb\") pod \"4d3587db-61c7-4a48-bf12-f3998c659635\" (UID: \"4d3587db-61c7-4a48-bf12-f3998c659635\") " Jan 26 12:58:10 crc kubenswrapper[4881]: I0126 12:58:10.500005 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4d3587db-61c7-4a48-bf12-f3998c659635-dns-svc\") pod \"4d3587db-61c7-4a48-bf12-f3998c659635\" (UID: \"4d3587db-61c7-4a48-bf12-f3998c659635\") " Jan 26 12:58:10 crc kubenswrapper[4881]: I0126 12:58:10.500062 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d3587db-61c7-4a48-bf12-f3998c659635-config\") pod \"4d3587db-61c7-4a48-bf12-f3998c659635\" (UID: \"4d3587db-61c7-4a48-bf12-f3998c659635\") " Jan 26 12:58:10 crc kubenswrapper[4881]: I0126 12:58:10.500093 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tgffd\" (UniqueName: \"kubernetes.io/projected/4d3587db-61c7-4a48-bf12-f3998c659635-kube-api-access-tgffd\") pod \"4d3587db-61c7-4a48-bf12-f3998c659635\" (UID: \"4d3587db-61c7-4a48-bf12-f3998c659635\") " Jan 26 12:58:10 crc kubenswrapper[4881]: I0126 12:58:10.503106 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d3587db-61c7-4a48-bf12-f3998c659635-kube-api-access-tgffd" (OuterVolumeSpecName: "kube-api-access-tgffd") pod "4d3587db-61c7-4a48-bf12-f3998c659635" (UID: "4d3587db-61c7-4a48-bf12-f3998c659635"). InnerVolumeSpecName "kube-api-access-tgffd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:58:10 crc kubenswrapper[4881]: I0126 12:58:10.518764 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d3587db-61c7-4a48-bf12-f3998c659635-config" (OuterVolumeSpecName: "config") pod "4d3587db-61c7-4a48-bf12-f3998c659635" (UID: "4d3587db-61c7-4a48-bf12-f3998c659635"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:58:10 crc kubenswrapper[4881]: E0126 12:58:10.518938 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4d3587db-61c7-4a48-bf12-f3998c659635-dns-svc podName:4d3587db-61c7-4a48-bf12-f3998c659635 nodeName:}" failed. No retries permitted until 2026-01-26 12:58:11.018900721 +0000 UTC m=+1363.498210787 (durationBeforeRetry 500ms). 
Error: error cleaning subPath mounts for volume "dns-svc" (UniqueName: "kubernetes.io/configmap/4d3587db-61c7-4a48-bf12-f3998c659635-dns-svc") pod "4d3587db-61c7-4a48-bf12-f3998c659635" (UID: "4d3587db-61c7-4a48-bf12-f3998c659635") : error deleting /var/lib/kubelet/pods/4d3587db-61c7-4a48-bf12-f3998c659635/volume-subpaths: remove /var/lib/kubelet/pods/4d3587db-61c7-4a48-bf12-f3998c659635/volume-subpaths: no such file or directory Jan 26 12:58:10 crc kubenswrapper[4881]: I0126 12:58:10.519120 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d3587db-61c7-4a48-bf12-f3998c659635-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4d3587db-61c7-4a48-bf12-f3998c659635" (UID: "4d3587db-61c7-4a48-bf12-f3998c659635"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:58:10 crc kubenswrapper[4881]: I0126 12:58:10.603309 4881 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4d3587db-61c7-4a48-bf12-f3998c659635-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 12:58:10 crc kubenswrapper[4881]: I0126 12:58:10.603352 4881 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d3587db-61c7-4a48-bf12-f3998c659635-config\") on node \"crc\" DevicePath \"\"" Jan 26 12:58:10 crc kubenswrapper[4881]: I0126 12:58:10.603370 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tgffd\" (UniqueName: \"kubernetes.io/projected/4d3587db-61c7-4a48-bf12-f3998c659635-kube-api-access-tgffd\") on node \"crc\" DevicePath \"\"" Jan 26 12:58:10 crc kubenswrapper[4881]: I0126 12:58:10.623721 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Jan 26 12:58:11 crc kubenswrapper[4881]: I0126 12:58:11.026201 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"a10988d9-411e-42a7-82bf-8ed88569d801","Type":"ContainerStarted","Data":"88151a556b0caf9b6fa4d1b72fea20b5a2660075715b5a4581903cc50da9257c"} Jan 26 12:58:11 crc kubenswrapper[4881]: I0126 12:58:11.026247 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"a10988d9-411e-42a7-82bf-8ed88569d801","Type":"ContainerStarted","Data":"4a7b2f2c879c05099bd37d3688bad76d0de824466d822d0fc1cd35f58cdba68c"} Jan 26 12:58:11 crc kubenswrapper[4881]: I0126 12:58:11.026362 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Jan 26 12:58:11 crc kubenswrapper[4881]: I0126 12:58:11.028235 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57b46657c9-ld88t" Jan 26 12:58:11 crc kubenswrapper[4881]: I0126 12:58:11.028270 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57b46657c9-ld88t" event={"ID":"4d3587db-61c7-4a48-bf12-f3998c659635","Type":"ContainerDied","Data":"23bd0f092a2a25fc7e9ffa1962c5d3c93ff94e6bbe8d51b59f377224cf3abb38"} Jan 26 12:58:11 crc kubenswrapper[4881]: I0126 12:58:11.028847 4881 scope.go:117] "RemoveContainer" containerID="88eeb9cd12009f7f46d56335231165cd5a402aec2ec7dc1e7aecd65042b9fc81" Jan 26 12:58:11 crc kubenswrapper[4881]: I0126 12:58:11.033656 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5899d7d557-x6pfg" event={"ID":"7c9ded6c-9f0e-46af-be8e-4c30033f32df","Type":"ContainerStarted","Data":"72b203becafcb2b6d086454a01ac85d540341c972b09a194c7fe79ed92e48798"} Jan 26 12:58:11 crc kubenswrapper[4881]: I0126 12:58:11.065561 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.568828157 podStartE2EDuration="3.065539431s" podCreationTimestamp="2026-01-26 12:58:08 +0000 UTC" firstStartedPulling="2026-01-26 12:58:09.520941793 +0000 UTC m=+1362.000251819" lastFinishedPulling="2026-01-26 12:58:10.017653057 +0000 UTC m=+1362.496963093" observedRunningTime="2026-01-26 12:58:11.063179214 +0000 UTC m=+1363.542489250" watchObservedRunningTime="2026-01-26 12:58:11.065539431 +0000 UTC m=+1363.544849457" Jan 26 12:58:11 crc kubenswrapper[4881]: I0126 12:58:11.113191 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4d3587db-61c7-4a48-bf12-f3998c659635-dns-svc\") pod \"4d3587db-61c7-4a48-bf12-f3998c659635\" (UID: \"4d3587db-61c7-4a48-bf12-f3998c659635\") " Jan 26 12:58:11 crc kubenswrapper[4881]: I0126 12:58:11.122714 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d3587db-61c7-4a48-bf12-f3998c659635-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4d3587db-61c7-4a48-bf12-f3998c659635" (UID: "4d3587db-61c7-4a48-bf12-f3998c659635"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:58:11 crc kubenswrapper[4881]: I0126 12:58:11.162376 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5899d7d557-x6pfg" podStartSLOduration=3.162360103 podStartE2EDuration="3.162360103s" podCreationTimestamp="2026-01-26 12:58:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:58:11.154209514 +0000 UTC m=+1363.633519560" watchObservedRunningTime="2026-01-26 12:58:11.162360103 +0000 UTC m=+1363.641670129" Jan 26 12:58:11 crc kubenswrapper[4881]: I0126 12:58:11.217826 4881 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4d3587db-61c7-4a48-bf12-f3998c659635-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 12:58:11 crc kubenswrapper[4881]: I0126 12:58:11.373015 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57b46657c9-ld88t"] Jan 26 12:58:11 crc kubenswrapper[4881]: I0126 12:58:11.385542 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57b46657c9-ld88t"] Jan 26 12:58:12 crc kubenswrapper[4881]: I0126 12:58:12.066176 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5899d7d557-x6pfg" Jan 26 12:58:12 crc kubenswrapper[4881]: I0126 12:58:12.093805 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d3587db-61c7-4a48-bf12-f3998c659635" path="/var/lib/kubelet/pods/4d3587db-61c7-4a48-bf12-f3998c659635/volumes" Jan 26 12:58:12 crc kubenswrapper[4881]: I0126 12:58:12.573398 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5899d7d557-x6pfg"] Jan 26 12:58:12 crc kubenswrapper[4881]: I0126 12:58:12.617373 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-65cc6fcf45-z9nsc"] Jan 26 12:58:12 crc kubenswrapper[4881]: E0126 12:58:12.617725 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d3587db-61c7-4a48-bf12-f3998c659635" containerName="init" Jan 26 12:58:12 crc kubenswrapper[4881]: I0126 12:58:12.617740 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d3587db-61c7-4a48-bf12-f3998c659635" containerName="init" Jan 26 12:58:12 crc kubenswrapper[4881]: I0126 12:58:12.617886 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d3587db-61c7-4a48-bf12-f3998c659635" containerName="init" Jan 26 12:58:12 crc kubenswrapper[4881]: I0126 12:58:12.623435 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-65cc6fcf45-z9nsc" Jan 26 12:58:12 crc kubenswrapper[4881]: I0126 12:58:12.638940 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-65cc6fcf45-z9nsc"] Jan 26 12:58:12 crc kubenswrapper[4881]: I0126 12:58:12.745394 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/908ed279-9514-43e0-a6a7-2ed24cfe34da-ovsdbserver-nb\") pod \"dnsmasq-dns-65cc6fcf45-z9nsc\" (UID: \"908ed279-9514-43e0-a6a7-2ed24cfe34da\") " pod="openstack/dnsmasq-dns-65cc6fcf45-z9nsc" Jan 26 12:58:12 crc kubenswrapper[4881]: I0126 12:58:12.745440 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/908ed279-9514-43e0-a6a7-2ed24cfe34da-ovsdbserver-sb\") pod \"dnsmasq-dns-65cc6fcf45-z9nsc\" (UID: \"908ed279-9514-43e0-a6a7-2ed24cfe34da\") " pod="openstack/dnsmasq-dns-65cc6fcf45-z9nsc" Jan 26 12:58:12 crc kubenswrapper[4881]: I0126 12:58:12.745466 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/908ed279-9514-43e0-a6a7-2ed24cfe34da-dns-svc\") pod \"dnsmasq-dns-65cc6fcf45-z9nsc\" (UID: \"908ed279-9514-43e0-a6a7-2ed24cfe34da\") " pod="openstack/dnsmasq-dns-65cc6fcf45-z9nsc" Jan 26 12:58:12 crc kubenswrapper[4881]: I0126 12:58:12.745574 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/908ed279-9514-43e0-a6a7-2ed24cfe34da-config\") pod \"dnsmasq-dns-65cc6fcf45-z9nsc\" (UID: \"908ed279-9514-43e0-a6a7-2ed24cfe34da\") " pod="openstack/dnsmasq-dns-65cc6fcf45-z9nsc" Jan 26 12:58:12 crc kubenswrapper[4881]: I0126 12:58:12.745600 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqqq2\" (UniqueName: \"kubernetes.io/projected/908ed279-9514-43e0-a6a7-2ed24cfe34da-kube-api-access-gqqq2\") pod \"dnsmasq-dns-65cc6fcf45-z9nsc\" (UID: \"908ed279-9514-43e0-a6a7-2ed24cfe34da\") " pod="openstack/dnsmasq-dns-65cc6fcf45-z9nsc" Jan 26 12:58:12 crc kubenswrapper[4881]: I0126 12:58:12.847303 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/908ed279-9514-43e0-a6a7-2ed24cfe34da-ovsdbserver-nb\") pod \"dnsmasq-dns-65cc6fcf45-z9nsc\" (UID: \"908ed279-9514-43e0-a6a7-2ed24cfe34da\") " pod="openstack/dnsmasq-dns-65cc6fcf45-z9nsc" Jan 26 12:58:12 crc kubenswrapper[4881]: I0126 12:58:12.847350 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/908ed279-9514-43e0-a6a7-2ed24cfe34da-ovsdbserver-sb\") pod \"dnsmasq-dns-65cc6fcf45-z9nsc\" (UID: \"908ed279-9514-43e0-a6a7-2ed24cfe34da\") " pod="openstack/dnsmasq-dns-65cc6fcf45-z9nsc" Jan 26 12:58:12 crc kubenswrapper[4881]: I0126 12:58:12.847382 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/908ed279-9514-43e0-a6a7-2ed24cfe34da-dns-svc\") pod \"dnsmasq-dns-65cc6fcf45-z9nsc\" (UID: \"908ed279-9514-43e0-a6a7-2ed24cfe34da\") " pod="openstack/dnsmasq-dns-65cc6fcf45-z9nsc" Jan 26 12:58:12 crc kubenswrapper[4881]: I0126 12:58:12.847459 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/908ed279-9514-43e0-a6a7-2ed24cfe34da-config\") pod \"dnsmasq-dns-65cc6fcf45-z9nsc\" (UID: \"908ed279-9514-43e0-a6a7-2ed24cfe34da\") " pod="openstack/dnsmasq-dns-65cc6fcf45-z9nsc" Jan 26 12:58:12 crc kubenswrapper[4881]: I0126 12:58:12.847483 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gqqq2\" (UniqueName: \"kubernetes.io/projected/908ed279-9514-43e0-a6a7-2ed24cfe34da-kube-api-access-gqqq2\") pod \"dnsmasq-dns-65cc6fcf45-z9nsc\" (UID: \"908ed279-9514-43e0-a6a7-2ed24cfe34da\") " pod="openstack/dnsmasq-dns-65cc6fcf45-z9nsc" Jan 26 12:58:12 crc kubenswrapper[4881]: I0126 12:58:12.848534 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/908ed279-9514-43e0-a6a7-2ed24cfe34da-ovsdbserver-sb\") pod \"dnsmasq-dns-65cc6fcf45-z9nsc\" (UID: \"908ed279-9514-43e0-a6a7-2ed24cfe34da\") " pod="openstack/dnsmasq-dns-65cc6fcf45-z9nsc" Jan 26 12:58:12 crc kubenswrapper[4881]: I0126 12:58:12.848630 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/908ed279-9514-43e0-a6a7-2ed24cfe34da-config\") pod \"dnsmasq-dns-65cc6fcf45-z9nsc\" (UID: \"908ed279-9514-43e0-a6a7-2ed24cfe34da\") " pod="openstack/dnsmasq-dns-65cc6fcf45-z9nsc" Jan 26 12:58:12 crc kubenswrapper[4881]: I0126 12:58:12.848678 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/908ed279-9514-43e0-a6a7-2ed24cfe34da-dns-svc\") pod \"dnsmasq-dns-65cc6fcf45-z9nsc\" (UID: \"908ed279-9514-43e0-a6a7-2ed24cfe34da\") " pod="openstack/dnsmasq-dns-65cc6fcf45-z9nsc" Jan 26 12:58:12 crc kubenswrapper[4881]: I0126 12:58:12.848553 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/908ed279-9514-43e0-a6a7-2ed24cfe34da-ovsdbserver-nb\") pod \"dnsmasq-dns-65cc6fcf45-z9nsc\" (UID: \"908ed279-9514-43e0-a6a7-2ed24cfe34da\") " pod="openstack/dnsmasq-dns-65cc6fcf45-z9nsc" Jan 26 12:58:12 crc kubenswrapper[4881]: I0126 12:58:12.869141 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqqq2\" (UniqueName: \"kubernetes.io/projected/908ed279-9514-43e0-a6a7-2ed24cfe34da-kube-api-access-gqqq2\") pod \"dnsmasq-dns-65cc6fcf45-z9nsc\" (UID: \"908ed279-9514-43e0-a6a7-2ed24cfe34da\") " pod="openstack/dnsmasq-dns-65cc6fcf45-z9nsc" Jan 26 12:58:12 crc kubenswrapper[4881]: I0126 12:58:12.940131 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-65cc6fcf45-z9nsc" Jan 26 12:58:13 crc kubenswrapper[4881]: I0126 12:58:13.356172 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-65cc6fcf45-z9nsc"] Jan 26 12:58:13 crc kubenswrapper[4881]: W0126 12:58:13.365986 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod908ed279_9514_43e0_a6a7_2ed24cfe34da.slice/crio-3f68b100a7f0627ae32320bbc6f06be54d2828fa90aba9612f3c3a26ad5692ee WatchSource:0}: Error finding container 3f68b100a7f0627ae32320bbc6f06be54d2828fa90aba9612f3c3a26ad5692ee: Status 404 returned error can't find the container with id 3f68b100a7f0627ae32320bbc6f06be54d2828fa90aba9612f3c3a26ad5692ee Jan 26 12:58:13 crc kubenswrapper[4881]: I0126 12:58:13.756152 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Jan 26 12:58:13 crc kubenswrapper[4881]: I0126 12:58:13.763010 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Jan 26 12:58:13 crc kubenswrapper[4881]: I0126 12:58:13.770958 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Jan 26 12:58:13 crc kubenswrapper[4881]: I0126 12:58:13.771830 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Jan 26 12:58:13 crc kubenswrapper[4881]: I0126 12:58:13.772277 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Jan 26 12:58:13 crc kubenswrapper[4881]: I0126 12:58:13.777197 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-4p5cq" Jan 26 12:58:13 crc kubenswrapper[4881]: I0126 12:58:13.802071 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 26 12:58:13 crc kubenswrapper[4881]: I0126 12:58:13.876354 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"swift-storage-0\" (UID: \"e234f178-5499-441d-923d-26a5a7cbfe04\") " pod="openstack/swift-storage-0" Jan 26 12:58:13 crc kubenswrapper[4881]: I0126 12:58:13.876621 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e234f178-5499-441d-923d-26a5a7cbfe04-etc-swift\") pod \"swift-storage-0\" (UID: \"e234f178-5499-441d-923d-26a5a7cbfe04\") " pod="openstack/swift-storage-0" Jan 26 12:58:13 crc kubenswrapper[4881]: I0126 12:58:13.876694 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e234f178-5499-441d-923d-26a5a7cbfe04-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"e234f178-5499-441d-923d-26a5a7cbfe04\") " pod="openstack/swift-storage-0" Jan 26 12:58:13 crc kubenswrapper[4881]: I0126 12:58:13.876835 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/e234f178-5499-441d-923d-26a5a7cbfe04-cache\") pod \"swift-storage-0\" (UID: \"e234f178-5499-441d-923d-26a5a7cbfe04\") " pod="openstack/swift-storage-0" Jan 26 12:58:13 crc kubenswrapper[4881]: I0126 12:58:13.876905 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"lock\" (UniqueName: \"kubernetes.io/empty-dir/e234f178-5499-441d-923d-26a5a7cbfe04-lock\") pod \"swift-storage-0\" (UID: \"e234f178-5499-441d-923d-26a5a7cbfe04\") " pod="openstack/swift-storage-0" Jan 26 12:58:13 crc kubenswrapper[4881]: I0126 12:58:13.877010 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fs5r\" (UniqueName: \"kubernetes.io/projected/e234f178-5499-441d-923d-26a5a7cbfe04-kube-api-access-2fs5r\") pod \"swift-storage-0\" (UID: \"e234f178-5499-441d-923d-26a5a7cbfe04\") " pod="openstack/swift-storage-0" Jan 26 12:58:13 crc kubenswrapper[4881]: I0126 12:58:13.978466 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/e234f178-5499-441d-923d-26a5a7cbfe04-lock\") pod \"swift-storage-0\" (UID: \"e234f178-5499-441d-923d-26a5a7cbfe04\") " pod="openstack/swift-storage-0" Jan 26 12:58:13 crc kubenswrapper[4881]: I0126 12:58:13.978597 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2fs5r\" (UniqueName: \"kubernetes.io/projected/e234f178-5499-441d-923d-26a5a7cbfe04-kube-api-access-2fs5r\") pod \"swift-storage-0\" (UID: \"e234f178-5499-441d-923d-26a5a7cbfe04\") " pod="openstack/swift-storage-0" Jan 26 12:58:13 crc kubenswrapper[4881]: I0126 12:58:13.978635 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"swift-storage-0\" (UID: \"e234f178-5499-441d-923d-26a5a7cbfe04\") " pod="openstack/swift-storage-0" Jan 26 12:58:13 crc kubenswrapper[4881]: I0126 12:58:13.978717 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e234f178-5499-441d-923d-26a5a7cbfe04-etc-swift\") pod \"swift-storage-0\" (UID: \"e234f178-5499-441d-923d-26a5a7cbfe04\") " pod="openstack/swift-storage-0" Jan 26 12:58:13 crc kubenswrapper[4881]: I0126 12:58:13.978747 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e234f178-5499-441d-923d-26a5a7cbfe04-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"e234f178-5499-441d-923d-26a5a7cbfe04\") " pod="openstack/swift-storage-0" Jan 26 12:58:13 crc kubenswrapper[4881]: I0126 12:58:13.978840 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/e234f178-5499-441d-923d-26a5a7cbfe04-cache\") pod \"swift-storage-0\" (UID: \"e234f178-5499-441d-923d-26a5a7cbfe04\") " pod="openstack/swift-storage-0" Jan 26 12:58:13 crc kubenswrapper[4881]: E0126 12:58:13.979068 4881 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 26 12:58:13 crc kubenswrapper[4881]: E0126 12:58:13.979150 4881 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 26 12:58:13 crc kubenswrapper[4881]: E0126 12:58:13.979273 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e234f178-5499-441d-923d-26a5a7cbfe04-etc-swift podName:e234f178-5499-441d-923d-26a5a7cbfe04 nodeName:}" failed. No retries permitted until 2026-01-26 12:58:14.47925347 +0000 UTC m=+1366.958563496 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/e234f178-5499-441d-923d-26a5a7cbfe04-etc-swift") pod "swift-storage-0" (UID: "e234f178-5499-441d-923d-26a5a7cbfe04") : configmap "swift-ring-files" not found Jan 26 12:58:13 crc kubenswrapper[4881]: I0126 12:58:13.979151 4881 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"swift-storage-0\" (UID: \"e234f178-5499-441d-923d-26a5a7cbfe04\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/swift-storage-0" Jan 26 12:58:13 crc kubenswrapper[4881]: I0126 12:58:13.979664 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/e234f178-5499-441d-923d-26a5a7cbfe04-cache\") pod \"swift-storage-0\" (UID: \"e234f178-5499-441d-923d-26a5a7cbfe04\") " pod="openstack/swift-storage-0" Jan 26 12:58:13 crc kubenswrapper[4881]: I0126 12:58:13.979808 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/e234f178-5499-441d-923d-26a5a7cbfe04-lock\") pod \"swift-storage-0\" (UID: \"e234f178-5499-441d-923d-26a5a7cbfe04\") " pod="openstack/swift-storage-0" Jan 26 12:58:13 crc kubenswrapper[4881]: I0126 12:58:13.986763 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e234f178-5499-441d-923d-26a5a7cbfe04-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"e234f178-5499-441d-923d-26a5a7cbfe04\") " pod="openstack/swift-storage-0" Jan 26 12:58:14 crc kubenswrapper[4881]: I0126 12:58:14.002888 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2fs5r\" (UniqueName: \"kubernetes.io/projected/e234f178-5499-441d-923d-26a5a7cbfe04-kube-api-access-2fs5r\") pod \"swift-storage-0\" (UID: \"e234f178-5499-441d-923d-26a5a7cbfe04\") " pod="openstack/swift-storage-0" Jan 26 12:58:14 crc kubenswrapper[4881]: I0126 12:58:14.019626 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"swift-storage-0\" (UID: \"e234f178-5499-441d-923d-26a5a7cbfe04\") " pod="openstack/swift-storage-0" Jan 26 12:58:14 crc kubenswrapper[4881]: I0126 12:58:14.085083 4881 generic.go:334] "Generic (PLEG): container finished" podID="908ed279-9514-43e0-a6a7-2ed24cfe34da" containerID="60ee11782b8ec701dccf483e9edac178207d859a8a962be904be8cfe4fbceb18" exitCode=0 Jan 26 12:58:14 crc kubenswrapper[4881]: I0126 12:58:14.085357 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5899d7d557-x6pfg" podUID="7c9ded6c-9f0e-46af-be8e-4c30033f32df" containerName="dnsmasq-dns" containerID="cri-o://72b203becafcb2b6d086454a01ac85d540341c972b09a194c7fe79ed92e48798" gracePeriod=10 Jan 26 12:58:14 crc kubenswrapper[4881]: I0126 12:58:14.097829 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-vs5xn" event={"ID":"e52cbcc1-521d-4a7d-98a6-50ab70a2f82f","Type":"ContainerStarted","Data":"526ea8c5dd1872a4d21ff8e64e4519ac0e31dbcd120f8b4e0f468ae8d2187815"} Jan 26 12:58:14 crc kubenswrapper[4881]: I0126 12:58:14.097875 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65cc6fcf45-z9nsc" 
event={"ID":"908ed279-9514-43e0-a6a7-2ed24cfe34da","Type":"ContainerDied","Data":"60ee11782b8ec701dccf483e9edac178207d859a8a962be904be8cfe4fbceb18"} Jan 26 12:58:14 crc kubenswrapper[4881]: I0126 12:58:14.097892 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65cc6fcf45-z9nsc" event={"ID":"908ed279-9514-43e0-a6a7-2ed24cfe34da","Type":"ContainerStarted","Data":"3f68b100a7f0627ae32320bbc6f06be54d2828fa90aba9612f3c3a26ad5692ee"} Jan 26 12:58:14 crc kubenswrapper[4881]: I0126 12:58:14.098550 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-vs5xn" Jan 26 12:58:14 crc kubenswrapper[4881]: I0126 12:58:14.175612 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-vs5xn" podStartSLOduration=8.238777529 podStartE2EDuration="39.175583178s" podCreationTimestamp="2026-01-26 12:57:35 +0000 UTC" firstStartedPulling="2026-01-26 12:57:42.214246634 +0000 UTC m=+1334.693556660" lastFinishedPulling="2026-01-26 12:58:13.151052273 +0000 UTC m=+1365.630362309" observedRunningTime="2026-01-26 12:58:14.159947257 +0000 UTC m=+1366.639257323" watchObservedRunningTime="2026-01-26 12:58:14.175583178 +0000 UTC m=+1366.654893244" Jan 26 12:58:14 crc kubenswrapper[4881]: I0126 12:58:14.487034 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5899d7d557-x6pfg" Jan 26 12:58:14 crc kubenswrapper[4881]: I0126 12:58:14.494958 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e234f178-5499-441d-923d-26a5a7cbfe04-etc-swift\") pod \"swift-storage-0\" (UID: \"e234f178-5499-441d-923d-26a5a7cbfe04\") " pod="openstack/swift-storage-0" Jan 26 12:58:14 crc kubenswrapper[4881]: E0126 12:58:14.495158 4881 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 26 12:58:14 crc kubenswrapper[4881]: E0126 12:58:14.495179 4881 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 26 12:58:14 crc kubenswrapper[4881]: E0126 12:58:14.495215 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e234f178-5499-441d-923d-26a5a7cbfe04-etc-swift podName:e234f178-5499-441d-923d-26a5a7cbfe04 nodeName:}" failed. No retries permitted until 2026-01-26 12:58:15.495201493 +0000 UTC m=+1367.974511519 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/e234f178-5499-441d-923d-26a5a7cbfe04-etc-swift") pod "swift-storage-0" (UID: "e234f178-5499-441d-923d-26a5a7cbfe04") : configmap "swift-ring-files" not found Jan 26 12:58:14 crc kubenswrapper[4881]: I0126 12:58:14.598766 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7c9ded6c-9f0e-46af-be8e-4c30033f32df-dns-svc\") pod \"7c9ded6c-9f0e-46af-be8e-4c30033f32df\" (UID: \"7c9ded6c-9f0e-46af-be8e-4c30033f32df\") " Jan 26 12:58:14 crc kubenswrapper[4881]: I0126 12:58:14.598812 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c9ded6c-9f0e-46af-be8e-4c30033f32df-config\") pod \"7c9ded6c-9f0e-46af-be8e-4c30033f32df\" (UID: \"7c9ded6c-9f0e-46af-be8e-4c30033f32df\") " Jan 26 12:58:14 crc kubenswrapper[4881]: I0126 12:58:14.598978 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddc6k\" (UniqueName: \"kubernetes.io/projected/7c9ded6c-9f0e-46af-be8e-4c30033f32df-kube-api-access-ddc6k\") pod \"7c9ded6c-9f0e-46af-be8e-4c30033f32df\" (UID: \"7c9ded6c-9f0e-46af-be8e-4c30033f32df\") " Jan 26 12:58:14 crc kubenswrapper[4881]: I0126 12:58:14.599029 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7c9ded6c-9f0e-46af-be8e-4c30033f32df-ovsdbserver-sb\") pod \"7c9ded6c-9f0e-46af-be8e-4c30033f32df\" (UID: \"7c9ded6c-9f0e-46af-be8e-4c30033f32df\") " Jan 26 12:58:14 crc kubenswrapper[4881]: I0126 12:58:14.599048 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7c9ded6c-9f0e-46af-be8e-4c30033f32df-ovsdbserver-nb\") pod \"7c9ded6c-9f0e-46af-be8e-4c30033f32df\" (UID: \"7c9ded6c-9f0e-46af-be8e-4c30033f32df\") " Jan 26 12:58:14 crc kubenswrapper[4881]: I0126 12:58:14.604426 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c9ded6c-9f0e-46af-be8e-4c30033f32df-kube-api-access-ddc6k" (OuterVolumeSpecName: "kube-api-access-ddc6k") pod "7c9ded6c-9f0e-46af-be8e-4c30033f32df" (UID: "7c9ded6c-9f0e-46af-be8e-4c30033f32df"). InnerVolumeSpecName "kube-api-access-ddc6k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:58:14 crc kubenswrapper[4881]: I0126 12:58:14.646537 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c9ded6c-9f0e-46af-be8e-4c30033f32df-config" (OuterVolumeSpecName: "config") pod "7c9ded6c-9f0e-46af-be8e-4c30033f32df" (UID: "7c9ded6c-9f0e-46af-be8e-4c30033f32df"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:58:14 crc kubenswrapper[4881]: I0126 12:58:14.652172 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c9ded6c-9f0e-46af-be8e-4c30033f32df-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "7c9ded6c-9f0e-46af-be8e-4c30033f32df" (UID: "7c9ded6c-9f0e-46af-be8e-4c30033f32df"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:58:14 crc kubenswrapper[4881]: I0126 12:58:14.679801 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c9ded6c-9f0e-46af-be8e-4c30033f32df-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7c9ded6c-9f0e-46af-be8e-4c30033f32df" (UID: "7c9ded6c-9f0e-46af-be8e-4c30033f32df"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:58:14 crc kubenswrapper[4881]: I0126 12:58:14.682799 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c9ded6c-9f0e-46af-be8e-4c30033f32df-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "7c9ded6c-9f0e-46af-be8e-4c30033f32df" (UID: "7c9ded6c-9f0e-46af-be8e-4c30033f32df"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:58:14 crc kubenswrapper[4881]: I0126 12:58:14.701125 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ddc6k\" (UniqueName: \"kubernetes.io/projected/7c9ded6c-9f0e-46af-be8e-4c30033f32df-kube-api-access-ddc6k\") on node \"crc\" DevicePath \"\"" Jan 26 12:58:14 crc kubenswrapper[4881]: I0126 12:58:14.701346 4881 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7c9ded6c-9f0e-46af-be8e-4c30033f32df-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 12:58:14 crc kubenswrapper[4881]: I0126 12:58:14.701443 4881 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7c9ded6c-9f0e-46af-be8e-4c30033f32df-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 12:58:14 crc kubenswrapper[4881]: I0126 12:58:14.701568 4881 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7c9ded6c-9f0e-46af-be8e-4c30033f32df-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 12:58:14 crc kubenswrapper[4881]: I0126 12:58:14.701654 4881 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c9ded6c-9f0e-46af-be8e-4c30033f32df-config\") on node \"crc\" DevicePath \"\"" Jan 26 12:58:15 crc kubenswrapper[4881]: I0126 12:58:15.092845 4881 generic.go:334] "Generic (PLEG): container finished" podID="7c9ded6c-9f0e-46af-be8e-4c30033f32df" containerID="72b203becafcb2b6d086454a01ac85d540341c972b09a194c7fe79ed92e48798" exitCode=0 Jan 26 12:58:15 crc kubenswrapper[4881]: I0126 12:58:15.092919 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5899d7d557-x6pfg" event={"ID":"7c9ded6c-9f0e-46af-be8e-4c30033f32df","Type":"ContainerDied","Data":"72b203becafcb2b6d086454a01ac85d540341c972b09a194c7fe79ed92e48798"} Jan 26 12:58:15 crc kubenswrapper[4881]: I0126 12:58:15.092947 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5899d7d557-x6pfg" event={"ID":"7c9ded6c-9f0e-46af-be8e-4c30033f32df","Type":"ContainerDied","Data":"43274829feea394341f91cbe72454b4976258a7468b56ae6151dc4b277e37e92"} Jan 26 12:58:15 crc kubenswrapper[4881]: I0126 12:58:15.092964 4881 scope.go:117] "RemoveContainer" containerID="72b203becafcb2b6d086454a01ac85d540341c972b09a194c7fe79ed92e48798" Jan 26 12:58:15 crc kubenswrapper[4881]: I0126 12:58:15.093329 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5899d7d557-x6pfg" Jan 26 12:58:15 crc kubenswrapper[4881]: I0126 12:58:15.094865 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"95617a83-815e-4e5d-9b7e-4d3bec591ed8","Type":"ContainerStarted","Data":"05e1491045594badb50eafa8562c718ffce59354825d77889a238578d67e8b72"} Jan 26 12:58:15 crc kubenswrapper[4881]: I0126 12:58:15.095136 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 26 12:58:15 crc kubenswrapper[4881]: I0126 12:58:15.096492 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65cc6fcf45-z9nsc" event={"ID":"908ed279-9514-43e0-a6a7-2ed24cfe34da","Type":"ContainerStarted","Data":"5592ff0d7d3f4728eb0181c05d08974ecf98a4785733caedce48ab181d5598d8"} Jan 26 12:58:15 crc kubenswrapper[4881]: I0126 12:58:15.108112 4881 scope.go:117] "RemoveContainer" containerID="05fd77fea9c4ce966694805995e7a17f2d8749e95637b8ea65f10f7168d07d80" Jan 26 12:58:15 crc kubenswrapper[4881]: I0126 12:58:15.132680 4881 scope.go:117] "RemoveContainer" containerID="72b203becafcb2b6d086454a01ac85d540341c972b09a194c7fe79ed92e48798" Jan 26 12:58:15 crc kubenswrapper[4881]: E0126 12:58:15.134338 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"72b203becafcb2b6d086454a01ac85d540341c972b09a194c7fe79ed92e48798\": container with ID starting with 72b203becafcb2b6d086454a01ac85d540341c972b09a194c7fe79ed92e48798 not found: ID does not exist" containerID="72b203becafcb2b6d086454a01ac85d540341c972b09a194c7fe79ed92e48798" Jan 26 12:58:15 crc kubenswrapper[4881]: I0126 12:58:15.134371 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"72b203becafcb2b6d086454a01ac85d540341c972b09a194c7fe79ed92e48798"} err="failed to get container status \"72b203becafcb2b6d086454a01ac85d540341c972b09a194c7fe79ed92e48798\": rpc error: code = NotFound desc = could not find container \"72b203becafcb2b6d086454a01ac85d540341c972b09a194c7fe79ed92e48798\": container with ID starting with 72b203becafcb2b6d086454a01ac85d540341c972b09a194c7fe79ed92e48798 not found: ID does not exist" Jan 26 12:58:15 crc kubenswrapper[4881]: I0126 12:58:15.134390 4881 scope.go:117] "RemoveContainer" containerID="05fd77fea9c4ce966694805995e7a17f2d8749e95637b8ea65f10f7168d07d80" Jan 26 12:58:15 crc kubenswrapper[4881]: E0126 12:58:15.134730 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"05fd77fea9c4ce966694805995e7a17f2d8749e95637b8ea65f10f7168d07d80\": container with ID starting with 05fd77fea9c4ce966694805995e7a17f2d8749e95637b8ea65f10f7168d07d80 not found: ID does not exist" containerID="05fd77fea9c4ce966694805995e7a17f2d8749e95637b8ea65f10f7168d07d80" Jan 26 12:58:15 crc kubenswrapper[4881]: I0126 12:58:15.134751 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"05fd77fea9c4ce966694805995e7a17f2d8749e95637b8ea65f10f7168d07d80"} err="failed to get container status \"05fd77fea9c4ce966694805995e7a17f2d8749e95637b8ea65f10f7168d07d80\": rpc error: code = NotFound desc = could not find container \"05fd77fea9c4ce966694805995e7a17f2d8749e95637b8ea65f10f7168d07d80\": container with ID starting with 05fd77fea9c4ce966694805995e7a17f2d8749e95637b8ea65f10f7168d07d80 not found: ID does not exist" Jan 26 12:58:15 crc kubenswrapper[4881]: 
I0126 12:58:15.136136 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=10.857343248 podStartE2EDuration="43.136116203s" podCreationTimestamp="2026-01-26 12:57:32 +0000 UTC" firstStartedPulling="2026-01-26 12:57:42.204187849 +0000 UTC m=+1334.683497865" lastFinishedPulling="2026-01-26 12:58:14.482960794 +0000 UTC m=+1366.962270820" observedRunningTime="2026-01-26 12:58:15.118259948 +0000 UTC m=+1367.597569974" watchObservedRunningTime="2026-01-26 12:58:15.136116203 +0000 UTC m=+1367.615426229" Jan 26 12:58:15 crc kubenswrapper[4881]: I0126 12:58:15.149787 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-65cc6fcf45-z9nsc" podStartSLOduration=3.149770566 podStartE2EDuration="3.149770566s" podCreationTimestamp="2026-01-26 12:58:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:58:15.141208527 +0000 UTC m=+1367.620518543" watchObservedRunningTime="2026-01-26 12:58:15.149770566 +0000 UTC m=+1367.629080592" Jan 26 12:58:15 crc kubenswrapper[4881]: I0126 12:58:15.156150 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5899d7d557-x6pfg"] Jan 26 12:58:15 crc kubenswrapper[4881]: I0126 12:58:15.163037 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5899d7d557-x6pfg"] Jan 26 12:58:15 crc kubenswrapper[4881]: I0126 12:58:15.520040 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e234f178-5499-441d-923d-26a5a7cbfe04-etc-swift\") pod \"swift-storage-0\" (UID: \"e234f178-5499-441d-923d-26a5a7cbfe04\") " pod="openstack/swift-storage-0" Jan 26 12:58:15 crc kubenswrapper[4881]: E0126 12:58:15.520496 4881 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 26 12:58:15 crc kubenswrapper[4881]: E0126 12:58:15.520530 4881 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 26 12:58:15 crc kubenswrapper[4881]: E0126 12:58:15.520579 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e234f178-5499-441d-923d-26a5a7cbfe04-etc-swift podName:e234f178-5499-441d-923d-26a5a7cbfe04 nodeName:}" failed. No retries permitted until 2026-01-26 12:58:17.520563539 +0000 UTC m=+1369.999873575 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/e234f178-5499-441d-923d-26a5a7cbfe04-etc-swift") pod "swift-storage-0" (UID: "e234f178-5499-441d-923d-26a5a7cbfe04") : configmap "swift-ring-files" not found Jan 26 12:58:16 crc kubenswrapper[4881]: I0126 12:58:16.113894 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c9ded6c-9f0e-46af-be8e-4c30033f32df" path="/var/lib/kubelet/pods/7c9ded6c-9f0e-46af-be8e-4c30033f32df/volumes" Jan 26 12:58:16 crc kubenswrapper[4881]: I0126 12:58:16.114571 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-65cc6fcf45-z9nsc" Jan 26 12:58:16 crc kubenswrapper[4881]: I0126 12:58:16.429196 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Jan 26 12:58:16 crc kubenswrapper[4881]: I0126 12:58:16.714419 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Jan 26 12:58:17 crc kubenswrapper[4881]: I0126 12:58:17.564909 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e234f178-5499-441d-923d-26a5a7cbfe04-etc-swift\") pod \"swift-storage-0\" (UID: \"e234f178-5499-441d-923d-26a5a7cbfe04\") " pod="openstack/swift-storage-0" Jan 26 12:58:17 crc kubenswrapper[4881]: E0126 12:58:17.565134 4881 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 26 12:58:17 crc kubenswrapper[4881]: E0126 12:58:17.565270 4881 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 26 12:58:17 crc kubenswrapper[4881]: E0126 12:58:17.565326 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e234f178-5499-441d-923d-26a5a7cbfe04-etc-swift podName:e234f178-5499-441d-923d-26a5a7cbfe04 nodeName:}" failed. No retries permitted until 2026-01-26 12:58:21.565308735 +0000 UTC m=+1374.044618761 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/e234f178-5499-441d-923d-26a5a7cbfe04-etc-swift") pod "swift-storage-0" (UID: "e234f178-5499-441d-923d-26a5a7cbfe04") : configmap "swift-ring-files" not found Jan 26 12:58:17 crc kubenswrapper[4881]: I0126 12:58:17.722545 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-dmkh9"] Jan 26 12:58:17 crc kubenswrapper[4881]: E0126 12:58:17.722996 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c9ded6c-9f0e-46af-be8e-4c30033f32df" containerName="dnsmasq-dns" Jan 26 12:58:17 crc kubenswrapper[4881]: I0126 12:58:17.723014 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c9ded6c-9f0e-46af-be8e-4c30033f32df" containerName="dnsmasq-dns" Jan 26 12:58:17 crc kubenswrapper[4881]: E0126 12:58:17.723036 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c9ded6c-9f0e-46af-be8e-4c30033f32df" containerName="init" Jan 26 12:58:17 crc kubenswrapper[4881]: I0126 12:58:17.723044 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c9ded6c-9f0e-46af-be8e-4c30033f32df" containerName="init" Jan 26 12:58:17 crc kubenswrapper[4881]: I0126 12:58:17.723257 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c9ded6c-9f0e-46af-be8e-4c30033f32df" containerName="dnsmasq-dns" Jan 26 12:58:17 crc kubenswrapper[4881]: I0126 12:58:17.723930 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-dmkh9" Jan 26 12:58:17 crc kubenswrapper[4881]: I0126 12:58:17.725904 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 26 12:58:17 crc kubenswrapper[4881]: I0126 12:58:17.726583 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Jan 26 12:58:17 crc kubenswrapper[4881]: I0126 12:58:17.727778 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Jan 26 12:58:17 crc kubenswrapper[4881]: I0126 12:58:17.733983 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-dmkh9"] Jan 26 12:58:17 crc kubenswrapper[4881]: I0126 12:58:17.871215 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-969sk\" (UniqueName: \"kubernetes.io/projected/14ccca3b-65a8-4df1-9905-b21bfb24e5be-kube-api-access-969sk\") pod \"swift-ring-rebalance-dmkh9\" (UID: \"14ccca3b-65a8-4df1-9905-b21bfb24e5be\") " pod="openstack/swift-ring-rebalance-dmkh9" Jan 26 12:58:17 crc kubenswrapper[4881]: I0126 12:58:17.871313 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/14ccca3b-65a8-4df1-9905-b21bfb24e5be-etc-swift\") pod \"swift-ring-rebalance-dmkh9\" (UID: \"14ccca3b-65a8-4df1-9905-b21bfb24e5be\") " pod="openstack/swift-ring-rebalance-dmkh9" Jan 26 12:58:17 crc kubenswrapper[4881]: I0126 12:58:17.871354 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14ccca3b-65a8-4df1-9905-b21bfb24e5be-combined-ca-bundle\") pod \"swift-ring-rebalance-dmkh9\" (UID: \"14ccca3b-65a8-4df1-9905-b21bfb24e5be\") " pod="openstack/swift-ring-rebalance-dmkh9" Jan 26 12:58:17 crc kubenswrapper[4881]: I0126 12:58:17.871378 4881 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/14ccca3b-65a8-4df1-9905-b21bfb24e5be-ring-data-devices\") pod \"swift-ring-rebalance-dmkh9\" (UID: \"14ccca3b-65a8-4df1-9905-b21bfb24e5be\") " pod="openstack/swift-ring-rebalance-dmkh9" Jan 26 12:58:17 crc kubenswrapper[4881]: I0126 12:58:17.871501 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/14ccca3b-65a8-4df1-9905-b21bfb24e5be-scripts\") pod \"swift-ring-rebalance-dmkh9\" (UID: \"14ccca3b-65a8-4df1-9905-b21bfb24e5be\") " pod="openstack/swift-ring-rebalance-dmkh9" Jan 26 12:58:17 crc kubenswrapper[4881]: I0126 12:58:17.871660 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/14ccca3b-65a8-4df1-9905-b21bfb24e5be-swiftconf\") pod \"swift-ring-rebalance-dmkh9\" (UID: \"14ccca3b-65a8-4df1-9905-b21bfb24e5be\") " pod="openstack/swift-ring-rebalance-dmkh9" Jan 26 12:58:17 crc kubenswrapper[4881]: I0126 12:58:17.871707 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/14ccca3b-65a8-4df1-9905-b21bfb24e5be-dispersionconf\") pod \"swift-ring-rebalance-dmkh9\" (UID: \"14ccca3b-65a8-4df1-9905-b21bfb24e5be\") " pod="openstack/swift-ring-rebalance-dmkh9" Jan 26 12:58:17 crc kubenswrapper[4881]: I0126 12:58:17.973199 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-969sk\" (UniqueName: \"kubernetes.io/projected/14ccca3b-65a8-4df1-9905-b21bfb24e5be-kube-api-access-969sk\") pod \"swift-ring-rebalance-dmkh9\" (UID: \"14ccca3b-65a8-4df1-9905-b21bfb24e5be\") " pod="openstack/swift-ring-rebalance-dmkh9" Jan 26 12:58:17 crc kubenswrapper[4881]: I0126 12:58:17.973383 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/14ccca3b-65a8-4df1-9905-b21bfb24e5be-etc-swift\") pod \"swift-ring-rebalance-dmkh9\" (UID: \"14ccca3b-65a8-4df1-9905-b21bfb24e5be\") " pod="openstack/swift-ring-rebalance-dmkh9" Jan 26 12:58:17 crc kubenswrapper[4881]: I0126 12:58:17.973500 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14ccca3b-65a8-4df1-9905-b21bfb24e5be-combined-ca-bundle\") pod \"swift-ring-rebalance-dmkh9\" (UID: \"14ccca3b-65a8-4df1-9905-b21bfb24e5be\") " pod="openstack/swift-ring-rebalance-dmkh9" Jan 26 12:58:17 crc kubenswrapper[4881]: I0126 12:58:17.973598 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/14ccca3b-65a8-4df1-9905-b21bfb24e5be-ring-data-devices\") pod \"swift-ring-rebalance-dmkh9\" (UID: \"14ccca3b-65a8-4df1-9905-b21bfb24e5be\") " pod="openstack/swift-ring-rebalance-dmkh9" Jan 26 12:58:17 crc kubenswrapper[4881]: I0126 12:58:17.973654 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/14ccca3b-65a8-4df1-9905-b21bfb24e5be-scripts\") pod \"swift-ring-rebalance-dmkh9\" (UID: \"14ccca3b-65a8-4df1-9905-b21bfb24e5be\") " pod="openstack/swift-ring-rebalance-dmkh9" Jan 26 12:58:17 crc kubenswrapper[4881]: I0126 12:58:17.973762 4881 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/14ccca3b-65a8-4df1-9905-b21bfb24e5be-swiftconf\") pod \"swift-ring-rebalance-dmkh9\" (UID: \"14ccca3b-65a8-4df1-9905-b21bfb24e5be\") " pod="openstack/swift-ring-rebalance-dmkh9" Jan 26 12:58:17 crc kubenswrapper[4881]: I0126 12:58:17.973814 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/14ccca3b-65a8-4df1-9905-b21bfb24e5be-dispersionconf\") pod \"swift-ring-rebalance-dmkh9\" (UID: \"14ccca3b-65a8-4df1-9905-b21bfb24e5be\") " pod="openstack/swift-ring-rebalance-dmkh9" Jan 26 12:58:17 crc kubenswrapper[4881]: I0126 12:58:17.975098 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/14ccca3b-65a8-4df1-9905-b21bfb24e5be-ring-data-devices\") pod \"swift-ring-rebalance-dmkh9\" (UID: \"14ccca3b-65a8-4df1-9905-b21bfb24e5be\") " pod="openstack/swift-ring-rebalance-dmkh9" Jan 26 12:58:17 crc kubenswrapper[4881]: I0126 12:58:17.975296 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/14ccca3b-65a8-4df1-9905-b21bfb24e5be-scripts\") pod \"swift-ring-rebalance-dmkh9\" (UID: \"14ccca3b-65a8-4df1-9905-b21bfb24e5be\") " pod="openstack/swift-ring-rebalance-dmkh9" Jan 26 12:58:17 crc kubenswrapper[4881]: I0126 12:58:17.975803 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/14ccca3b-65a8-4df1-9905-b21bfb24e5be-etc-swift\") pod \"swift-ring-rebalance-dmkh9\" (UID: \"14ccca3b-65a8-4df1-9905-b21bfb24e5be\") " pod="openstack/swift-ring-rebalance-dmkh9" Jan 26 12:58:17 crc kubenswrapper[4881]: I0126 12:58:17.984908 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/14ccca3b-65a8-4df1-9905-b21bfb24e5be-dispersionconf\") pod \"swift-ring-rebalance-dmkh9\" (UID: \"14ccca3b-65a8-4df1-9905-b21bfb24e5be\") " pod="openstack/swift-ring-rebalance-dmkh9" Jan 26 12:58:17 crc kubenswrapper[4881]: I0126 12:58:17.985788 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14ccca3b-65a8-4df1-9905-b21bfb24e5be-combined-ca-bundle\") pod \"swift-ring-rebalance-dmkh9\" (UID: \"14ccca3b-65a8-4df1-9905-b21bfb24e5be\") " pod="openstack/swift-ring-rebalance-dmkh9" Jan 26 12:58:17 crc kubenswrapper[4881]: I0126 12:58:17.991687 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/14ccca3b-65a8-4df1-9905-b21bfb24e5be-swiftconf\") pod \"swift-ring-rebalance-dmkh9\" (UID: \"14ccca3b-65a8-4df1-9905-b21bfb24e5be\") " pod="openstack/swift-ring-rebalance-dmkh9" Jan 26 12:58:18 crc kubenswrapper[4881]: I0126 12:58:18.005802 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-969sk\" (UniqueName: \"kubernetes.io/projected/14ccca3b-65a8-4df1-9905-b21bfb24e5be-kube-api-access-969sk\") pod \"swift-ring-rebalance-dmkh9\" (UID: \"14ccca3b-65a8-4df1-9905-b21bfb24e5be\") " pod="openstack/swift-ring-rebalance-dmkh9" Jan 26 12:58:18 crc kubenswrapper[4881]: I0126 12:58:18.059313 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-dmkh9" Jan 26 12:58:18 crc kubenswrapper[4881]: I0126 12:58:18.586171 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-dmkh9"] Jan 26 12:58:18 crc kubenswrapper[4881]: I0126 12:58:18.860431 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Jan 26 12:58:18 crc kubenswrapper[4881]: I0126 12:58:18.860499 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Jan 26 12:58:19 crc kubenswrapper[4881]: I0126 12:58:19.059614 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-f8kct"] Jan 26 12:58:19 crc kubenswrapper[4881]: I0126 12:58:19.061808 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-f8kct" Jan 26 12:58:19 crc kubenswrapper[4881]: I0126 12:58:19.065849 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 26 12:58:19 crc kubenswrapper[4881]: I0126 12:58:19.089331 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-f8kct"] Jan 26 12:58:19 crc kubenswrapper[4881]: I0126 12:58:19.119343 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Jan 26 12:58:19 crc kubenswrapper[4881]: I0126 12:58:19.149085 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-dmkh9" event={"ID":"14ccca3b-65a8-4df1-9905-b21bfb24e5be","Type":"ContainerStarted","Data":"e8cd01a0e674feb9ee6a26208d64331d49a1bdb062576cdda57e83d929d41ef4"} Jan 26 12:58:19 crc kubenswrapper[4881]: I0126 12:58:19.151275 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"1067dd91-d79f-4165-8c6e-e3309dff7d26","Type":"ContainerStarted","Data":"ecf38f70403187b14b9c1b17fa436d2edc91256a449260f1599234f5dc47eda5"} Jan 26 12:58:19 crc kubenswrapper[4881]: I0126 12:58:19.196550 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7337bc3f-e4be-4c3f-a2c1-61748291f085-operator-scripts\") pod \"root-account-create-update-f8kct\" (UID: \"7337bc3f-e4be-4c3f-a2c1-61748291f085\") " pod="openstack/root-account-create-update-f8kct" Jan 26 12:58:19 crc kubenswrapper[4881]: I0126 12:58:19.196624 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rspx4\" (UniqueName: \"kubernetes.io/projected/7337bc3f-e4be-4c3f-a2c1-61748291f085-kube-api-access-rspx4\") pod \"root-account-create-update-f8kct\" (UID: \"7337bc3f-e4be-4c3f-a2c1-61748291f085\") " pod="openstack/root-account-create-update-f8kct" Jan 26 12:58:19 crc kubenswrapper[4881]: I0126 12:58:19.284833 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Jan 26 12:58:19 crc kubenswrapper[4881]: I0126 12:58:19.306619 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7337bc3f-e4be-4c3f-a2c1-61748291f085-operator-scripts\") pod \"root-account-create-update-f8kct\" (UID: \"7337bc3f-e4be-4c3f-a2c1-61748291f085\") " pod="openstack/root-account-create-update-f8kct" Jan 26 12:58:19 crc kubenswrapper[4881]: I0126 
12:58:19.306725 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rspx4\" (UniqueName: \"kubernetes.io/projected/7337bc3f-e4be-4c3f-a2c1-61748291f085-kube-api-access-rspx4\") pod \"root-account-create-update-f8kct\" (UID: \"7337bc3f-e4be-4c3f-a2c1-61748291f085\") " pod="openstack/root-account-create-update-f8kct" Jan 26 12:58:19 crc kubenswrapper[4881]: I0126 12:58:19.307622 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7337bc3f-e4be-4c3f-a2c1-61748291f085-operator-scripts\") pod \"root-account-create-update-f8kct\" (UID: \"7337bc3f-e4be-4c3f-a2c1-61748291f085\") " pod="openstack/root-account-create-update-f8kct" Jan 26 12:58:19 crc kubenswrapper[4881]: I0126 12:58:19.345063 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rspx4\" (UniqueName: \"kubernetes.io/projected/7337bc3f-e4be-4c3f-a2c1-61748291f085-kube-api-access-rspx4\") pod \"root-account-create-update-f8kct\" (UID: \"7337bc3f-e4be-4c3f-a2c1-61748291f085\") " pod="openstack/root-account-create-update-f8kct" Jan 26 12:58:19 crc kubenswrapper[4881]: I0126 12:58:19.384428 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-f8kct" Jan 26 12:58:19 crc kubenswrapper[4881]: I0126 12:58:19.808989 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-f8kct"] Jan 26 12:58:20 crc kubenswrapper[4881]: I0126 12:58:20.193487 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-wd8nz"] Jan 26 12:58:20 crc kubenswrapper[4881]: I0126 12:58:20.194708 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-wd8nz" Jan 26 12:58:20 crc kubenswrapper[4881]: I0126 12:58:20.199890 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-wd8nz"] Jan 26 12:58:20 crc kubenswrapper[4881]: I0126 12:58:20.316117 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-37af-account-create-update-5p95p"] Jan 26 12:58:20 crc kubenswrapper[4881]: I0126 12:58:20.317195 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-37af-account-create-update-5p95p" Jan 26 12:58:20 crc kubenswrapper[4881]: I0126 12:58:20.318855 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 26 12:58:20 crc kubenswrapper[4881]: I0126 12:58:20.333961 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-37af-account-create-update-5p95p"] Jan 26 12:58:20 crc kubenswrapper[4881]: I0126 12:58:20.336462 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7gxm\" (UniqueName: \"kubernetes.io/projected/4ead8e57-515d-4130-a679-ee2a2a148e45-kube-api-access-n7gxm\") pod \"keystone-db-create-wd8nz\" (UID: \"4ead8e57-515d-4130-a679-ee2a2a148e45\") " pod="openstack/keystone-db-create-wd8nz" Jan 26 12:58:20 crc kubenswrapper[4881]: I0126 12:58:20.336774 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4ead8e57-515d-4130-a679-ee2a2a148e45-operator-scripts\") pod \"keystone-db-create-wd8nz\" (UID: \"4ead8e57-515d-4130-a679-ee2a2a148e45\") " pod="openstack/keystone-db-create-wd8nz" Jan 26 12:58:20 crc kubenswrapper[4881]: I0126 12:58:20.438600 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7gxm\" (UniqueName: \"kubernetes.io/projected/4ead8e57-515d-4130-a679-ee2a2a148e45-kube-api-access-n7gxm\") pod \"keystone-db-create-wd8nz\" (UID: \"4ead8e57-515d-4130-a679-ee2a2a148e45\") " pod="openstack/keystone-db-create-wd8nz" Jan 26 12:58:20 crc kubenswrapper[4881]: I0126 12:58:20.438706 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ad31881f-a01e-473c-a495-612e76bf3ecf-operator-scripts\") pod \"keystone-37af-account-create-update-5p95p\" (UID: \"ad31881f-a01e-473c-a495-612e76bf3ecf\") " pod="openstack/keystone-37af-account-create-update-5p95p" Jan 26 12:58:20 crc kubenswrapper[4881]: I0126 12:58:20.438740 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dkj2\" (UniqueName: \"kubernetes.io/projected/ad31881f-a01e-473c-a495-612e76bf3ecf-kube-api-access-4dkj2\") pod \"keystone-37af-account-create-update-5p95p\" (UID: \"ad31881f-a01e-473c-a495-612e76bf3ecf\") " pod="openstack/keystone-37af-account-create-update-5p95p" Jan 26 12:58:20 crc kubenswrapper[4881]: I0126 12:58:20.438760 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4ead8e57-515d-4130-a679-ee2a2a148e45-operator-scripts\") pod \"keystone-db-create-wd8nz\" (UID: \"4ead8e57-515d-4130-a679-ee2a2a148e45\") " pod="openstack/keystone-db-create-wd8nz" Jan 26 12:58:20 crc kubenswrapper[4881]: I0126 12:58:20.439635 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4ead8e57-515d-4130-a679-ee2a2a148e45-operator-scripts\") pod \"keystone-db-create-wd8nz\" (UID: \"4ead8e57-515d-4130-a679-ee2a2a148e45\") " pod="openstack/keystone-db-create-wd8nz" Jan 26 12:58:20 crc kubenswrapper[4881]: I0126 12:58:20.444965 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-dzvpm"] Jan 26 12:58:20 crc kubenswrapper[4881]: I0126 12:58:20.446229 4881 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/placement-db-create-dzvpm" Jan 26 12:58:20 crc kubenswrapper[4881]: I0126 12:58:20.466963 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7gxm\" (UniqueName: \"kubernetes.io/projected/4ead8e57-515d-4130-a679-ee2a2a148e45-kube-api-access-n7gxm\") pod \"keystone-db-create-wd8nz\" (UID: \"4ead8e57-515d-4130-a679-ee2a2a148e45\") " pod="openstack/keystone-db-create-wd8nz" Jan 26 12:58:20 crc kubenswrapper[4881]: I0126 12:58:20.467021 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-dzvpm"] Jan 26 12:58:20 crc kubenswrapper[4881]: I0126 12:58:20.519878 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-d805-account-create-update-r5zgz"] Jan 26 12:58:20 crc kubenswrapper[4881]: I0126 12:58:20.521259 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-d805-account-create-update-r5zgz" Jan 26 12:58:20 crc kubenswrapper[4881]: I0126 12:58:20.523282 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 26 12:58:20 crc kubenswrapper[4881]: I0126 12:58:20.529280 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-d805-account-create-update-r5zgz"] Jan 26 12:58:20 crc kubenswrapper[4881]: I0126 12:58:20.540803 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ad31881f-a01e-473c-a495-612e76bf3ecf-operator-scripts\") pod \"keystone-37af-account-create-update-5p95p\" (UID: \"ad31881f-a01e-473c-a495-612e76bf3ecf\") " pod="openstack/keystone-37af-account-create-update-5p95p" Jan 26 12:58:20 crc kubenswrapper[4881]: I0126 12:58:20.540856 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4dkj2\" (UniqueName: \"kubernetes.io/projected/ad31881f-a01e-473c-a495-612e76bf3ecf-kube-api-access-4dkj2\") pod \"keystone-37af-account-create-update-5p95p\" (UID: \"ad31881f-a01e-473c-a495-612e76bf3ecf\") " pod="openstack/keystone-37af-account-create-update-5p95p" Jan 26 12:58:20 crc kubenswrapper[4881]: I0126 12:58:20.540892 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/01a846b5-2564-476e-aed0-d658864b48cc-operator-scripts\") pod \"placement-db-create-dzvpm\" (UID: \"01a846b5-2564-476e-aed0-d658864b48cc\") " pod="openstack/placement-db-create-dzvpm" Jan 26 12:58:20 crc kubenswrapper[4881]: I0126 12:58:20.540917 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28zqf\" (UniqueName: \"kubernetes.io/projected/01a846b5-2564-476e-aed0-d658864b48cc-kube-api-access-28zqf\") pod \"placement-db-create-dzvpm\" (UID: \"01a846b5-2564-476e-aed0-d658864b48cc\") " pod="openstack/placement-db-create-dzvpm" Jan 26 12:58:20 crc kubenswrapper[4881]: I0126 12:58:20.541678 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ad31881f-a01e-473c-a495-612e76bf3ecf-operator-scripts\") pod \"keystone-37af-account-create-update-5p95p\" (UID: \"ad31881f-a01e-473c-a495-612e76bf3ecf\") " pod="openstack/keystone-37af-account-create-update-5p95p" Jan 26 12:58:20 crc kubenswrapper[4881]: I0126 12:58:20.554666 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-wd8nz" Jan 26 12:58:20 crc kubenswrapper[4881]: I0126 12:58:20.556209 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4dkj2\" (UniqueName: \"kubernetes.io/projected/ad31881f-a01e-473c-a495-612e76bf3ecf-kube-api-access-4dkj2\") pod \"keystone-37af-account-create-update-5p95p\" (UID: \"ad31881f-a01e-473c-a495-612e76bf3ecf\") " pod="openstack/keystone-37af-account-create-update-5p95p" Jan 26 12:58:20 crc kubenswrapper[4881]: I0126 12:58:20.639034 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-37af-account-create-update-5p95p" Jan 26 12:58:20 crc kubenswrapper[4881]: I0126 12:58:20.642404 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28zqf\" (UniqueName: \"kubernetes.io/projected/01a846b5-2564-476e-aed0-d658864b48cc-kube-api-access-28zqf\") pod \"placement-db-create-dzvpm\" (UID: \"01a846b5-2564-476e-aed0-d658864b48cc\") " pod="openstack/placement-db-create-dzvpm" Jan 26 12:58:20 crc kubenswrapper[4881]: I0126 12:58:20.642453 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d7efba59-5af3-4544-9121-8a8a88859aea-operator-scripts\") pod \"placement-d805-account-create-update-r5zgz\" (UID: \"d7efba59-5af3-4544-9121-8a8a88859aea\") " pod="openstack/placement-d805-account-create-update-r5zgz" Jan 26 12:58:20 crc kubenswrapper[4881]: I0126 12:58:20.642718 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dz55g\" (UniqueName: \"kubernetes.io/projected/d7efba59-5af3-4544-9121-8a8a88859aea-kube-api-access-dz55g\") pod \"placement-d805-account-create-update-r5zgz\" (UID: \"d7efba59-5af3-4544-9121-8a8a88859aea\") " pod="openstack/placement-d805-account-create-update-r5zgz" Jan 26 12:58:20 crc kubenswrapper[4881]: I0126 12:58:20.642875 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/01a846b5-2564-476e-aed0-d658864b48cc-operator-scripts\") pod \"placement-db-create-dzvpm\" (UID: \"01a846b5-2564-476e-aed0-d658864b48cc\") " pod="openstack/placement-db-create-dzvpm" Jan 26 12:58:20 crc kubenswrapper[4881]: I0126 12:58:20.643748 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/01a846b5-2564-476e-aed0-d658864b48cc-operator-scripts\") pod \"placement-db-create-dzvpm\" (UID: \"01a846b5-2564-476e-aed0-d658864b48cc\") " pod="openstack/placement-db-create-dzvpm" Jan 26 12:58:20 crc kubenswrapper[4881]: I0126 12:58:20.660981 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-28zqf\" (UniqueName: \"kubernetes.io/projected/01a846b5-2564-476e-aed0-d658864b48cc-kube-api-access-28zqf\") pod \"placement-db-create-dzvpm\" (UID: \"01a846b5-2564-476e-aed0-d658864b48cc\") " pod="openstack/placement-db-create-dzvpm" Jan 26 12:58:20 crc kubenswrapper[4881]: I0126 12:58:20.745203 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dz55g\" (UniqueName: \"kubernetes.io/projected/d7efba59-5af3-4544-9121-8a8a88859aea-kube-api-access-dz55g\") pod \"placement-d805-account-create-update-r5zgz\" (UID: \"d7efba59-5af3-4544-9121-8a8a88859aea\") " 
pod="openstack/placement-d805-account-create-update-r5zgz" Jan 26 12:58:20 crc kubenswrapper[4881]: I0126 12:58:20.745305 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d7efba59-5af3-4544-9121-8a8a88859aea-operator-scripts\") pod \"placement-d805-account-create-update-r5zgz\" (UID: \"d7efba59-5af3-4544-9121-8a8a88859aea\") " pod="openstack/placement-d805-account-create-update-r5zgz" Jan 26 12:58:20 crc kubenswrapper[4881]: I0126 12:58:20.746118 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d7efba59-5af3-4544-9121-8a8a88859aea-operator-scripts\") pod \"placement-d805-account-create-update-r5zgz\" (UID: \"d7efba59-5af3-4544-9121-8a8a88859aea\") " pod="openstack/placement-d805-account-create-update-r5zgz" Jan 26 12:58:20 crc kubenswrapper[4881]: I0126 12:58:20.768969 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dz55g\" (UniqueName: \"kubernetes.io/projected/d7efba59-5af3-4544-9121-8a8a88859aea-kube-api-access-dz55g\") pod \"placement-d805-account-create-update-r5zgz\" (UID: \"d7efba59-5af3-4544-9121-8a8a88859aea\") " pod="openstack/placement-d805-account-create-update-r5zgz" Jan 26 12:58:20 crc kubenswrapper[4881]: I0126 12:58:20.816140 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-dzvpm" Jan 26 12:58:20 crc kubenswrapper[4881]: I0126 12:58:20.834329 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-d805-account-create-update-r5zgz" Jan 26 12:58:21 crc kubenswrapper[4881]: I0126 12:58:21.661789 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e234f178-5499-441d-923d-26a5a7cbfe04-etc-swift\") pod \"swift-storage-0\" (UID: \"e234f178-5499-441d-923d-26a5a7cbfe04\") " pod="openstack/swift-storage-0" Jan 26 12:58:21 crc kubenswrapper[4881]: E0126 12:58:21.661981 4881 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 26 12:58:21 crc kubenswrapper[4881]: E0126 12:58:21.662102 4881 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 26 12:58:21 crc kubenswrapper[4881]: E0126 12:58:21.662151 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e234f178-5499-441d-923d-26a5a7cbfe04-etc-swift podName:e234f178-5499-441d-923d-26a5a7cbfe04 nodeName:}" failed. No retries permitted until 2026-01-26 12:58:29.662134766 +0000 UTC m=+1382.141444792 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/e234f178-5499-441d-923d-26a5a7cbfe04-etc-swift") pod "swift-storage-0" (UID: "e234f178-5499-441d-923d-26a5a7cbfe04") : configmap "swift-ring-files" not found Jan 26 12:58:21 crc kubenswrapper[4881]: W0126 12:58:21.987936 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7337bc3f_e4be_4c3f_a2c1_61748291f085.slice/crio-41680d81c8ad723e821254d58135da04765562f7a00ffe60dc06fab38d525945 WatchSource:0}: Error finding container 41680d81c8ad723e821254d58135da04765562f7a00ffe60dc06fab38d525945: Status 404 returned error can't find the container with id 41680d81c8ad723e821254d58135da04765562f7a00ffe60dc06fab38d525945 Jan 26 12:58:22 crc kubenswrapper[4881]: I0126 12:58:22.190029 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-f8kct" event={"ID":"7337bc3f-e4be-4c3f-a2c1-61748291f085","Type":"ContainerStarted","Data":"41680d81c8ad723e821254d58135da04765562f7a00ffe60dc06fab38d525945"} Jan 26 12:58:22 crc kubenswrapper[4881]: I0126 12:58:22.582258 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-660e-account-create-update-6rxjm"] Jan 26 12:58:22 crc kubenswrapper[4881]: I0126 12:58:22.583799 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-660e-account-create-update-6rxjm" Jan 26 12:58:22 crc kubenswrapper[4881]: I0126 12:58:22.586683 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-db-secret" Jan 26 12:58:22 crc kubenswrapper[4881]: I0126 12:58:22.591840 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-db-create-b46kb"] Jan 26 12:58:22 crc kubenswrapper[4881]: I0126 12:58:22.593036 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-db-create-b46kb" Jan 26 12:58:22 crc kubenswrapper[4881]: I0126 12:58:22.599145 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 26 12:58:22 crc kubenswrapper[4881]: I0126 12:58:22.602102 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-660e-account-create-update-6rxjm"] Jan 26 12:58:22 crc kubenswrapper[4881]: I0126 12:58:22.609833 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-create-b46kb"] Jan 26 12:58:22 crc kubenswrapper[4881]: W0126 12:58:22.628808 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd7efba59_5af3_4544_9121_8a8a88859aea.slice/crio-8d83ea8fcd2cc677aa73baea77c4ae10199f4f684ccad0139c39160fe41729ef WatchSource:0}: Error finding container 8d83ea8fcd2cc677aa73baea77c4ae10199f4f684ccad0139c39160fe41729ef: Status 404 returned error can't find the container with id 8d83ea8fcd2cc677aa73baea77c4ae10199f4f684ccad0139c39160fe41729ef Jan 26 12:58:22 crc kubenswrapper[4881]: W0126 12:58:22.630648 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podad31881f_a01e_473c_a495_612e76bf3ecf.slice/crio-23a81706089e69a6446bbb7d376ab0daf778284114dcdee2782a7ba53d4a4391 WatchSource:0}: Error finding container 23a81706089e69a6446bbb7d376ab0daf778284114dcdee2782a7ba53d4a4391: Status 404 returned error can't find the container with id 23a81706089e69a6446bbb7d376ab0daf778284114dcdee2782a7ba53d4a4391 Jan 26 12:58:22 crc kubenswrapper[4881]: W0126 12:58:22.637585 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod01a846b5_2564_476e_aed0_d658864b48cc.slice/crio-948d846ec2b00cb0b128802296c2fc58d466ab2307e1653399edc46ae957288a WatchSource:0}: Error finding container 948d846ec2b00cb0b128802296c2fc58d466ab2307e1653399edc46ae957288a: Status 404 returned error can't find the container with id 948d846ec2b00cb0b128802296c2fc58d466ab2307e1653399edc46ae957288a Jan 26 12:58:22 crc kubenswrapper[4881]: I0126 12:58:22.640611 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-d805-account-create-update-r5zgz"] Jan 26 12:58:22 crc kubenswrapper[4881]: I0126 12:58:22.649325 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-37af-account-create-update-5p95p"] Jan 26 12:58:22 crc kubenswrapper[4881]: I0126 12:58:22.669230 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-dzvpm"] Jan 26 12:58:22 crc kubenswrapper[4881]: I0126 12:58:22.689407 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghzqb\" (UniqueName: \"kubernetes.io/projected/6b4c479a-9fc9-4021-9557-c85b90ee39a3-kube-api-access-ghzqb\") pod \"watcher-660e-account-create-update-6rxjm\" (UID: \"6b4c479a-9fc9-4021-9557-c85b90ee39a3\") " pod="openstack/watcher-660e-account-create-update-6rxjm" Jan 26 12:58:22 crc kubenswrapper[4881]: I0126 12:58:22.689498 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6b4c479a-9fc9-4021-9557-c85b90ee39a3-operator-scripts\") pod \"watcher-660e-account-create-update-6rxjm\" (UID: \"6b4c479a-9fc9-4021-9557-c85b90ee39a3\") " 
pod="openstack/watcher-660e-account-create-update-6rxjm" Jan 26 12:58:22 crc kubenswrapper[4881]: I0126 12:58:22.689549 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-df7tn\" (UniqueName: \"kubernetes.io/projected/2cb1e605-5606-436c-8095-47263b851c49-kube-api-access-df7tn\") pod \"watcher-db-create-b46kb\" (UID: \"2cb1e605-5606-436c-8095-47263b851c49\") " pod="openstack/watcher-db-create-b46kb" Jan 26 12:58:22 crc kubenswrapper[4881]: I0126 12:58:22.689620 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2cb1e605-5606-436c-8095-47263b851c49-operator-scripts\") pod \"watcher-db-create-b46kb\" (UID: \"2cb1e605-5606-436c-8095-47263b851c49\") " pod="openstack/watcher-db-create-b46kb" Jan 26 12:58:22 crc kubenswrapper[4881]: I0126 12:58:22.737535 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-wd8nz"] Jan 26 12:58:22 crc kubenswrapper[4881]: I0126 12:58:22.791675 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ghzqb\" (UniqueName: \"kubernetes.io/projected/6b4c479a-9fc9-4021-9557-c85b90ee39a3-kube-api-access-ghzqb\") pod \"watcher-660e-account-create-update-6rxjm\" (UID: \"6b4c479a-9fc9-4021-9557-c85b90ee39a3\") " pod="openstack/watcher-660e-account-create-update-6rxjm" Jan 26 12:58:22 crc kubenswrapper[4881]: I0126 12:58:22.791729 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6b4c479a-9fc9-4021-9557-c85b90ee39a3-operator-scripts\") pod \"watcher-660e-account-create-update-6rxjm\" (UID: \"6b4c479a-9fc9-4021-9557-c85b90ee39a3\") " pod="openstack/watcher-660e-account-create-update-6rxjm" Jan 26 12:58:22 crc kubenswrapper[4881]: I0126 12:58:22.791751 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-df7tn\" (UniqueName: \"kubernetes.io/projected/2cb1e605-5606-436c-8095-47263b851c49-kube-api-access-df7tn\") pod \"watcher-db-create-b46kb\" (UID: \"2cb1e605-5606-436c-8095-47263b851c49\") " pod="openstack/watcher-db-create-b46kb" Jan 26 12:58:22 crc kubenswrapper[4881]: I0126 12:58:22.791801 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2cb1e605-5606-436c-8095-47263b851c49-operator-scripts\") pod \"watcher-db-create-b46kb\" (UID: \"2cb1e605-5606-436c-8095-47263b851c49\") " pod="openstack/watcher-db-create-b46kb" Jan 26 12:58:22 crc kubenswrapper[4881]: I0126 12:58:22.792770 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6b4c479a-9fc9-4021-9557-c85b90ee39a3-operator-scripts\") pod \"watcher-660e-account-create-update-6rxjm\" (UID: \"6b4c479a-9fc9-4021-9557-c85b90ee39a3\") " pod="openstack/watcher-660e-account-create-update-6rxjm" Jan 26 12:58:22 crc kubenswrapper[4881]: I0126 12:58:22.795209 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2cb1e605-5606-436c-8095-47263b851c49-operator-scripts\") pod \"watcher-db-create-b46kb\" (UID: \"2cb1e605-5606-436c-8095-47263b851c49\") " pod="openstack/watcher-db-create-b46kb" Jan 26 12:58:22 crc kubenswrapper[4881]: I0126 12:58:22.811143 4881 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-df7tn\" (UniqueName: \"kubernetes.io/projected/2cb1e605-5606-436c-8095-47263b851c49-kube-api-access-df7tn\") pod \"watcher-db-create-b46kb\" (UID: \"2cb1e605-5606-436c-8095-47263b851c49\") " pod="openstack/watcher-db-create-b46kb" Jan 26 12:58:22 crc kubenswrapper[4881]: I0126 12:58:22.819157 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghzqb\" (UniqueName: \"kubernetes.io/projected/6b4c479a-9fc9-4021-9557-c85b90ee39a3-kube-api-access-ghzqb\") pod \"watcher-660e-account-create-update-6rxjm\" (UID: \"6b4c479a-9fc9-4021-9557-c85b90ee39a3\") " pod="openstack/watcher-660e-account-create-update-6rxjm" Jan 26 12:58:22 crc kubenswrapper[4881]: I0126 12:58:22.908897 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-660e-account-create-update-6rxjm" Jan 26 12:58:22 crc kubenswrapper[4881]: I0126 12:58:22.920702 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-create-b46kb" Jan 26 12:58:22 crc kubenswrapper[4881]: I0126 12:58:22.942675 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-65cc6fcf45-z9nsc" Jan 26 12:58:23 crc kubenswrapper[4881]: I0126 12:58:23.010105 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57467f675c-tz7zs"] Jan 26 12:58:23 crc kubenswrapper[4881]: I0126 12:58:23.010497 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57467f675c-tz7zs" podUID="fd528ad5-ee88-4a39-b948-6364fa84fbe9" containerName="dnsmasq-dns" containerID="cri-o://07cb88ca2367e29027f94d6cf970fce114174128ff7c0c4fcf90d7cae8a4c0e0" gracePeriod=10 Jan 26 12:58:23 crc kubenswrapper[4881]: I0126 12:58:23.238479 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-wd8nz" event={"ID":"4ead8e57-515d-4130-a679-ee2a2a148e45","Type":"ContainerStarted","Data":"f73cbde60c8c592e4bfaa86c59e453cd9e1d63c0f1c3d8a0da2c2b1ee487f975"} Jan 26 12:58:23 crc kubenswrapper[4881]: I0126 12:58:23.238736 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-wd8nz" event={"ID":"4ead8e57-515d-4130-a679-ee2a2a148e45","Type":"ContainerStarted","Data":"11f5e8ce7580320da8baf161ae88d207369a6007abac36fb82337d4e25d07598"} Jan 26 12:58:23 crc kubenswrapper[4881]: I0126 12:58:23.240706 4881 generic.go:334] "Generic (PLEG): container finished" podID="fd528ad5-ee88-4a39-b948-6364fa84fbe9" containerID="07cb88ca2367e29027f94d6cf970fce114174128ff7c0c4fcf90d7cae8a4c0e0" exitCode=0 Jan 26 12:58:23 crc kubenswrapper[4881]: I0126 12:58:23.240793 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57467f675c-tz7zs" event={"ID":"fd528ad5-ee88-4a39-b948-6364fa84fbe9","Type":"ContainerDied","Data":"07cb88ca2367e29027f94d6cf970fce114174128ff7c0c4fcf90d7cae8a4c0e0"} Jan 26 12:58:23 crc kubenswrapper[4881]: I0126 12:58:23.248400 4881 generic.go:334] "Generic (PLEG): container finished" podID="7337bc3f-e4be-4c3f-a2c1-61748291f085" containerID="fd520d6bd78adb4ac5c7506b5231dfec9cbeb7c3df32e3b013567655502a5fbf" exitCode=0 Jan 26 12:58:23 crc kubenswrapper[4881]: I0126 12:58:23.248458 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-f8kct" event={"ID":"7337bc3f-e4be-4c3f-a2c1-61748291f085","Type":"ContainerDied","Data":"fd520d6bd78adb4ac5c7506b5231dfec9cbeb7c3df32e3b013567655502a5fbf"} Jan 26 12:58:23 crc 
kubenswrapper[4881]: I0126 12:58:23.255030 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-d805-account-create-update-r5zgz" event={"ID":"d7efba59-5af3-4544-9121-8a8a88859aea","Type":"ContainerStarted","Data":"90d07ff7c7b86b3c241fd96dc9de8bed049bbf6f08f370febdd263fd653de7ce"} Jan 26 12:58:23 crc kubenswrapper[4881]: I0126 12:58:23.255070 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-d805-account-create-update-r5zgz" event={"ID":"d7efba59-5af3-4544-9121-8a8a88859aea","Type":"ContainerStarted","Data":"8d83ea8fcd2cc677aa73baea77c4ae10199f4f684ccad0139c39160fe41729ef"} Jan 26 12:58:23 crc kubenswrapper[4881]: I0126 12:58:23.258212 4881 generic.go:334] "Generic (PLEG): container finished" podID="01a846b5-2564-476e-aed0-d658864b48cc" containerID="86bb777ab8e7552d57653bd091314db34a81fa654abe75fe9120c0df42d3eca0" exitCode=0 Jan 26 12:58:23 crc kubenswrapper[4881]: I0126 12:58:23.258360 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-dzvpm" event={"ID":"01a846b5-2564-476e-aed0-d658864b48cc","Type":"ContainerDied","Data":"86bb777ab8e7552d57653bd091314db34a81fa654abe75fe9120c0df42d3eca0"} Jan 26 12:58:23 crc kubenswrapper[4881]: I0126 12:58:23.258382 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-dzvpm" event={"ID":"01a846b5-2564-476e-aed0-d658864b48cc","Type":"ContainerStarted","Data":"948d846ec2b00cb0b128802296c2fc58d466ab2307e1653399edc46ae957288a"} Jan 26 12:58:23 crc kubenswrapper[4881]: I0126 12:58:23.260194 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-37af-account-create-update-5p95p" event={"ID":"ad31881f-a01e-473c-a495-612e76bf3ecf","Type":"ContainerStarted","Data":"8bb3e925f67509b4bf3e46fc3adf2e07a3cbfe351f33b717e77d82d679f6defe"} Jan 26 12:58:23 crc kubenswrapper[4881]: I0126 12:58:23.260229 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-37af-account-create-update-5p95p" event={"ID":"ad31881f-a01e-473c-a495-612e76bf3ecf","Type":"ContainerStarted","Data":"23a81706089e69a6446bbb7d376ab0daf778284114dcdee2782a7ba53d4a4391"} Jan 26 12:58:23 crc kubenswrapper[4881]: I0126 12:58:23.261974 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-dmkh9" event={"ID":"14ccca3b-65a8-4df1-9905-b21bfb24e5be","Type":"ContainerStarted","Data":"c248528e4842b1434de470c44bdba72be3e538197e04f83ddbbcec4722fed6b1"} Jan 26 12:58:23 crc kubenswrapper[4881]: I0126 12:58:23.263280 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-wd8nz" podStartSLOduration=3.263263714 podStartE2EDuration="3.263263714s" podCreationTimestamp="2026-01-26 12:58:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:58:23.253805633 +0000 UTC m=+1375.733115659" watchObservedRunningTime="2026-01-26 12:58:23.263263714 +0000 UTC m=+1375.742573740" Jan 26 12:58:23 crc kubenswrapper[4881]: I0126 12:58:23.298359 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-d805-account-create-update-r5zgz" podStartSLOduration=3.298322309 podStartE2EDuration="3.298322309s" podCreationTimestamp="2026-01-26 12:58:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:58:23.280800341 +0000 UTC m=+1375.760110367" 
watchObservedRunningTime="2026-01-26 12:58:23.298322309 +0000 UTC m=+1375.777632335" Jan 26 12:58:23 crc kubenswrapper[4881]: I0126 12:58:23.302267 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-37af-account-create-update-5p95p" podStartSLOduration=3.302259575 podStartE2EDuration="3.302259575s" podCreationTimestamp="2026-01-26 12:58:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:58:23.296270919 +0000 UTC m=+1375.775580945" watchObservedRunningTime="2026-01-26 12:58:23.302259575 +0000 UTC m=+1375.781569601" Jan 26 12:58:23 crc kubenswrapper[4881]: I0126 12:58:23.319647 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-dmkh9" podStartSLOduration=2.769600821 podStartE2EDuration="6.319633748s" podCreationTimestamp="2026-01-26 12:58:17 +0000 UTC" firstStartedPulling="2026-01-26 12:58:18.592750432 +0000 UTC m=+1371.072060458" lastFinishedPulling="2026-01-26 12:58:22.142783359 +0000 UTC m=+1374.622093385" observedRunningTime="2026-01-26 12:58:23.316606885 +0000 UTC m=+1375.795916921" watchObservedRunningTime="2026-01-26 12:58:23.319633748 +0000 UTC m=+1375.798943774" Jan 26 12:58:23 crc kubenswrapper[4881]: I0126 12:58:23.461000 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-660e-account-create-update-6rxjm"] Jan 26 12:58:23 crc kubenswrapper[4881]: I0126 12:58:23.518983 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-create-b46kb"] Jan 26 12:58:23 crc kubenswrapper[4881]: I0126 12:58:23.646995 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57467f675c-tz7zs" Jan 26 12:58:23 crc kubenswrapper[4881]: I0126 12:58:23.815332 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fd528ad5-ee88-4a39-b948-6364fa84fbe9-dns-svc\") pod \"fd528ad5-ee88-4a39-b948-6364fa84fbe9\" (UID: \"fd528ad5-ee88-4a39-b948-6364fa84fbe9\") " Jan 26 12:58:23 crc kubenswrapper[4881]: I0126 12:58:23.815416 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gms7m\" (UniqueName: \"kubernetes.io/projected/fd528ad5-ee88-4a39-b948-6364fa84fbe9-kube-api-access-gms7m\") pod \"fd528ad5-ee88-4a39-b948-6364fa84fbe9\" (UID: \"fd528ad5-ee88-4a39-b948-6364fa84fbe9\") " Jan 26 12:58:23 crc kubenswrapper[4881]: I0126 12:58:23.815451 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd528ad5-ee88-4a39-b948-6364fa84fbe9-config\") pod \"fd528ad5-ee88-4a39-b948-6364fa84fbe9\" (UID: \"fd528ad5-ee88-4a39-b948-6364fa84fbe9\") " Jan 26 12:58:23 crc kubenswrapper[4881]: I0126 12:58:23.825876 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd528ad5-ee88-4a39-b948-6364fa84fbe9-kube-api-access-gms7m" (OuterVolumeSpecName: "kube-api-access-gms7m") pod "fd528ad5-ee88-4a39-b948-6364fa84fbe9" (UID: "fd528ad5-ee88-4a39-b948-6364fa84fbe9"). InnerVolumeSpecName "kube-api-access-gms7m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:58:23 crc kubenswrapper[4881]: I0126 12:58:23.863350 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fd528ad5-ee88-4a39-b948-6364fa84fbe9-config" (OuterVolumeSpecName: "config") pod "fd528ad5-ee88-4a39-b948-6364fa84fbe9" (UID: "fd528ad5-ee88-4a39-b948-6364fa84fbe9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:58:23 crc kubenswrapper[4881]: I0126 12:58:23.870475 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fd528ad5-ee88-4a39-b948-6364fa84fbe9-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "fd528ad5-ee88-4a39-b948-6364fa84fbe9" (UID: "fd528ad5-ee88-4a39-b948-6364fa84fbe9"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:58:23 crc kubenswrapper[4881]: I0126 12:58:23.918067 4881 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fd528ad5-ee88-4a39-b948-6364fa84fbe9-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 12:58:23 crc kubenswrapper[4881]: I0126 12:58:23.918608 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gms7m\" (UniqueName: \"kubernetes.io/projected/fd528ad5-ee88-4a39-b948-6364fa84fbe9-kube-api-access-gms7m\") on node \"crc\" DevicePath \"\"" Jan 26 12:58:23 crc kubenswrapper[4881]: I0126 12:58:23.918618 4881 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd528ad5-ee88-4a39-b948-6364fa84fbe9-config\") on node \"crc\" DevicePath \"\"" Jan 26 12:58:24 crc kubenswrapper[4881]: I0126 12:58:24.271811 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-b46kb" event={"ID":"2cb1e605-5606-436c-8095-47263b851c49","Type":"ContainerDied","Data":"0af25594a404692287871a6ca5b698ccd06291967a2d575c4dd54b27fafdf611"} Jan 26 12:58:24 crc kubenswrapper[4881]: I0126 12:58:24.271620 4881 generic.go:334] "Generic (PLEG): container finished" podID="2cb1e605-5606-436c-8095-47263b851c49" containerID="0af25594a404692287871a6ca5b698ccd06291967a2d575c4dd54b27fafdf611" exitCode=0 Jan 26 12:58:24 crc kubenswrapper[4881]: I0126 12:58:24.272258 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-b46kb" event={"ID":"2cb1e605-5606-436c-8095-47263b851c49","Type":"ContainerStarted","Data":"36d8dd16c863b3c754beb8d7519c7a9d24573c70f498c031afe019554a13adc6"} Jan 26 12:58:24 crc kubenswrapper[4881]: I0126 12:58:24.274274 4881 generic.go:334] "Generic (PLEG): container finished" podID="ad31881f-a01e-473c-a495-612e76bf3ecf" containerID="8bb3e925f67509b4bf3e46fc3adf2e07a3cbfe351f33b717e77d82d679f6defe" exitCode=0 Jan 26 12:58:24 crc kubenswrapper[4881]: I0126 12:58:24.274339 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-37af-account-create-update-5p95p" event={"ID":"ad31881f-a01e-473c-a495-612e76bf3ecf","Type":"ContainerDied","Data":"8bb3e925f67509b4bf3e46fc3adf2e07a3cbfe351f33b717e77d82d679f6defe"} Jan 26 12:58:24 crc kubenswrapper[4881]: I0126 12:58:24.276635 4881 generic.go:334] "Generic (PLEG): container finished" podID="4ead8e57-515d-4130-a679-ee2a2a148e45" containerID="f73cbde60c8c592e4bfaa86c59e453cd9e1d63c0f1c3d8a0da2c2b1ee487f975" exitCode=0 Jan 26 12:58:24 crc kubenswrapper[4881]: I0126 12:58:24.276697 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-wd8nz" 
event={"ID":"4ead8e57-515d-4130-a679-ee2a2a148e45","Type":"ContainerDied","Data":"f73cbde60c8c592e4bfaa86c59e453cd9e1d63c0f1c3d8a0da2c2b1ee487f975"} Jan 26 12:58:24 crc kubenswrapper[4881]: I0126 12:58:24.279125 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57467f675c-tz7zs" event={"ID":"fd528ad5-ee88-4a39-b948-6364fa84fbe9","Type":"ContainerDied","Data":"06a30068d8d2b70752deffa4afa6fabc01b0749dca5f49e7b2c4f92b9ef9c834"} Jan 26 12:58:24 crc kubenswrapper[4881]: I0126 12:58:24.279176 4881 scope.go:117] "RemoveContainer" containerID="07cb88ca2367e29027f94d6cf970fce114174128ff7c0c4fcf90d7cae8a4c0e0" Jan 26 12:58:24 crc kubenswrapper[4881]: I0126 12:58:24.279206 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57467f675c-tz7zs" Jan 26 12:58:24 crc kubenswrapper[4881]: I0126 12:58:24.287333 4881 generic.go:334] "Generic (PLEG): container finished" podID="6b4c479a-9fc9-4021-9557-c85b90ee39a3" containerID="505a889e439fd7438d7783206924e15a93e5af2528f0d6e63446242b47c433b2" exitCode=0 Jan 26 12:58:24 crc kubenswrapper[4881]: I0126 12:58:24.287399 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-660e-account-create-update-6rxjm" event={"ID":"6b4c479a-9fc9-4021-9557-c85b90ee39a3","Type":"ContainerDied","Data":"505a889e439fd7438d7783206924e15a93e5af2528f0d6e63446242b47c433b2"} Jan 26 12:58:24 crc kubenswrapper[4881]: I0126 12:58:24.287422 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-660e-account-create-update-6rxjm" event={"ID":"6b4c479a-9fc9-4021-9557-c85b90ee39a3","Type":"ContainerStarted","Data":"ff8fa572e205f90066aadabb3eecb05f1b6e8e71a9da5c3f8000d521fec1547e"} Jan 26 12:58:24 crc kubenswrapper[4881]: I0126 12:58:24.291617 4881 generic.go:334] "Generic (PLEG): container finished" podID="d7efba59-5af3-4544-9121-8a8a88859aea" containerID="90d07ff7c7b86b3c241fd96dc9de8bed049bbf6f08f370febdd263fd653de7ce" exitCode=0 Jan 26 12:58:24 crc kubenswrapper[4881]: I0126 12:58:24.291761 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-d805-account-create-update-r5zgz" event={"ID":"d7efba59-5af3-4544-9121-8a8a88859aea","Type":"ContainerDied","Data":"90d07ff7c7b86b3c241fd96dc9de8bed049bbf6f08f370febdd263fd653de7ce"} Jan 26 12:58:24 crc kubenswrapper[4881]: I0126 12:58:24.309181 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57467f675c-tz7zs"] Jan 26 12:58:24 crc kubenswrapper[4881]: I0126 12:58:24.324770 4881 scope.go:117] "RemoveContainer" containerID="edb6c8d58601dc83f3eaacf79c42081cd4a110b07a85393d43bae09600343456" Jan 26 12:58:24 crc kubenswrapper[4881]: I0126 12:58:24.331194 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57467f675c-tz7zs"] Jan 26 12:58:24 crc kubenswrapper[4881]: I0126 12:58:24.776935 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-dzvpm" Jan 26 12:58:24 crc kubenswrapper[4881]: I0126 12:58:24.783343 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-f8kct" Jan 26 12:58:24 crc kubenswrapper[4881]: I0126 12:58:24.937063 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/01a846b5-2564-476e-aed0-d658864b48cc-operator-scripts\") pod \"01a846b5-2564-476e-aed0-d658864b48cc\" (UID: \"01a846b5-2564-476e-aed0-d658864b48cc\") " Jan 26 12:58:24 crc kubenswrapper[4881]: I0126 12:58:24.937163 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-28zqf\" (UniqueName: \"kubernetes.io/projected/01a846b5-2564-476e-aed0-d658864b48cc-kube-api-access-28zqf\") pod \"01a846b5-2564-476e-aed0-d658864b48cc\" (UID: \"01a846b5-2564-476e-aed0-d658864b48cc\") " Jan 26 12:58:24 crc kubenswrapper[4881]: I0126 12:58:24.937199 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7337bc3f-e4be-4c3f-a2c1-61748291f085-operator-scripts\") pod \"7337bc3f-e4be-4c3f-a2c1-61748291f085\" (UID: \"7337bc3f-e4be-4c3f-a2c1-61748291f085\") " Jan 26 12:58:24 crc kubenswrapper[4881]: I0126 12:58:24.937272 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rspx4\" (UniqueName: \"kubernetes.io/projected/7337bc3f-e4be-4c3f-a2c1-61748291f085-kube-api-access-rspx4\") pod \"7337bc3f-e4be-4c3f-a2c1-61748291f085\" (UID: \"7337bc3f-e4be-4c3f-a2c1-61748291f085\") " Jan 26 12:58:24 crc kubenswrapper[4881]: I0126 12:58:24.937696 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01a846b5-2564-476e-aed0-d658864b48cc-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "01a846b5-2564-476e-aed0-d658864b48cc" (UID: "01a846b5-2564-476e-aed0-d658864b48cc"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:58:24 crc kubenswrapper[4881]: I0126 12:58:24.938052 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7337bc3f-e4be-4c3f-a2c1-61748291f085-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7337bc3f-e4be-4c3f-a2c1-61748291f085" (UID: "7337bc3f-e4be-4c3f-a2c1-61748291f085"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:58:24 crc kubenswrapper[4881]: I0126 12:58:24.940925 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01a846b5-2564-476e-aed0-d658864b48cc-kube-api-access-28zqf" (OuterVolumeSpecName: "kube-api-access-28zqf") pod "01a846b5-2564-476e-aed0-d658864b48cc" (UID: "01a846b5-2564-476e-aed0-d658864b48cc"). InnerVolumeSpecName "kube-api-access-28zqf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:58:24 crc kubenswrapper[4881]: I0126 12:58:24.942137 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7337bc3f-e4be-4c3f-a2c1-61748291f085-kube-api-access-rspx4" (OuterVolumeSpecName: "kube-api-access-rspx4") pod "7337bc3f-e4be-4c3f-a2c1-61748291f085" (UID: "7337bc3f-e4be-4c3f-a2c1-61748291f085"). InnerVolumeSpecName "kube-api-access-rspx4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:58:25 crc kubenswrapper[4881]: I0126 12:58:25.039308 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-28zqf\" (UniqueName: \"kubernetes.io/projected/01a846b5-2564-476e-aed0-d658864b48cc-kube-api-access-28zqf\") on node \"crc\" DevicePath \"\"" Jan 26 12:58:25 crc kubenswrapper[4881]: I0126 12:58:25.039666 4881 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7337bc3f-e4be-4c3f-a2c1-61748291f085-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 12:58:25 crc kubenswrapper[4881]: I0126 12:58:25.039811 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rspx4\" (UniqueName: \"kubernetes.io/projected/7337bc3f-e4be-4c3f-a2c1-61748291f085-kube-api-access-rspx4\") on node \"crc\" DevicePath \"\"" Jan 26 12:58:25 crc kubenswrapper[4881]: I0126 12:58:25.040017 4881 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/01a846b5-2564-476e-aed0-d658864b48cc-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 12:58:25 crc kubenswrapper[4881]: I0126 12:58:25.303814 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-dzvpm" event={"ID":"01a846b5-2564-476e-aed0-d658864b48cc","Type":"ContainerDied","Data":"948d846ec2b00cb0b128802296c2fc58d466ab2307e1653399edc46ae957288a"} Jan 26 12:58:25 crc kubenswrapper[4881]: I0126 12:58:25.304289 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="948d846ec2b00cb0b128802296c2fc58d466ab2307e1653399edc46ae957288a" Jan 26 12:58:25 crc kubenswrapper[4881]: I0126 12:58:25.303895 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-dzvpm" Jan 26 12:58:25 crc kubenswrapper[4881]: I0126 12:58:25.308172 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-f8kct" Jan 26 12:58:25 crc kubenswrapper[4881]: I0126 12:58:25.315789 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-f8kct" event={"ID":"7337bc3f-e4be-4c3f-a2c1-61748291f085","Type":"ContainerDied","Data":"41680d81c8ad723e821254d58135da04765562f7a00ffe60dc06fab38d525945"} Jan 26 12:58:25 crc kubenswrapper[4881]: I0126 12:58:25.315865 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="41680d81c8ad723e821254d58135da04765562f7a00ffe60dc06fab38d525945" Jan 26 12:58:25 crc kubenswrapper[4881]: I0126 12:58:25.768758 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-37af-account-create-update-5p95p" Jan 26 12:58:25 crc kubenswrapper[4881]: I0126 12:58:25.934262 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-create-b46kb" Jan 26 12:58:25 crc kubenswrapper[4881]: I0126 12:58:25.941288 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-wd8nz" Jan 26 12:58:25 crc kubenswrapper[4881]: I0126 12:58:25.955051 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-660e-account-create-update-6rxjm" Jan 26 12:58:25 crc kubenswrapper[4881]: I0126 12:58:25.958644 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ad31881f-a01e-473c-a495-612e76bf3ecf-operator-scripts\") pod \"ad31881f-a01e-473c-a495-612e76bf3ecf\" (UID: \"ad31881f-a01e-473c-a495-612e76bf3ecf\") " Jan 26 12:58:25 crc kubenswrapper[4881]: I0126 12:58:25.958835 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4dkj2\" (UniqueName: \"kubernetes.io/projected/ad31881f-a01e-473c-a495-612e76bf3ecf-kube-api-access-4dkj2\") pod \"ad31881f-a01e-473c-a495-612e76bf3ecf\" (UID: \"ad31881f-a01e-473c-a495-612e76bf3ecf\") " Jan 26 12:58:25 crc kubenswrapper[4881]: I0126 12:58:25.960569 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad31881f-a01e-473c-a495-612e76bf3ecf-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ad31881f-a01e-473c-a495-612e76bf3ecf" (UID: "ad31881f-a01e-473c-a495-612e76bf3ecf"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:58:25 crc kubenswrapper[4881]: I0126 12:58:25.963884 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-d805-account-create-update-r5zgz" Jan 26 12:58:25 crc kubenswrapper[4881]: I0126 12:58:25.968731 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad31881f-a01e-473c-a495-612e76bf3ecf-kube-api-access-4dkj2" (OuterVolumeSpecName: "kube-api-access-4dkj2") pod "ad31881f-a01e-473c-a495-612e76bf3ecf" (UID: "ad31881f-a01e-473c-a495-612e76bf3ecf"). InnerVolumeSpecName "kube-api-access-4dkj2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:58:26 crc kubenswrapper[4881]: I0126 12:58:26.060795 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4ead8e57-515d-4130-a679-ee2a2a148e45-operator-scripts\") pod \"4ead8e57-515d-4130-a679-ee2a2a148e45\" (UID: \"4ead8e57-515d-4130-a679-ee2a2a148e45\") " Jan 26 12:58:26 crc kubenswrapper[4881]: I0126 12:58:26.060862 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n7gxm\" (UniqueName: \"kubernetes.io/projected/4ead8e57-515d-4130-a679-ee2a2a148e45-kube-api-access-n7gxm\") pod \"4ead8e57-515d-4130-a679-ee2a2a148e45\" (UID: \"4ead8e57-515d-4130-a679-ee2a2a148e45\") " Jan 26 12:58:26 crc kubenswrapper[4881]: I0126 12:58:26.060881 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d7efba59-5af3-4544-9121-8a8a88859aea-operator-scripts\") pod \"d7efba59-5af3-4544-9121-8a8a88859aea\" (UID: \"d7efba59-5af3-4544-9121-8a8a88859aea\") " Jan 26 12:58:26 crc kubenswrapper[4881]: I0126 12:58:26.060915 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ghzqb\" (UniqueName: \"kubernetes.io/projected/6b4c479a-9fc9-4021-9557-c85b90ee39a3-kube-api-access-ghzqb\") pod \"6b4c479a-9fc9-4021-9557-c85b90ee39a3\" (UID: \"6b4c479a-9fc9-4021-9557-c85b90ee39a3\") " Jan 26 12:58:26 crc kubenswrapper[4881]: I0126 12:58:26.060949 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-df7tn\" (UniqueName: \"kubernetes.io/projected/2cb1e605-5606-436c-8095-47263b851c49-kube-api-access-df7tn\") pod \"2cb1e605-5606-436c-8095-47263b851c49\" (UID: \"2cb1e605-5606-436c-8095-47263b851c49\") " Jan 26 12:58:26 crc kubenswrapper[4881]: I0126 12:58:26.060963 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2cb1e605-5606-436c-8095-47263b851c49-operator-scripts\") pod \"2cb1e605-5606-436c-8095-47263b851c49\" (UID: \"2cb1e605-5606-436c-8095-47263b851c49\") " Jan 26 12:58:26 crc kubenswrapper[4881]: I0126 12:58:26.061066 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6b4c479a-9fc9-4021-9557-c85b90ee39a3-operator-scripts\") pod \"6b4c479a-9fc9-4021-9557-c85b90ee39a3\" (UID: \"6b4c479a-9fc9-4021-9557-c85b90ee39a3\") " Jan 26 12:58:26 crc kubenswrapper[4881]: I0126 12:58:26.061150 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dz55g\" (UniqueName: \"kubernetes.io/projected/d7efba59-5af3-4544-9121-8a8a88859aea-kube-api-access-dz55g\") pod \"d7efba59-5af3-4544-9121-8a8a88859aea\" (UID: \"d7efba59-5af3-4544-9121-8a8a88859aea\") " Jan 26 12:58:26 crc kubenswrapper[4881]: I0126 12:58:26.061453 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4dkj2\" (UniqueName: \"kubernetes.io/projected/ad31881f-a01e-473c-a495-612e76bf3ecf-kube-api-access-4dkj2\") on node \"crc\" DevicePath \"\"" Jan 26 12:58:26 crc kubenswrapper[4881]: I0126 12:58:26.061470 4881 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ad31881f-a01e-473c-a495-612e76bf3ecf-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 12:58:26 crc kubenswrapper[4881]: 
Jan 26 12:58:26 crc kubenswrapper[4881]: I0126 12:58:26.062367 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6b4c479a-9fc9-4021-9557-c85b90ee39a3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6b4c479a-9fc9-4021-9557-c85b90ee39a3" (UID: "6b4c479a-9fc9-4021-9557-c85b90ee39a3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 12:58:26 crc kubenswrapper[4881]: I0126 12:58:26.062501 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ead8e57-515d-4130-a679-ee2a2a148e45-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4ead8e57-515d-4130-a679-ee2a2a148e45" (UID: "4ead8e57-515d-4130-a679-ee2a2a148e45"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 12:58:26 crc kubenswrapper[4881]: I0126 12:58:26.062920 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7efba59-5af3-4544-9121-8a8a88859aea-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d7efba59-5af3-4544-9121-8a8a88859aea" (UID: "d7efba59-5af3-4544-9121-8a8a88859aea"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 12:58:26 crc kubenswrapper[4881]: I0126 12:58:26.065871 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7efba59-5af3-4544-9121-8a8a88859aea-kube-api-access-dz55g" (OuterVolumeSpecName: "kube-api-access-dz55g") pod "d7efba59-5af3-4544-9121-8a8a88859aea" (UID: "d7efba59-5af3-4544-9121-8a8a88859aea"). InnerVolumeSpecName "kube-api-access-dz55g". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 12:58:26 crc kubenswrapper[4881]: I0126 12:58:26.065940 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ead8e57-515d-4130-a679-ee2a2a148e45-kube-api-access-n7gxm" (OuterVolumeSpecName: "kube-api-access-n7gxm") pod "4ead8e57-515d-4130-a679-ee2a2a148e45" (UID: "4ead8e57-515d-4130-a679-ee2a2a148e45"). InnerVolumeSpecName "kube-api-access-n7gxm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 12:58:26 crc kubenswrapper[4881]: I0126 12:58:26.065986 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b4c479a-9fc9-4021-9557-c85b90ee39a3-kube-api-access-ghzqb" (OuterVolumeSpecName: "kube-api-access-ghzqb") pod "6b4c479a-9fc9-4021-9557-c85b90ee39a3" (UID: "6b4c479a-9fc9-4021-9557-c85b90ee39a3"). InnerVolumeSpecName "kube-api-access-ghzqb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 12:58:26 crc kubenswrapper[4881]: I0126 12:58:26.067714 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2cb1e605-5606-436c-8095-47263b851c49-kube-api-access-df7tn" (OuterVolumeSpecName: "kube-api-access-df7tn") pod "2cb1e605-5606-436c-8095-47263b851c49" (UID: "2cb1e605-5606-436c-8095-47263b851c49"). InnerVolumeSpecName "kube-api-access-df7tn". PluginName "kubernetes.io/projected", VolumeGidValue ""
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:58:26 crc kubenswrapper[4881]: I0126 12:58:26.094579 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd528ad5-ee88-4a39-b948-6364fa84fbe9" path="/var/lib/kubelet/pods/fd528ad5-ee88-4a39-b948-6364fa84fbe9/volumes" Jan 26 12:58:26 crc kubenswrapper[4881]: I0126 12:58:26.163453 4881 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4ead8e57-515d-4130-a679-ee2a2a148e45-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 12:58:26 crc kubenswrapper[4881]: I0126 12:58:26.163481 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n7gxm\" (UniqueName: \"kubernetes.io/projected/4ead8e57-515d-4130-a679-ee2a2a148e45-kube-api-access-n7gxm\") on node \"crc\" DevicePath \"\"" Jan 26 12:58:26 crc kubenswrapper[4881]: I0126 12:58:26.163492 4881 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d7efba59-5af3-4544-9121-8a8a88859aea-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 12:58:26 crc kubenswrapper[4881]: I0126 12:58:26.163501 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ghzqb\" (UniqueName: \"kubernetes.io/projected/6b4c479a-9fc9-4021-9557-c85b90ee39a3-kube-api-access-ghzqb\") on node \"crc\" DevicePath \"\"" Jan 26 12:58:26 crc kubenswrapper[4881]: I0126 12:58:26.163522 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-df7tn\" (UniqueName: \"kubernetes.io/projected/2cb1e605-5606-436c-8095-47263b851c49-kube-api-access-df7tn\") on node \"crc\" DevicePath \"\"" Jan 26 12:58:26 crc kubenswrapper[4881]: I0126 12:58:26.163531 4881 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2cb1e605-5606-436c-8095-47263b851c49-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 12:58:26 crc kubenswrapper[4881]: I0126 12:58:26.163541 4881 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6b4c479a-9fc9-4021-9557-c85b90ee39a3-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 12:58:26 crc kubenswrapper[4881]: I0126 12:58:26.163569 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dz55g\" (UniqueName: \"kubernetes.io/projected/d7efba59-5af3-4544-9121-8a8a88859aea-kube-api-access-dz55g\") on node \"crc\" DevicePath \"\"" Jan 26 12:58:26 crc kubenswrapper[4881]: I0126 12:58:26.329631 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-wd8nz" event={"ID":"4ead8e57-515d-4130-a679-ee2a2a148e45","Type":"ContainerDied","Data":"11f5e8ce7580320da8baf161ae88d207369a6007abac36fb82337d4e25d07598"} Jan 26 12:58:26 crc kubenswrapper[4881]: I0126 12:58:26.329857 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="11f5e8ce7580320da8baf161ae88d207369a6007abac36fb82337d4e25d07598" Jan 26 12:58:26 crc kubenswrapper[4881]: I0126 12:58:26.329726 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-wd8nz" Jan 26 12:58:26 crc kubenswrapper[4881]: I0126 12:58:26.331767 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-660e-account-create-update-6rxjm" Jan 26 12:58:26 crc kubenswrapper[4881]: I0126 12:58:26.331772 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-660e-account-create-update-6rxjm" event={"ID":"6b4c479a-9fc9-4021-9557-c85b90ee39a3","Type":"ContainerDied","Data":"ff8fa572e205f90066aadabb3eecb05f1b6e8e71a9da5c3f8000d521fec1547e"} Jan 26 12:58:26 crc kubenswrapper[4881]: I0126 12:58:26.331875 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ff8fa572e205f90066aadabb3eecb05f1b6e8e71a9da5c3f8000d521fec1547e" Jan 26 12:58:26 crc kubenswrapper[4881]: I0126 12:58:26.335588 4881 generic.go:334] "Generic (PLEG): container finished" podID="1067dd91-d79f-4165-8c6e-e3309dff7d26" containerID="ecf38f70403187b14b9c1b17fa436d2edc91256a449260f1599234f5dc47eda5" exitCode=0 Jan 26 12:58:26 crc kubenswrapper[4881]: I0126 12:58:26.335632 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"1067dd91-d79f-4165-8c6e-e3309dff7d26","Type":"ContainerDied","Data":"ecf38f70403187b14b9c1b17fa436d2edc91256a449260f1599234f5dc47eda5"} Jan 26 12:58:26 crc kubenswrapper[4881]: I0126 12:58:26.339329 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-d805-account-create-update-r5zgz" Jan 26 12:58:26 crc kubenswrapper[4881]: I0126 12:58:26.339323 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-d805-account-create-update-r5zgz" event={"ID":"d7efba59-5af3-4544-9121-8a8a88859aea","Type":"ContainerDied","Data":"8d83ea8fcd2cc677aa73baea77c4ae10199f4f684ccad0139c39160fe41729ef"} Jan 26 12:58:26 crc kubenswrapper[4881]: I0126 12:58:26.339567 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8d83ea8fcd2cc677aa73baea77c4ae10199f4f684ccad0139c39160fe41729ef" Jan 26 12:58:26 crc kubenswrapper[4881]: I0126 12:58:26.348232 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-b46kb" event={"ID":"2cb1e605-5606-436c-8095-47263b851c49","Type":"ContainerDied","Data":"36d8dd16c863b3c754beb8d7519c7a9d24573c70f498c031afe019554a13adc6"} Jan 26 12:58:26 crc kubenswrapper[4881]: I0126 12:58:26.348279 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="36d8dd16c863b3c754beb8d7519c7a9d24573c70f498c031afe019554a13adc6" Jan 26 12:58:26 crc kubenswrapper[4881]: I0126 12:58:26.348250 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-create-b46kb" Jan 26 12:58:26 crc kubenswrapper[4881]: I0126 12:58:26.350134 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-37af-account-create-update-5p95p" event={"ID":"ad31881f-a01e-473c-a495-612e76bf3ecf","Type":"ContainerDied","Data":"23a81706089e69a6446bbb7d376ab0daf778284114dcdee2782a7ba53d4a4391"} Jan 26 12:58:26 crc kubenswrapper[4881]: I0126 12:58:26.350182 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="23a81706089e69a6446bbb7d376ab0daf778284114dcdee2782a7ba53d4a4391" Jan 26 12:58:26 crc kubenswrapper[4881]: I0126 12:58:26.350284 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-37af-account-create-update-5p95p" Jan 26 12:58:27 crc kubenswrapper[4881]: I0126 12:58:27.531605 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-f8kct"] Jan 26 12:58:27 crc kubenswrapper[4881]: I0126 12:58:27.539840 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-f8kct"] Jan 26 12:58:27 crc kubenswrapper[4881]: I0126 12:58:27.592529 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-kfdqt"] Jan 26 12:58:27 crc kubenswrapper[4881]: E0126 12:58:27.592812 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7efba59-5af3-4544-9121-8a8a88859aea" containerName="mariadb-account-create-update" Jan 26 12:58:27 crc kubenswrapper[4881]: I0126 12:58:27.592824 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7efba59-5af3-4544-9121-8a8a88859aea" containerName="mariadb-account-create-update" Jan 26 12:58:27 crc kubenswrapper[4881]: E0126 12:58:27.592831 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7337bc3f-e4be-4c3f-a2c1-61748291f085" containerName="mariadb-account-create-update" Jan 26 12:58:27 crc kubenswrapper[4881]: I0126 12:58:27.592837 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="7337bc3f-e4be-4c3f-a2c1-61748291f085" containerName="mariadb-account-create-update" Jan 26 12:58:27 crc kubenswrapper[4881]: E0126 12:58:27.592847 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd528ad5-ee88-4a39-b948-6364fa84fbe9" containerName="dnsmasq-dns" Jan 26 12:58:27 crc kubenswrapper[4881]: I0126 12:58:27.592852 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd528ad5-ee88-4a39-b948-6364fa84fbe9" containerName="dnsmasq-dns" Jan 26 12:58:27 crc kubenswrapper[4881]: E0126 12:58:27.592868 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ead8e57-515d-4130-a679-ee2a2a148e45" containerName="mariadb-database-create" Jan 26 12:58:27 crc kubenswrapper[4881]: I0126 12:58:27.592875 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ead8e57-515d-4130-a679-ee2a2a148e45" containerName="mariadb-database-create" Jan 26 12:58:27 crc kubenswrapper[4881]: E0126 12:58:27.592886 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad31881f-a01e-473c-a495-612e76bf3ecf" containerName="mariadb-account-create-update" Jan 26 12:58:27 crc kubenswrapper[4881]: I0126 12:58:27.592892 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad31881f-a01e-473c-a495-612e76bf3ecf" containerName="mariadb-account-create-update" Jan 26 12:58:27 crc kubenswrapper[4881]: E0126 12:58:27.592904 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b4c479a-9fc9-4021-9557-c85b90ee39a3" containerName="mariadb-account-create-update" Jan 26 12:58:27 crc kubenswrapper[4881]: I0126 12:58:27.592910 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b4c479a-9fc9-4021-9557-c85b90ee39a3" containerName="mariadb-account-create-update" Jan 26 12:58:27 crc kubenswrapper[4881]: E0126 12:58:27.592921 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd528ad5-ee88-4a39-b948-6364fa84fbe9" containerName="init" Jan 26 12:58:27 crc kubenswrapper[4881]: I0126 12:58:27.592926 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd528ad5-ee88-4a39-b948-6364fa84fbe9" containerName="init" Jan 26 12:58:27 crc kubenswrapper[4881]: E0126 12:58:27.592937 4881 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="2cb1e605-5606-436c-8095-47263b851c49" containerName="mariadb-database-create" Jan 26 12:58:27 crc kubenswrapper[4881]: I0126 12:58:27.592945 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cb1e605-5606-436c-8095-47263b851c49" containerName="mariadb-database-create" Jan 26 12:58:27 crc kubenswrapper[4881]: E0126 12:58:27.592955 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01a846b5-2564-476e-aed0-d658864b48cc" containerName="mariadb-database-create" Jan 26 12:58:27 crc kubenswrapper[4881]: I0126 12:58:27.592962 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="01a846b5-2564-476e-aed0-d658864b48cc" containerName="mariadb-database-create" Jan 26 12:58:27 crc kubenswrapper[4881]: I0126 12:58:27.593099 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad31881f-a01e-473c-a495-612e76bf3ecf" containerName="mariadb-account-create-update" Jan 26 12:58:27 crc kubenswrapper[4881]: I0126 12:58:27.593113 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="7337bc3f-e4be-4c3f-a2c1-61748291f085" containerName="mariadb-account-create-update" Jan 26 12:58:27 crc kubenswrapper[4881]: I0126 12:58:27.593123 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ead8e57-515d-4130-a679-ee2a2a148e45" containerName="mariadb-database-create" Jan 26 12:58:27 crc kubenswrapper[4881]: I0126 12:58:27.593132 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7efba59-5af3-4544-9121-8a8a88859aea" containerName="mariadb-account-create-update" Jan 26 12:58:27 crc kubenswrapper[4881]: I0126 12:58:27.593138 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b4c479a-9fc9-4021-9557-c85b90ee39a3" containerName="mariadb-account-create-update" Jan 26 12:58:27 crc kubenswrapper[4881]: I0126 12:58:27.593145 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd528ad5-ee88-4a39-b948-6364fa84fbe9" containerName="dnsmasq-dns" Jan 26 12:58:27 crc kubenswrapper[4881]: I0126 12:58:27.593152 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="01a846b5-2564-476e-aed0-d658864b48cc" containerName="mariadb-database-create" Jan 26 12:58:27 crc kubenswrapper[4881]: I0126 12:58:27.593158 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="2cb1e605-5606-436c-8095-47263b851c49" containerName="mariadb-database-create" Jan 26 12:58:27 crc kubenswrapper[4881]: I0126 12:58:27.593661 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-kfdqt" Jan 26 12:58:27 crc kubenswrapper[4881]: I0126 12:58:27.595745 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 26 12:58:27 crc kubenswrapper[4881]: I0126 12:58:27.618265 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-kfdqt"] Jan 26 12:58:27 crc kubenswrapper[4881]: I0126 12:58:27.696957 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6ldd\" (UniqueName: \"kubernetes.io/projected/b7f94897-0613-499a-8390-cd1f850a36c9-kube-api-access-d6ldd\") pod \"root-account-create-update-kfdqt\" (UID: \"b7f94897-0613-499a-8390-cd1f850a36c9\") " pod="openstack/root-account-create-update-kfdqt" Jan 26 12:58:27 crc kubenswrapper[4881]: I0126 12:58:27.697010 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b7f94897-0613-499a-8390-cd1f850a36c9-operator-scripts\") pod \"root-account-create-update-kfdqt\" (UID: \"b7f94897-0613-499a-8390-cd1f850a36c9\") " pod="openstack/root-account-create-update-kfdqt" Jan 26 12:58:27 crc kubenswrapper[4881]: I0126 12:58:27.798925 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6ldd\" (UniqueName: \"kubernetes.io/projected/b7f94897-0613-499a-8390-cd1f850a36c9-kube-api-access-d6ldd\") pod \"root-account-create-update-kfdqt\" (UID: \"b7f94897-0613-499a-8390-cd1f850a36c9\") " pod="openstack/root-account-create-update-kfdqt" Jan 26 12:58:27 crc kubenswrapper[4881]: I0126 12:58:27.798979 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b7f94897-0613-499a-8390-cd1f850a36c9-operator-scripts\") pod \"root-account-create-update-kfdqt\" (UID: \"b7f94897-0613-499a-8390-cd1f850a36c9\") " pod="openstack/root-account-create-update-kfdqt" Jan 26 12:58:27 crc kubenswrapper[4881]: I0126 12:58:27.800648 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b7f94897-0613-499a-8390-cd1f850a36c9-operator-scripts\") pod \"root-account-create-update-kfdqt\" (UID: \"b7f94897-0613-499a-8390-cd1f850a36c9\") " pod="openstack/root-account-create-update-kfdqt" Jan 26 12:58:27 crc kubenswrapper[4881]: I0126 12:58:27.817083 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6ldd\" (UniqueName: \"kubernetes.io/projected/b7f94897-0613-499a-8390-cd1f850a36c9-kube-api-access-d6ldd\") pod \"root-account-create-update-kfdqt\" (UID: \"b7f94897-0613-499a-8390-cd1f850a36c9\") " pod="openstack/root-account-create-update-kfdqt" Jan 26 12:58:27 crc kubenswrapper[4881]: I0126 12:58:27.956893 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-kfdqt" Jan 26 12:58:28 crc kubenswrapper[4881]: I0126 12:58:28.126739 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7337bc3f-e4be-4c3f-a2c1-61748291f085" path="/var/lib/kubelet/pods/7337bc3f-e4be-4c3f-a2c1-61748291f085/volumes" Jan 26 12:58:28 crc kubenswrapper[4881]: I0126 12:58:28.469076 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-kfdqt"] Jan 26 12:58:28 crc kubenswrapper[4881]: I0126 12:58:28.489720 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 26 12:58:29 crc kubenswrapper[4881]: I0126 12:58:29.064981 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Jan 26 12:58:29 crc kubenswrapper[4881]: I0126 12:58:29.373778 4881 generic.go:334] "Generic (PLEG): container finished" podID="b7f94897-0613-499a-8390-cd1f850a36c9" containerID="0a439d874dcffdca61343bbdbf25fac25e2ac6d8ea89b7838b2a6e891896aeee" exitCode=0 Jan 26 12:58:29 crc kubenswrapper[4881]: I0126 12:58:29.373822 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-kfdqt" event={"ID":"b7f94897-0613-499a-8390-cd1f850a36c9","Type":"ContainerDied","Data":"0a439d874dcffdca61343bbdbf25fac25e2ac6d8ea89b7838b2a6e891896aeee"} Jan 26 12:58:29 crc kubenswrapper[4881]: I0126 12:58:29.373863 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-kfdqt" event={"ID":"b7f94897-0613-499a-8390-cd1f850a36c9","Type":"ContainerStarted","Data":"4f463edba290d3ec5290c392b40137dc8d261c9dfc89a0d43bfa0255cec2d444"} Jan 26 12:58:29 crc kubenswrapper[4881]: I0126 12:58:29.748565 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e234f178-5499-441d-923d-26a5a7cbfe04-etc-swift\") pod \"swift-storage-0\" (UID: \"e234f178-5499-441d-923d-26a5a7cbfe04\") " pod="openstack/swift-storage-0" Jan 26 12:58:29 crc kubenswrapper[4881]: E0126 12:58:29.748748 4881 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 26 12:58:29 crc kubenswrapper[4881]: E0126 12:58:29.748769 4881 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 26 12:58:29 crc kubenswrapper[4881]: E0126 12:58:29.748824 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e234f178-5499-441d-923d-26a5a7cbfe04-etc-swift podName:e234f178-5499-441d-923d-26a5a7cbfe04 nodeName:}" failed. No retries permitted until 2026-01-26 12:58:45.748808591 +0000 UTC m=+1398.228118617 (durationBeforeRetry 16s). 
Jan 26 12:58:30 crc kubenswrapper[4881]: I0126 12:58:30.384229 4881 generic.go:334] "Generic (PLEG): container finished" podID="14ccca3b-65a8-4df1-9905-b21bfb24e5be" containerID="c248528e4842b1434de470c44bdba72be3e538197e04f83ddbbcec4722fed6b1" exitCode=0
Jan 26 12:58:30 crc kubenswrapper[4881]: I0126 12:58:30.384369 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-dmkh9" event={"ID":"14ccca3b-65a8-4df1-9905-b21bfb24e5be","Type":"ContainerDied","Data":"c248528e4842b1434de470c44bdba72be3e538197e04f83ddbbcec4722fed6b1"}
Jan 26 12:58:36 crc kubenswrapper[4881]: I0126 12:58:36.183767 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-kfdqt"
Jan 26 12:58:36 crc kubenswrapper[4881]: I0126 12:58:36.188848 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-dmkh9"
Jan 26 12:58:36 crc kubenswrapper[4881]: I0126 12:58:36.309978 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14ccca3b-65a8-4df1-9905-b21bfb24e5be-combined-ca-bundle\") pod \"14ccca3b-65a8-4df1-9905-b21bfb24e5be\" (UID: \"14ccca3b-65a8-4df1-9905-b21bfb24e5be\") "
Jan 26 12:58:36 crc kubenswrapper[4881]: I0126 12:58:36.310126 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/14ccca3b-65a8-4df1-9905-b21bfb24e5be-ring-data-devices\") pod \"14ccca3b-65a8-4df1-9905-b21bfb24e5be\" (UID: \"14ccca3b-65a8-4df1-9905-b21bfb24e5be\") "
Jan 26 12:58:36 crc kubenswrapper[4881]: I0126 12:58:36.310174 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/14ccca3b-65a8-4df1-9905-b21bfb24e5be-dispersionconf\") pod \"14ccca3b-65a8-4df1-9905-b21bfb24e5be\" (UID: \"14ccca3b-65a8-4df1-9905-b21bfb24e5be\") "
Jan 26 12:58:36 crc kubenswrapper[4881]: I0126 12:58:36.310291 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/14ccca3b-65a8-4df1-9905-b21bfb24e5be-etc-swift\") pod \"14ccca3b-65a8-4df1-9905-b21bfb24e5be\" (UID: \"14ccca3b-65a8-4df1-9905-b21bfb24e5be\") "
Jan 26 12:58:36 crc kubenswrapper[4881]: I0126 12:58:36.310341 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b7f94897-0613-499a-8390-cd1f850a36c9-operator-scripts\") pod \"b7f94897-0613-499a-8390-cd1f850a36c9\" (UID: \"b7f94897-0613-499a-8390-cd1f850a36c9\") "
Jan 26 12:58:36 crc kubenswrapper[4881]: I0126 12:58:36.310429 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/14ccca3b-65a8-4df1-9905-b21bfb24e5be-swiftconf\") pod \"14ccca3b-65a8-4df1-9905-b21bfb24e5be\" (UID: \"14ccca3b-65a8-4df1-9905-b21bfb24e5be\") "
Jan 26 12:58:36 crc kubenswrapper[4881]: I0126 12:58:36.310464 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-969sk\" (UniqueName: \"kubernetes.io/projected/14ccca3b-65a8-4df1-9905-b21bfb24e5be-kube-api-access-969sk\") pod \"14ccca3b-65a8-4df1-9905-b21bfb24e5be\" (UID: \"14ccca3b-65a8-4df1-9905-b21bfb24e5be\") "
\"kubernetes.io/projected/14ccca3b-65a8-4df1-9905-b21bfb24e5be-kube-api-access-969sk\") pod \"14ccca3b-65a8-4df1-9905-b21bfb24e5be\" (UID: \"14ccca3b-65a8-4df1-9905-b21bfb24e5be\") " Jan 26 12:58:36 crc kubenswrapper[4881]: I0126 12:58:36.310565 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/14ccca3b-65a8-4df1-9905-b21bfb24e5be-scripts\") pod \"14ccca3b-65a8-4df1-9905-b21bfb24e5be\" (UID: \"14ccca3b-65a8-4df1-9905-b21bfb24e5be\") " Jan 26 12:58:36 crc kubenswrapper[4881]: I0126 12:58:36.310603 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6ldd\" (UniqueName: \"kubernetes.io/projected/b7f94897-0613-499a-8390-cd1f850a36c9-kube-api-access-d6ldd\") pod \"b7f94897-0613-499a-8390-cd1f850a36c9\" (UID: \"b7f94897-0613-499a-8390-cd1f850a36c9\") " Jan 26 12:58:36 crc kubenswrapper[4881]: I0126 12:58:36.310907 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14ccca3b-65a8-4df1-9905-b21bfb24e5be-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "14ccca3b-65a8-4df1-9905-b21bfb24e5be" (UID: "14ccca3b-65a8-4df1-9905-b21bfb24e5be"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:58:36 crc kubenswrapper[4881]: I0126 12:58:36.311027 4881 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/14ccca3b-65a8-4df1-9905-b21bfb24e5be-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 26 12:58:36 crc kubenswrapper[4881]: I0126 12:58:36.311763 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7f94897-0613-499a-8390-cd1f850a36c9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b7f94897-0613-499a-8390-cd1f850a36c9" (UID: "b7f94897-0613-499a-8390-cd1f850a36c9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:58:36 crc kubenswrapper[4881]: I0126 12:58:36.312556 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14ccca3b-65a8-4df1-9905-b21bfb24e5be-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "14ccca3b-65a8-4df1-9905-b21bfb24e5be" (UID: "14ccca3b-65a8-4df1-9905-b21bfb24e5be"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 12:58:36 crc kubenswrapper[4881]: I0126 12:58:36.315108 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7f94897-0613-499a-8390-cd1f850a36c9-kube-api-access-d6ldd" (OuterVolumeSpecName: "kube-api-access-d6ldd") pod "b7f94897-0613-499a-8390-cd1f850a36c9" (UID: "b7f94897-0613-499a-8390-cd1f850a36c9"). InnerVolumeSpecName "kube-api-access-d6ldd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:58:36 crc kubenswrapper[4881]: I0126 12:58:36.315272 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14ccca3b-65a8-4df1-9905-b21bfb24e5be-kube-api-access-969sk" (OuterVolumeSpecName: "kube-api-access-969sk") pod "14ccca3b-65a8-4df1-9905-b21bfb24e5be" (UID: "14ccca3b-65a8-4df1-9905-b21bfb24e5be"). InnerVolumeSpecName "kube-api-access-969sk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:58:36 crc kubenswrapper[4881]: I0126 12:58:36.318199 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14ccca3b-65a8-4df1-9905-b21bfb24e5be-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "14ccca3b-65a8-4df1-9905-b21bfb24e5be" (UID: "14ccca3b-65a8-4df1-9905-b21bfb24e5be"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:58:36 crc kubenswrapper[4881]: I0126 12:58:36.340287 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14ccca3b-65a8-4df1-9905-b21bfb24e5be-scripts" (OuterVolumeSpecName: "scripts") pod "14ccca3b-65a8-4df1-9905-b21bfb24e5be" (UID: "14ccca3b-65a8-4df1-9905-b21bfb24e5be"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:58:36 crc kubenswrapper[4881]: I0126 12:58:36.347880 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14ccca3b-65a8-4df1-9905-b21bfb24e5be-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "14ccca3b-65a8-4df1-9905-b21bfb24e5be" (UID: "14ccca3b-65a8-4df1-9905-b21bfb24e5be"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:58:36 crc kubenswrapper[4881]: I0126 12:58:36.348763 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14ccca3b-65a8-4df1-9905-b21bfb24e5be-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "14ccca3b-65a8-4df1-9905-b21bfb24e5be" (UID: "14ccca3b-65a8-4df1-9905-b21bfb24e5be"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:58:36 crc kubenswrapper[4881]: I0126 12:58:36.384829 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-rrqqp" Jan 26 12:58:36 crc kubenswrapper[4881]: I0126 12:58:36.412125 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-rrqqp" Jan 26 12:58:36 crc kubenswrapper[4881]: I0126 12:58:36.412567 4881 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/14ccca3b-65a8-4df1-9905-b21bfb24e5be-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 26 12:58:36 crc kubenswrapper[4881]: I0126 12:58:36.412617 4881 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/14ccca3b-65a8-4df1-9905-b21bfb24e5be-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 26 12:58:36 crc kubenswrapper[4881]: I0126 12:58:36.412633 4881 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b7f94897-0613-499a-8390-cd1f850a36c9-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 12:58:36 crc kubenswrapper[4881]: I0126 12:58:36.412649 4881 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/14ccca3b-65a8-4df1-9905-b21bfb24e5be-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 26 12:58:36 crc kubenswrapper[4881]: I0126 12:58:36.412667 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-969sk\" (UniqueName: \"kubernetes.io/projected/14ccca3b-65a8-4df1-9905-b21bfb24e5be-kube-api-access-969sk\") on node \"crc\" DevicePath \"\"" Jan 26 12:58:36 crc kubenswrapper[4881]: I0126 12:58:36.412716 4881 
Jan 26 12:58:36 crc kubenswrapper[4881]: I0126 12:58:36.412735 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6ldd\" (UniqueName: \"kubernetes.io/projected/b7f94897-0613-499a-8390-cd1f850a36c9-kube-api-access-d6ldd\") on node \"crc\" DevicePath \"\""
Jan 26 12:58:36 crc kubenswrapper[4881]: I0126 12:58:36.412753 4881 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14ccca3b-65a8-4df1-9905-b21bfb24e5be-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 12:58:36 crc kubenswrapper[4881]: I0126 12:58:36.487389 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-kfdqt"
Jan 26 12:58:36 crc kubenswrapper[4881]: I0126 12:58:36.487867 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-kfdqt" event={"ID":"b7f94897-0613-499a-8390-cd1f850a36c9","Type":"ContainerDied","Data":"4f463edba290d3ec5290c392b40137dc8d261c9dfc89a0d43bfa0255cec2d444"}
Jan 26 12:58:36 crc kubenswrapper[4881]: I0126 12:58:36.488014 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4f463edba290d3ec5290c392b40137dc8d261c9dfc89a0d43bfa0255cec2d444"
Jan 26 12:58:36 crc kubenswrapper[4881]: I0126 12:58:36.491577 4881 generic.go:334] "Generic (PLEG): container finished" podID="fb687a6e-7e1f-4697-8ab1-88ad03dd2951" containerID="b51af514578b9120c05cfe232e2a4c498bd4e40f69c0be322b0907415bfed6a9" exitCode=0
Jan 26 12:58:36 crc kubenswrapper[4881]: I0126 12:58:36.491657 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"fb687a6e-7e1f-4697-8ab1-88ad03dd2951","Type":"ContainerDied","Data":"b51af514578b9120c05cfe232e2a4c498bd4e40f69c0be322b0907415bfed6a9"}
Jan 26 12:58:36 crc kubenswrapper[4881]: I0126 12:58:36.508165 4881 generic.go:334] "Generic (PLEG): container finished" podID="a455dd78-e351-449c-903a-5c0e0c50faf5" containerID="7250f2723c74e8d207341ab5049c947cdf5eb7b5dafa1797f04a53d0bbac63e5" exitCode=0
Jan 26 12:58:36 crc kubenswrapper[4881]: I0126 12:58:36.508685 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"a455dd78-e351-449c-903a-5c0e0c50faf5","Type":"ContainerDied","Data":"7250f2723c74e8d207341ab5049c947cdf5eb7b5dafa1797f04a53d0bbac63e5"}
Jan 26 12:58:36 crc kubenswrapper[4881]: I0126 12:58:36.514767 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-dmkh9"
Need to start a new one" pod="openstack/swift-ring-rebalance-dmkh9" Jan 26 12:58:36 crc kubenswrapper[4881]: I0126 12:58:36.515430 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-dmkh9" event={"ID":"14ccca3b-65a8-4df1-9905-b21bfb24e5be","Type":"ContainerDied","Data":"e8cd01a0e674feb9ee6a26208d64331d49a1bdb062576cdda57e83d929d41ef4"} Jan 26 12:58:36 crc kubenswrapper[4881]: I0126 12:58:36.515513 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e8cd01a0e674feb9ee6a26208d64331d49a1bdb062576cdda57e83d929d41ef4" Jan 26 12:58:36 crc kubenswrapper[4881]: I0126 12:58:36.659997 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-vs5xn-config-l6jd8"] Jan 26 12:58:36 crc kubenswrapper[4881]: E0126 12:58:36.662587 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7f94897-0613-499a-8390-cd1f850a36c9" containerName="mariadb-account-create-update" Jan 26 12:58:36 crc kubenswrapper[4881]: I0126 12:58:36.662616 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7f94897-0613-499a-8390-cd1f850a36c9" containerName="mariadb-account-create-update" Jan 26 12:58:36 crc kubenswrapper[4881]: E0126 12:58:36.662660 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14ccca3b-65a8-4df1-9905-b21bfb24e5be" containerName="swift-ring-rebalance" Jan 26 12:58:36 crc kubenswrapper[4881]: I0126 12:58:36.662669 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="14ccca3b-65a8-4df1-9905-b21bfb24e5be" containerName="swift-ring-rebalance" Jan 26 12:58:36 crc kubenswrapper[4881]: I0126 12:58:36.662871 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7f94897-0613-499a-8390-cd1f850a36c9" containerName="mariadb-account-create-update" Jan 26 12:58:36 crc kubenswrapper[4881]: I0126 12:58:36.662887 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="14ccca3b-65a8-4df1-9905-b21bfb24e5be" containerName="swift-ring-rebalance" Jan 26 12:58:36 crc kubenswrapper[4881]: I0126 12:58:36.663467 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-vs5xn-config-l6jd8" Jan 26 12:58:36 crc kubenswrapper[4881]: I0126 12:58:36.665400 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 26 12:58:36 crc kubenswrapper[4881]: I0126 12:58:36.677486 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-vs5xn-config-l6jd8"] Jan 26 12:58:36 crc kubenswrapper[4881]: I0126 12:58:36.819652 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/be17ec7c-f6cf-4496-a7ad-ec0e6d89c083-var-log-ovn\") pod \"ovn-controller-vs5xn-config-l6jd8\" (UID: \"be17ec7c-f6cf-4496-a7ad-ec0e6d89c083\") " pod="openstack/ovn-controller-vs5xn-config-l6jd8" Jan 26 12:58:36 crc kubenswrapper[4881]: I0126 12:58:36.819933 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/be17ec7c-f6cf-4496-a7ad-ec0e6d89c083-scripts\") pod \"ovn-controller-vs5xn-config-l6jd8\" (UID: \"be17ec7c-f6cf-4496-a7ad-ec0e6d89c083\") " pod="openstack/ovn-controller-vs5xn-config-l6jd8" Jan 26 12:58:36 crc kubenswrapper[4881]: I0126 12:58:36.820050 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/be17ec7c-f6cf-4496-a7ad-ec0e6d89c083-var-run\") pod \"ovn-controller-vs5xn-config-l6jd8\" (UID: \"be17ec7c-f6cf-4496-a7ad-ec0e6d89c083\") " pod="openstack/ovn-controller-vs5xn-config-l6jd8" Jan 26 12:58:36 crc kubenswrapper[4881]: I0126 12:58:36.820166 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/be17ec7c-f6cf-4496-a7ad-ec0e6d89c083-var-run-ovn\") pod \"ovn-controller-vs5xn-config-l6jd8\" (UID: \"be17ec7c-f6cf-4496-a7ad-ec0e6d89c083\") " pod="openstack/ovn-controller-vs5xn-config-l6jd8" Jan 26 12:58:36 crc kubenswrapper[4881]: I0126 12:58:36.820301 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lswtb\" (UniqueName: \"kubernetes.io/projected/be17ec7c-f6cf-4496-a7ad-ec0e6d89c083-kube-api-access-lswtb\") pod \"ovn-controller-vs5xn-config-l6jd8\" (UID: \"be17ec7c-f6cf-4496-a7ad-ec0e6d89c083\") " pod="openstack/ovn-controller-vs5xn-config-l6jd8" Jan 26 12:58:36 crc kubenswrapper[4881]: I0126 12:58:36.820425 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/be17ec7c-f6cf-4496-a7ad-ec0e6d89c083-additional-scripts\") pod \"ovn-controller-vs5xn-config-l6jd8\" (UID: \"be17ec7c-f6cf-4496-a7ad-ec0e6d89c083\") " pod="openstack/ovn-controller-vs5xn-config-l6jd8" Jan 26 12:58:36 crc kubenswrapper[4881]: I0126 12:58:36.922081 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/be17ec7c-f6cf-4496-a7ad-ec0e6d89c083-additional-scripts\") pod \"ovn-controller-vs5xn-config-l6jd8\" (UID: \"be17ec7c-f6cf-4496-a7ad-ec0e6d89c083\") " pod="openstack/ovn-controller-vs5xn-config-l6jd8" Jan 26 12:58:36 crc kubenswrapper[4881]: I0126 12:58:36.922152 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: 
\"kubernetes.io/host-path/be17ec7c-f6cf-4496-a7ad-ec0e6d89c083-var-log-ovn\") pod \"ovn-controller-vs5xn-config-l6jd8\" (UID: \"be17ec7c-f6cf-4496-a7ad-ec0e6d89c083\") " pod="openstack/ovn-controller-vs5xn-config-l6jd8" Jan 26 12:58:36 crc kubenswrapper[4881]: I0126 12:58:36.922179 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/be17ec7c-f6cf-4496-a7ad-ec0e6d89c083-scripts\") pod \"ovn-controller-vs5xn-config-l6jd8\" (UID: \"be17ec7c-f6cf-4496-a7ad-ec0e6d89c083\") " pod="openstack/ovn-controller-vs5xn-config-l6jd8" Jan 26 12:58:36 crc kubenswrapper[4881]: I0126 12:58:36.922199 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/be17ec7c-f6cf-4496-a7ad-ec0e6d89c083-var-run\") pod \"ovn-controller-vs5xn-config-l6jd8\" (UID: \"be17ec7c-f6cf-4496-a7ad-ec0e6d89c083\") " pod="openstack/ovn-controller-vs5xn-config-l6jd8" Jan 26 12:58:36 crc kubenswrapper[4881]: I0126 12:58:36.922229 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/be17ec7c-f6cf-4496-a7ad-ec0e6d89c083-var-run-ovn\") pod \"ovn-controller-vs5xn-config-l6jd8\" (UID: \"be17ec7c-f6cf-4496-a7ad-ec0e6d89c083\") " pod="openstack/ovn-controller-vs5xn-config-l6jd8" Jan 26 12:58:36 crc kubenswrapper[4881]: I0126 12:58:36.922284 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lswtb\" (UniqueName: \"kubernetes.io/projected/be17ec7c-f6cf-4496-a7ad-ec0e6d89c083-kube-api-access-lswtb\") pod \"ovn-controller-vs5xn-config-l6jd8\" (UID: \"be17ec7c-f6cf-4496-a7ad-ec0e6d89c083\") " pod="openstack/ovn-controller-vs5xn-config-l6jd8" Jan 26 12:58:36 crc kubenswrapper[4881]: I0126 12:58:36.922790 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/be17ec7c-f6cf-4496-a7ad-ec0e6d89c083-var-run-ovn\") pod \"ovn-controller-vs5xn-config-l6jd8\" (UID: \"be17ec7c-f6cf-4496-a7ad-ec0e6d89c083\") " pod="openstack/ovn-controller-vs5xn-config-l6jd8" Jan 26 12:58:36 crc kubenswrapper[4881]: I0126 12:58:36.922811 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/be17ec7c-f6cf-4496-a7ad-ec0e6d89c083-var-run\") pod \"ovn-controller-vs5xn-config-l6jd8\" (UID: \"be17ec7c-f6cf-4496-a7ad-ec0e6d89c083\") " pod="openstack/ovn-controller-vs5xn-config-l6jd8" Jan 26 12:58:36 crc kubenswrapper[4881]: I0126 12:58:36.922876 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/be17ec7c-f6cf-4496-a7ad-ec0e6d89c083-var-log-ovn\") pod \"ovn-controller-vs5xn-config-l6jd8\" (UID: \"be17ec7c-f6cf-4496-a7ad-ec0e6d89c083\") " pod="openstack/ovn-controller-vs5xn-config-l6jd8" Jan 26 12:58:36 crc kubenswrapper[4881]: I0126 12:58:36.923452 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/be17ec7c-f6cf-4496-a7ad-ec0e6d89c083-additional-scripts\") pod \"ovn-controller-vs5xn-config-l6jd8\" (UID: \"be17ec7c-f6cf-4496-a7ad-ec0e6d89c083\") " pod="openstack/ovn-controller-vs5xn-config-l6jd8" Jan 26 12:58:36 crc kubenswrapper[4881]: I0126 12:58:36.924894 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/be17ec7c-f6cf-4496-a7ad-ec0e6d89c083-scripts\") pod \"ovn-controller-vs5xn-config-l6jd8\" (UID: \"be17ec7c-f6cf-4496-a7ad-ec0e6d89c083\") " pod="openstack/ovn-controller-vs5xn-config-l6jd8" Jan 26 12:58:36 crc kubenswrapper[4881]: I0126 12:58:36.944695 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lswtb\" (UniqueName: \"kubernetes.io/projected/be17ec7c-f6cf-4496-a7ad-ec0e6d89c083-kube-api-access-lswtb\") pod \"ovn-controller-vs5xn-config-l6jd8\" (UID: \"be17ec7c-f6cf-4496-a7ad-ec0e6d89c083\") " pod="openstack/ovn-controller-vs5xn-config-l6jd8" Jan 26 12:58:36 crc kubenswrapper[4881]: I0126 12:58:36.998745 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-vs5xn-config-l6jd8" Jan 26 12:58:37 crc kubenswrapper[4881]: I0126 12:58:37.422349 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-vs5xn-config-l6jd8"] Jan 26 12:58:37 crc kubenswrapper[4881]: I0126 12:58:37.521699 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-vs5xn-config-l6jd8" event={"ID":"be17ec7c-f6cf-4496-a7ad-ec0e6d89c083","Type":"ContainerStarted","Data":"cc80655504255ef8e93e0591b1b2c151c65ab0142cb60fc1cfb4c7898ce5d663"} Jan 26 12:58:37 crc kubenswrapper[4881]: I0126 12:58:37.525259 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"a455dd78-e351-449c-903a-5c0e0c50faf5","Type":"ContainerStarted","Data":"cdabe088bd4208f1a5d6c3e42d8d7bcf2e85b11d78f2b153632a66b1c157468d"} Jan 26 12:58:37 crc kubenswrapper[4881]: I0126 12:58:37.525627 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 26 12:58:37 crc kubenswrapper[4881]: I0126 12:58:37.527648 4881 generic.go:334] "Generic (PLEG): container finished" podID="ab9a358b-8713-4790-a9c4-97b89efcc88f" containerID="e54d84c98843f6145cffc470d01c69913e0d35c69c2c3b4bf1d8ca15c6170cb3" exitCode=0 Jan 26 12:58:37 crc kubenswrapper[4881]: I0126 12:58:37.527713 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-notifications-server-0" event={"ID":"ab9a358b-8713-4790-a9c4-97b89efcc88f","Type":"ContainerDied","Data":"e54d84c98843f6145cffc470d01c69913e0d35c69c2c3b4bf1d8ca15c6170cb3"} Jan 26 12:58:37 crc kubenswrapper[4881]: I0126 12:58:37.529775 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"fb687a6e-7e1f-4697-8ab1-88ad03dd2951","Type":"ContainerStarted","Data":"8f52705d21d02026b990bfeff9344bb0298572378db854544d7a243fac416eb9"} Jan 26 12:58:37 crc kubenswrapper[4881]: I0126 12:58:37.529973 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 26 12:58:37 crc kubenswrapper[4881]: I0126 12:58:37.535300 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"1067dd91-d79f-4165-8c6e-e3309dff7d26","Type":"ContainerStarted","Data":"4f43b7a40be6eb3fa5a35441b987eb48e2215bc576da15449aad0c3ce35f993c"} Jan 26 12:58:37 crc kubenswrapper[4881]: I0126 12:58:37.551232 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=52.828375253 podStartE2EDuration="1m12.5512141s" podCreationTimestamp="2026-01-26 12:57:25 +0000 UTC" firstStartedPulling="2026-01-26 12:57:41.749995253 +0000 UTC m=+1334.229305279" lastFinishedPulling="2026-01-26 
12:58:01.47283409 +0000 UTC m=+1353.952144126" observedRunningTime="2026-01-26 12:58:37.546686319 +0000 UTC m=+1390.025996375" watchObservedRunningTime="2026-01-26 12:58:37.5512141 +0000 UTC m=+1390.030524126" Jan 26 12:58:37 crc kubenswrapper[4881]: I0126 12:58:37.578470 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=53.42406081 podStartE2EDuration="1m12.578454344s" podCreationTimestamp="2026-01-26 12:57:25 +0000 UTC" firstStartedPulling="2026-01-26 12:57:42.318496087 +0000 UTC m=+1334.797806113" lastFinishedPulling="2026-01-26 12:58:01.472889621 +0000 UTC m=+1353.952199647" observedRunningTime="2026-01-26 12:58:37.572005746 +0000 UTC m=+1390.051315782" watchObservedRunningTime="2026-01-26 12:58:37.578454344 +0000 UTC m=+1390.057764370" Jan 26 12:58:39 crc kubenswrapper[4881]: I0126 12:58:39.071094 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-kfdqt"] Jan 26 12:58:39 crc kubenswrapper[4881]: I0126 12:58:39.076958 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-kfdqt"] Jan 26 12:58:40 crc kubenswrapper[4881]: I0126 12:58:40.096151 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b7f94897-0613-499a-8390-cd1f850a36c9" path="/var/lib/kubelet/pods/b7f94897-0613-499a-8390-cd1f850a36c9/volumes" Jan 26 12:58:40 crc kubenswrapper[4881]: I0126 12:58:40.578675 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-vs5xn-config-l6jd8" event={"ID":"be17ec7c-f6cf-4496-a7ad-ec0e6d89c083","Type":"ContainerStarted","Data":"da38eba2ba81cd55099cfc1c7a17fc1b90857284abb696c7e5080c25a9bd0ea6"} Jan 26 12:58:40 crc kubenswrapper[4881]: I0126 12:58:40.582493 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-notifications-server-0" event={"ID":"ab9a358b-8713-4790-a9c4-97b89efcc88f","Type":"ContainerStarted","Data":"1547919a1eff4e051a63e54401b185187c6e262681df42c8c2e432d6f2363d21"} Jan 26 12:58:40 crc kubenswrapper[4881]: I0126 12:58:40.583309 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-notifications-server-0" Jan 26 12:58:40 crc kubenswrapper[4881]: I0126 12:58:40.597383 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-vs5xn-config-l6jd8" podStartSLOduration=4.597363497 podStartE2EDuration="4.597363497s" podCreationTimestamp="2026-01-26 12:58:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:58:40.592858818 +0000 UTC m=+1393.072168854" watchObservedRunningTime="2026-01-26 12:58:40.597363497 +0000 UTC m=+1393.076673523" Jan 26 12:58:40 crc kubenswrapper[4881]: I0126 12:58:40.624850 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-notifications-server-0" podStartSLOduration=66.658739446 podStartE2EDuration="1m14.624829868s" podCreationTimestamp="2026-01-26 12:57:26 +0000 UTC" firstStartedPulling="2026-01-26 12:57:41.539224122 +0000 UTC m=+1334.018534148" lastFinishedPulling="2026-01-26 12:57:49.505314544 +0000 UTC m=+1341.984624570" observedRunningTime="2026-01-26 12:58:40.619480317 +0000 UTC m=+1393.098790353" watchObservedRunningTime="2026-01-26 12:58:40.624829868 +0000 UTC m=+1393.104139894" Jan 26 12:58:41 crc kubenswrapper[4881]: I0126 12:58:41.595256 4881 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"1067dd91-d79f-4165-8c6e-e3309dff7d26","Type":"ContainerStarted","Data":"d4e557f9801517a27515a8d95e6c3f03794018674779270da8d11e37c4006dfb"} Jan 26 12:58:41 crc kubenswrapper[4881]: I0126 12:58:41.598312 4881 generic.go:334] "Generic (PLEG): container finished" podID="be17ec7c-f6cf-4496-a7ad-ec0e6d89c083" containerID="da38eba2ba81cd55099cfc1c7a17fc1b90857284abb696c7e5080c25a9bd0ea6" exitCode=0 Jan 26 12:58:41 crc kubenswrapper[4881]: I0126 12:58:41.598421 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-vs5xn-config-l6jd8" event={"ID":"be17ec7c-f6cf-4496-a7ad-ec0e6d89c083","Type":"ContainerDied","Data":"da38eba2ba81cd55099cfc1c7a17fc1b90857284abb696c7e5080c25a9bd0ea6"} Jan 26 12:58:43 crc kubenswrapper[4881]: I0126 12:58:43.018986 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-vs5xn-config-l6jd8" Jan 26 12:58:43 crc kubenswrapper[4881]: I0126 12:58:43.129856 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lswtb\" (UniqueName: \"kubernetes.io/projected/be17ec7c-f6cf-4496-a7ad-ec0e6d89c083-kube-api-access-lswtb\") pod \"be17ec7c-f6cf-4496-a7ad-ec0e6d89c083\" (UID: \"be17ec7c-f6cf-4496-a7ad-ec0e6d89c083\") " Jan 26 12:58:43 crc kubenswrapper[4881]: I0126 12:58:43.129944 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/be17ec7c-f6cf-4496-a7ad-ec0e6d89c083-var-run-ovn\") pod \"be17ec7c-f6cf-4496-a7ad-ec0e6d89c083\" (UID: \"be17ec7c-f6cf-4496-a7ad-ec0e6d89c083\") " Jan 26 12:58:43 crc kubenswrapper[4881]: I0126 12:58:43.130008 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/be17ec7c-f6cf-4496-a7ad-ec0e6d89c083-var-run\") pod \"be17ec7c-f6cf-4496-a7ad-ec0e6d89c083\" (UID: \"be17ec7c-f6cf-4496-a7ad-ec0e6d89c083\") " Jan 26 12:58:43 crc kubenswrapper[4881]: I0126 12:58:43.130137 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be17ec7c-f6cf-4496-a7ad-ec0e6d89c083-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "be17ec7c-f6cf-4496-a7ad-ec0e6d89c083" (UID: "be17ec7c-f6cf-4496-a7ad-ec0e6d89c083"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 12:58:43 crc kubenswrapper[4881]: I0126 12:58:43.130146 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be17ec7c-f6cf-4496-a7ad-ec0e6d89c083-var-run" (OuterVolumeSpecName: "var-run") pod "be17ec7c-f6cf-4496-a7ad-ec0e6d89c083" (UID: "be17ec7c-f6cf-4496-a7ad-ec0e6d89c083"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 12:58:43 crc kubenswrapper[4881]: I0126 12:58:43.130325 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/be17ec7c-f6cf-4496-a7ad-ec0e6d89c083-additional-scripts\") pod \"be17ec7c-f6cf-4496-a7ad-ec0e6d89c083\" (UID: \"be17ec7c-f6cf-4496-a7ad-ec0e6d89c083\") " Jan 26 12:58:43 crc kubenswrapper[4881]: I0126 12:58:43.130393 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/be17ec7c-f6cf-4496-a7ad-ec0e6d89c083-var-log-ovn\") pod \"be17ec7c-f6cf-4496-a7ad-ec0e6d89c083\" (UID: \"be17ec7c-f6cf-4496-a7ad-ec0e6d89c083\") " Jan 26 12:58:43 crc kubenswrapper[4881]: I0126 12:58:43.130452 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/be17ec7c-f6cf-4496-a7ad-ec0e6d89c083-scripts\") pod \"be17ec7c-f6cf-4496-a7ad-ec0e6d89c083\" (UID: \"be17ec7c-f6cf-4496-a7ad-ec0e6d89c083\") " Jan 26 12:58:43 crc kubenswrapper[4881]: I0126 12:58:43.130441 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be17ec7c-f6cf-4496-a7ad-ec0e6d89c083-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "be17ec7c-f6cf-4496-a7ad-ec0e6d89c083" (UID: "be17ec7c-f6cf-4496-a7ad-ec0e6d89c083"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 12:58:43 crc kubenswrapper[4881]: I0126 12:58:43.131041 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be17ec7c-f6cf-4496-a7ad-ec0e6d89c083-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "be17ec7c-f6cf-4496-a7ad-ec0e6d89c083" (UID: "be17ec7c-f6cf-4496-a7ad-ec0e6d89c083"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:58:43 crc kubenswrapper[4881]: I0126 12:58:43.131578 4881 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/be17ec7c-f6cf-4496-a7ad-ec0e6d89c083-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 12:58:43 crc kubenswrapper[4881]: I0126 12:58:43.131631 4881 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/be17ec7c-f6cf-4496-a7ad-ec0e6d89c083-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 26 12:58:43 crc kubenswrapper[4881]: I0126 12:58:43.131658 4881 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/be17ec7c-f6cf-4496-a7ad-ec0e6d89c083-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 26 12:58:43 crc kubenswrapper[4881]: I0126 12:58:43.131684 4881 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/be17ec7c-f6cf-4496-a7ad-ec0e6d89c083-var-run\") on node \"crc\" DevicePath \"\"" Jan 26 12:58:43 crc kubenswrapper[4881]: I0126 12:58:43.132226 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be17ec7c-f6cf-4496-a7ad-ec0e6d89c083-scripts" (OuterVolumeSpecName: "scripts") pod "be17ec7c-f6cf-4496-a7ad-ec0e6d89c083" (UID: "be17ec7c-f6cf-4496-a7ad-ec0e6d89c083"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:58:43 crc kubenswrapper[4881]: I0126 12:58:43.143393 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be17ec7c-f6cf-4496-a7ad-ec0e6d89c083-kube-api-access-lswtb" (OuterVolumeSpecName: "kube-api-access-lswtb") pod "be17ec7c-f6cf-4496-a7ad-ec0e6d89c083" (UID: "be17ec7c-f6cf-4496-a7ad-ec0e6d89c083"). InnerVolumeSpecName "kube-api-access-lswtb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:58:43 crc kubenswrapper[4881]: I0126 12:58:43.233463 4881 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/be17ec7c-f6cf-4496-a7ad-ec0e6d89c083-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 12:58:43 crc kubenswrapper[4881]: I0126 12:58:43.233865 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lswtb\" (UniqueName: \"kubernetes.io/projected/be17ec7c-f6cf-4496-a7ad-ec0e6d89c083-kube-api-access-lswtb\") on node \"crc\" DevicePath \"\"" Jan 26 12:58:43 crc kubenswrapper[4881]: I0126 12:58:43.620237 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-vs5xn-config-l6jd8" event={"ID":"be17ec7c-f6cf-4496-a7ad-ec0e6d89c083","Type":"ContainerDied","Data":"cc80655504255ef8e93e0591b1b2c151c65ab0142cb60fc1cfb4c7898ce5d663"} Jan 26 12:58:43 crc kubenswrapper[4881]: I0126 12:58:43.620283 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cc80655504255ef8e93e0591b1b2c151c65ab0142cb60fc1cfb4c7898ce5d663" Jan 26 12:58:43 crc kubenswrapper[4881]: I0126 12:58:43.620645 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-vs5xn-config-l6jd8" Jan 26 12:58:43 crc kubenswrapper[4881]: I0126 12:58:43.730187 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-vs5xn-config-l6jd8"] Jan 26 12:58:43 crc kubenswrapper[4881]: I0126 12:58:43.740459 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-vs5xn-config-l6jd8"] Jan 26 12:58:43 crc kubenswrapper[4881]: I0126 12:58:43.850726 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-vs5xn-config-q8q4c"] Jan 26 12:58:43 crc kubenswrapper[4881]: E0126 12:58:43.851050 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be17ec7c-f6cf-4496-a7ad-ec0e6d89c083" containerName="ovn-config" Jan 26 12:58:43 crc kubenswrapper[4881]: I0126 12:58:43.851068 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="be17ec7c-f6cf-4496-a7ad-ec0e6d89c083" containerName="ovn-config" Jan 26 12:58:43 crc kubenswrapper[4881]: I0126 12:58:43.851230 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="be17ec7c-f6cf-4496-a7ad-ec0e6d89c083" containerName="ovn-config" Jan 26 12:58:43 crc kubenswrapper[4881]: I0126 12:58:43.851807 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-vs5xn-config-q8q4c" Jan 26 12:58:43 crc kubenswrapper[4881]: I0126 12:58:43.853750 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 26 12:58:43 crc kubenswrapper[4881]: I0126 12:58:43.872532 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-vs5xn-config-q8q4c"] Jan 26 12:58:43 crc kubenswrapper[4881]: I0126 12:58:43.949112 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e153e0db-55d5-482a-9f39-4f49b18045f9-var-run-ovn\") pod \"ovn-controller-vs5xn-config-q8q4c\" (UID: \"e153e0db-55d5-482a-9f39-4f49b18045f9\") " pod="openstack/ovn-controller-vs5xn-config-q8q4c" Jan 26 12:58:43 crc kubenswrapper[4881]: I0126 12:58:43.949222 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzjbp\" (UniqueName: \"kubernetes.io/projected/e153e0db-55d5-482a-9f39-4f49b18045f9-kube-api-access-hzjbp\") pod \"ovn-controller-vs5xn-config-q8q4c\" (UID: \"e153e0db-55d5-482a-9f39-4f49b18045f9\") " pod="openstack/ovn-controller-vs5xn-config-q8q4c" Jan 26 12:58:43 crc kubenswrapper[4881]: I0126 12:58:43.949246 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e153e0db-55d5-482a-9f39-4f49b18045f9-var-log-ovn\") pod \"ovn-controller-vs5xn-config-q8q4c\" (UID: \"e153e0db-55d5-482a-9f39-4f49b18045f9\") " pod="openstack/ovn-controller-vs5xn-config-q8q4c" Jan 26 12:58:43 crc kubenswrapper[4881]: I0126 12:58:43.949335 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/e153e0db-55d5-482a-9f39-4f49b18045f9-additional-scripts\") pod \"ovn-controller-vs5xn-config-q8q4c\" (UID: \"e153e0db-55d5-482a-9f39-4f49b18045f9\") " pod="openstack/ovn-controller-vs5xn-config-q8q4c" Jan 26 12:58:43 crc kubenswrapper[4881]: I0126 12:58:43.949411 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e153e0db-55d5-482a-9f39-4f49b18045f9-var-run\") pod \"ovn-controller-vs5xn-config-q8q4c\" (UID: \"e153e0db-55d5-482a-9f39-4f49b18045f9\") " pod="openstack/ovn-controller-vs5xn-config-q8q4c" Jan 26 12:58:43 crc kubenswrapper[4881]: I0126 12:58:43.949546 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e153e0db-55d5-482a-9f39-4f49b18045f9-scripts\") pod \"ovn-controller-vs5xn-config-q8q4c\" (UID: \"e153e0db-55d5-482a-9f39-4f49b18045f9\") " pod="openstack/ovn-controller-vs5xn-config-q8q4c" Jan 26 12:58:44 crc kubenswrapper[4881]: I0126 12:58:44.051208 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hzjbp\" (UniqueName: \"kubernetes.io/projected/e153e0db-55d5-482a-9f39-4f49b18045f9-kube-api-access-hzjbp\") pod \"ovn-controller-vs5xn-config-q8q4c\" (UID: \"e153e0db-55d5-482a-9f39-4f49b18045f9\") " pod="openstack/ovn-controller-vs5xn-config-q8q4c" Jan 26 12:58:44 crc kubenswrapper[4881]: I0126 12:58:44.051299 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: 
\"kubernetes.io/host-path/e153e0db-55d5-482a-9f39-4f49b18045f9-var-log-ovn\") pod \"ovn-controller-vs5xn-config-q8q4c\" (UID: \"e153e0db-55d5-482a-9f39-4f49b18045f9\") " pod="openstack/ovn-controller-vs5xn-config-q8q4c" Jan 26 12:58:44 crc kubenswrapper[4881]: I0126 12:58:44.051352 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/e153e0db-55d5-482a-9f39-4f49b18045f9-additional-scripts\") pod \"ovn-controller-vs5xn-config-q8q4c\" (UID: \"e153e0db-55d5-482a-9f39-4f49b18045f9\") " pod="openstack/ovn-controller-vs5xn-config-q8q4c" Jan 26 12:58:44 crc kubenswrapper[4881]: I0126 12:58:44.051414 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e153e0db-55d5-482a-9f39-4f49b18045f9-var-run\") pod \"ovn-controller-vs5xn-config-q8q4c\" (UID: \"e153e0db-55d5-482a-9f39-4f49b18045f9\") " pod="openstack/ovn-controller-vs5xn-config-q8q4c" Jan 26 12:58:44 crc kubenswrapper[4881]: I0126 12:58:44.051555 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e153e0db-55d5-482a-9f39-4f49b18045f9-scripts\") pod \"ovn-controller-vs5xn-config-q8q4c\" (UID: \"e153e0db-55d5-482a-9f39-4f49b18045f9\") " pod="openstack/ovn-controller-vs5xn-config-q8q4c" Jan 26 12:58:44 crc kubenswrapper[4881]: I0126 12:58:44.051664 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e153e0db-55d5-482a-9f39-4f49b18045f9-var-run-ovn\") pod \"ovn-controller-vs5xn-config-q8q4c\" (UID: \"e153e0db-55d5-482a-9f39-4f49b18045f9\") " pod="openstack/ovn-controller-vs5xn-config-q8q4c" Jan 26 12:58:44 crc kubenswrapper[4881]: I0126 12:58:44.052164 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e153e0db-55d5-482a-9f39-4f49b18045f9-var-run-ovn\") pod \"ovn-controller-vs5xn-config-q8q4c\" (UID: \"e153e0db-55d5-482a-9f39-4f49b18045f9\") " pod="openstack/ovn-controller-vs5xn-config-q8q4c" Jan 26 12:58:44 crc kubenswrapper[4881]: I0126 12:58:44.052255 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e153e0db-55d5-482a-9f39-4f49b18045f9-var-run\") pod \"ovn-controller-vs5xn-config-q8q4c\" (UID: \"e153e0db-55d5-482a-9f39-4f49b18045f9\") " pod="openstack/ovn-controller-vs5xn-config-q8q4c" Jan 26 12:58:44 crc kubenswrapper[4881]: I0126 12:58:44.052258 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/e153e0db-55d5-482a-9f39-4f49b18045f9-additional-scripts\") pod \"ovn-controller-vs5xn-config-q8q4c\" (UID: \"e153e0db-55d5-482a-9f39-4f49b18045f9\") " pod="openstack/ovn-controller-vs5xn-config-q8q4c" Jan 26 12:58:44 crc kubenswrapper[4881]: I0126 12:58:44.052307 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e153e0db-55d5-482a-9f39-4f49b18045f9-var-log-ovn\") pod \"ovn-controller-vs5xn-config-q8q4c\" (UID: \"e153e0db-55d5-482a-9f39-4f49b18045f9\") " pod="openstack/ovn-controller-vs5xn-config-q8q4c" Jan 26 12:58:44 crc kubenswrapper[4881]: I0126 12:58:44.056291 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/e153e0db-55d5-482a-9f39-4f49b18045f9-scripts\") pod \"ovn-controller-vs5xn-config-q8q4c\" (UID: \"e153e0db-55d5-482a-9f39-4f49b18045f9\") " pod="openstack/ovn-controller-vs5xn-config-q8q4c" Jan 26 12:58:44 crc kubenswrapper[4881]: I0126 12:58:44.062052 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-47fkz"] Jan 26 12:58:44 crc kubenswrapper[4881]: I0126 12:58:44.063313 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-47fkz" Jan 26 12:58:44 crc kubenswrapper[4881]: I0126 12:58:44.065019 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 26 12:58:44 crc kubenswrapper[4881]: I0126 12:58:44.074139 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-47fkz"] Jan 26 12:58:44 crc kubenswrapper[4881]: I0126 12:58:44.093413 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be17ec7c-f6cf-4496-a7ad-ec0e6d89c083" path="/var/lib/kubelet/pods/be17ec7c-f6cf-4496-a7ad-ec0e6d89c083/volumes" Jan 26 12:58:44 crc kubenswrapper[4881]: I0126 12:58:44.108779 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzjbp\" (UniqueName: \"kubernetes.io/projected/e153e0db-55d5-482a-9f39-4f49b18045f9-kube-api-access-hzjbp\") pod \"ovn-controller-vs5xn-config-q8q4c\" (UID: \"e153e0db-55d5-482a-9f39-4f49b18045f9\") " pod="openstack/ovn-controller-vs5xn-config-q8q4c" Jan 26 12:58:44 crc kubenswrapper[4881]: I0126 12:58:44.153044 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttnmd\" (UniqueName: \"kubernetes.io/projected/12e90465-1c1a-409f-b312-85859d8b0a52-kube-api-access-ttnmd\") pod \"root-account-create-update-47fkz\" (UID: \"12e90465-1c1a-409f-b312-85859d8b0a52\") " pod="openstack/root-account-create-update-47fkz" Jan 26 12:58:44 crc kubenswrapper[4881]: I0126 12:58:44.153103 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/12e90465-1c1a-409f-b312-85859d8b0a52-operator-scripts\") pod \"root-account-create-update-47fkz\" (UID: \"12e90465-1c1a-409f-b312-85859d8b0a52\") " pod="openstack/root-account-create-update-47fkz" Jan 26 12:58:44 crc kubenswrapper[4881]: I0126 12:58:44.165984 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-vs5xn-config-q8q4c" Jan 26 12:58:44 crc kubenswrapper[4881]: I0126 12:58:44.254994 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttnmd\" (UniqueName: \"kubernetes.io/projected/12e90465-1c1a-409f-b312-85859d8b0a52-kube-api-access-ttnmd\") pod \"root-account-create-update-47fkz\" (UID: \"12e90465-1c1a-409f-b312-85859d8b0a52\") " pod="openstack/root-account-create-update-47fkz" Jan 26 12:58:44 crc kubenswrapper[4881]: I0126 12:58:44.255059 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/12e90465-1c1a-409f-b312-85859d8b0a52-operator-scripts\") pod \"root-account-create-update-47fkz\" (UID: \"12e90465-1c1a-409f-b312-85859d8b0a52\") " pod="openstack/root-account-create-update-47fkz" Jan 26 12:58:44 crc kubenswrapper[4881]: I0126 12:58:44.256254 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/12e90465-1c1a-409f-b312-85859d8b0a52-operator-scripts\") pod \"root-account-create-update-47fkz\" (UID: \"12e90465-1c1a-409f-b312-85859d8b0a52\") " pod="openstack/root-account-create-update-47fkz" Jan 26 12:58:44 crc kubenswrapper[4881]: I0126 12:58:44.275969 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ttnmd\" (UniqueName: \"kubernetes.io/projected/12e90465-1c1a-409f-b312-85859d8b0a52-kube-api-access-ttnmd\") pod \"root-account-create-update-47fkz\" (UID: \"12e90465-1c1a-409f-b312-85859d8b0a52\") " pod="openstack/root-account-create-update-47fkz" Jan 26 12:58:44 crc kubenswrapper[4881]: I0126 12:58:44.382584 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-47fkz" Jan 26 12:58:45 crc kubenswrapper[4881]: I0126 12:58:45.770828 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-vs5xn-config-q8q4c"] Jan 26 12:58:45 crc kubenswrapper[4881]: I0126 12:58:45.778165 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e234f178-5499-441d-923d-26a5a7cbfe04-etc-swift\") pod \"swift-storage-0\" (UID: \"e234f178-5499-441d-923d-26a5a7cbfe04\") " pod="openstack/swift-storage-0" Jan 26 12:58:45 crc kubenswrapper[4881]: I0126 12:58:45.785989 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e234f178-5499-441d-923d-26a5a7cbfe04-etc-swift\") pod \"swift-storage-0\" (UID: \"e234f178-5499-441d-923d-26a5a7cbfe04\") " pod="openstack/swift-storage-0" Jan 26 12:58:45 crc kubenswrapper[4881]: I0126 12:58:45.881579 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Jan 26 12:58:46 crc kubenswrapper[4881]: I0126 12:58:46.040589 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-47fkz"] Jan 26 12:58:46 crc kubenswrapper[4881]: I0126 12:58:46.303355 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-vs5xn" Jan 26 12:58:46 crc kubenswrapper[4881]: I0126 12:58:46.509555 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 26 12:58:46 crc kubenswrapper[4881]: W0126 12:58:46.520650 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode234f178_5499_441d_923d_26a5a7cbfe04.slice/crio-78ca476ac69d10e51ac6b9c1e4f2a09d623d0ced0d63a96a8de3f0b4b2a46e35 WatchSource:0}: Error finding container 78ca476ac69d10e51ac6b9c1e4f2a09d623d0ced0d63a96a8de3f0b4b2a46e35: Status 404 returned error can't find the container with id 78ca476ac69d10e51ac6b9c1e4f2a09d623d0ced0d63a96a8de3f0b4b2a46e35 Jan 26 12:58:46 crc kubenswrapper[4881]: I0126 12:58:46.647595 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e234f178-5499-441d-923d-26a5a7cbfe04","Type":"ContainerStarted","Data":"78ca476ac69d10e51ac6b9c1e4f2a09d623d0ced0d63a96a8de3f0b4b2a46e35"} Jan 26 12:58:46 crc kubenswrapper[4881]: I0126 12:58:46.648969 4881 generic.go:334] "Generic (PLEG): container finished" podID="e153e0db-55d5-482a-9f39-4f49b18045f9" containerID="16c788f298df69084e3d5e43594a8f99ceaadc84ac8a99b306917fc30b8e4dda" exitCode=0 Jan 26 12:58:46 crc kubenswrapper[4881]: I0126 12:58:46.649082 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-vs5xn-config-q8q4c" event={"ID":"e153e0db-55d5-482a-9f39-4f49b18045f9","Type":"ContainerDied","Data":"16c788f298df69084e3d5e43594a8f99ceaadc84ac8a99b306917fc30b8e4dda"} Jan 26 12:58:46 crc kubenswrapper[4881]: I0126 12:58:46.649124 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-vs5xn-config-q8q4c" event={"ID":"e153e0db-55d5-482a-9f39-4f49b18045f9","Type":"ContainerStarted","Data":"c96518af4b140852efbc141ad60d155b53f93a027d9eea1a81bb327b75e2522c"} Jan 26 12:58:46 crc kubenswrapper[4881]: I0126 12:58:46.653666 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"1067dd91-d79f-4165-8c6e-e3309dff7d26","Type":"ContainerStarted","Data":"d2ef00e90a9d09224a86dc1444656f015963fba4be848df1601aff43ecaba458"} Jan 26 12:58:46 crc kubenswrapper[4881]: I0126 12:58:46.658253 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-47fkz" event={"ID":"12e90465-1c1a-409f-b312-85859d8b0a52","Type":"ContainerStarted","Data":"2a426c0f8f27813bb00129d76f29658873e5726cbe7e41c70fce685a597a0fb2"} Jan 26 12:58:46 crc kubenswrapper[4881]: I0126 12:58:46.658300 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-47fkz" event={"ID":"12e90465-1c1a-409f-b312-85859d8b0a52","Type":"ContainerStarted","Data":"1cd56db7c04c3de733a0ce7bb4dbb3d01aed5d4ae8b33106084caaf04cd6bf12"} Jan 26 12:58:46 crc kubenswrapper[4881]: I0126 12:58:46.701912 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=11.523482494 podStartE2EDuration="1m14.701893502s" podCreationTimestamp="2026-01-26 12:57:32 +0000 UTC" 
firstStartedPulling="2026-01-26 12:57:42.387791207 +0000 UTC m=+1334.867101233" lastFinishedPulling="2026-01-26 12:58:45.566202215 +0000 UTC m=+1398.045512241" observedRunningTime="2026-01-26 12:58:46.700688342 +0000 UTC m=+1399.179998378" watchObservedRunningTime="2026-01-26 12:58:46.701893502 +0000 UTC m=+1399.181203528" Jan 26 12:58:46 crc kubenswrapper[4881]: I0126 12:58:46.721736 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-47fkz" podStartSLOduration=2.721716215 podStartE2EDuration="2.721716215s" podCreationTimestamp="2026-01-26 12:58:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:58:46.715982535 +0000 UTC m=+1399.195292561" watchObservedRunningTime="2026-01-26 12:58:46.721716215 +0000 UTC m=+1399.201026251" Jan 26 12:58:46 crc kubenswrapper[4881]: I0126 12:58:46.936595 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="a455dd78-e351-449c-903a-5c0e0c50faf5" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.103:5671: connect: connection refused" Jan 26 12:58:47 crc kubenswrapper[4881]: I0126 12:58:47.230775 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="fb687a6e-7e1f-4697-8ab1-88ad03dd2951" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.104:5671: connect: connection refused" Jan 26 12:58:47 crc kubenswrapper[4881]: I0126 12:58:47.670928 4881 generic.go:334] "Generic (PLEG): container finished" podID="12e90465-1c1a-409f-b312-85859d8b0a52" containerID="2a426c0f8f27813bb00129d76f29658873e5726cbe7e41c70fce685a597a0fb2" exitCode=0 Jan 26 12:58:47 crc kubenswrapper[4881]: I0126 12:58:47.671013 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-47fkz" event={"ID":"12e90465-1c1a-409f-b312-85859d8b0a52","Type":"ContainerDied","Data":"2a426c0f8f27813bb00129d76f29658873e5726cbe7e41c70fce685a597a0fb2"} Jan 26 12:58:47 crc kubenswrapper[4881]: I0126 12:58:47.673395 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e234f178-5499-441d-923d-26a5a7cbfe04","Type":"ContainerStarted","Data":"30ff255c0306d37c82ea3b4c312f94bce2cecb2903dab5d9f48e87244eaf5d65"} Jan 26 12:58:47 crc kubenswrapper[4881]: I0126 12:58:47.673443 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e234f178-5499-441d-923d-26a5a7cbfe04","Type":"ContainerStarted","Data":"4e4d39b2a4e5dede56834d2661483c301a01594f0cf8a92ec9d52f6dbbcf7392"} Jan 26 12:58:47 crc kubenswrapper[4881]: I0126 12:58:47.673463 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e234f178-5499-441d-923d-26a5a7cbfe04","Type":"ContainerStarted","Data":"8d48fbec006bddf5a88afb832c7eaedc2e9c0d9e4eb9ebf6a5b001be54922289"} Jan 26 12:58:47 crc kubenswrapper[4881]: I0126 12:58:47.981073 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-vs5xn-config-q8q4c" Jan 26 12:58:48 crc kubenswrapper[4881]: I0126 12:58:48.028080 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/e153e0db-55d5-482a-9f39-4f49b18045f9-additional-scripts\") pod \"e153e0db-55d5-482a-9f39-4f49b18045f9\" (UID: \"e153e0db-55d5-482a-9f39-4f49b18045f9\") " Jan 26 12:58:48 crc kubenswrapper[4881]: I0126 12:58:48.028118 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hzjbp\" (UniqueName: \"kubernetes.io/projected/e153e0db-55d5-482a-9f39-4f49b18045f9-kube-api-access-hzjbp\") pod \"e153e0db-55d5-482a-9f39-4f49b18045f9\" (UID: \"e153e0db-55d5-482a-9f39-4f49b18045f9\") " Jan 26 12:58:48 crc kubenswrapper[4881]: I0126 12:58:48.028144 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e153e0db-55d5-482a-9f39-4f49b18045f9-scripts\") pod \"e153e0db-55d5-482a-9f39-4f49b18045f9\" (UID: \"e153e0db-55d5-482a-9f39-4f49b18045f9\") " Jan 26 12:58:48 crc kubenswrapper[4881]: I0126 12:58:48.028287 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e153e0db-55d5-482a-9f39-4f49b18045f9-var-run\") pod \"e153e0db-55d5-482a-9f39-4f49b18045f9\" (UID: \"e153e0db-55d5-482a-9f39-4f49b18045f9\") " Jan 26 12:58:48 crc kubenswrapper[4881]: I0126 12:58:48.028319 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e153e0db-55d5-482a-9f39-4f49b18045f9-var-run-ovn\") pod \"e153e0db-55d5-482a-9f39-4f49b18045f9\" (UID: \"e153e0db-55d5-482a-9f39-4f49b18045f9\") " Jan 26 12:58:48 crc kubenswrapper[4881]: I0126 12:58:48.028358 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e153e0db-55d5-482a-9f39-4f49b18045f9-var-log-ovn\") pod \"e153e0db-55d5-482a-9f39-4f49b18045f9\" (UID: \"e153e0db-55d5-482a-9f39-4f49b18045f9\") " Jan 26 12:58:48 crc kubenswrapper[4881]: I0126 12:58:48.028829 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e153e0db-55d5-482a-9f39-4f49b18045f9-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "e153e0db-55d5-482a-9f39-4f49b18045f9" (UID: "e153e0db-55d5-482a-9f39-4f49b18045f9"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 12:58:48 crc kubenswrapper[4881]: I0126 12:58:48.028853 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e153e0db-55d5-482a-9f39-4f49b18045f9-var-run" (OuterVolumeSpecName: "var-run") pod "e153e0db-55d5-482a-9f39-4f49b18045f9" (UID: "e153e0db-55d5-482a-9f39-4f49b18045f9"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 12:58:48 crc kubenswrapper[4881]: I0126 12:58:48.028869 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e153e0db-55d5-482a-9f39-4f49b18045f9-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "e153e0db-55d5-482a-9f39-4f49b18045f9" (UID: "e153e0db-55d5-482a-9f39-4f49b18045f9"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 12:58:48 crc kubenswrapper[4881]: I0126 12:58:48.029763 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e153e0db-55d5-482a-9f39-4f49b18045f9-scripts" (OuterVolumeSpecName: "scripts") pod "e153e0db-55d5-482a-9f39-4f49b18045f9" (UID: "e153e0db-55d5-482a-9f39-4f49b18045f9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:58:48 crc kubenswrapper[4881]: I0126 12:58:48.035443 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e153e0db-55d5-482a-9f39-4f49b18045f9-kube-api-access-hzjbp" (OuterVolumeSpecName: "kube-api-access-hzjbp") pod "e153e0db-55d5-482a-9f39-4f49b18045f9" (UID: "e153e0db-55d5-482a-9f39-4f49b18045f9"). InnerVolumeSpecName "kube-api-access-hzjbp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:58:48 crc kubenswrapper[4881]: I0126 12:58:48.130193 4881 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e153e0db-55d5-482a-9f39-4f49b18045f9-var-run\") on node \"crc\" DevicePath \"\"" Jan 26 12:58:48 crc kubenswrapper[4881]: I0126 12:58:48.130225 4881 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e153e0db-55d5-482a-9f39-4f49b18045f9-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 26 12:58:48 crc kubenswrapper[4881]: I0126 12:58:48.130237 4881 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e153e0db-55d5-482a-9f39-4f49b18045f9-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 26 12:58:48 crc kubenswrapper[4881]: I0126 12:58:48.130250 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hzjbp\" (UniqueName: \"kubernetes.io/projected/e153e0db-55d5-482a-9f39-4f49b18045f9-kube-api-access-hzjbp\") on node \"crc\" DevicePath \"\"" Jan 26 12:58:48 crc kubenswrapper[4881]: I0126 12:58:48.130264 4881 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e153e0db-55d5-482a-9f39-4f49b18045f9-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 12:58:48 crc kubenswrapper[4881]: I0126 12:58:48.422453 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e153e0db-55d5-482a-9f39-4f49b18045f9-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "e153e0db-55d5-482a-9f39-4f49b18045f9" (UID: "e153e0db-55d5-482a-9f39-4f49b18045f9"). InnerVolumeSpecName "additional-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:58:48 crc kubenswrapper[4881]: I0126 12:58:48.435851 4881 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/e153e0db-55d5-482a-9f39-4f49b18045f9-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 12:58:48 crc kubenswrapper[4881]: I0126 12:58:48.681381 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-vs5xn-config-q8q4c" event={"ID":"e153e0db-55d5-482a-9f39-4f49b18045f9","Type":"ContainerDied","Data":"c96518af4b140852efbc141ad60d155b53f93a027d9eea1a81bb327b75e2522c"} Jan 26 12:58:48 crc kubenswrapper[4881]: I0126 12:58:48.681424 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c96518af4b140852efbc141ad60d155b53f93a027d9eea1a81bb327b75e2522c" Jan 26 12:58:48 crc kubenswrapper[4881]: I0126 12:58:48.681402 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-vs5xn-config-q8q4c" Jan 26 12:58:48 crc kubenswrapper[4881]: I0126 12:58:48.959923 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-47fkz" Jan 26 12:58:49 crc kubenswrapper[4881]: I0126 12:58:49.037981 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Jan 26 12:58:49 crc kubenswrapper[4881]: I0126 12:58:49.038017 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Jan 26 12:58:49 crc kubenswrapper[4881]: I0126 12:58:49.051339 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Jan 26 12:58:49 crc kubenswrapper[4881]: I0126 12:58:49.051697 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/12e90465-1c1a-409f-b312-85859d8b0a52-operator-scripts\") pod \"12e90465-1c1a-409f-b312-85859d8b0a52\" (UID: \"12e90465-1c1a-409f-b312-85859d8b0a52\") " Jan 26 12:58:49 crc kubenswrapper[4881]: I0126 12:58:49.051783 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ttnmd\" (UniqueName: \"kubernetes.io/projected/12e90465-1c1a-409f-b312-85859d8b0a52-kube-api-access-ttnmd\") pod \"12e90465-1c1a-409f-b312-85859d8b0a52\" (UID: \"12e90465-1c1a-409f-b312-85859d8b0a52\") " Jan 26 12:58:49 crc kubenswrapper[4881]: I0126 12:58:49.052495 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/12e90465-1c1a-409f-b312-85859d8b0a52-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "12e90465-1c1a-409f-b312-85859d8b0a52" (UID: "12e90465-1c1a-409f-b312-85859d8b0a52"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:58:49 crc kubenswrapper[4881]: I0126 12:58:49.059284 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12e90465-1c1a-409f-b312-85859d8b0a52-kube-api-access-ttnmd" (OuterVolumeSpecName: "kube-api-access-ttnmd") pod "12e90465-1c1a-409f-b312-85859d8b0a52" (UID: "12e90465-1c1a-409f-b312-85859d8b0a52"). InnerVolumeSpecName "kube-api-access-ttnmd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:58:49 crc kubenswrapper[4881]: I0126 12:58:49.065809 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-vs5xn-config-q8q4c"] Jan 26 12:58:49 crc kubenswrapper[4881]: I0126 12:58:49.074189 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-vs5xn-config-q8q4c"] Jan 26 12:58:49 crc kubenswrapper[4881]: I0126 12:58:49.154237 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ttnmd\" (UniqueName: \"kubernetes.io/projected/12e90465-1c1a-409f-b312-85859d8b0a52-kube-api-access-ttnmd\") on node \"crc\" DevicePath \"\"" Jan 26 12:58:49 crc kubenswrapper[4881]: I0126 12:58:49.154473 4881 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/12e90465-1c1a-409f-b312-85859d8b0a52-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 12:58:49 crc kubenswrapper[4881]: I0126 12:58:49.692300 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-47fkz" Jan 26 12:58:49 crc kubenswrapper[4881]: I0126 12:58:49.692611 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-47fkz" event={"ID":"12e90465-1c1a-409f-b312-85859d8b0a52","Type":"ContainerDied","Data":"1cd56db7c04c3de733a0ce7bb4dbb3d01aed5d4ae8b33106084caaf04cd6bf12"} Jan 26 12:58:49 crc kubenswrapper[4881]: I0126 12:58:49.693025 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1cd56db7c04c3de733a0ce7bb4dbb3d01aed5d4ae8b33106084caaf04cd6bf12" Jan 26 12:58:49 crc kubenswrapper[4881]: I0126 12:58:49.696760 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e234f178-5499-441d-923d-26a5a7cbfe04","Type":"ContainerStarted","Data":"b58dd85b6fe15cf4c0c952b4cbe6c480031012b268d5512199969aecd21ee74f"} Jan 26 12:58:49 crc kubenswrapper[4881]: I0126 12:58:49.697952 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Jan 26 12:58:50 crc kubenswrapper[4881]: I0126 12:58:50.092447 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e153e0db-55d5-482a-9f39-4f49b18045f9" path="/var/lib/kubelet/pods/e153e0db-55d5-482a-9f39-4f49b18045f9/volumes" Jan 26 12:58:50 crc kubenswrapper[4881]: I0126 12:58:50.707465 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e234f178-5499-441d-923d-26a5a7cbfe04","Type":"ContainerStarted","Data":"7b6a90a760d6485c110827e8428a20ded92dd5ae2c1ffe1d6e8ac6448b8d3e4f"} Jan 26 12:58:50 crc kubenswrapper[4881]: I0126 12:58:50.707817 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e234f178-5499-441d-923d-26a5a7cbfe04","Type":"ContainerStarted","Data":"6714b56684206e5b5f4886e86b7f912a7b75e3ebd9747e97065a46425d976959"} Jan 26 12:58:50 crc kubenswrapper[4881]: I0126 12:58:50.707830 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e234f178-5499-441d-923d-26a5a7cbfe04","Type":"ContainerStarted","Data":"fa3f2952e69fd410117c44a84bbfafeacf5496dc078507489bd34a56d85ee581"} Jan 26 12:58:51 crc kubenswrapper[4881]: I0126 12:58:51.715914 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"e234f178-5499-441d-923d-26a5a7cbfe04","Type":"ContainerStarted","Data":"642791fb8b02b788523715f0255317514a2203f18d2512ad4f8d4efd3e4884c5"} Jan 26 12:58:51 crc kubenswrapper[4881]: I0126 12:58:51.716215 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e234f178-5499-441d-923d-26a5a7cbfe04","Type":"ContainerStarted","Data":"833547464ffdbc0d6b80eb4e4938bf962cf48a96daf1a7993e5ce0b2873e64e0"} Jan 26 12:58:51 crc kubenswrapper[4881]: I0126 12:58:51.716225 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e234f178-5499-441d-923d-26a5a7cbfe04","Type":"ContainerStarted","Data":"2c7b6c0515362396b4a2b7eace825397566ecf222e3289cd34fc162595f8d718"} Jan 26 12:58:51 crc kubenswrapper[4881]: I0126 12:58:51.742386 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 26 12:58:51 crc kubenswrapper[4881]: I0126 12:58:51.742644 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="1067dd91-d79f-4165-8c6e-e3309dff7d26" containerName="prometheus" containerID="cri-o://4f43b7a40be6eb3fa5a35441b987eb48e2215bc576da15449aad0c3ce35f993c" gracePeriod=600 Jan 26 12:58:51 crc kubenswrapper[4881]: I0126 12:58:51.742754 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="1067dd91-d79f-4165-8c6e-e3309dff7d26" containerName="thanos-sidecar" containerID="cri-o://d2ef00e90a9d09224a86dc1444656f015963fba4be848df1601aff43ecaba458" gracePeriod=600 Jan 26 12:58:51 crc kubenswrapper[4881]: I0126 12:58:51.742789 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="1067dd91-d79f-4165-8c6e-e3309dff7d26" containerName="config-reloader" containerID="cri-o://d4e557f9801517a27515a8d95e6c3f03794018674779270da8d11e37c4006dfb" gracePeriod=600 Jan 26 12:58:52 crc kubenswrapper[4881]: I0126 12:58:52.658050 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 26 12:58:52 crc kubenswrapper[4881]: I0126 12:58:52.730162 4881 generic.go:334] "Generic (PLEG): container finished" podID="1067dd91-d79f-4165-8c6e-e3309dff7d26" containerID="d2ef00e90a9d09224a86dc1444656f015963fba4be848df1601aff43ecaba458" exitCode=0 Jan 26 12:58:52 crc kubenswrapper[4881]: I0126 12:58:52.730329 4881 generic.go:334] "Generic (PLEG): container finished" podID="1067dd91-d79f-4165-8c6e-e3309dff7d26" containerID="d4e557f9801517a27515a8d95e6c3f03794018674779270da8d11e37c4006dfb" exitCode=0 Jan 26 12:58:52 crc kubenswrapper[4881]: I0126 12:58:52.730291 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"1067dd91-d79f-4165-8c6e-e3309dff7d26","Type":"ContainerDied","Data":"d2ef00e90a9d09224a86dc1444656f015963fba4be848df1601aff43ecaba458"} Jan 26 12:58:52 crc kubenswrapper[4881]: I0126 12:58:52.730344 4881 generic.go:334] "Generic (PLEG): container finished" podID="1067dd91-d79f-4165-8c6e-e3309dff7d26" containerID="4f43b7a40be6eb3fa5a35441b987eb48e2215bc576da15449aad0c3ce35f993c" exitCode=0 Jan 26 12:58:52 crc kubenswrapper[4881]: I0126 12:58:52.730526 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"1067dd91-d79f-4165-8c6e-e3309dff7d26","Type":"ContainerDied","Data":"d4e557f9801517a27515a8d95e6c3f03794018674779270da8d11e37c4006dfb"} Jan 26 12:58:52 crc kubenswrapper[4881]: I0126 12:58:52.730284 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 26 12:58:52 crc kubenswrapper[4881]: I0126 12:58:52.730596 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"1067dd91-d79f-4165-8c6e-e3309dff7d26","Type":"ContainerDied","Data":"4f43b7a40be6eb3fa5a35441b987eb48e2215bc576da15449aad0c3ce35f993c"} Jan 26 12:58:52 crc kubenswrapper[4881]: I0126 12:58:52.730652 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"1067dd91-d79f-4165-8c6e-e3309dff7d26","Type":"ContainerDied","Data":"22201f66f70578e050d3fee07513ec1a7bbf8de49212e92d0e86c3adfbd3a6a2"} Jan 26 12:58:52 crc kubenswrapper[4881]: I0126 12:58:52.730674 4881 scope.go:117] "RemoveContainer" containerID="d2ef00e90a9d09224a86dc1444656f015963fba4be848df1601aff43ecaba458" Jan 26 12:58:52 crc kubenswrapper[4881]: I0126 12:58:52.742493 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e234f178-5499-441d-923d-26a5a7cbfe04","Type":"ContainerStarted","Data":"b9b8b45ce1be8d02ce55cd5b1cac688a13ed58f44564acf8225f6a08f7ad4819"} Jan 26 12:58:52 crc kubenswrapper[4881]: I0126 12:58:52.742557 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e234f178-5499-441d-923d-26a5a7cbfe04","Type":"ContainerStarted","Data":"34e273843428671547896f028d750259e4f1a30a63d507aca67b130087ea7e6d"} Jan 26 12:58:52 crc kubenswrapper[4881]: I0126 12:58:52.742572 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e234f178-5499-441d-923d-26a5a7cbfe04","Type":"ContainerStarted","Data":"ab62f5a89944496ff1ec5884c2267c2aa1be5ef8bb0129d3a979be679d974e23"} Jan 26 12:58:52 crc kubenswrapper[4881]: I0126 12:58:52.742584 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"e234f178-5499-441d-923d-26a5a7cbfe04","Type":"ContainerStarted","Data":"4c76a1540a764d0b8c15e7a88a78544dfd1a50bdf238a798e3b5c6f1f3dc6fef"} Jan 26 12:58:52 crc kubenswrapper[4881]: I0126 12:58:52.750820 4881 scope.go:117] "RemoveContainer" containerID="d4e557f9801517a27515a8d95e6c3f03794018674779270da8d11e37c4006dfb" Jan 26 12:58:52 crc kubenswrapper[4881]: I0126 12:58:52.769490 4881 scope.go:117] "RemoveContainer" containerID="4f43b7a40be6eb3fa5a35441b987eb48e2215bc576da15449aad0c3ce35f993c" Jan 26 12:58:52 crc kubenswrapper[4881]: I0126 12:58:52.786340 4881 scope.go:117] "RemoveContainer" containerID="ecf38f70403187b14b9c1b17fa436d2edc91256a449260f1599234f5dc47eda5" Jan 26 12:58:52 crc kubenswrapper[4881]: I0126 12:58:52.810064 4881 scope.go:117] "RemoveContainer" containerID="d2ef00e90a9d09224a86dc1444656f015963fba4be848df1601aff43ecaba458" Jan 26 12:58:52 crc kubenswrapper[4881]: E0126 12:58:52.810700 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d2ef00e90a9d09224a86dc1444656f015963fba4be848df1601aff43ecaba458\": container with ID starting with d2ef00e90a9d09224a86dc1444656f015963fba4be848df1601aff43ecaba458 not found: ID does not exist" containerID="d2ef00e90a9d09224a86dc1444656f015963fba4be848df1601aff43ecaba458" Jan 26 12:58:52 crc kubenswrapper[4881]: I0126 12:58:52.810746 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d2ef00e90a9d09224a86dc1444656f015963fba4be848df1601aff43ecaba458"} err="failed to get container status \"d2ef00e90a9d09224a86dc1444656f015963fba4be848df1601aff43ecaba458\": rpc error: code = NotFound desc = could not find container \"d2ef00e90a9d09224a86dc1444656f015963fba4be848df1601aff43ecaba458\": container with ID starting with d2ef00e90a9d09224a86dc1444656f015963fba4be848df1601aff43ecaba458 not found: ID does not exist" Jan 26 12:58:52 crc kubenswrapper[4881]: I0126 12:58:52.810776 4881 scope.go:117] "RemoveContainer" containerID="d4e557f9801517a27515a8d95e6c3f03794018674779270da8d11e37c4006dfb" Jan 26 12:58:52 crc kubenswrapper[4881]: I0126 12:58:52.811188 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/1067dd91-d79f-4165-8c6e-e3309dff7d26-tls-assets\") pod \"1067dd91-d79f-4165-8c6e-e3309dff7d26\" (UID: \"1067dd91-d79f-4165-8c6e-e3309dff7d26\") " Jan 26 12:58:52 crc kubenswrapper[4881]: I0126 12:58:52.811369 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dce83a91-fa62-48d4-9e19-6ff80a628aa4\") pod \"1067dd91-d79f-4165-8c6e-e3309dff7d26\" (UID: \"1067dd91-d79f-4165-8c6e-e3309dff7d26\") " Jan 26 12:58:52 crc kubenswrapper[4881]: E0126 12:58:52.811407 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d4e557f9801517a27515a8d95e6c3f03794018674779270da8d11e37c4006dfb\": container with ID starting with d4e557f9801517a27515a8d95e6c3f03794018674779270da8d11e37c4006dfb not found: ID does not exist" containerID="d4e557f9801517a27515a8d95e6c3f03794018674779270da8d11e37c4006dfb" Jan 26 12:58:52 crc kubenswrapper[4881]: I0126 12:58:52.811431 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4e557f9801517a27515a8d95e6c3f03794018674779270da8d11e37c4006dfb"} err="failed to get 
container status \"d4e557f9801517a27515a8d95e6c3f03794018674779270da8d11e37c4006dfb\": rpc error: code = NotFound desc = could not find container \"d4e557f9801517a27515a8d95e6c3f03794018674779270da8d11e37c4006dfb\": container with ID starting with d4e557f9801517a27515a8d95e6c3f03794018674779270da8d11e37c4006dfb not found: ID does not exist" Jan 26 12:58:52 crc kubenswrapper[4881]: I0126 12:58:52.811451 4881 scope.go:117] "RemoveContainer" containerID="4f43b7a40be6eb3fa5a35441b987eb48e2215bc576da15449aad0c3ce35f993c" Jan 26 12:58:52 crc kubenswrapper[4881]: I0126 12:58:52.811410 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5djds\" (UniqueName: \"kubernetes.io/projected/1067dd91-d79f-4165-8c6e-e3309dff7d26-kube-api-access-5djds\") pod \"1067dd91-d79f-4165-8c6e-e3309dff7d26\" (UID: \"1067dd91-d79f-4165-8c6e-e3309dff7d26\") " Jan 26 12:58:52 crc kubenswrapper[4881]: I0126 12:58:52.811577 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/1067dd91-d79f-4165-8c6e-e3309dff7d26-web-config\") pod \"1067dd91-d79f-4165-8c6e-e3309dff7d26\" (UID: \"1067dd91-d79f-4165-8c6e-e3309dff7d26\") " Jan 26 12:58:52 crc kubenswrapper[4881]: I0126 12:58:52.811630 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/1067dd91-d79f-4165-8c6e-e3309dff7d26-prometheus-metric-storage-rulefiles-1\") pod \"1067dd91-d79f-4165-8c6e-e3309dff7d26\" (UID: \"1067dd91-d79f-4165-8c6e-e3309dff7d26\") " Jan 26 12:58:52 crc kubenswrapper[4881]: I0126 12:58:52.811681 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1067dd91-d79f-4165-8c6e-e3309dff7d26-config\") pod \"1067dd91-d79f-4165-8c6e-e3309dff7d26\" (UID: \"1067dd91-d79f-4165-8c6e-e3309dff7d26\") " Jan 26 12:58:52 crc kubenswrapper[4881]: I0126 12:58:52.811701 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/1067dd91-d79f-4165-8c6e-e3309dff7d26-prometheus-metric-storage-rulefiles-2\") pod \"1067dd91-d79f-4165-8c6e-e3309dff7d26\" (UID: \"1067dd91-d79f-4165-8c6e-e3309dff7d26\") " Jan 26 12:58:52 crc kubenswrapper[4881]: I0126 12:58:52.811728 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/1067dd91-d79f-4165-8c6e-e3309dff7d26-config-out\") pod \"1067dd91-d79f-4165-8c6e-e3309dff7d26\" (UID: \"1067dd91-d79f-4165-8c6e-e3309dff7d26\") " Jan 26 12:58:52 crc kubenswrapper[4881]: I0126 12:58:52.811744 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/1067dd91-d79f-4165-8c6e-e3309dff7d26-thanos-prometheus-http-client-file\") pod \"1067dd91-d79f-4165-8c6e-e3309dff7d26\" (UID: \"1067dd91-d79f-4165-8c6e-e3309dff7d26\") " Jan 26 12:58:52 crc kubenswrapper[4881]: I0126 12:58:52.811798 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/1067dd91-d79f-4165-8c6e-e3309dff7d26-prometheus-metric-storage-rulefiles-0\") pod \"1067dd91-d79f-4165-8c6e-e3309dff7d26\" (UID: \"1067dd91-d79f-4165-8c6e-e3309dff7d26\") " Jan 26 12:58:52 crc kubenswrapper[4881]: I0126 
12:58:52.812850 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1067dd91-d79f-4165-8c6e-e3309dff7d26-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "1067dd91-d79f-4165-8c6e-e3309dff7d26" (UID: "1067dd91-d79f-4165-8c6e-e3309dff7d26"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:58:52 crc kubenswrapper[4881]: I0126 12:58:52.813176 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1067dd91-d79f-4165-8c6e-e3309dff7d26-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "1067dd91-d79f-4165-8c6e-e3309dff7d26" (UID: "1067dd91-d79f-4165-8c6e-e3309dff7d26"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:58:52 crc kubenswrapper[4881]: E0126 12:58:52.815279 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f43b7a40be6eb3fa5a35441b987eb48e2215bc576da15449aad0c3ce35f993c\": container with ID starting with 4f43b7a40be6eb3fa5a35441b987eb48e2215bc576da15449aad0c3ce35f993c not found: ID does not exist" containerID="4f43b7a40be6eb3fa5a35441b987eb48e2215bc576da15449aad0c3ce35f993c" Jan 26 12:58:52 crc kubenswrapper[4881]: I0126 12:58:52.815339 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f43b7a40be6eb3fa5a35441b987eb48e2215bc576da15449aad0c3ce35f993c"} err="failed to get container status \"4f43b7a40be6eb3fa5a35441b987eb48e2215bc576da15449aad0c3ce35f993c\": rpc error: code = NotFound desc = could not find container \"4f43b7a40be6eb3fa5a35441b987eb48e2215bc576da15449aad0c3ce35f993c\": container with ID starting with 4f43b7a40be6eb3fa5a35441b987eb48e2215bc576da15449aad0c3ce35f993c not found: ID does not exist" Jan 26 12:58:52 crc kubenswrapper[4881]: I0126 12:58:52.815374 4881 scope.go:117] "RemoveContainer" containerID="ecf38f70403187b14b9c1b17fa436d2edc91256a449260f1599234f5dc47eda5" Jan 26 12:58:52 crc kubenswrapper[4881]: I0126 12:58:52.816005 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1067dd91-d79f-4165-8c6e-e3309dff7d26-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "1067dd91-d79f-4165-8c6e-e3309dff7d26" (UID: "1067dd91-d79f-4165-8c6e-e3309dff7d26"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:58:52 crc kubenswrapper[4881]: E0126 12:58:52.816131 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ecf38f70403187b14b9c1b17fa436d2edc91256a449260f1599234f5dc47eda5\": container with ID starting with ecf38f70403187b14b9c1b17fa436d2edc91256a449260f1599234f5dc47eda5 not found: ID does not exist" containerID="ecf38f70403187b14b9c1b17fa436d2edc91256a449260f1599234f5dc47eda5" Jan 26 12:58:52 crc kubenswrapper[4881]: I0126 12:58:52.816167 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ecf38f70403187b14b9c1b17fa436d2edc91256a449260f1599234f5dc47eda5"} err="failed to get container status \"ecf38f70403187b14b9c1b17fa436d2edc91256a449260f1599234f5dc47eda5\": rpc error: code = NotFound desc = could not find container \"ecf38f70403187b14b9c1b17fa436d2edc91256a449260f1599234f5dc47eda5\": container with ID starting with ecf38f70403187b14b9c1b17fa436d2edc91256a449260f1599234f5dc47eda5 not found: ID does not exist" Jan 26 12:58:52 crc kubenswrapper[4881]: I0126 12:58:52.816194 4881 scope.go:117] "RemoveContainer" containerID="d2ef00e90a9d09224a86dc1444656f015963fba4be848df1601aff43ecaba458" Jan 26 12:58:52 crc kubenswrapper[4881]: I0126 12:58:52.817713 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1067dd91-d79f-4165-8c6e-e3309dff7d26-config" (OuterVolumeSpecName: "config") pod "1067dd91-d79f-4165-8c6e-e3309dff7d26" (UID: "1067dd91-d79f-4165-8c6e-e3309dff7d26"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:58:52 crc kubenswrapper[4881]: I0126 12:58:52.817766 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d2ef00e90a9d09224a86dc1444656f015963fba4be848df1601aff43ecaba458"} err="failed to get container status \"d2ef00e90a9d09224a86dc1444656f015963fba4be848df1601aff43ecaba458\": rpc error: code = NotFound desc = could not find container \"d2ef00e90a9d09224a86dc1444656f015963fba4be848df1601aff43ecaba458\": container with ID starting with d2ef00e90a9d09224a86dc1444656f015963fba4be848df1601aff43ecaba458 not found: ID does not exist" Jan 26 12:58:52 crc kubenswrapper[4881]: I0126 12:58:52.817821 4881 scope.go:117] "RemoveContainer" containerID="d4e557f9801517a27515a8d95e6c3f03794018674779270da8d11e37c4006dfb" Jan 26 12:58:52 crc kubenswrapper[4881]: I0126 12:58:52.818234 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4e557f9801517a27515a8d95e6c3f03794018674779270da8d11e37c4006dfb"} err="failed to get container status \"d4e557f9801517a27515a8d95e6c3f03794018674779270da8d11e37c4006dfb\": rpc error: code = NotFound desc = could not find container \"d4e557f9801517a27515a8d95e6c3f03794018674779270da8d11e37c4006dfb\": container with ID starting with d4e557f9801517a27515a8d95e6c3f03794018674779270da8d11e37c4006dfb not found: ID does not exist" Jan 26 12:58:52 crc kubenswrapper[4881]: I0126 12:58:52.818276 4881 scope.go:117] "RemoveContainer" containerID="4f43b7a40be6eb3fa5a35441b987eb48e2215bc576da15449aad0c3ce35f993c" Jan 26 12:58:52 crc kubenswrapper[4881]: I0126 12:58:52.818402 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1067dd91-d79f-4165-8c6e-e3309dff7d26-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "1067dd91-d79f-4165-8c6e-e3309dff7d26" (UID: 
"1067dd91-d79f-4165-8c6e-e3309dff7d26"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:58:52 crc kubenswrapper[4881]: I0126 12:58:52.818787 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f43b7a40be6eb3fa5a35441b987eb48e2215bc576da15449aad0c3ce35f993c"} err="failed to get container status \"4f43b7a40be6eb3fa5a35441b987eb48e2215bc576da15449aad0c3ce35f993c\": rpc error: code = NotFound desc = could not find container \"4f43b7a40be6eb3fa5a35441b987eb48e2215bc576da15449aad0c3ce35f993c\": container with ID starting with 4f43b7a40be6eb3fa5a35441b987eb48e2215bc576da15449aad0c3ce35f993c not found: ID does not exist" Jan 26 12:58:52 crc kubenswrapper[4881]: I0126 12:58:52.818830 4881 scope.go:117] "RemoveContainer" containerID="ecf38f70403187b14b9c1b17fa436d2edc91256a449260f1599234f5dc47eda5" Jan 26 12:58:52 crc kubenswrapper[4881]: I0126 12:58:52.818373 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1067dd91-d79f-4165-8c6e-e3309dff7d26-kube-api-access-5djds" (OuterVolumeSpecName: "kube-api-access-5djds") pod "1067dd91-d79f-4165-8c6e-e3309dff7d26" (UID: "1067dd91-d79f-4165-8c6e-e3309dff7d26"). InnerVolumeSpecName "kube-api-access-5djds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:58:52 crc kubenswrapper[4881]: I0126 12:58:52.818938 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1067dd91-d79f-4165-8c6e-e3309dff7d26-config-out" (OuterVolumeSpecName: "config-out") pod "1067dd91-d79f-4165-8c6e-e3309dff7d26" (UID: "1067dd91-d79f-4165-8c6e-e3309dff7d26"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 12:58:52 crc kubenswrapper[4881]: I0126 12:58:52.820686 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1067dd91-d79f-4165-8c6e-e3309dff7d26-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "1067dd91-d79f-4165-8c6e-e3309dff7d26" (UID: "1067dd91-d79f-4165-8c6e-e3309dff7d26"). InnerVolumeSpecName "thanos-prometheus-http-client-file". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:58:52 crc kubenswrapper[4881]: I0126 12:58:52.820684 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ecf38f70403187b14b9c1b17fa436d2edc91256a449260f1599234f5dc47eda5"} err="failed to get container status \"ecf38f70403187b14b9c1b17fa436d2edc91256a449260f1599234f5dc47eda5\": rpc error: code = NotFound desc = could not find container \"ecf38f70403187b14b9c1b17fa436d2edc91256a449260f1599234f5dc47eda5\": container with ID starting with ecf38f70403187b14b9c1b17fa436d2edc91256a449260f1599234f5dc47eda5 not found: ID does not exist" Jan 26 12:58:52 crc kubenswrapper[4881]: I0126 12:58:52.820730 4881 scope.go:117] "RemoveContainer" containerID="d2ef00e90a9d09224a86dc1444656f015963fba4be848df1601aff43ecaba458" Jan 26 12:58:52 crc kubenswrapper[4881]: I0126 12:58:52.822620 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d2ef00e90a9d09224a86dc1444656f015963fba4be848df1601aff43ecaba458"} err="failed to get container status \"d2ef00e90a9d09224a86dc1444656f015963fba4be848df1601aff43ecaba458\": rpc error: code = NotFound desc = could not find container \"d2ef00e90a9d09224a86dc1444656f015963fba4be848df1601aff43ecaba458\": container with ID starting with d2ef00e90a9d09224a86dc1444656f015963fba4be848df1601aff43ecaba458 not found: ID does not exist" Jan 26 12:58:52 crc kubenswrapper[4881]: I0126 12:58:52.822677 4881 scope.go:117] "RemoveContainer" containerID="d4e557f9801517a27515a8d95e6c3f03794018674779270da8d11e37c4006dfb" Jan 26 12:58:52 crc kubenswrapper[4881]: I0126 12:58:52.823025 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4e557f9801517a27515a8d95e6c3f03794018674779270da8d11e37c4006dfb"} err="failed to get container status \"d4e557f9801517a27515a8d95e6c3f03794018674779270da8d11e37c4006dfb\": rpc error: code = NotFound desc = could not find container \"d4e557f9801517a27515a8d95e6c3f03794018674779270da8d11e37c4006dfb\": container with ID starting with d4e557f9801517a27515a8d95e6c3f03794018674779270da8d11e37c4006dfb not found: ID does not exist" Jan 26 12:58:52 crc kubenswrapper[4881]: I0126 12:58:52.823054 4881 scope.go:117] "RemoveContainer" containerID="4f43b7a40be6eb3fa5a35441b987eb48e2215bc576da15449aad0c3ce35f993c" Jan 26 12:58:52 crc kubenswrapper[4881]: I0126 12:58:52.823368 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f43b7a40be6eb3fa5a35441b987eb48e2215bc576da15449aad0c3ce35f993c"} err="failed to get container status \"4f43b7a40be6eb3fa5a35441b987eb48e2215bc576da15449aad0c3ce35f993c\": rpc error: code = NotFound desc = could not find container \"4f43b7a40be6eb3fa5a35441b987eb48e2215bc576da15449aad0c3ce35f993c\": container with ID starting with 4f43b7a40be6eb3fa5a35441b987eb48e2215bc576da15449aad0c3ce35f993c not found: ID does not exist" Jan 26 12:58:52 crc kubenswrapper[4881]: I0126 12:58:52.823386 4881 scope.go:117] "RemoveContainer" containerID="ecf38f70403187b14b9c1b17fa436d2edc91256a449260f1599234f5dc47eda5" Jan 26 12:58:52 crc kubenswrapper[4881]: I0126 12:58:52.823747 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ecf38f70403187b14b9c1b17fa436d2edc91256a449260f1599234f5dc47eda5"} err="failed to get container status \"ecf38f70403187b14b9c1b17fa436d2edc91256a449260f1599234f5dc47eda5\": rpc error: code = NotFound desc = could not find container 
\"ecf38f70403187b14b9c1b17fa436d2edc91256a449260f1599234f5dc47eda5\": container with ID starting with ecf38f70403187b14b9c1b17fa436d2edc91256a449260f1599234f5dc47eda5 not found: ID does not exist" Jan 26 12:58:52 crc kubenswrapper[4881]: I0126 12:58:52.831073 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dce83a91-fa62-48d4-9e19-6ff80a628aa4" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "1067dd91-d79f-4165-8c6e-e3309dff7d26" (UID: "1067dd91-d79f-4165-8c6e-e3309dff7d26"). InnerVolumeSpecName "pvc-dce83a91-fa62-48d4-9e19-6ff80a628aa4". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 26 12:58:52 crc kubenswrapper[4881]: I0126 12:58:52.848895 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1067dd91-d79f-4165-8c6e-e3309dff7d26-web-config" (OuterVolumeSpecName: "web-config") pod "1067dd91-d79f-4165-8c6e-e3309dff7d26" (UID: "1067dd91-d79f-4165-8c6e-e3309dff7d26"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:58:52 crc kubenswrapper[4881]: I0126 12:58:52.913739 4881 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/1067dd91-d79f-4165-8c6e-e3309dff7d26-tls-assets\") on node \"crc\" DevicePath \"\"" Jan 26 12:58:52 crc kubenswrapper[4881]: I0126 12:58:52.913807 4881 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-dce83a91-fa62-48d4-9e19-6ff80a628aa4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dce83a91-fa62-48d4-9e19-6ff80a628aa4\") on node \"crc\" " Jan 26 12:58:52 crc kubenswrapper[4881]: I0126 12:58:52.913821 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5djds\" (UniqueName: \"kubernetes.io/projected/1067dd91-d79f-4165-8c6e-e3309dff7d26-kube-api-access-5djds\") on node \"crc\" DevicePath \"\"" Jan 26 12:58:52 crc kubenswrapper[4881]: I0126 12:58:52.913834 4881 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/1067dd91-d79f-4165-8c6e-e3309dff7d26-web-config\") on node \"crc\" DevicePath \"\"" Jan 26 12:58:52 crc kubenswrapper[4881]: I0126 12:58:52.913844 4881 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/1067dd91-d79f-4165-8c6e-e3309dff7d26-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Jan 26 12:58:52 crc kubenswrapper[4881]: I0126 12:58:52.913853 4881 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/1067dd91-d79f-4165-8c6e-e3309dff7d26-config\") on node \"crc\" DevicePath \"\"" Jan 26 12:58:52 crc kubenswrapper[4881]: I0126 12:58:52.913862 4881 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/1067dd91-d79f-4165-8c6e-e3309dff7d26-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Jan 26 12:58:52 crc kubenswrapper[4881]: I0126 12:58:52.913870 4881 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/1067dd91-d79f-4165-8c6e-e3309dff7d26-config-out\") on node \"crc\" DevicePath \"\"" Jan 26 12:58:52 crc kubenswrapper[4881]: I0126 12:58:52.913878 4881 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: 
\"kubernetes.io/secret/1067dd91-d79f-4165-8c6e-e3309dff7d26-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Jan 26 12:58:52 crc kubenswrapper[4881]: I0126 12:58:52.913888 4881 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/1067dd91-d79f-4165-8c6e-e3309dff7d26-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Jan 26 12:58:52 crc kubenswrapper[4881]: I0126 12:58:52.936006 4881 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Jan 26 12:58:52 crc kubenswrapper[4881]: I0126 12:58:52.936153 4881 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-dce83a91-fa62-48d4-9e19-6ff80a628aa4" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dce83a91-fa62-48d4-9e19-6ff80a628aa4") on node "crc" Jan 26 12:58:53 crc kubenswrapper[4881]: I0126 12:58:53.015506 4881 reconciler_common.go:293] "Volume detached for volume \"pvc-dce83a91-fa62-48d4-9e19-6ff80a628aa4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dce83a91-fa62-48d4-9e19-6ff80a628aa4\") on node \"crc\" DevicePath \"\"" Jan 26 12:58:53 crc kubenswrapper[4881]: I0126 12:58:53.062825 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 26 12:58:53 crc kubenswrapper[4881]: I0126 12:58:53.071192 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 26 12:58:53 crc kubenswrapper[4881]: I0126 12:58:53.103719 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 26 12:58:53 crc kubenswrapper[4881]: E0126 12:58:53.104014 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1067dd91-d79f-4165-8c6e-e3309dff7d26" containerName="thanos-sidecar" Jan 26 12:58:53 crc kubenswrapper[4881]: I0126 12:58:53.104029 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="1067dd91-d79f-4165-8c6e-e3309dff7d26" containerName="thanos-sidecar" Jan 26 12:58:53 crc kubenswrapper[4881]: E0126 12:58:53.104040 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12e90465-1c1a-409f-b312-85859d8b0a52" containerName="mariadb-account-create-update" Jan 26 12:58:53 crc kubenswrapper[4881]: I0126 12:58:53.104046 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="12e90465-1c1a-409f-b312-85859d8b0a52" containerName="mariadb-account-create-update" Jan 26 12:58:53 crc kubenswrapper[4881]: E0126 12:58:53.104069 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1067dd91-d79f-4165-8c6e-e3309dff7d26" containerName="config-reloader" Jan 26 12:58:53 crc kubenswrapper[4881]: I0126 12:58:53.104084 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="1067dd91-d79f-4165-8c6e-e3309dff7d26" containerName="config-reloader" Jan 26 12:58:53 crc kubenswrapper[4881]: E0126 12:58:53.104095 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1067dd91-d79f-4165-8c6e-e3309dff7d26" containerName="init-config-reloader" Jan 26 12:58:53 crc kubenswrapper[4881]: I0126 12:58:53.104101 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="1067dd91-d79f-4165-8c6e-e3309dff7d26" containerName="init-config-reloader" Jan 26 12:58:53 crc kubenswrapper[4881]: E0126 12:58:53.104112 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1067dd91-d79f-4165-8c6e-e3309dff7d26" 
containerName="prometheus" Jan 26 12:58:53 crc kubenswrapper[4881]: I0126 12:58:53.104118 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="1067dd91-d79f-4165-8c6e-e3309dff7d26" containerName="prometheus" Jan 26 12:58:53 crc kubenswrapper[4881]: E0126 12:58:53.104134 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e153e0db-55d5-482a-9f39-4f49b18045f9" containerName="ovn-config" Jan 26 12:58:53 crc kubenswrapper[4881]: I0126 12:58:53.104140 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="e153e0db-55d5-482a-9f39-4f49b18045f9" containerName="ovn-config" Jan 26 12:58:53 crc kubenswrapper[4881]: I0126 12:58:53.104298 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="1067dd91-d79f-4165-8c6e-e3309dff7d26" containerName="thanos-sidecar" Jan 26 12:58:53 crc kubenswrapper[4881]: I0126 12:58:53.104314 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="12e90465-1c1a-409f-b312-85859d8b0a52" containerName="mariadb-account-create-update" Jan 26 12:58:53 crc kubenswrapper[4881]: I0126 12:58:53.104322 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="1067dd91-d79f-4165-8c6e-e3309dff7d26" containerName="config-reloader" Jan 26 12:58:53 crc kubenswrapper[4881]: I0126 12:58:53.104333 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="e153e0db-55d5-482a-9f39-4f49b18045f9" containerName="ovn-config" Jan 26 12:58:53 crc kubenswrapper[4881]: I0126 12:58:53.104341 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="1067dd91-d79f-4165-8c6e-e3309dff7d26" containerName="prometheus" Jan 26 12:58:53 crc kubenswrapper[4881]: I0126 12:58:53.105711 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 26 12:58:53 crc kubenswrapper[4881]: I0126 12:58:53.108065 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-2qlwl" Jan 26 12:58:53 crc kubenswrapper[4881]: I0126 12:58:53.108580 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Jan 26 12:58:53 crc kubenswrapper[4881]: I0126 12:58:53.108606 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Jan 26 12:58:53 crc kubenswrapper[4881]: I0126 12:58:53.108671 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Jan 26 12:58:53 crc kubenswrapper[4881]: I0126 12:58:53.109021 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Jan 26 12:58:53 crc kubenswrapper[4881]: I0126 12:58:53.109032 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Jan 26 12:58:53 crc kubenswrapper[4881]: I0126 12:58:53.109450 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Jan 26 12:58:53 crc kubenswrapper[4881]: I0126 12:58:53.109593 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc" Jan 26 12:58:53 crc kubenswrapper[4881]: I0126 12:58:53.118437 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Jan 26 12:58:53 crc kubenswrapper[4881]: I0126 12:58:53.151419 4881 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 26 12:58:53 crc kubenswrapper[4881]: I0126 12:58:53.217606 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/a0783093-5301-4381-adfe-dc3d027975f8-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"a0783093-5301-4381-adfe-dc3d027975f8\") " pod="openstack/prometheus-metric-storage-0" Jan 26 12:58:53 crc kubenswrapper[4881]: I0126 12:58:53.217666 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a0783093-5301-4381-adfe-dc3d027975f8-config\") pod \"prometheus-metric-storage-0\" (UID: \"a0783093-5301-4381-adfe-dc3d027975f8\") " pod="openstack/prometheus-metric-storage-0" Jan 26 12:58:53 crc kubenswrapper[4881]: I0126 12:58:53.217795 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/a0783093-5301-4381-adfe-dc3d027975f8-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"a0783093-5301-4381-adfe-dc3d027975f8\") " pod="openstack/prometheus-metric-storage-0" Jan 26 12:58:53 crc kubenswrapper[4881]: I0126 12:58:53.217816 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/a0783093-5301-4381-adfe-dc3d027975f8-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"a0783093-5301-4381-adfe-dc3d027975f8\") " pod="openstack/prometheus-metric-storage-0" Jan 26 12:58:53 crc kubenswrapper[4881]: I0126 12:58:53.217849 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/a0783093-5301-4381-adfe-dc3d027975f8-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"a0783093-5301-4381-adfe-dc3d027975f8\") " pod="openstack/prometheus-metric-storage-0" Jan 26 12:58:53 crc kubenswrapper[4881]: I0126 12:58:53.217876 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/a0783093-5301-4381-adfe-dc3d027975f8-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"a0783093-5301-4381-adfe-dc3d027975f8\") " pod="openstack/prometheus-metric-storage-0" Jan 26 12:58:53 crc kubenswrapper[4881]: I0126 12:58:53.218044 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/a0783093-5301-4381-adfe-dc3d027975f8-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"a0783093-5301-4381-adfe-dc3d027975f8\") " pod="openstack/prometheus-metric-storage-0" Jan 26 12:58:53 crc kubenswrapper[4881]: I0126 12:58:53.218102 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0783093-5301-4381-adfe-dc3d027975f8-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"a0783093-5301-4381-adfe-dc3d027975f8\") " 
pod="openstack/prometheus-metric-storage-0" Jan 26 12:58:53 crc kubenswrapper[4881]: I0126 12:58:53.218170 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-dce83a91-fa62-48d4-9e19-6ff80a628aa4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dce83a91-fa62-48d4-9e19-6ff80a628aa4\") pod \"prometheus-metric-storage-0\" (UID: \"a0783093-5301-4381-adfe-dc3d027975f8\") " pod="openstack/prometheus-metric-storage-0" Jan 26 12:58:53 crc kubenswrapper[4881]: I0126 12:58:53.218202 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/a0783093-5301-4381-adfe-dc3d027975f8-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"a0783093-5301-4381-adfe-dc3d027975f8\") " pod="openstack/prometheus-metric-storage-0" Jan 26 12:58:53 crc kubenswrapper[4881]: I0126 12:58:53.218263 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8j8rj\" (UniqueName: \"kubernetes.io/projected/a0783093-5301-4381-adfe-dc3d027975f8-kube-api-access-8j8rj\") pod \"prometheus-metric-storage-0\" (UID: \"a0783093-5301-4381-adfe-dc3d027975f8\") " pod="openstack/prometheus-metric-storage-0" Jan 26 12:58:53 crc kubenswrapper[4881]: I0126 12:58:53.218355 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/a0783093-5301-4381-adfe-dc3d027975f8-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"a0783093-5301-4381-adfe-dc3d027975f8\") " pod="openstack/prometheus-metric-storage-0" Jan 26 12:58:53 crc kubenswrapper[4881]: I0126 12:58:53.218389 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/a0783093-5301-4381-adfe-dc3d027975f8-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"a0783093-5301-4381-adfe-dc3d027975f8\") " pod="openstack/prometheus-metric-storage-0" Jan 26 12:58:53 crc kubenswrapper[4881]: I0126 12:58:53.320463 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/a0783093-5301-4381-adfe-dc3d027975f8-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"a0783093-5301-4381-adfe-dc3d027975f8\") " pod="openstack/prometheus-metric-storage-0" Jan 26 12:58:53 crc kubenswrapper[4881]: I0126 12:58:53.320544 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/a0783093-5301-4381-adfe-dc3d027975f8-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"a0783093-5301-4381-adfe-dc3d027975f8\") " pod="openstack/prometheus-metric-storage-0" Jan 26 12:58:53 crc kubenswrapper[4881]: I0126 12:58:53.320582 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/a0783093-5301-4381-adfe-dc3d027975f8-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"a0783093-5301-4381-adfe-dc3d027975f8\") " pod="openstack/prometheus-metric-storage-0" Jan 26 12:58:53 crc kubenswrapper[4881]: I0126 
12:58:53.320611 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/a0783093-5301-4381-adfe-dc3d027975f8-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"a0783093-5301-4381-adfe-dc3d027975f8\") " pod="openstack/prometheus-metric-storage-0" Jan 26 12:58:53 crc kubenswrapper[4881]: I0126 12:58:53.320657 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/a0783093-5301-4381-adfe-dc3d027975f8-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"a0783093-5301-4381-adfe-dc3d027975f8\") " pod="openstack/prometheus-metric-storage-0" Jan 26 12:58:53 crc kubenswrapper[4881]: I0126 12:58:53.320674 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0783093-5301-4381-adfe-dc3d027975f8-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"a0783093-5301-4381-adfe-dc3d027975f8\") " pod="openstack/prometheus-metric-storage-0" Jan 26 12:58:53 crc kubenswrapper[4881]: I0126 12:58:53.320699 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-dce83a91-fa62-48d4-9e19-6ff80a628aa4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dce83a91-fa62-48d4-9e19-6ff80a628aa4\") pod \"prometheus-metric-storage-0\" (UID: \"a0783093-5301-4381-adfe-dc3d027975f8\") " pod="openstack/prometheus-metric-storage-0" Jan 26 12:58:53 crc kubenswrapper[4881]: I0126 12:58:53.320716 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/a0783093-5301-4381-adfe-dc3d027975f8-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"a0783093-5301-4381-adfe-dc3d027975f8\") " pod="openstack/prometheus-metric-storage-0" Jan 26 12:58:53 crc kubenswrapper[4881]: I0126 12:58:53.320747 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8j8rj\" (UniqueName: \"kubernetes.io/projected/a0783093-5301-4381-adfe-dc3d027975f8-kube-api-access-8j8rj\") pod \"prometheus-metric-storage-0\" (UID: \"a0783093-5301-4381-adfe-dc3d027975f8\") " pod="openstack/prometheus-metric-storage-0" Jan 26 12:58:53 crc kubenswrapper[4881]: I0126 12:58:53.320776 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/a0783093-5301-4381-adfe-dc3d027975f8-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"a0783093-5301-4381-adfe-dc3d027975f8\") " pod="openstack/prometheus-metric-storage-0" Jan 26 12:58:53 crc kubenswrapper[4881]: I0126 12:58:53.320794 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/a0783093-5301-4381-adfe-dc3d027975f8-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"a0783093-5301-4381-adfe-dc3d027975f8\") " pod="openstack/prometheus-metric-storage-0" Jan 26 12:58:53 crc kubenswrapper[4881]: I0126 12:58:53.320833 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: 
\"kubernetes.io/configmap/a0783093-5301-4381-adfe-dc3d027975f8-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"a0783093-5301-4381-adfe-dc3d027975f8\") " pod="openstack/prometheus-metric-storage-0" Jan 26 12:58:53 crc kubenswrapper[4881]: I0126 12:58:53.320860 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a0783093-5301-4381-adfe-dc3d027975f8-config\") pod \"prometheus-metric-storage-0\" (UID: \"a0783093-5301-4381-adfe-dc3d027975f8\") " pod="openstack/prometheus-metric-storage-0" Jan 26 12:58:53 crc kubenswrapper[4881]: I0126 12:58:53.321317 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/a0783093-5301-4381-adfe-dc3d027975f8-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"a0783093-5301-4381-adfe-dc3d027975f8\") " pod="openstack/prometheus-metric-storage-0" Jan 26 12:58:53 crc kubenswrapper[4881]: I0126 12:58:53.322020 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/a0783093-5301-4381-adfe-dc3d027975f8-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"a0783093-5301-4381-adfe-dc3d027975f8\") " pod="openstack/prometheus-metric-storage-0" Jan 26 12:58:53 crc kubenswrapper[4881]: I0126 12:58:53.322150 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/a0783093-5301-4381-adfe-dc3d027975f8-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"a0783093-5301-4381-adfe-dc3d027975f8\") " pod="openstack/prometheus-metric-storage-0" Jan 26 12:58:53 crc kubenswrapper[4881]: I0126 12:58:53.325613 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0783093-5301-4381-adfe-dc3d027975f8-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"a0783093-5301-4381-adfe-dc3d027975f8\") " pod="openstack/prometheus-metric-storage-0" Jan 26 12:58:53 crc kubenswrapper[4881]: I0126 12:58:53.326166 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/a0783093-5301-4381-adfe-dc3d027975f8-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"a0783093-5301-4381-adfe-dc3d027975f8\") " pod="openstack/prometheus-metric-storage-0" Jan 26 12:58:53 crc kubenswrapper[4881]: I0126 12:58:53.326183 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/a0783093-5301-4381-adfe-dc3d027975f8-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"a0783093-5301-4381-adfe-dc3d027975f8\") " pod="openstack/prometheus-metric-storage-0" Jan 26 12:58:53 crc kubenswrapper[4881]: I0126 12:58:53.326343 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/a0783093-5301-4381-adfe-dc3d027975f8-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"a0783093-5301-4381-adfe-dc3d027975f8\") " pod="openstack/prometheus-metric-storage-0" Jan 26 12:58:53 crc kubenswrapper[4881]: I0126 12:58:53.329215 
4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/a0783093-5301-4381-adfe-dc3d027975f8-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"a0783093-5301-4381-adfe-dc3d027975f8\") " pod="openstack/prometheus-metric-storage-0" Jan 26 12:58:53 crc kubenswrapper[4881]: I0126 12:58:53.331191 4881 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 26 12:58:53 crc kubenswrapper[4881]: I0126 12:58:53.331334 4881 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-dce83a91-fa62-48d4-9e19-6ff80a628aa4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dce83a91-fa62-48d4-9e19-6ff80a628aa4\") pod \"prometheus-metric-storage-0\" (UID: \"a0783093-5301-4381-adfe-dc3d027975f8\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/68d2c87ef14797ce11fba4e65263a740afb8b7e8fd7775f7168ab753beb0af09/globalmount\"" pod="openstack/prometheus-metric-storage-0" Jan 26 12:58:53 crc kubenswrapper[4881]: I0126 12:58:53.331354 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/a0783093-5301-4381-adfe-dc3d027975f8-config\") pod \"prometheus-metric-storage-0\" (UID: \"a0783093-5301-4381-adfe-dc3d027975f8\") " pod="openstack/prometheus-metric-storage-0" Jan 26 12:58:53 crc kubenswrapper[4881]: I0126 12:58:53.333227 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/a0783093-5301-4381-adfe-dc3d027975f8-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"a0783093-5301-4381-adfe-dc3d027975f8\") " pod="openstack/prometheus-metric-storage-0" Jan 26 12:58:53 crc kubenswrapper[4881]: I0126 12:58:53.338395 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/a0783093-5301-4381-adfe-dc3d027975f8-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"a0783093-5301-4381-adfe-dc3d027975f8\") " pod="openstack/prometheus-metric-storage-0" Jan 26 12:58:53 crc kubenswrapper[4881]: I0126 12:58:53.340493 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8j8rj\" (UniqueName: \"kubernetes.io/projected/a0783093-5301-4381-adfe-dc3d027975f8-kube-api-access-8j8rj\") pod \"prometheus-metric-storage-0\" (UID: \"a0783093-5301-4381-adfe-dc3d027975f8\") " pod="openstack/prometheus-metric-storage-0" Jan 26 12:58:53 crc kubenswrapper[4881]: I0126 12:58:53.364772 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-dce83a91-fa62-48d4-9e19-6ff80a628aa4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dce83a91-fa62-48d4-9e19-6ff80a628aa4\") pod \"prometheus-metric-storage-0\" (UID: \"a0783093-5301-4381-adfe-dc3d027975f8\") " pod="openstack/prometheus-metric-storage-0" Jan 26 12:58:53 crc kubenswrapper[4881]: I0126 12:58:53.423020 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 26 12:58:53 crc kubenswrapper[4881]: I0126 12:58:53.776787 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e234f178-5499-441d-923d-26a5a7cbfe04","Type":"ContainerStarted","Data":"b625f21182b4e64acb990f59dd95a121006c1c8c70422afda247a45f4706fcdc"} Jan 26 12:58:53 crc kubenswrapper[4881]: I0126 12:58:53.827709 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=36.950568681 podStartE2EDuration="41.827683022s" podCreationTimestamp="2026-01-26 12:58:12 +0000 UTC" firstStartedPulling="2026-01-26 12:58:46.524428424 +0000 UTC m=+1399.003738450" lastFinishedPulling="2026-01-26 12:58:51.401542775 +0000 UTC m=+1403.880852791" observedRunningTime="2026-01-26 12:58:53.814958322 +0000 UTC m=+1406.294268388" watchObservedRunningTime="2026-01-26 12:58:53.827683022 +0000 UTC m=+1406.306993088" Jan 26 12:58:53 crc kubenswrapper[4881]: I0126 12:58:53.889253 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 26 12:58:53 crc kubenswrapper[4881]: W0126 12:58:53.889273 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda0783093_5301_4381_adfe_dc3d027975f8.slice/crio-01217105e6b821489d53710013d1ec00a6322097c722af2e992c2855af9435dc WatchSource:0}: Error finding container 01217105e6b821489d53710013d1ec00a6322097c722af2e992c2855af9435dc: Status 404 returned error can't find the container with id 01217105e6b821489d53710013d1ec00a6322097c722af2e992c2855af9435dc Jan 26 12:58:54 crc kubenswrapper[4881]: I0126 12:58:54.103145 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1067dd91-d79f-4165-8c6e-e3309dff7d26" path="/var/lib/kubelet/pods/1067dd91-d79f-4165-8c6e-e3309dff7d26/volumes" Jan 26 12:58:54 crc kubenswrapper[4881]: I0126 12:58:54.366684 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-757cc9679f-smrcs"] Jan 26 12:58:54 crc kubenswrapper[4881]: I0126 12:58:54.368162 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-757cc9679f-smrcs" Jan 26 12:58:54 crc kubenswrapper[4881]: I0126 12:58:54.369624 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Jan 26 12:58:54 crc kubenswrapper[4881]: I0126 12:58:54.377776 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-757cc9679f-smrcs"] Jan 26 12:58:54 crc kubenswrapper[4881]: I0126 12:58:54.438778 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c3adacf9-6f24-44e9-a3f7-03082a2159fd-dns-svc\") pod \"dnsmasq-dns-757cc9679f-smrcs\" (UID: \"c3adacf9-6f24-44e9-a3f7-03082a2159fd\") " pod="openstack/dnsmasq-dns-757cc9679f-smrcs" Jan 26 12:58:54 crc kubenswrapper[4881]: I0126 12:58:54.439048 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xppx9\" (UniqueName: \"kubernetes.io/projected/c3adacf9-6f24-44e9-a3f7-03082a2159fd-kube-api-access-xppx9\") pod \"dnsmasq-dns-757cc9679f-smrcs\" (UID: \"c3adacf9-6f24-44e9-a3f7-03082a2159fd\") " pod="openstack/dnsmasq-dns-757cc9679f-smrcs" Jan 26 12:58:54 crc kubenswrapper[4881]: I0126 12:58:54.439262 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c3adacf9-6f24-44e9-a3f7-03082a2159fd-dns-swift-storage-0\") pod \"dnsmasq-dns-757cc9679f-smrcs\" (UID: \"c3adacf9-6f24-44e9-a3f7-03082a2159fd\") " pod="openstack/dnsmasq-dns-757cc9679f-smrcs" Jan 26 12:58:54 crc kubenswrapper[4881]: I0126 12:58:54.439352 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c3adacf9-6f24-44e9-a3f7-03082a2159fd-ovsdbserver-nb\") pod \"dnsmasq-dns-757cc9679f-smrcs\" (UID: \"c3adacf9-6f24-44e9-a3f7-03082a2159fd\") " pod="openstack/dnsmasq-dns-757cc9679f-smrcs" Jan 26 12:58:54 crc kubenswrapper[4881]: I0126 12:58:54.439388 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c3adacf9-6f24-44e9-a3f7-03082a2159fd-ovsdbserver-sb\") pod \"dnsmasq-dns-757cc9679f-smrcs\" (UID: \"c3adacf9-6f24-44e9-a3f7-03082a2159fd\") " pod="openstack/dnsmasq-dns-757cc9679f-smrcs" Jan 26 12:58:54 crc kubenswrapper[4881]: I0126 12:58:54.439480 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3adacf9-6f24-44e9-a3f7-03082a2159fd-config\") pod \"dnsmasq-dns-757cc9679f-smrcs\" (UID: \"c3adacf9-6f24-44e9-a3f7-03082a2159fd\") " pod="openstack/dnsmasq-dns-757cc9679f-smrcs" Jan 26 12:58:54 crc kubenswrapper[4881]: I0126 12:58:54.540873 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c3adacf9-6f24-44e9-a3f7-03082a2159fd-dns-svc\") pod \"dnsmasq-dns-757cc9679f-smrcs\" (UID: \"c3adacf9-6f24-44e9-a3f7-03082a2159fd\") " pod="openstack/dnsmasq-dns-757cc9679f-smrcs" Jan 26 12:58:54 crc kubenswrapper[4881]: I0126 12:58:54.541185 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xppx9\" (UniqueName: \"kubernetes.io/projected/c3adacf9-6f24-44e9-a3f7-03082a2159fd-kube-api-access-xppx9\") pod \"dnsmasq-dns-757cc9679f-smrcs\" (UID: 
\"c3adacf9-6f24-44e9-a3f7-03082a2159fd\") " pod="openstack/dnsmasq-dns-757cc9679f-smrcs" Jan 26 12:58:54 crc kubenswrapper[4881]: I0126 12:58:54.541339 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c3adacf9-6f24-44e9-a3f7-03082a2159fd-dns-swift-storage-0\") pod \"dnsmasq-dns-757cc9679f-smrcs\" (UID: \"c3adacf9-6f24-44e9-a3f7-03082a2159fd\") " pod="openstack/dnsmasq-dns-757cc9679f-smrcs" Jan 26 12:58:54 crc kubenswrapper[4881]: I0126 12:58:54.542027 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c3adacf9-6f24-44e9-a3f7-03082a2159fd-ovsdbserver-nb\") pod \"dnsmasq-dns-757cc9679f-smrcs\" (UID: \"c3adacf9-6f24-44e9-a3f7-03082a2159fd\") " pod="openstack/dnsmasq-dns-757cc9679f-smrcs" Jan 26 12:58:54 crc kubenswrapper[4881]: I0126 12:58:54.542654 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c3adacf9-6f24-44e9-a3f7-03082a2159fd-ovsdbserver-sb\") pod \"dnsmasq-dns-757cc9679f-smrcs\" (UID: \"c3adacf9-6f24-44e9-a3f7-03082a2159fd\") " pod="openstack/dnsmasq-dns-757cc9679f-smrcs" Jan 26 12:58:54 crc kubenswrapper[4881]: I0126 12:58:54.541640 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c3adacf9-6f24-44e9-a3f7-03082a2159fd-dns-svc\") pod \"dnsmasq-dns-757cc9679f-smrcs\" (UID: \"c3adacf9-6f24-44e9-a3f7-03082a2159fd\") " pod="openstack/dnsmasq-dns-757cc9679f-smrcs" Jan 26 12:58:54 crc kubenswrapper[4881]: I0126 12:58:54.542606 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c3adacf9-6f24-44e9-a3f7-03082a2159fd-ovsdbserver-nb\") pod \"dnsmasq-dns-757cc9679f-smrcs\" (UID: \"c3adacf9-6f24-44e9-a3f7-03082a2159fd\") " pod="openstack/dnsmasq-dns-757cc9679f-smrcs" Jan 26 12:58:54 crc kubenswrapper[4881]: I0126 12:58:54.541969 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c3adacf9-6f24-44e9-a3f7-03082a2159fd-dns-swift-storage-0\") pod \"dnsmasq-dns-757cc9679f-smrcs\" (UID: \"c3adacf9-6f24-44e9-a3f7-03082a2159fd\") " pod="openstack/dnsmasq-dns-757cc9679f-smrcs" Jan 26 12:58:54 crc kubenswrapper[4881]: I0126 12:58:54.543305 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c3adacf9-6f24-44e9-a3f7-03082a2159fd-ovsdbserver-sb\") pod \"dnsmasq-dns-757cc9679f-smrcs\" (UID: \"c3adacf9-6f24-44e9-a3f7-03082a2159fd\") " pod="openstack/dnsmasq-dns-757cc9679f-smrcs" Jan 26 12:58:54 crc kubenswrapper[4881]: I0126 12:58:54.543720 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3adacf9-6f24-44e9-a3f7-03082a2159fd-config\") pod \"dnsmasq-dns-757cc9679f-smrcs\" (UID: \"c3adacf9-6f24-44e9-a3f7-03082a2159fd\") " pod="openstack/dnsmasq-dns-757cc9679f-smrcs" Jan 26 12:58:54 crc kubenswrapper[4881]: I0126 12:58:54.544296 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3adacf9-6f24-44e9-a3f7-03082a2159fd-config\") pod \"dnsmasq-dns-757cc9679f-smrcs\" (UID: \"c3adacf9-6f24-44e9-a3f7-03082a2159fd\") " pod="openstack/dnsmasq-dns-757cc9679f-smrcs" Jan 26 12:58:54 crc 
kubenswrapper[4881]: I0126 12:58:54.566242 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xppx9\" (UniqueName: \"kubernetes.io/projected/c3adacf9-6f24-44e9-a3f7-03082a2159fd-kube-api-access-xppx9\") pod \"dnsmasq-dns-757cc9679f-smrcs\" (UID: \"c3adacf9-6f24-44e9-a3f7-03082a2159fd\") " pod="openstack/dnsmasq-dns-757cc9679f-smrcs" Jan 26 12:58:54 crc kubenswrapper[4881]: I0126 12:58:54.784205 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"a0783093-5301-4381-adfe-dc3d027975f8","Type":"ContainerStarted","Data":"01217105e6b821489d53710013d1ec00a6322097c722af2e992c2855af9435dc"} Jan 26 12:58:54 crc kubenswrapper[4881]: I0126 12:58:54.815329 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757cc9679f-smrcs" Jan 26 12:58:55 crc kubenswrapper[4881]: I0126 12:58:55.253508 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-757cc9679f-smrcs"] Jan 26 12:58:55 crc kubenswrapper[4881]: W0126 12:58:55.257323 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc3adacf9_6f24_44e9_a3f7_03082a2159fd.slice/crio-178a1aa52049d490578b7f7ba98ec625d5199960ab9b40e900a7b644672c70a5 WatchSource:0}: Error finding container 178a1aa52049d490578b7f7ba98ec625d5199960ab9b40e900a7b644672c70a5: Status 404 returned error can't find the container with id 178a1aa52049d490578b7f7ba98ec625d5199960ab9b40e900a7b644672c70a5 Jan 26 12:58:55 crc kubenswrapper[4881]: I0126 12:58:55.796236 4881 generic.go:334] "Generic (PLEG): container finished" podID="c3adacf9-6f24-44e9-a3f7-03082a2159fd" containerID="cb6ff48452ddc510a6d0103310f496fdab02d2226d184214408ef16096f16a93" exitCode=0 Jan 26 12:58:55 crc kubenswrapper[4881]: I0126 12:58:55.796299 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757cc9679f-smrcs" event={"ID":"c3adacf9-6f24-44e9-a3f7-03082a2159fd","Type":"ContainerDied","Data":"cb6ff48452ddc510a6d0103310f496fdab02d2226d184214408ef16096f16a93"} Jan 26 12:58:55 crc kubenswrapper[4881]: I0126 12:58:55.796568 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757cc9679f-smrcs" event={"ID":"c3adacf9-6f24-44e9-a3f7-03082a2159fd","Type":"ContainerStarted","Data":"178a1aa52049d490578b7f7ba98ec625d5199960ab9b40e900a7b644672c70a5"} Jan 26 12:58:56 crc kubenswrapper[4881]: I0126 12:58:56.805945 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"a0783093-5301-4381-adfe-dc3d027975f8","Type":"ContainerStarted","Data":"0190dba6e2a4d9a1f0d0775cb8abefab1bbdad04d46cd971afc6c8bbeedce461"} Jan 26 12:58:56 crc kubenswrapper[4881]: I0126 12:58:56.808259 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757cc9679f-smrcs" event={"ID":"c3adacf9-6f24-44e9-a3f7-03082a2159fd","Type":"ContainerStarted","Data":"e2c6cb6b4e269432b3aa2a4e73ac2efd12af9da145fe3ce4731ea908064179fc"} Jan 26 12:58:56 crc kubenswrapper[4881]: I0126 12:58:56.808415 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-757cc9679f-smrcs" Jan 26 12:58:56 crc kubenswrapper[4881]: I0126 12:58:56.866787 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-757cc9679f-smrcs" podStartSLOduration=2.866772268 podStartE2EDuration="2.866772268s" podCreationTimestamp="2026-01-26 
12:58:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:58:56.863530659 +0000 UTC m=+1409.342840715" watchObservedRunningTime="2026-01-26 12:58:56.866772268 +0000 UTC m=+1409.346082294" Jan 26 12:58:56 crc kubenswrapper[4881]: I0126 12:58:56.935766 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 26 12:58:57 crc kubenswrapper[4881]: I0126 12:58:57.228354 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="fb687a6e-7e1f-4697-8ab1-88ad03dd2951" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.104:5671: connect: connection refused" Jan 26 12:58:57 crc kubenswrapper[4881]: I0126 12:58:57.284816 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-kzz6j"] Jan 26 12:58:57 crc kubenswrapper[4881]: I0126 12:58:57.285811 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-kzz6j" Jan 26 12:58:57 crc kubenswrapper[4881]: I0126 12:58:57.307286 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-kzz6j"] Jan 26 12:58:57 crc kubenswrapper[4881]: I0126 12:58:57.385547 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-87bfw"] Jan 26 12:58:57 crc kubenswrapper[4881]: I0126 12:58:57.386500 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-87bfw" Jan 26 12:58:57 crc kubenswrapper[4881]: I0126 12:58:57.390588 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4jps\" (UniqueName: \"kubernetes.io/projected/79b83426-8fd6-49cd-8788-b4f7c0bb2216-kube-api-access-r4jps\") pod \"cinder-db-create-kzz6j\" (UID: \"79b83426-8fd6-49cd-8788-b4f7c0bb2216\") " pod="openstack/cinder-db-create-kzz6j" Jan 26 12:58:57 crc kubenswrapper[4881]: I0126 12:58:57.390672 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/79b83426-8fd6-49cd-8788-b4f7c0bb2216-operator-scripts\") pod \"cinder-db-create-kzz6j\" (UID: \"79b83426-8fd6-49cd-8788-b4f7c0bb2216\") " pod="openstack/cinder-db-create-kzz6j" Jan 26 12:58:57 crc kubenswrapper[4881]: I0126 12:58:57.404371 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-87bfw"] Jan 26 12:58:57 crc kubenswrapper[4881]: I0126 12:58:57.410746 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-24db-account-create-update-2mfcv"] Jan 26 12:58:57 crc kubenswrapper[4881]: I0126 12:58:57.411765 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-24db-account-create-update-2mfcv" Jan 26 12:58:57 crc kubenswrapper[4881]: I0126 12:58:57.415094 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 26 12:58:57 crc kubenswrapper[4881]: I0126 12:58:57.440097 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-24db-account-create-update-2mfcv"] Jan 26 12:58:57 crc kubenswrapper[4881]: I0126 12:58:57.492223 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/79b83426-8fd6-49cd-8788-b4f7c0bb2216-operator-scripts\") pod \"cinder-db-create-kzz6j\" (UID: \"79b83426-8fd6-49cd-8788-b4f7c0bb2216\") " pod="openstack/cinder-db-create-kzz6j" Jan 26 12:58:57 crc kubenswrapper[4881]: I0126 12:58:57.492295 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7jxr\" (UniqueName: \"kubernetes.io/projected/8a8b530f-4ae9-45a2-9a70-bba160dec46c-kube-api-access-w7jxr\") pod \"cinder-24db-account-create-update-2mfcv\" (UID: \"8a8b530f-4ae9-45a2-9a70-bba160dec46c\") " pod="openstack/cinder-24db-account-create-update-2mfcv" Jan 26 12:58:57 crc kubenswrapper[4881]: I0126 12:58:57.492336 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxbq2\" (UniqueName: \"kubernetes.io/projected/413e9cd5-53f0-4aa0-b58f-2dd1c3d4a329-kube-api-access-qxbq2\") pod \"barbican-db-create-87bfw\" (UID: \"413e9cd5-53f0-4aa0-b58f-2dd1c3d4a329\") " pod="openstack/barbican-db-create-87bfw" Jan 26 12:58:57 crc kubenswrapper[4881]: I0126 12:58:57.492440 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r4jps\" (UniqueName: \"kubernetes.io/projected/79b83426-8fd6-49cd-8788-b4f7c0bb2216-kube-api-access-r4jps\") pod \"cinder-db-create-kzz6j\" (UID: \"79b83426-8fd6-49cd-8788-b4f7c0bb2216\") " pod="openstack/cinder-db-create-kzz6j" Jan 26 12:58:57 crc kubenswrapper[4881]: I0126 12:58:57.492483 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8a8b530f-4ae9-45a2-9a70-bba160dec46c-operator-scripts\") pod \"cinder-24db-account-create-update-2mfcv\" (UID: \"8a8b530f-4ae9-45a2-9a70-bba160dec46c\") " pod="openstack/cinder-24db-account-create-update-2mfcv" Jan 26 12:58:57 crc kubenswrapper[4881]: I0126 12:58:57.492505 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/413e9cd5-53f0-4aa0-b58f-2dd1c3d4a329-operator-scripts\") pod \"barbican-db-create-87bfw\" (UID: \"413e9cd5-53f0-4aa0-b58f-2dd1c3d4a329\") " pod="openstack/barbican-db-create-87bfw" Jan 26 12:58:57 crc kubenswrapper[4881]: I0126 12:58:57.492961 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/79b83426-8fd6-49cd-8788-b4f7c0bb2216-operator-scripts\") pod \"cinder-db-create-kzz6j\" (UID: \"79b83426-8fd6-49cd-8788-b4f7c0bb2216\") " pod="openstack/cinder-db-create-kzz6j" Jan 26 12:58:57 crc kubenswrapper[4881]: I0126 12:58:57.493637 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-96f0-account-create-update-lqrtl"] Jan 26 12:58:57 crc kubenswrapper[4881]: I0126 12:58:57.494662 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-96f0-account-create-update-lqrtl" Jan 26 12:58:57 crc kubenswrapper[4881]: I0126 12:58:57.498358 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Jan 26 12:58:57 crc kubenswrapper[4881]: I0126 12:58:57.501264 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-96f0-account-create-update-lqrtl"] Jan 26 12:58:57 crc kubenswrapper[4881]: I0126 12:58:57.521914 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-notifications-server-0" podUID="ab9a358b-8713-4790-a9c4-97b89efcc88f" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.105:5671: connect: connection refused" Jan 26 12:58:57 crc kubenswrapper[4881]: I0126 12:58:57.549692 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r4jps\" (UniqueName: \"kubernetes.io/projected/79b83426-8fd6-49cd-8788-b4f7c0bb2216-kube-api-access-r4jps\") pod \"cinder-db-create-kzz6j\" (UID: \"79b83426-8fd6-49cd-8788-b4f7c0bb2216\") " pod="openstack/cinder-db-create-kzz6j" Jan 26 12:58:57 crc kubenswrapper[4881]: I0126 12:58:57.594428 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8a8b530f-4ae9-45a2-9a70-bba160dec46c-operator-scripts\") pod \"cinder-24db-account-create-update-2mfcv\" (UID: \"8a8b530f-4ae9-45a2-9a70-bba160dec46c\") " pod="openstack/cinder-24db-account-create-update-2mfcv" Jan 26 12:58:57 crc kubenswrapper[4881]: I0126 12:58:57.594492 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/413e9cd5-53f0-4aa0-b58f-2dd1c3d4a329-operator-scripts\") pod \"barbican-db-create-87bfw\" (UID: \"413e9cd5-53f0-4aa0-b58f-2dd1c3d4a329\") " pod="openstack/barbican-db-create-87bfw" Jan 26 12:58:57 crc kubenswrapper[4881]: I0126 12:58:57.594554 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w7jxr\" (UniqueName: \"kubernetes.io/projected/8a8b530f-4ae9-45a2-9a70-bba160dec46c-kube-api-access-w7jxr\") pod \"cinder-24db-account-create-update-2mfcv\" (UID: \"8a8b530f-4ae9-45a2-9a70-bba160dec46c\") " pod="openstack/cinder-24db-account-create-update-2mfcv" Jan 26 12:58:57 crc kubenswrapper[4881]: I0126 12:58:57.594583 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/27ac0661-2a31-41bb-9dad-adee7c8dddf5-operator-scripts\") pod \"barbican-96f0-account-create-update-lqrtl\" (UID: \"27ac0661-2a31-41bb-9dad-adee7c8dddf5\") " pod="openstack/barbican-96f0-account-create-update-lqrtl" Jan 26 12:58:57 crc kubenswrapper[4881]: I0126 12:58:57.594600 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5frf8\" (UniqueName: \"kubernetes.io/projected/27ac0661-2a31-41bb-9dad-adee7c8dddf5-kube-api-access-5frf8\") pod \"barbican-96f0-account-create-update-lqrtl\" (UID: \"27ac0661-2a31-41bb-9dad-adee7c8dddf5\") " pod="openstack/barbican-96f0-account-create-update-lqrtl" Jan 26 12:58:57 crc kubenswrapper[4881]: I0126 12:58:57.594624 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxbq2\" (UniqueName: \"kubernetes.io/projected/413e9cd5-53f0-4aa0-b58f-2dd1c3d4a329-kube-api-access-qxbq2\") pod \"barbican-db-create-87bfw\" (UID: 
\"413e9cd5-53f0-4aa0-b58f-2dd1c3d4a329\") " pod="openstack/barbican-db-create-87bfw" Jan 26 12:58:57 crc kubenswrapper[4881]: I0126 12:58:57.595288 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8a8b530f-4ae9-45a2-9a70-bba160dec46c-operator-scripts\") pod \"cinder-24db-account-create-update-2mfcv\" (UID: \"8a8b530f-4ae9-45a2-9a70-bba160dec46c\") " pod="openstack/cinder-24db-account-create-update-2mfcv" Jan 26 12:58:57 crc kubenswrapper[4881]: I0126 12:58:57.595380 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/413e9cd5-53f0-4aa0-b58f-2dd1c3d4a329-operator-scripts\") pod \"barbican-db-create-87bfw\" (UID: \"413e9cd5-53f0-4aa0-b58f-2dd1c3d4a329\") " pod="openstack/barbican-db-create-87bfw" Jan 26 12:58:57 crc kubenswrapper[4881]: I0126 12:58:57.602983 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-kzz6j" Jan 26 12:58:57 crc kubenswrapper[4881]: I0126 12:58:57.619134 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxbq2\" (UniqueName: \"kubernetes.io/projected/413e9cd5-53f0-4aa0-b58f-2dd1c3d4a329-kube-api-access-qxbq2\") pod \"barbican-db-create-87bfw\" (UID: \"413e9cd5-53f0-4aa0-b58f-2dd1c3d4a329\") " pod="openstack/barbican-db-create-87bfw" Jan 26 12:58:57 crc kubenswrapper[4881]: I0126 12:58:57.637080 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w7jxr\" (UniqueName: \"kubernetes.io/projected/8a8b530f-4ae9-45a2-9a70-bba160dec46c-kube-api-access-w7jxr\") pod \"cinder-24db-account-create-update-2mfcv\" (UID: \"8a8b530f-4ae9-45a2-9a70-bba160dec46c\") " pod="openstack/cinder-24db-account-create-update-2mfcv" Jan 26 12:58:57 crc kubenswrapper[4881]: I0126 12:58:57.697502 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/27ac0661-2a31-41bb-9dad-adee7c8dddf5-operator-scripts\") pod \"barbican-96f0-account-create-update-lqrtl\" (UID: \"27ac0661-2a31-41bb-9dad-adee7c8dddf5\") " pod="openstack/barbican-96f0-account-create-update-lqrtl" Jan 26 12:58:57 crc kubenswrapper[4881]: I0126 12:58:57.697626 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5frf8\" (UniqueName: \"kubernetes.io/projected/27ac0661-2a31-41bb-9dad-adee7c8dddf5-kube-api-access-5frf8\") pod \"barbican-96f0-account-create-update-lqrtl\" (UID: \"27ac0661-2a31-41bb-9dad-adee7c8dddf5\") " pod="openstack/barbican-96f0-account-create-update-lqrtl" Jan 26 12:58:57 crc kubenswrapper[4881]: I0126 12:58:57.698367 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/27ac0661-2a31-41bb-9dad-adee7c8dddf5-operator-scripts\") pod \"barbican-96f0-account-create-update-lqrtl\" (UID: \"27ac0661-2a31-41bb-9dad-adee7c8dddf5\") " pod="openstack/barbican-96f0-account-create-update-lqrtl" Jan 26 12:58:57 crc kubenswrapper[4881]: I0126 12:58:57.699488 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-87bfw" Jan 26 12:58:57 crc kubenswrapper[4881]: I0126 12:58:57.725129 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5frf8\" (UniqueName: \"kubernetes.io/projected/27ac0661-2a31-41bb-9dad-adee7c8dddf5-kube-api-access-5frf8\") pod \"barbican-96f0-account-create-update-lqrtl\" (UID: \"27ac0661-2a31-41bb-9dad-adee7c8dddf5\") " pod="openstack/barbican-96f0-account-create-update-lqrtl" Jan 26 12:58:57 crc kubenswrapper[4881]: I0126 12:58:57.731695 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-24db-account-create-update-2mfcv" Jan 26 12:58:57 crc kubenswrapper[4881]: I0126 12:58:57.766061 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-x5w44"] Jan 26 12:58:57 crc kubenswrapper[4881]: I0126 12:58:57.767286 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-x5w44" Jan 26 12:58:57 crc kubenswrapper[4881]: I0126 12:58:57.773462 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 26 12:58:57 crc kubenswrapper[4881]: I0126 12:58:57.774229 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 26 12:58:57 crc kubenswrapper[4881]: I0126 12:58:57.774436 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-p4c6x" Jan 26 12:58:57 crc kubenswrapper[4881]: I0126 12:58:57.774637 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 26 12:58:57 crc kubenswrapper[4881]: I0126 12:58:57.787737 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-x5w44"] Jan 26 12:58:57 crc kubenswrapper[4881]: I0126 12:58:57.804844 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/817c084d-f62f-49b2-8482-e37c799af743-config-data\") pod \"keystone-db-sync-x5w44\" (UID: \"817c084d-f62f-49b2-8482-e37c799af743\") " pod="openstack/keystone-db-sync-x5w44" Jan 26 12:58:57 crc kubenswrapper[4881]: I0126 12:58:57.804978 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/817c084d-f62f-49b2-8482-e37c799af743-combined-ca-bundle\") pod \"keystone-db-sync-x5w44\" (UID: \"817c084d-f62f-49b2-8482-e37c799af743\") " pod="openstack/keystone-db-sync-x5w44" Jan 26 12:58:57 crc kubenswrapper[4881]: I0126 12:58:57.805023 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sf4z9\" (UniqueName: \"kubernetes.io/projected/817c084d-f62f-49b2-8482-e37c799af743-kube-api-access-sf4z9\") pod \"keystone-db-sync-x5w44\" (UID: \"817c084d-f62f-49b2-8482-e37c799af743\") " pod="openstack/keystone-db-sync-x5w44" Jan 26 12:58:57 crc kubenswrapper[4881]: I0126 12:58:57.810549 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-96f0-account-create-update-lqrtl" Jan 26 12:58:57 crc kubenswrapper[4881]: I0126 12:58:57.906655 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/817c084d-f62f-49b2-8482-e37c799af743-combined-ca-bundle\") pod \"keystone-db-sync-x5w44\" (UID: \"817c084d-f62f-49b2-8482-e37c799af743\") " pod="openstack/keystone-db-sync-x5w44" Jan 26 12:58:57 crc kubenswrapper[4881]: I0126 12:58:57.906732 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sf4z9\" (UniqueName: \"kubernetes.io/projected/817c084d-f62f-49b2-8482-e37c799af743-kube-api-access-sf4z9\") pod \"keystone-db-sync-x5w44\" (UID: \"817c084d-f62f-49b2-8482-e37c799af743\") " pod="openstack/keystone-db-sync-x5w44" Jan 26 12:58:57 crc kubenswrapper[4881]: I0126 12:58:57.906800 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/817c084d-f62f-49b2-8482-e37c799af743-config-data\") pod \"keystone-db-sync-x5w44\" (UID: \"817c084d-f62f-49b2-8482-e37c799af743\") " pod="openstack/keystone-db-sync-x5w44" Jan 26 12:58:57 crc kubenswrapper[4881]: I0126 12:58:57.917157 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/817c084d-f62f-49b2-8482-e37c799af743-combined-ca-bundle\") pod \"keystone-db-sync-x5w44\" (UID: \"817c084d-f62f-49b2-8482-e37c799af743\") " pod="openstack/keystone-db-sync-x5w44" Jan 26 12:58:57 crc kubenswrapper[4881]: I0126 12:58:57.918048 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/817c084d-f62f-49b2-8482-e37c799af743-config-data\") pod \"keystone-db-sync-x5w44\" (UID: \"817c084d-f62f-49b2-8482-e37c799af743\") " pod="openstack/keystone-db-sync-x5w44" Jan 26 12:58:57 crc kubenswrapper[4881]: I0126 12:58:57.931080 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sf4z9\" (UniqueName: \"kubernetes.io/projected/817c084d-f62f-49b2-8482-e37c799af743-kube-api-access-sf4z9\") pod \"keystone-db-sync-x5w44\" (UID: \"817c084d-f62f-49b2-8482-e37c799af743\") " pod="openstack/keystone-db-sync-x5w44" Jan 26 12:58:58 crc kubenswrapper[4881]: I0126 12:58:58.123638 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-x5w44" Jan 26 12:58:58 crc kubenswrapper[4881]: I0126 12:58:58.188379 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-kzz6j"] Jan 26 12:58:58 crc kubenswrapper[4881]: I0126 12:58:58.335161 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-87bfw"] Jan 26 12:58:58 crc kubenswrapper[4881]: I0126 12:58:58.366222 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-96f0-account-create-update-lqrtl"] Jan 26 12:58:58 crc kubenswrapper[4881]: I0126 12:58:58.397397 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-24db-account-create-update-2mfcv"] Jan 26 12:58:58 crc kubenswrapper[4881]: I0126 12:58:58.417945 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-x5w44"] Jan 26 12:58:58 crc kubenswrapper[4881]: I0126 12:58:58.861744 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-24db-account-create-update-2mfcv" event={"ID":"8a8b530f-4ae9-45a2-9a70-bba160dec46c","Type":"ContainerStarted","Data":"fe28d77e3a215a8b83779ab2b04599b91d3b63debd71596cf507483db235d1d1"} Jan 26 12:58:58 crc kubenswrapper[4881]: I0126 12:58:58.861965 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-24db-account-create-update-2mfcv" event={"ID":"8a8b530f-4ae9-45a2-9a70-bba160dec46c","Type":"ContainerStarted","Data":"05e7b0a7dd0500fa6e198d5b2373d85c4a2d08635b65066fc94a88be9211bbde"} Jan 26 12:58:58 crc kubenswrapper[4881]: I0126 12:58:58.911263 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-kzz6j" event={"ID":"79b83426-8fd6-49cd-8788-b4f7c0bb2216","Type":"ContainerStarted","Data":"43de24a7d1b0034ce8c703e1f91de72dda23259326e9e287d3de168b7104d5c3"} Jan 26 12:58:58 crc kubenswrapper[4881]: I0126 12:58:58.911313 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-kzz6j" event={"ID":"79b83426-8fd6-49cd-8788-b4f7c0bb2216","Type":"ContainerStarted","Data":"720d5c370e6081f2352af18879a942c2112e693776e18a1cd1379f96e4db05ea"} Jan 26 12:58:58 crc kubenswrapper[4881]: I0126 12:58:58.916561 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-87bfw" event={"ID":"413e9cd5-53f0-4aa0-b58f-2dd1c3d4a329","Type":"ContainerStarted","Data":"f70b08bc1805fc4ccd041611f50ce229a8150aded2efe88707bab5a05b6aee68"} Jan 26 12:58:58 crc kubenswrapper[4881]: I0126 12:58:58.916623 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-87bfw" event={"ID":"413e9cd5-53f0-4aa0-b58f-2dd1c3d4a329","Type":"ContainerStarted","Data":"8085518c41c61e63e7553abe6ca0266d62934d2a7ae59a65ee203bcf99ad666b"} Jan 26 12:58:58 crc kubenswrapper[4881]: I0126 12:58:58.920366 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-96f0-account-create-update-lqrtl" event={"ID":"27ac0661-2a31-41bb-9dad-adee7c8dddf5","Type":"ContainerStarted","Data":"f7814b853c18f408a846236c0f55207ac7fe2c9e1d2dcf4cfb564c03ac5621dc"} Jan 26 12:58:58 crc kubenswrapper[4881]: I0126 12:58:58.920439 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-96f0-account-create-update-lqrtl" event={"ID":"27ac0661-2a31-41bb-9dad-adee7c8dddf5","Type":"ContainerStarted","Data":"efe6a71190c8c7dbf25fdaa52d2a1886688533fe9aa3fb0c57af23985d0a05fd"} Jan 26 12:58:58 crc kubenswrapper[4881]: I0126 12:58:58.922592 4881 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/keystone-db-sync-x5w44" event={"ID":"817c084d-f62f-49b2-8482-e37c799af743","Type":"ContainerStarted","Data":"6477f309805a1f1b0b12d5ab575f9f79bd5fdc58bae3d6788dbcb9919efb8611"} Jan 26 12:58:58 crc kubenswrapper[4881]: I0126 12:58:58.941993 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-kzz6j" podStartSLOduration=1.941975207 podStartE2EDuration="1.941975207s" podCreationTimestamp="2026-01-26 12:58:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:58:58.936985266 +0000 UTC m=+1411.416295292" watchObservedRunningTime="2026-01-26 12:58:58.941975207 +0000 UTC m=+1411.421285233" Jan 26 12:58:58 crc kubenswrapper[4881]: I0126 12:58:58.943250 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-24db-account-create-update-2mfcv" podStartSLOduration=1.943243528 podStartE2EDuration="1.943243528s" podCreationTimestamp="2026-01-26 12:58:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:58:58.893084075 +0000 UTC m=+1411.372394101" watchObservedRunningTime="2026-01-26 12:58:58.943243528 +0000 UTC m=+1411.422553554" Jan 26 12:58:58 crc kubenswrapper[4881]: I0126 12:58:58.967300 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-96f0-account-create-update-lqrtl" podStartSLOduration=1.9672833939999999 podStartE2EDuration="1.967283394s" podCreationTimestamp="2026-01-26 12:58:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:58:58.952698999 +0000 UTC m=+1411.432009025" watchObservedRunningTime="2026-01-26 12:58:58.967283394 +0000 UTC m=+1411.446593410" Jan 26 12:58:58 crc kubenswrapper[4881]: I0126 12:58:58.988801 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-87bfw" podStartSLOduration=1.988779649 podStartE2EDuration="1.988779649s" podCreationTimestamp="2026-01-26 12:58:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:58:58.97119954 +0000 UTC m=+1411.450509576" watchObservedRunningTime="2026-01-26 12:58:58.988779649 +0000 UTC m=+1411.468089675" Jan 26 12:58:59 crc kubenswrapper[4881]: I0126 12:58:59.934541 4881 generic.go:334] "Generic (PLEG): container finished" podID="413e9cd5-53f0-4aa0-b58f-2dd1c3d4a329" containerID="f70b08bc1805fc4ccd041611f50ce229a8150aded2efe88707bab5a05b6aee68" exitCode=0 Jan 26 12:58:59 crc kubenswrapper[4881]: I0126 12:58:59.934731 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-87bfw" event={"ID":"413e9cd5-53f0-4aa0-b58f-2dd1c3d4a329","Type":"ContainerDied","Data":"f70b08bc1805fc4ccd041611f50ce229a8150aded2efe88707bab5a05b6aee68"} Jan 26 12:58:59 crc kubenswrapper[4881]: I0126 12:58:59.938805 4881 generic.go:334] "Generic (PLEG): container finished" podID="27ac0661-2a31-41bb-9dad-adee7c8dddf5" containerID="f7814b853c18f408a846236c0f55207ac7fe2c9e1d2dcf4cfb564c03ac5621dc" exitCode=0 Jan 26 12:58:59 crc kubenswrapper[4881]: I0126 12:58:59.938878 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-96f0-account-create-update-lqrtl" 
event={"ID":"27ac0661-2a31-41bb-9dad-adee7c8dddf5","Type":"ContainerDied","Data":"f7814b853c18f408a846236c0f55207ac7fe2c9e1d2dcf4cfb564c03ac5621dc"} Jan 26 12:58:59 crc kubenswrapper[4881]: I0126 12:58:59.940322 4881 generic.go:334] "Generic (PLEG): container finished" podID="8a8b530f-4ae9-45a2-9a70-bba160dec46c" containerID="fe28d77e3a215a8b83779ab2b04599b91d3b63debd71596cf507483db235d1d1" exitCode=0 Jan 26 12:58:59 crc kubenswrapper[4881]: I0126 12:58:59.940366 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-24db-account-create-update-2mfcv" event={"ID":"8a8b530f-4ae9-45a2-9a70-bba160dec46c","Type":"ContainerDied","Data":"fe28d77e3a215a8b83779ab2b04599b91d3b63debd71596cf507483db235d1d1"} Jan 26 12:58:59 crc kubenswrapper[4881]: I0126 12:58:59.942205 4881 generic.go:334] "Generic (PLEG): container finished" podID="79b83426-8fd6-49cd-8788-b4f7c0bb2216" containerID="43de24a7d1b0034ce8c703e1f91de72dda23259326e9e287d3de168b7104d5c3" exitCode=0 Jan 26 12:58:59 crc kubenswrapper[4881]: I0126 12:58:59.942327 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-kzz6j" event={"ID":"79b83426-8fd6-49cd-8788-b4f7c0bb2216","Type":"ContainerDied","Data":"43de24a7d1b0034ce8c703e1f91de72dda23259326e9e287d3de168b7104d5c3"} Jan 26 12:59:02 crc kubenswrapper[4881]: I0126 12:59:02.973815 4881 generic.go:334] "Generic (PLEG): container finished" podID="a0783093-5301-4381-adfe-dc3d027975f8" containerID="0190dba6e2a4d9a1f0d0775cb8abefab1bbdad04d46cd971afc6c8bbeedce461" exitCode=0 Jan 26 12:59:02 crc kubenswrapper[4881]: I0126 12:59:02.974545 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"a0783093-5301-4381-adfe-dc3d027975f8","Type":"ContainerDied","Data":"0190dba6e2a4d9a1f0d0775cb8abefab1bbdad04d46cd971afc6c8bbeedce461"} Jan 26 12:59:03 crc kubenswrapper[4881]: I0126 12:59:03.582875 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-24db-account-create-update-2mfcv" Jan 26 12:59:03 crc kubenswrapper[4881]: I0126 12:59:03.604475 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-87bfw" Jan 26 12:59:03 crc kubenswrapper[4881]: I0126 12:59:03.610067 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7jxr\" (UniqueName: \"kubernetes.io/projected/8a8b530f-4ae9-45a2-9a70-bba160dec46c-kube-api-access-w7jxr\") pod \"8a8b530f-4ae9-45a2-9a70-bba160dec46c\" (UID: \"8a8b530f-4ae9-45a2-9a70-bba160dec46c\") " Jan 26 12:59:03 crc kubenswrapper[4881]: I0126 12:59:03.610213 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8a8b530f-4ae9-45a2-9a70-bba160dec46c-operator-scripts\") pod \"8a8b530f-4ae9-45a2-9a70-bba160dec46c\" (UID: \"8a8b530f-4ae9-45a2-9a70-bba160dec46c\") " Jan 26 12:59:03 crc kubenswrapper[4881]: I0126 12:59:03.611431 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a8b530f-4ae9-45a2-9a70-bba160dec46c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8a8b530f-4ae9-45a2-9a70-bba160dec46c" (UID: "8a8b530f-4ae9-45a2-9a70-bba160dec46c"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:59:03 crc kubenswrapper[4881]: I0126 12:59:03.617736 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a8b530f-4ae9-45a2-9a70-bba160dec46c-kube-api-access-w7jxr" (OuterVolumeSpecName: "kube-api-access-w7jxr") pod "8a8b530f-4ae9-45a2-9a70-bba160dec46c" (UID: "8a8b530f-4ae9-45a2-9a70-bba160dec46c"). InnerVolumeSpecName "kube-api-access-w7jxr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:59:03 crc kubenswrapper[4881]: I0126 12:59:03.624509 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-96f0-account-create-update-lqrtl" Jan 26 12:59:03 crc kubenswrapper[4881]: I0126 12:59:03.683402 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-kzz6j" Jan 26 12:59:03 crc kubenswrapper[4881]: I0126 12:59:03.712311 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qxbq2\" (UniqueName: \"kubernetes.io/projected/413e9cd5-53f0-4aa0-b58f-2dd1c3d4a329-kube-api-access-qxbq2\") pod \"413e9cd5-53f0-4aa0-b58f-2dd1c3d4a329\" (UID: \"413e9cd5-53f0-4aa0-b58f-2dd1c3d4a329\") " Jan 26 12:59:03 crc kubenswrapper[4881]: I0126 12:59:03.712486 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r4jps\" (UniqueName: \"kubernetes.io/projected/79b83426-8fd6-49cd-8788-b4f7c0bb2216-kube-api-access-r4jps\") pod \"79b83426-8fd6-49cd-8788-b4f7c0bb2216\" (UID: \"79b83426-8fd6-49cd-8788-b4f7c0bb2216\") " Jan 26 12:59:03 crc kubenswrapper[4881]: I0126 12:59:03.712596 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/79b83426-8fd6-49cd-8788-b4f7c0bb2216-operator-scripts\") pod \"79b83426-8fd6-49cd-8788-b4f7c0bb2216\" (UID: \"79b83426-8fd6-49cd-8788-b4f7c0bb2216\") " Jan 26 12:59:03 crc kubenswrapper[4881]: I0126 12:59:03.712899 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/413e9cd5-53f0-4aa0-b58f-2dd1c3d4a329-operator-scripts\") pod \"413e9cd5-53f0-4aa0-b58f-2dd1c3d4a329\" (UID: \"413e9cd5-53f0-4aa0-b58f-2dd1c3d4a329\") " Jan 26 12:59:03 crc kubenswrapper[4881]: I0126 12:59:03.713034 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5frf8\" (UniqueName: \"kubernetes.io/projected/27ac0661-2a31-41bb-9dad-adee7c8dddf5-kube-api-access-5frf8\") pod \"27ac0661-2a31-41bb-9dad-adee7c8dddf5\" (UID: \"27ac0661-2a31-41bb-9dad-adee7c8dddf5\") " Jan 26 12:59:03 crc kubenswrapper[4881]: I0126 12:59:03.713088 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/27ac0661-2a31-41bb-9dad-adee7c8dddf5-operator-scripts\") pod \"27ac0661-2a31-41bb-9dad-adee7c8dddf5\" (UID: \"27ac0661-2a31-41bb-9dad-adee7c8dddf5\") " Jan 26 12:59:03 crc kubenswrapper[4881]: I0126 12:59:03.713544 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7jxr\" (UniqueName: \"kubernetes.io/projected/8a8b530f-4ae9-45a2-9a70-bba160dec46c-kube-api-access-w7jxr\") on node \"crc\" DevicePath \"\"" Jan 26 12:59:03 crc kubenswrapper[4881]: I0126 12:59:03.713565 4881 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/8a8b530f-4ae9-45a2-9a70-bba160dec46c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 12:59:03 crc kubenswrapper[4881]: I0126 12:59:03.714147 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27ac0661-2a31-41bb-9dad-adee7c8dddf5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "27ac0661-2a31-41bb-9dad-adee7c8dddf5" (UID: "27ac0661-2a31-41bb-9dad-adee7c8dddf5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:59:03 crc kubenswrapper[4881]: I0126 12:59:03.715501 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/413e9cd5-53f0-4aa0-b58f-2dd1c3d4a329-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "413e9cd5-53f0-4aa0-b58f-2dd1c3d4a329" (UID: "413e9cd5-53f0-4aa0-b58f-2dd1c3d4a329"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:59:03 crc kubenswrapper[4881]: I0126 12:59:03.716636 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79b83426-8fd6-49cd-8788-b4f7c0bb2216-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "79b83426-8fd6-49cd-8788-b4f7c0bb2216" (UID: "79b83426-8fd6-49cd-8788-b4f7c0bb2216"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:59:03 crc kubenswrapper[4881]: I0126 12:59:03.719148 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27ac0661-2a31-41bb-9dad-adee7c8dddf5-kube-api-access-5frf8" (OuterVolumeSpecName: "kube-api-access-5frf8") pod "27ac0661-2a31-41bb-9dad-adee7c8dddf5" (UID: "27ac0661-2a31-41bb-9dad-adee7c8dddf5"). InnerVolumeSpecName "kube-api-access-5frf8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:59:03 crc kubenswrapper[4881]: I0126 12:59:03.719193 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/413e9cd5-53f0-4aa0-b58f-2dd1c3d4a329-kube-api-access-qxbq2" (OuterVolumeSpecName: "kube-api-access-qxbq2") pod "413e9cd5-53f0-4aa0-b58f-2dd1c3d4a329" (UID: "413e9cd5-53f0-4aa0-b58f-2dd1c3d4a329"). InnerVolumeSpecName "kube-api-access-qxbq2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:59:03 crc kubenswrapper[4881]: I0126 12:59:03.720337 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79b83426-8fd6-49cd-8788-b4f7c0bb2216-kube-api-access-r4jps" (OuterVolumeSpecName: "kube-api-access-r4jps") pod "79b83426-8fd6-49cd-8788-b4f7c0bb2216" (UID: "79b83426-8fd6-49cd-8788-b4f7c0bb2216"). InnerVolumeSpecName "kube-api-access-r4jps". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:59:03 crc kubenswrapper[4881]: I0126 12:59:03.816104 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qxbq2\" (UniqueName: \"kubernetes.io/projected/413e9cd5-53f0-4aa0-b58f-2dd1c3d4a329-kube-api-access-qxbq2\") on node \"crc\" DevicePath \"\"" Jan 26 12:59:03 crc kubenswrapper[4881]: I0126 12:59:03.816149 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r4jps\" (UniqueName: \"kubernetes.io/projected/79b83426-8fd6-49cd-8788-b4f7c0bb2216-kube-api-access-r4jps\") on node \"crc\" DevicePath \"\"" Jan 26 12:59:03 crc kubenswrapper[4881]: I0126 12:59:03.816171 4881 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/79b83426-8fd6-49cd-8788-b4f7c0bb2216-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 12:59:03 crc kubenswrapper[4881]: I0126 12:59:03.816188 4881 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/413e9cd5-53f0-4aa0-b58f-2dd1c3d4a329-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 12:59:03 crc kubenswrapper[4881]: I0126 12:59:03.816204 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5frf8\" (UniqueName: \"kubernetes.io/projected/27ac0661-2a31-41bb-9dad-adee7c8dddf5-kube-api-access-5frf8\") on node \"crc\" DevicePath \"\"" Jan 26 12:59:03 crc kubenswrapper[4881]: I0126 12:59:03.816220 4881 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/27ac0661-2a31-41bb-9dad-adee7c8dddf5-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 12:59:03 crc kubenswrapper[4881]: I0126 12:59:03.984808 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-kzz6j" Jan 26 12:59:03 crc kubenswrapper[4881]: I0126 12:59:03.984795 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-kzz6j" event={"ID":"79b83426-8fd6-49cd-8788-b4f7c0bb2216","Type":"ContainerDied","Data":"720d5c370e6081f2352af18879a942c2112e693776e18a1cd1379f96e4db05ea"} Jan 26 12:59:03 crc kubenswrapper[4881]: I0126 12:59:03.985167 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="720d5c370e6081f2352af18879a942c2112e693776e18a1cd1379f96e4db05ea" Jan 26 12:59:03 crc kubenswrapper[4881]: I0126 12:59:03.986335 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-87bfw" event={"ID":"413e9cd5-53f0-4aa0-b58f-2dd1c3d4a329","Type":"ContainerDied","Data":"8085518c41c61e63e7553abe6ca0266d62934d2a7ae59a65ee203bcf99ad666b"} Jan 26 12:59:03 crc kubenswrapper[4881]: I0126 12:59:03.986367 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-87bfw" Jan 26 12:59:03 crc kubenswrapper[4881]: I0126 12:59:03.986371 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8085518c41c61e63e7553abe6ca0266d62934d2a7ae59a65ee203bcf99ad666b" Jan 26 12:59:03 crc kubenswrapper[4881]: I0126 12:59:03.987917 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-96f0-account-create-update-lqrtl" event={"ID":"27ac0661-2a31-41bb-9dad-adee7c8dddf5","Type":"ContainerDied","Data":"efe6a71190c8c7dbf25fdaa52d2a1886688533fe9aa3fb0c57af23985d0a05fd"} Jan 26 12:59:03 crc kubenswrapper[4881]: I0126 12:59:03.987938 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-96f0-account-create-update-lqrtl" Jan 26 12:59:03 crc kubenswrapper[4881]: I0126 12:59:03.987948 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="efe6a71190c8c7dbf25fdaa52d2a1886688533fe9aa3fb0c57af23985d0a05fd" Jan 26 12:59:03 crc kubenswrapper[4881]: I0126 12:59:03.989584 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-x5w44" event={"ID":"817c084d-f62f-49b2-8482-e37c799af743","Type":"ContainerStarted","Data":"3e43e683d4f86cfa99637f44e3d2a0f8278f06332d914db1dfaabe6205cf4905"} Jan 26 12:59:03 crc kubenswrapper[4881]: I0126 12:59:03.993798 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"a0783093-5301-4381-adfe-dc3d027975f8","Type":"ContainerStarted","Data":"50f4e67013c626fec742620472893ecab907d7e235f63d643837dc222dc3c284"} Jan 26 12:59:03 crc kubenswrapper[4881]: I0126 12:59:03.996874 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-24db-account-create-update-2mfcv" event={"ID":"8a8b530f-4ae9-45a2-9a70-bba160dec46c","Type":"ContainerDied","Data":"05e7b0a7dd0500fa6e198d5b2373d85c4a2d08635b65066fc94a88be9211bbde"} Jan 26 12:59:03 crc kubenswrapper[4881]: I0126 12:59:03.996900 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="05e7b0a7dd0500fa6e198d5b2373d85c4a2d08635b65066fc94a88be9211bbde" Jan 26 12:59:03 crc kubenswrapper[4881]: I0126 12:59:03.996950 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-24db-account-create-update-2mfcv" Jan 26 12:59:04 crc kubenswrapper[4881]: I0126 12:59:04.017029 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-x5w44" podStartSLOduration=2.103433486 podStartE2EDuration="7.017012865s" podCreationTimestamp="2026-01-26 12:58:57 +0000 UTC" firstStartedPulling="2026-01-26 12:58:58.451063256 +0000 UTC m=+1410.930373282" lastFinishedPulling="2026-01-26 12:59:03.364642625 +0000 UTC m=+1415.843952661" observedRunningTime="2026-01-26 12:59:04.009238805 +0000 UTC m=+1416.488548861" watchObservedRunningTime="2026-01-26 12:59:04.017012865 +0000 UTC m=+1416.496322891" Jan 26 12:59:04 crc kubenswrapper[4881]: I0126 12:59:04.817744 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-757cc9679f-smrcs" Jan 26 12:59:04 crc kubenswrapper[4881]: I0126 12:59:04.886104 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-65cc6fcf45-z9nsc"] Jan 26 12:59:04 crc kubenswrapper[4881]: I0126 12:59:04.886417 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-65cc6fcf45-z9nsc" podUID="908ed279-9514-43e0-a6a7-2ed24cfe34da" containerName="dnsmasq-dns" containerID="cri-o://5592ff0d7d3f4728eb0181c05d08974ecf98a4785733caedce48ab181d5598d8" gracePeriod=10 Jan 26 12:59:06 crc kubenswrapper[4881]: I0126 12:59:06.003322 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-65cc6fcf45-z9nsc" Jan 26 12:59:06 crc kubenswrapper[4881]: I0126 12:59:06.038016 4881 generic.go:334] "Generic (PLEG): container finished" podID="908ed279-9514-43e0-a6a7-2ed24cfe34da" containerID="5592ff0d7d3f4728eb0181c05d08974ecf98a4785733caedce48ab181d5598d8" exitCode=0 Jan 26 12:59:06 crc kubenswrapper[4881]: I0126 12:59:06.038070 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-65cc6fcf45-z9nsc" Jan 26 12:59:06 crc kubenswrapper[4881]: I0126 12:59:06.038067 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65cc6fcf45-z9nsc" event={"ID":"908ed279-9514-43e0-a6a7-2ed24cfe34da","Type":"ContainerDied","Data":"5592ff0d7d3f4728eb0181c05d08974ecf98a4785733caedce48ab181d5598d8"} Jan 26 12:59:06 crc kubenswrapper[4881]: I0126 12:59:06.038208 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65cc6fcf45-z9nsc" event={"ID":"908ed279-9514-43e0-a6a7-2ed24cfe34da","Type":"ContainerDied","Data":"3f68b100a7f0627ae32320bbc6f06be54d2828fa90aba9612f3c3a26ad5692ee"} Jan 26 12:59:06 crc kubenswrapper[4881]: I0126 12:59:06.038244 4881 scope.go:117] "RemoveContainer" containerID="5592ff0d7d3f4728eb0181c05d08974ecf98a4785733caedce48ab181d5598d8" Jan 26 12:59:06 crc kubenswrapper[4881]: I0126 12:59:06.071211 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/908ed279-9514-43e0-a6a7-2ed24cfe34da-ovsdbserver-nb\") pod \"908ed279-9514-43e0-a6a7-2ed24cfe34da\" (UID: \"908ed279-9514-43e0-a6a7-2ed24cfe34da\") " Jan 26 12:59:06 crc kubenswrapper[4881]: I0126 12:59:06.071375 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/908ed279-9514-43e0-a6a7-2ed24cfe34da-dns-svc\") pod \"908ed279-9514-43e0-a6a7-2ed24cfe34da\" (UID: \"908ed279-9514-43e0-a6a7-2ed24cfe34da\") " Jan 26 12:59:06 crc kubenswrapper[4881]: I0126 12:59:06.071406 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/908ed279-9514-43e0-a6a7-2ed24cfe34da-config\") pod \"908ed279-9514-43e0-a6a7-2ed24cfe34da\" (UID: \"908ed279-9514-43e0-a6a7-2ed24cfe34da\") " Jan 26 12:59:06 crc kubenswrapper[4881]: I0126 12:59:06.071452 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/908ed279-9514-43e0-a6a7-2ed24cfe34da-ovsdbserver-sb\") pod \"908ed279-9514-43e0-a6a7-2ed24cfe34da\" (UID: \"908ed279-9514-43e0-a6a7-2ed24cfe34da\") " Jan 26 12:59:06 crc kubenswrapper[4881]: I0126 12:59:06.071559 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gqqq2\" (UniqueName: \"kubernetes.io/projected/908ed279-9514-43e0-a6a7-2ed24cfe34da-kube-api-access-gqqq2\") pod \"908ed279-9514-43e0-a6a7-2ed24cfe34da\" (UID: \"908ed279-9514-43e0-a6a7-2ed24cfe34da\") " Jan 26 12:59:06 crc kubenswrapper[4881]: I0126 12:59:06.080293 4881 scope.go:117] "RemoveContainer" containerID="60ee11782b8ec701dccf483e9edac178207d859a8a962be904be8cfe4fbceb18" Jan 26 12:59:06 crc kubenswrapper[4881]: I0126 12:59:06.117216 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/908ed279-9514-43e0-a6a7-2ed24cfe34da-kube-api-access-gqqq2" (OuterVolumeSpecName: "kube-api-access-gqqq2") pod "908ed279-9514-43e0-a6a7-2ed24cfe34da" (UID: "908ed279-9514-43e0-a6a7-2ed24cfe34da"). InnerVolumeSpecName "kube-api-access-gqqq2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:59:06 crc kubenswrapper[4881]: I0126 12:59:06.158691 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/908ed279-9514-43e0-a6a7-2ed24cfe34da-config" (OuterVolumeSpecName: "config") pod "908ed279-9514-43e0-a6a7-2ed24cfe34da" (UID: "908ed279-9514-43e0-a6a7-2ed24cfe34da"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:59:06 crc kubenswrapper[4881]: I0126 12:59:06.173623 4881 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/908ed279-9514-43e0-a6a7-2ed24cfe34da-config\") on node \"crc\" DevicePath \"\"" Jan 26 12:59:06 crc kubenswrapper[4881]: I0126 12:59:06.173655 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gqqq2\" (UniqueName: \"kubernetes.io/projected/908ed279-9514-43e0-a6a7-2ed24cfe34da-kube-api-access-gqqq2\") on node \"crc\" DevicePath \"\"" Jan 26 12:59:06 crc kubenswrapper[4881]: I0126 12:59:06.329198 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/908ed279-9514-43e0-a6a7-2ed24cfe34da-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "908ed279-9514-43e0-a6a7-2ed24cfe34da" (UID: "908ed279-9514-43e0-a6a7-2ed24cfe34da"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:59:06 crc kubenswrapper[4881]: I0126 12:59:06.375072 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/908ed279-9514-43e0-a6a7-2ed24cfe34da-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "908ed279-9514-43e0-a6a7-2ed24cfe34da" (UID: "908ed279-9514-43e0-a6a7-2ed24cfe34da"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:59:06 crc kubenswrapper[4881]: I0126 12:59:06.375442 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/908ed279-9514-43e0-a6a7-2ed24cfe34da-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "908ed279-9514-43e0-a6a7-2ed24cfe34da" (UID: "908ed279-9514-43e0-a6a7-2ed24cfe34da"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:59:06 crc kubenswrapper[4881]: I0126 12:59:06.376646 4881 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/908ed279-9514-43e0-a6a7-2ed24cfe34da-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 12:59:06 crc kubenswrapper[4881]: I0126 12:59:06.376676 4881 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/908ed279-9514-43e0-a6a7-2ed24cfe34da-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 12:59:06 crc kubenswrapper[4881]: I0126 12:59:06.376689 4881 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/908ed279-9514-43e0-a6a7-2ed24cfe34da-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 12:59:06 crc kubenswrapper[4881]: I0126 12:59:06.487362 4881 scope.go:117] "RemoveContainer" containerID="5592ff0d7d3f4728eb0181c05d08974ecf98a4785733caedce48ab181d5598d8" Jan 26 12:59:06 crc kubenswrapper[4881]: E0126 12:59:06.487815 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5592ff0d7d3f4728eb0181c05d08974ecf98a4785733caedce48ab181d5598d8\": container with ID starting with 5592ff0d7d3f4728eb0181c05d08974ecf98a4785733caedce48ab181d5598d8 not found: ID does not exist" containerID="5592ff0d7d3f4728eb0181c05d08974ecf98a4785733caedce48ab181d5598d8" Jan 26 12:59:06 crc kubenswrapper[4881]: I0126 12:59:06.487853 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5592ff0d7d3f4728eb0181c05d08974ecf98a4785733caedce48ab181d5598d8"} err="failed to get container status \"5592ff0d7d3f4728eb0181c05d08974ecf98a4785733caedce48ab181d5598d8\": rpc error: code = NotFound desc = could not find container \"5592ff0d7d3f4728eb0181c05d08974ecf98a4785733caedce48ab181d5598d8\": container with ID starting with 5592ff0d7d3f4728eb0181c05d08974ecf98a4785733caedce48ab181d5598d8 not found: ID does not exist" Jan 26 12:59:06 crc kubenswrapper[4881]: I0126 12:59:06.487879 4881 scope.go:117] "RemoveContainer" containerID="60ee11782b8ec701dccf483e9edac178207d859a8a962be904be8cfe4fbceb18" Jan 26 12:59:06 crc kubenswrapper[4881]: E0126 12:59:06.488128 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"60ee11782b8ec701dccf483e9edac178207d859a8a962be904be8cfe4fbceb18\": container with ID starting with 60ee11782b8ec701dccf483e9edac178207d859a8a962be904be8cfe4fbceb18 not found: ID does not exist" containerID="60ee11782b8ec701dccf483e9edac178207d859a8a962be904be8cfe4fbceb18" Jan 26 12:59:06 crc kubenswrapper[4881]: I0126 12:59:06.488153 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"60ee11782b8ec701dccf483e9edac178207d859a8a962be904be8cfe4fbceb18"} err="failed to get container status \"60ee11782b8ec701dccf483e9edac178207d859a8a962be904be8cfe4fbceb18\": rpc error: code = NotFound desc = could not find container \"60ee11782b8ec701dccf483e9edac178207d859a8a962be904be8cfe4fbceb18\": container with ID starting with 60ee11782b8ec701dccf483e9edac178207d859a8a962be904be8cfe4fbceb18 not found: ID does not exist" Jan 26 12:59:06 crc kubenswrapper[4881]: I0126 12:59:06.686272 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-65cc6fcf45-z9nsc"] Jan 26 12:59:06 crc kubenswrapper[4881]: I0126 12:59:06.695211 4881 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-65cc6fcf45-z9nsc"] Jan 26 12:59:07 crc kubenswrapper[4881]: I0126 12:59:07.230774 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 26 12:59:07 crc kubenswrapper[4881]: I0126 12:59:07.524759 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-notifications-server-0" Jan 26 12:59:08 crc kubenswrapper[4881]: I0126 12:59:08.061414 4881 generic.go:334] "Generic (PLEG): container finished" podID="817c084d-f62f-49b2-8482-e37c799af743" containerID="3e43e683d4f86cfa99637f44e3d2a0f8278f06332d914db1dfaabe6205cf4905" exitCode=0 Jan 26 12:59:08 crc kubenswrapper[4881]: I0126 12:59:08.061537 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-x5w44" event={"ID":"817c084d-f62f-49b2-8482-e37c799af743","Type":"ContainerDied","Data":"3e43e683d4f86cfa99637f44e3d2a0f8278f06332d914db1dfaabe6205cf4905"} Jan 26 12:59:08 crc kubenswrapper[4881]: I0126 12:59:08.065067 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"a0783093-5301-4381-adfe-dc3d027975f8","Type":"ContainerStarted","Data":"2e601063a7495673a44565484da3d0b6b733e98eca89ce5b9ad794c280a61cef"} Jan 26 12:59:08 crc kubenswrapper[4881]: I0126 12:59:08.065100 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"a0783093-5301-4381-adfe-dc3d027975f8","Type":"ContainerStarted","Data":"58d4f36705576e7f002be4b5d469f91d7432b10a722f86b1944ac7faa35e3f59"} Jan 26 12:59:08 crc kubenswrapper[4881]: I0126 12:59:08.091390 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="908ed279-9514-43e0-a6a7-2ed24cfe34da" path="/var/lib/kubelet/pods/908ed279-9514-43e0-a6a7-2ed24cfe34da/volumes" Jan 26 12:59:08 crc kubenswrapper[4881]: I0126 12:59:08.116144 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=15.116128561 podStartE2EDuration="15.116128561s" podCreationTimestamp="2026-01-26 12:58:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:59:08.113035446 +0000 UTC m=+1420.592345472" watchObservedRunningTime="2026-01-26 12:59:08.116128561 +0000 UTC m=+1420.595438587" Jan 26 12:59:08 crc kubenswrapper[4881]: I0126 12:59:08.425600 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Jan 26 12:59:08 crc kubenswrapper[4881]: I0126 12:59:08.427473 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Jan 26 12:59:08 crc kubenswrapper[4881]: I0126 12:59:08.441792 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Jan 26 12:59:08 crc kubenswrapper[4881]: I0126 12:59:08.952302 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-ddrsw"] Jan 26 12:59:08 crc kubenswrapper[4881]: E0126 12:59:08.952648 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="908ed279-9514-43e0-a6a7-2ed24cfe34da" containerName="init" Jan 26 12:59:08 crc kubenswrapper[4881]: I0126 12:59:08.952665 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="908ed279-9514-43e0-a6a7-2ed24cfe34da" containerName="init" Jan 26 12:59:08 
Jan 26 12:59:08 crc kubenswrapper[4881]: E0126 12:59:08.952684 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="908ed279-9514-43e0-a6a7-2ed24cfe34da" containerName="dnsmasq-dns"
Jan 26 12:59:08 crc kubenswrapper[4881]: I0126 12:59:08.952692 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="908ed279-9514-43e0-a6a7-2ed24cfe34da" containerName="dnsmasq-dns"
Jan 26 12:59:08 crc kubenswrapper[4881]: E0126 12:59:08.952707 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a8b530f-4ae9-45a2-9a70-bba160dec46c" containerName="mariadb-account-create-update"
Jan 26 12:59:08 crc kubenswrapper[4881]: I0126 12:59:08.952713 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a8b530f-4ae9-45a2-9a70-bba160dec46c" containerName="mariadb-account-create-update"
Jan 26 12:59:08 crc kubenswrapper[4881]: E0126 12:59:08.952729 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79b83426-8fd6-49cd-8788-b4f7c0bb2216" containerName="mariadb-database-create"
Jan 26 12:59:08 crc kubenswrapper[4881]: I0126 12:59:08.952735 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="79b83426-8fd6-49cd-8788-b4f7c0bb2216" containerName="mariadb-database-create"
Jan 26 12:59:08 crc kubenswrapper[4881]: E0126 12:59:08.952742 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="413e9cd5-53f0-4aa0-b58f-2dd1c3d4a329" containerName="mariadb-database-create"
Jan 26 12:59:08 crc kubenswrapper[4881]: I0126 12:59:08.952749 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="413e9cd5-53f0-4aa0-b58f-2dd1c3d4a329" containerName="mariadb-database-create"
Jan 26 12:59:08 crc kubenswrapper[4881]: E0126 12:59:08.952761 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27ac0661-2a31-41bb-9dad-adee7c8dddf5" containerName="mariadb-account-create-update"
Jan 26 12:59:08 crc kubenswrapper[4881]: I0126 12:59:08.952767 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="27ac0661-2a31-41bb-9dad-adee7c8dddf5" containerName="mariadb-account-create-update"
Jan 26 12:59:08 crc kubenswrapper[4881]: I0126 12:59:08.952915 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="79b83426-8fd6-49cd-8788-b4f7c0bb2216" containerName="mariadb-database-create"
Jan 26 12:59:08 crc kubenswrapper[4881]: I0126 12:59:08.952928 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="27ac0661-2a31-41bb-9dad-adee7c8dddf5" containerName="mariadb-account-create-update"
Jan 26 12:59:08 crc kubenswrapper[4881]: I0126 12:59:08.952939 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="413e9cd5-53f0-4aa0-b58f-2dd1c3d4a329" containerName="mariadb-database-create"
Jan 26 12:59:08 crc kubenswrapper[4881]: I0126 12:59:08.952951 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a8b530f-4ae9-45a2-9a70-bba160dec46c" containerName="mariadb-account-create-update"
Jan 26 12:59:08 crc kubenswrapper[4881]: I0126 12:59:08.952961 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="908ed279-9514-43e0-a6a7-2ed24cfe34da" containerName="dnsmasq-dns"
Jan 26 12:59:08 crc kubenswrapper[4881]: I0126 12:59:08.953470 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-ddrsw"
Jan 26 12:59:08 crc kubenswrapper[4881]: I0126 12:59:08.964419 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-ddrsw"]
Jan 26 12:59:09 crc kubenswrapper[4881]: I0126 12:59:09.091753 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-ad3b-account-create-update-cf2rd"]
Jan 26 12:59:09 crc kubenswrapper[4881]: I0126 12:59:09.094602 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-ad3b-account-create-update-cf2rd"
Jan 26 12:59:09 crc kubenswrapper[4881]: I0126 12:59:09.095858 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0"
Jan 26 12:59:09 crc kubenswrapper[4881]: I0126 12:59:09.097686 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret"
Jan 26 12:59:09 crc kubenswrapper[4881]: I0126 12:59:09.115243 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-ad3b-account-create-update-cf2rd"]
Jan 26 12:59:09 crc kubenswrapper[4881]: I0126 12:59:09.125652 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/87493a10-28dd-468d-82df-d225543ffd0e-operator-scripts\") pod \"glance-db-create-ddrsw\" (UID: \"87493a10-28dd-468d-82df-d225543ffd0e\") " pod="openstack/glance-db-create-ddrsw"
Jan 26 12:59:09 crc kubenswrapper[4881]: I0126 12:59:09.125883 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gl9p6\" (UniqueName: \"kubernetes.io/projected/87493a10-28dd-468d-82df-d225543ffd0e-kube-api-access-gl9p6\") pod \"glance-db-create-ddrsw\" (UID: \"87493a10-28dd-468d-82df-d225543ffd0e\") " pod="openstack/glance-db-create-ddrsw"
Jan 26 12:59:09 crc kubenswrapper[4881]: I0126 12:59:09.137991 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-db-sync-5pg9p"]
Jan 26 12:59:09 crc kubenswrapper[4881]: I0126 12:59:09.139306 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-sync-5pg9p"
Jan 26 12:59:09 crc kubenswrapper[4881]: I0126 12:59:09.144789 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-watcher-dockercfg-55qnx"
Jan 26 12:59:09 crc kubenswrapper[4881]: I0126 12:59:09.145068 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-config-data"
Jan 26 12:59:09 crc kubenswrapper[4881]: I0126 12:59:09.187336 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-sync-5pg9p"]
Jan 26 12:59:09 crc kubenswrapper[4881]: I0126 12:59:09.226427 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-2sp7f"]
Jan 26 12:59:09 crc kubenswrapper[4881]: I0126 12:59:09.227391 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/87493a10-28dd-468d-82df-d225543ffd0e-operator-scripts\") pod \"glance-db-create-ddrsw\" (UID: \"87493a10-28dd-468d-82df-d225543ffd0e\") " pod="openstack/glance-db-create-ddrsw"
Jan 26 12:59:09 crc kubenswrapper[4881]: I0126 12:59:09.227575 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gl9p6\" (UniqueName: \"kubernetes.io/projected/87493a10-28dd-468d-82df-d225543ffd0e-kube-api-access-gl9p6\") pod \"glance-db-create-ddrsw\" (UID: \"87493a10-28dd-468d-82df-d225543ffd0e\") " pod="openstack/glance-db-create-ddrsw"
Jan 26 12:59:09 crc kubenswrapper[4881]: I0126 12:59:09.227628 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/18188e05-15cd-421d-9ad4-a68243fa2d84-operator-scripts\") pod \"glance-ad3b-account-create-update-cf2rd\" (UID: \"18188e05-15cd-421d-9ad4-a68243fa2d84\") " pod="openstack/glance-ad3b-account-create-update-cf2rd"
Jan 26 12:59:09 crc kubenswrapper[4881]: I0126 12:59:09.227667 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2prlp\" (UniqueName: \"kubernetes.io/projected/adf01549-e1d0-46a7-a141-bdc0f5c81458-kube-api-access-2prlp\") pod \"watcher-db-sync-5pg9p\" (UID: \"adf01549-e1d0-46a7-a141-bdc0f5c81458\") " pod="openstack/watcher-db-sync-5pg9p"
Jan 26 12:59:09 crc kubenswrapper[4881]: I0126 12:59:09.227736 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/adf01549-e1d0-46a7-a141-bdc0f5c81458-db-sync-config-data\") pod \"watcher-db-sync-5pg9p\" (UID: \"adf01549-e1d0-46a7-a141-bdc0f5c81458\") " pod="openstack/watcher-db-sync-5pg9p"
Jan 26 12:59:09 crc kubenswrapper[4881]: I0126 12:59:09.227766 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/adf01549-e1d0-46a7-a141-bdc0f5c81458-config-data\") pod \"watcher-db-sync-5pg9p\" (UID: \"adf01549-e1d0-46a7-a141-bdc0f5c81458\") " pod="openstack/watcher-db-sync-5pg9p"
Jan 26 12:59:09 crc kubenswrapper[4881]: I0126 12:59:09.227892 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/adf01549-e1d0-46a7-a141-bdc0f5c81458-combined-ca-bundle\") pod \"watcher-db-sync-5pg9p\" (UID: \"adf01549-e1d0-46a7-a141-bdc0f5c81458\") " pod="openstack/watcher-db-sync-5pg9p"
Jan 26 12:59:09 crc
kubenswrapper[4881]: I0126 12:59:09.227913 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4bsw\" (UniqueName: \"kubernetes.io/projected/18188e05-15cd-421d-9ad4-a68243fa2d84-kube-api-access-s4bsw\") pod \"glance-ad3b-account-create-update-cf2rd\" (UID: \"18188e05-15cd-421d-9ad4-a68243fa2d84\") " pod="openstack/glance-ad3b-account-create-update-cf2rd" Jan 26 12:59:09 crc kubenswrapper[4881]: I0126 12:59:09.228259 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-2sp7f" Jan 26 12:59:09 crc kubenswrapper[4881]: I0126 12:59:09.229246 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/87493a10-28dd-468d-82df-d225543ffd0e-operator-scripts\") pod \"glance-db-create-ddrsw\" (UID: \"87493a10-28dd-468d-82df-d225543ffd0e\") " pod="openstack/glance-db-create-ddrsw" Jan 26 12:59:09 crc kubenswrapper[4881]: I0126 12:59:09.254735 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-2sp7f"] Jan 26 12:59:09 crc kubenswrapper[4881]: I0126 12:59:09.276468 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gl9p6\" (UniqueName: \"kubernetes.io/projected/87493a10-28dd-468d-82df-d225543ffd0e-kube-api-access-gl9p6\") pod \"glance-db-create-ddrsw\" (UID: \"87493a10-28dd-468d-82df-d225543ffd0e\") " pod="openstack/glance-db-create-ddrsw" Jan 26 12:59:09 crc kubenswrapper[4881]: I0126 12:59:09.297327 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-ddrsw" Jan 26 12:59:09 crc kubenswrapper[4881]: I0126 12:59:09.304914 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-445e-account-create-update-hgkn9"] Jan 26 12:59:09 crc kubenswrapper[4881]: I0126 12:59:09.306736 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-445e-account-create-update-hgkn9" Jan 26 12:59:09 crc kubenswrapper[4881]: I0126 12:59:09.311848 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 26 12:59:09 crc kubenswrapper[4881]: I0126 12:59:09.331111 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8a3edaef-3ff0-45b1-b037-ca545d1bd9af-operator-scripts\") pod \"neutron-db-create-2sp7f\" (UID: \"8a3edaef-3ff0-45b1-b037-ca545d1bd9af\") " pod="openstack/neutron-db-create-2sp7f" Jan 26 12:59:09 crc kubenswrapper[4881]: I0126 12:59:09.331181 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/adf01549-e1d0-46a7-a141-bdc0f5c81458-combined-ca-bundle\") pod \"watcher-db-sync-5pg9p\" (UID: \"adf01549-e1d0-46a7-a141-bdc0f5c81458\") " pod="openstack/watcher-db-sync-5pg9p" Jan 26 12:59:09 crc kubenswrapper[4881]: I0126 12:59:09.331213 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4bsw\" (UniqueName: \"kubernetes.io/projected/18188e05-15cd-421d-9ad4-a68243fa2d84-kube-api-access-s4bsw\") pod \"glance-ad3b-account-create-update-cf2rd\" (UID: \"18188e05-15cd-421d-9ad4-a68243fa2d84\") " pod="openstack/glance-ad3b-account-create-update-cf2rd" Jan 26 12:59:09 crc kubenswrapper[4881]: I0126 12:59:09.331290 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/18188e05-15cd-421d-9ad4-a68243fa2d84-operator-scripts\") pod \"glance-ad3b-account-create-update-cf2rd\" (UID: \"18188e05-15cd-421d-9ad4-a68243fa2d84\") " pod="openstack/glance-ad3b-account-create-update-cf2rd" Jan 26 12:59:09 crc kubenswrapper[4881]: I0126 12:59:09.331327 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2prlp\" (UniqueName: \"kubernetes.io/projected/adf01549-e1d0-46a7-a141-bdc0f5c81458-kube-api-access-2prlp\") pod \"watcher-db-sync-5pg9p\" (UID: \"adf01549-e1d0-46a7-a141-bdc0f5c81458\") " pod="openstack/watcher-db-sync-5pg9p" Jan 26 12:59:09 crc kubenswrapper[4881]: I0126 12:59:09.331376 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/adf01549-e1d0-46a7-a141-bdc0f5c81458-db-sync-config-data\") pod \"watcher-db-sync-5pg9p\" (UID: \"adf01549-e1d0-46a7-a141-bdc0f5c81458\") " pod="openstack/watcher-db-sync-5pg9p" Jan 26 12:59:09 crc kubenswrapper[4881]: I0126 12:59:09.331402 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvp6s\" (UniqueName: \"kubernetes.io/projected/8a3edaef-3ff0-45b1-b037-ca545d1bd9af-kube-api-access-zvp6s\") pod \"neutron-db-create-2sp7f\" (UID: \"8a3edaef-3ff0-45b1-b037-ca545d1bd9af\") " pod="openstack/neutron-db-create-2sp7f" Jan 26 12:59:09 crc kubenswrapper[4881]: I0126 12:59:09.331438 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/adf01549-e1d0-46a7-a141-bdc0f5c81458-config-data\") pod \"watcher-db-sync-5pg9p\" (UID: \"adf01549-e1d0-46a7-a141-bdc0f5c81458\") " pod="openstack/watcher-db-sync-5pg9p" Jan 26 12:59:09 crc kubenswrapper[4881]: I0126 12:59:09.338381 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/18188e05-15cd-421d-9ad4-a68243fa2d84-operator-scripts\") pod \"glance-ad3b-account-create-update-cf2rd\" (UID: \"18188e05-15cd-421d-9ad4-a68243fa2d84\") " pod="openstack/glance-ad3b-account-create-update-cf2rd" Jan 26 12:59:09 crc kubenswrapper[4881]: I0126 12:59:09.342026 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-445e-account-create-update-hgkn9"] Jan 26 12:59:09 crc kubenswrapper[4881]: I0126 12:59:09.346065 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/adf01549-e1d0-46a7-a141-bdc0f5c81458-combined-ca-bundle\") pod \"watcher-db-sync-5pg9p\" (UID: \"adf01549-e1d0-46a7-a141-bdc0f5c81458\") " pod="openstack/watcher-db-sync-5pg9p" Jan 26 12:59:09 crc kubenswrapper[4881]: I0126 12:59:09.346874 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/adf01549-e1d0-46a7-a141-bdc0f5c81458-db-sync-config-data\") pod \"watcher-db-sync-5pg9p\" (UID: \"adf01549-e1d0-46a7-a141-bdc0f5c81458\") " pod="openstack/watcher-db-sync-5pg9p" Jan 26 12:59:09 crc kubenswrapper[4881]: I0126 12:59:09.356749 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/adf01549-e1d0-46a7-a141-bdc0f5c81458-config-data\") pod \"watcher-db-sync-5pg9p\" (UID: \"adf01549-e1d0-46a7-a141-bdc0f5c81458\") " pod="openstack/watcher-db-sync-5pg9p" Jan 26 12:59:09 crc kubenswrapper[4881]: I0126 12:59:09.368873 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2prlp\" (UniqueName: \"kubernetes.io/projected/adf01549-e1d0-46a7-a141-bdc0f5c81458-kube-api-access-2prlp\") pod \"watcher-db-sync-5pg9p\" (UID: \"adf01549-e1d0-46a7-a141-bdc0f5c81458\") " pod="openstack/watcher-db-sync-5pg9p" Jan 26 12:59:09 crc kubenswrapper[4881]: I0126 12:59:09.379020 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4bsw\" (UniqueName: \"kubernetes.io/projected/18188e05-15cd-421d-9ad4-a68243fa2d84-kube-api-access-s4bsw\") pod \"glance-ad3b-account-create-update-cf2rd\" (UID: \"18188e05-15cd-421d-9ad4-a68243fa2d84\") " pod="openstack/glance-ad3b-account-create-update-cf2rd" Jan 26 12:59:09 crc kubenswrapper[4881]: I0126 12:59:09.415755 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-ad3b-account-create-update-cf2rd" Jan 26 12:59:09 crc kubenswrapper[4881]: I0126 12:59:09.439610 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8a3edaef-3ff0-45b1-b037-ca545d1bd9af-operator-scripts\") pod \"neutron-db-create-2sp7f\" (UID: \"8a3edaef-3ff0-45b1-b037-ca545d1bd9af\") " pod="openstack/neutron-db-create-2sp7f" Jan 26 12:59:09 crc kubenswrapper[4881]: I0126 12:59:09.439840 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvrxs\" (UniqueName: \"kubernetes.io/projected/8395a76c-6569-43c6-ba18-438efdb98980-kube-api-access-bvrxs\") pod \"neutron-445e-account-create-update-hgkn9\" (UID: \"8395a76c-6569-43c6-ba18-438efdb98980\") " pod="openstack/neutron-445e-account-create-update-hgkn9" Jan 26 12:59:09 crc kubenswrapper[4881]: I0126 12:59:09.439955 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8395a76c-6569-43c6-ba18-438efdb98980-operator-scripts\") pod \"neutron-445e-account-create-update-hgkn9\" (UID: \"8395a76c-6569-43c6-ba18-438efdb98980\") " pod="openstack/neutron-445e-account-create-update-hgkn9" Jan 26 12:59:09 crc kubenswrapper[4881]: I0126 12:59:09.439982 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvp6s\" (UniqueName: \"kubernetes.io/projected/8a3edaef-3ff0-45b1-b037-ca545d1bd9af-kube-api-access-zvp6s\") pod \"neutron-db-create-2sp7f\" (UID: \"8a3edaef-3ff0-45b1-b037-ca545d1bd9af\") " pod="openstack/neutron-db-create-2sp7f" Jan 26 12:59:09 crc kubenswrapper[4881]: I0126 12:59:09.441052 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8a3edaef-3ff0-45b1-b037-ca545d1bd9af-operator-scripts\") pod \"neutron-db-create-2sp7f\" (UID: \"8a3edaef-3ff0-45b1-b037-ca545d1bd9af\") " pod="openstack/neutron-db-create-2sp7f" Jan 26 12:59:09 crc kubenswrapper[4881]: I0126 12:59:09.458628 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvp6s\" (UniqueName: \"kubernetes.io/projected/8a3edaef-3ff0-45b1-b037-ca545d1bd9af-kube-api-access-zvp6s\") pod \"neutron-db-create-2sp7f\" (UID: \"8a3edaef-3ff0-45b1-b037-ca545d1bd9af\") " pod="openstack/neutron-db-create-2sp7f" Jan 26 12:59:09 crc kubenswrapper[4881]: I0126 12:59:09.489412 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-sync-5pg9p" Jan 26 12:59:09 crc kubenswrapper[4881]: I0126 12:59:09.540738 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-x5w44" Jan 26 12:59:09 crc kubenswrapper[4881]: I0126 12:59:09.541138 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvrxs\" (UniqueName: \"kubernetes.io/projected/8395a76c-6569-43c6-ba18-438efdb98980-kube-api-access-bvrxs\") pod \"neutron-445e-account-create-update-hgkn9\" (UID: \"8395a76c-6569-43c6-ba18-438efdb98980\") " pod="openstack/neutron-445e-account-create-update-hgkn9" Jan 26 12:59:09 crc kubenswrapper[4881]: I0126 12:59:09.541214 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8395a76c-6569-43c6-ba18-438efdb98980-operator-scripts\") pod \"neutron-445e-account-create-update-hgkn9\" (UID: \"8395a76c-6569-43c6-ba18-438efdb98980\") " pod="openstack/neutron-445e-account-create-update-hgkn9" Jan 26 12:59:09 crc kubenswrapper[4881]: I0126 12:59:09.541950 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8395a76c-6569-43c6-ba18-438efdb98980-operator-scripts\") pod \"neutron-445e-account-create-update-hgkn9\" (UID: \"8395a76c-6569-43c6-ba18-438efdb98980\") " pod="openstack/neutron-445e-account-create-update-hgkn9" Jan 26 12:59:09 crc kubenswrapper[4881]: I0126 12:59:09.558263 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvrxs\" (UniqueName: \"kubernetes.io/projected/8395a76c-6569-43c6-ba18-438efdb98980-kube-api-access-bvrxs\") pod \"neutron-445e-account-create-update-hgkn9\" (UID: \"8395a76c-6569-43c6-ba18-438efdb98980\") " pod="openstack/neutron-445e-account-create-update-hgkn9" Jan 26 12:59:09 crc kubenswrapper[4881]: I0126 12:59:09.623842 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-2sp7f" Jan 26 12:59:09 crc kubenswrapper[4881]: I0126 12:59:09.642435 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sf4z9\" (UniqueName: \"kubernetes.io/projected/817c084d-f62f-49b2-8482-e37c799af743-kube-api-access-sf4z9\") pod \"817c084d-f62f-49b2-8482-e37c799af743\" (UID: \"817c084d-f62f-49b2-8482-e37c799af743\") " Jan 26 12:59:09 crc kubenswrapper[4881]: I0126 12:59:09.642524 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/817c084d-f62f-49b2-8482-e37c799af743-config-data\") pod \"817c084d-f62f-49b2-8482-e37c799af743\" (UID: \"817c084d-f62f-49b2-8482-e37c799af743\") " Jan 26 12:59:09 crc kubenswrapper[4881]: I0126 12:59:09.642690 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/817c084d-f62f-49b2-8482-e37c799af743-combined-ca-bundle\") pod \"817c084d-f62f-49b2-8482-e37c799af743\" (UID: \"817c084d-f62f-49b2-8482-e37c799af743\") " Jan 26 12:59:09 crc kubenswrapper[4881]: I0126 12:59:09.648251 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/817c084d-f62f-49b2-8482-e37c799af743-kube-api-access-sf4z9" (OuterVolumeSpecName: "kube-api-access-sf4z9") pod "817c084d-f62f-49b2-8482-e37c799af743" (UID: "817c084d-f62f-49b2-8482-e37c799af743"). InnerVolumeSpecName "kube-api-access-sf4z9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:59:09 crc kubenswrapper[4881]: I0126 12:59:09.677532 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/817c084d-f62f-49b2-8482-e37c799af743-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "817c084d-f62f-49b2-8482-e37c799af743" (UID: "817c084d-f62f-49b2-8482-e37c799af743"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:59:09 crc kubenswrapper[4881]: I0126 12:59:09.711547 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/817c084d-f62f-49b2-8482-e37c799af743-config-data" (OuterVolumeSpecName: "config-data") pod "817c084d-f62f-49b2-8482-e37c799af743" (UID: "817c084d-f62f-49b2-8482-e37c799af743"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:59:09 crc kubenswrapper[4881]: I0126 12:59:09.734812 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-445e-account-create-update-hgkn9" Jan 26 12:59:09 crc kubenswrapper[4881]: I0126 12:59:09.744356 4881 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/817c084d-f62f-49b2-8482-e37c799af743-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 12:59:09 crc kubenswrapper[4881]: I0126 12:59:09.744642 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sf4z9\" (UniqueName: \"kubernetes.io/projected/817c084d-f62f-49b2-8482-e37c799af743-kube-api-access-sf4z9\") on node \"crc\" DevicePath \"\"" Jan 26 12:59:09 crc kubenswrapper[4881]: I0126 12:59:09.744658 4881 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/817c084d-f62f-49b2-8482-e37c799af743-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:09.889998 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-ddrsw"] Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:09.908782 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-sync-5pg9p"] Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:09.986401 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-ad3b-account-create-update-cf2rd"] Jan 26 12:59:11 crc kubenswrapper[4881]: W0126 12:59:09.996581 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod18188e05_15cd_421d_9ad4_a68243fa2d84.slice/crio-6e38130e2a44b2648aaf0e6d5dfcf57d87abfb3fff53af8aa4752fbd434c4dcd WatchSource:0}: Error finding container 6e38130e2a44b2648aaf0e6d5dfcf57d87abfb3fff53af8aa4752fbd434c4dcd: Status 404 returned error can't find the container with id 6e38130e2a44b2648aaf0e6d5dfcf57d87abfb3fff53af8aa4752fbd434c4dcd Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:10.096710 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-ddrsw" event={"ID":"87493a10-28dd-468d-82df-d225543ffd0e","Type":"ContainerStarted","Data":"2e9d05f5ed6b961c4b9d4321fcd14c4956a04194e9371595e0de619c2ea2bd64"} Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:10.097071 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-ad3b-account-create-update-cf2rd" 
event={"ID":"18188e05-15cd-421d-9ad4-a68243fa2d84","Type":"ContainerStarted","Data":"6e38130e2a44b2648aaf0e6d5dfcf57d87abfb3fff53af8aa4752fbd434c4dcd"} Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:10.098083 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-5pg9p" event={"ID":"adf01549-e1d0-46a7-a141-bdc0f5c81458","Type":"ContainerStarted","Data":"279d38e7433d24020184e4b6f315442fc2afbe8630e715d7e2a20de0ac3836a3"} Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:10.101356 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-x5w44" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:10.101993 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-x5w44" event={"ID":"817c084d-f62f-49b2-8482-e37c799af743","Type":"ContainerDied","Data":"6477f309805a1f1b0b12d5ab575f9f79bd5fdc58bae3d6788dbcb9919efb8611"} Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:10.102050 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6477f309805a1f1b0b12d5ab575f9f79bd5fdc58bae3d6788dbcb9919efb8611" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:10.295186 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5bb457dfc5-vk4hf"] Jan 26 12:59:11 crc kubenswrapper[4881]: E0126 12:59:10.295936 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="817c084d-f62f-49b2-8482-e37c799af743" containerName="keystone-db-sync" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:10.295951 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="817c084d-f62f-49b2-8482-e37c799af743" containerName="keystone-db-sync" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:10.296161 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="817c084d-f62f-49b2-8482-e37c799af743" containerName="keystone-db-sync" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:10.313244 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bb457dfc5-vk4hf" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:10.315900 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bb457dfc5-vk4hf"] Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:10.334850 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-s257b"] Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:10.337786 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-s257b" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:10.340256 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:10.340436 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:10.340698 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-p4c6x" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:10.340866 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:10.341735 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:10.349216 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-s257b"] Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:10.428631 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6d58d86989-8cl5p"] Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:10.429923 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6d58d86989-8cl5p" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:10.431742 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:10.431928 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:10.432689 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-9x52p" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:10.435052 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:10.454064 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6d58d86989-8cl5p"] Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:10.459600 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/450091ac-d618-40d8-9f54-7fb0e02bb9d0-credential-keys\") pod \"keystone-bootstrap-s257b\" (UID: \"450091ac-d618-40d8-9f54-7fb0e02bb9d0\") " pod="openstack/keystone-bootstrap-s257b" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:10.459652 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/450091ac-d618-40d8-9f54-7fb0e02bb9d0-fernet-keys\") pod \"keystone-bootstrap-s257b\" (UID: \"450091ac-d618-40d8-9f54-7fb0e02bb9d0\") " pod="openstack/keystone-bootstrap-s257b" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:10.459702 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/57ecf117-d780-4424-a2d7-8d30c280d0c3-ovsdbserver-sb\") pod \"dnsmasq-dns-5bb457dfc5-vk4hf\" (UID: \"57ecf117-d780-4424-a2d7-8d30c280d0c3\") " pod="openstack/dnsmasq-dns-5bb457dfc5-vk4hf" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:10.459725 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/450091ac-d618-40d8-9f54-7fb0e02bb9d0-config-data\") pod \"keystone-bootstrap-s257b\" (UID: \"450091ac-d618-40d8-9f54-7fb0e02bb9d0\") " pod="openstack/keystone-bootstrap-s257b" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:10.459844 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/57ecf117-d780-4424-a2d7-8d30c280d0c3-ovsdbserver-nb\") pod \"dnsmasq-dns-5bb457dfc5-vk4hf\" (UID: \"57ecf117-d780-4424-a2d7-8d30c280d0c3\") " pod="openstack/dnsmasq-dns-5bb457dfc5-vk4hf" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:10.459951 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmskp\" (UniqueName: \"kubernetes.io/projected/57ecf117-d780-4424-a2d7-8d30c280d0c3-kube-api-access-hmskp\") pod \"dnsmasq-dns-5bb457dfc5-vk4hf\" (UID: \"57ecf117-d780-4424-a2d7-8d30c280d0c3\") " pod="openstack/dnsmasq-dns-5bb457dfc5-vk4hf" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:10.459980 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57ecf117-d780-4424-a2d7-8d30c280d0c3-config\") pod \"dnsmasq-dns-5bb457dfc5-vk4hf\" (UID: \"57ecf117-d780-4424-a2d7-8d30c280d0c3\") " pod="openstack/dnsmasq-dns-5bb457dfc5-vk4hf" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:10.460040 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/450091ac-d618-40d8-9f54-7fb0e02bb9d0-scripts\") pod \"keystone-bootstrap-s257b\" (UID: \"450091ac-d618-40d8-9f54-7fb0e02bb9d0\") " pod="openstack/keystone-bootstrap-s257b" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:10.460063 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/57ecf117-d780-4424-a2d7-8d30c280d0c3-dns-swift-storage-0\") pod \"dnsmasq-dns-5bb457dfc5-vk4hf\" (UID: \"57ecf117-d780-4424-a2d7-8d30c280d0c3\") " pod="openstack/dnsmasq-dns-5bb457dfc5-vk4hf" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:10.460195 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/450091ac-d618-40d8-9f54-7fb0e02bb9d0-combined-ca-bundle\") pod \"keystone-bootstrap-s257b\" (UID: \"450091ac-d618-40d8-9f54-7fb0e02bb9d0\") " pod="openstack/keystone-bootstrap-s257b" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:10.460216 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kd27l\" (UniqueName: \"kubernetes.io/projected/450091ac-d618-40d8-9f54-7fb0e02bb9d0-kube-api-access-kd27l\") pod \"keystone-bootstrap-s257b\" (UID: \"450091ac-d618-40d8-9f54-7fb0e02bb9d0\") " pod="openstack/keystone-bootstrap-s257b" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:10.460276 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/57ecf117-d780-4424-a2d7-8d30c280d0c3-dns-svc\") pod \"dnsmasq-dns-5bb457dfc5-vk4hf\" (UID: \"57ecf117-d780-4424-a2d7-8d30c280d0c3\") " pod="openstack/dnsmasq-dns-5bb457dfc5-vk4hf" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:10.496980 4881 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-lgk7s"] Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:10.497979 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-lgk7s" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:10.503167 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:10.503195 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-2m6g5" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:10.514868 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-lgk7s"] Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:10.516300 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:10.568148 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/132298e2-a2f4-4311-9f7a-3e4e08abe34b-scripts\") pod \"cinder-db-sync-lgk7s\" (UID: \"132298e2-a2f4-4311-9f7a-3e4e08abe34b\") " pod="openstack/cinder-db-sync-lgk7s" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:10.568194 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7b81cab4-a1fe-438e-ae0e-c23a58cc16e7-scripts\") pod \"horizon-6d58d86989-8cl5p\" (UID: \"7b81cab4-a1fe-438e-ae0e-c23a58cc16e7\") " pod="openstack/horizon-6d58d86989-8cl5p" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:10.568249 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/450091ac-d618-40d8-9f54-7fb0e02bb9d0-credential-keys\") pod \"keystone-bootstrap-s257b\" (UID: \"450091ac-d618-40d8-9f54-7fb0e02bb9d0\") " pod="openstack/keystone-bootstrap-s257b" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:10.568275 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/450091ac-d618-40d8-9f54-7fb0e02bb9d0-fernet-keys\") pod \"keystone-bootstrap-s257b\" (UID: \"450091ac-d618-40d8-9f54-7fb0e02bb9d0\") " pod="openstack/keystone-bootstrap-s257b" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:10.568314 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvtrh\" (UniqueName: \"kubernetes.io/projected/132298e2-a2f4-4311-9f7a-3e4e08abe34b-kube-api-access-hvtrh\") pod \"cinder-db-sync-lgk7s\" (UID: \"132298e2-a2f4-4311-9f7a-3e4e08abe34b\") " pod="openstack/cinder-db-sync-lgk7s" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:10.568341 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/57ecf117-d780-4424-a2d7-8d30c280d0c3-ovsdbserver-sb\") pod \"dnsmasq-dns-5bb457dfc5-vk4hf\" (UID: \"57ecf117-d780-4424-a2d7-8d30c280d0c3\") " pod="openstack/dnsmasq-dns-5bb457dfc5-vk4hf" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:10.568361 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/450091ac-d618-40d8-9f54-7fb0e02bb9d0-config-data\") pod \"keystone-bootstrap-s257b\" (UID: \"450091ac-d618-40d8-9f54-7fb0e02bb9d0\") " 
pod="openstack/keystone-bootstrap-s257b" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:10.568388 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/57ecf117-d780-4424-a2d7-8d30c280d0c3-ovsdbserver-nb\") pod \"dnsmasq-dns-5bb457dfc5-vk4hf\" (UID: \"57ecf117-d780-4424-a2d7-8d30c280d0c3\") " pod="openstack/dnsmasq-dns-5bb457dfc5-vk4hf" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:10.568407 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7b81cab4-a1fe-438e-ae0e-c23a58cc16e7-horizon-secret-key\") pod \"horizon-6d58d86989-8cl5p\" (UID: \"7b81cab4-a1fe-438e-ae0e-c23a58cc16e7\") " pod="openstack/horizon-6d58d86989-8cl5p" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:10.568444 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7b81cab4-a1fe-438e-ae0e-c23a58cc16e7-config-data\") pod \"horizon-6d58d86989-8cl5p\" (UID: \"7b81cab4-a1fe-438e-ae0e-c23a58cc16e7\") " pod="openstack/horizon-6d58d86989-8cl5p" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:10.568462 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hmskp\" (UniqueName: \"kubernetes.io/projected/57ecf117-d780-4424-a2d7-8d30c280d0c3-kube-api-access-hmskp\") pod \"dnsmasq-dns-5bb457dfc5-vk4hf\" (UID: \"57ecf117-d780-4424-a2d7-8d30c280d0c3\") " pod="openstack/dnsmasq-dns-5bb457dfc5-vk4hf" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:10.568489 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57ecf117-d780-4424-a2d7-8d30c280d0c3-config\") pod \"dnsmasq-dns-5bb457dfc5-vk4hf\" (UID: \"57ecf117-d780-4424-a2d7-8d30c280d0c3\") " pod="openstack/dnsmasq-dns-5bb457dfc5-vk4hf" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:10.568503 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/132298e2-a2f4-4311-9f7a-3e4e08abe34b-etc-machine-id\") pod \"cinder-db-sync-lgk7s\" (UID: \"132298e2-a2f4-4311-9f7a-3e4e08abe34b\") " pod="openstack/cinder-db-sync-lgk7s" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:10.568541 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7b81cab4-a1fe-438e-ae0e-c23a58cc16e7-logs\") pod \"horizon-6d58d86989-8cl5p\" (UID: \"7b81cab4-a1fe-438e-ae0e-c23a58cc16e7\") " pod="openstack/horizon-6d58d86989-8cl5p" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:10.568564 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/450091ac-d618-40d8-9f54-7fb0e02bb9d0-scripts\") pod \"keystone-bootstrap-s257b\" (UID: \"450091ac-d618-40d8-9f54-7fb0e02bb9d0\") " pod="openstack/keystone-bootstrap-s257b" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:10.568581 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/57ecf117-d780-4424-a2d7-8d30c280d0c3-dns-swift-storage-0\") pod \"dnsmasq-dns-5bb457dfc5-vk4hf\" (UID: \"57ecf117-d780-4424-a2d7-8d30c280d0c3\") " pod="openstack/dnsmasq-dns-5bb457dfc5-vk4hf" Jan 26 
12:59:11 crc kubenswrapper[4881]: I0126 12:59:10.568601 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/132298e2-a2f4-4311-9f7a-3e4e08abe34b-combined-ca-bundle\") pod \"cinder-db-sync-lgk7s\" (UID: \"132298e2-a2f4-4311-9f7a-3e4e08abe34b\") " pod="openstack/cinder-db-sync-lgk7s" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:10.568649 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/450091ac-d618-40d8-9f54-7fb0e02bb9d0-combined-ca-bundle\") pod \"keystone-bootstrap-s257b\" (UID: \"450091ac-d618-40d8-9f54-7fb0e02bb9d0\") " pod="openstack/keystone-bootstrap-s257b" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:10.568667 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kd27l\" (UniqueName: \"kubernetes.io/projected/450091ac-d618-40d8-9f54-7fb0e02bb9d0-kube-api-access-kd27l\") pod \"keystone-bootstrap-s257b\" (UID: \"450091ac-d618-40d8-9f54-7fb0e02bb9d0\") " pod="openstack/keystone-bootstrap-s257b" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:10.568682 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/132298e2-a2f4-4311-9f7a-3e4e08abe34b-config-data\") pod \"cinder-db-sync-lgk7s\" (UID: \"132298e2-a2f4-4311-9f7a-3e4e08abe34b\") " pod="openstack/cinder-db-sync-lgk7s" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:10.568708 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/57ecf117-d780-4424-a2d7-8d30c280d0c3-dns-svc\") pod \"dnsmasq-dns-5bb457dfc5-vk4hf\" (UID: \"57ecf117-d780-4424-a2d7-8d30c280d0c3\") " pod="openstack/dnsmasq-dns-5bb457dfc5-vk4hf" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:10.568726 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzzrh\" (UniqueName: \"kubernetes.io/projected/7b81cab4-a1fe-438e-ae0e-c23a58cc16e7-kube-api-access-bzzrh\") pod \"horizon-6d58d86989-8cl5p\" (UID: \"7b81cab4-a1fe-438e-ae0e-c23a58cc16e7\") " pod="openstack/horizon-6d58d86989-8cl5p" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:10.568748 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/132298e2-a2f4-4311-9f7a-3e4e08abe34b-db-sync-config-data\") pod \"cinder-db-sync-lgk7s\" (UID: \"132298e2-a2f4-4311-9f7a-3e4e08abe34b\") " pod="openstack/cinder-db-sync-lgk7s" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:10.572301 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/57ecf117-d780-4424-a2d7-8d30c280d0c3-ovsdbserver-nb\") pod \"dnsmasq-dns-5bb457dfc5-vk4hf\" (UID: \"57ecf117-d780-4424-a2d7-8d30c280d0c3\") " pod="openstack/dnsmasq-dns-5bb457dfc5-vk4hf" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:10.572832 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/57ecf117-d780-4424-a2d7-8d30c280d0c3-ovsdbserver-sb\") pod \"dnsmasq-dns-5bb457dfc5-vk4hf\" (UID: \"57ecf117-d780-4424-a2d7-8d30c280d0c3\") " pod="openstack/dnsmasq-dns-5bb457dfc5-vk4hf" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 
12:59:10.575547 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/57ecf117-d780-4424-a2d7-8d30c280d0c3-dns-swift-storage-0\") pod \"dnsmasq-dns-5bb457dfc5-vk4hf\" (UID: \"57ecf117-d780-4424-a2d7-8d30c280d0c3\") " pod="openstack/dnsmasq-dns-5bb457dfc5-vk4hf" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:10.579185 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/450091ac-d618-40d8-9f54-7fb0e02bb9d0-fernet-keys\") pod \"keystone-bootstrap-s257b\" (UID: \"450091ac-d618-40d8-9f54-7fb0e02bb9d0\") " pod="openstack/keystone-bootstrap-s257b" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:10.580045 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57ecf117-d780-4424-a2d7-8d30c280d0c3-config\") pod \"dnsmasq-dns-5bb457dfc5-vk4hf\" (UID: \"57ecf117-d780-4424-a2d7-8d30c280d0c3\") " pod="openstack/dnsmasq-dns-5bb457dfc5-vk4hf" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:10.580337 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/450091ac-d618-40d8-9f54-7fb0e02bb9d0-credential-keys\") pod \"keystone-bootstrap-s257b\" (UID: \"450091ac-d618-40d8-9f54-7fb0e02bb9d0\") " pod="openstack/keystone-bootstrap-s257b" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:10.580922 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/57ecf117-d780-4424-a2d7-8d30c280d0c3-dns-svc\") pod \"dnsmasq-dns-5bb457dfc5-vk4hf\" (UID: \"57ecf117-d780-4424-a2d7-8d30c280d0c3\") " pod="openstack/dnsmasq-dns-5bb457dfc5-vk4hf" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:10.582390 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/450091ac-d618-40d8-9f54-7fb0e02bb9d0-scripts\") pod \"keystone-bootstrap-s257b\" (UID: \"450091ac-d618-40d8-9f54-7fb0e02bb9d0\") " pod="openstack/keystone-bootstrap-s257b" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:10.583840 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/450091ac-d618-40d8-9f54-7fb0e02bb9d0-config-data\") pod \"keystone-bootstrap-s257b\" (UID: \"450091ac-d618-40d8-9f54-7fb0e02bb9d0\") " pod="openstack/keystone-bootstrap-s257b" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:10.586912 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/450091ac-d618-40d8-9f54-7fb0e02bb9d0-combined-ca-bundle\") pod \"keystone-bootstrap-s257b\" (UID: \"450091ac-d618-40d8-9f54-7fb0e02bb9d0\") " pod="openstack/keystone-bootstrap-s257b" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:10.597580 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-69c8959d97-5f2wg"] Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:10.598875 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-69c8959d97-5f2wg" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:10.601711 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-69c8959d97-5f2wg"] Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:10.607475 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kd27l\" (UniqueName: \"kubernetes.io/projected/450091ac-d618-40d8-9f54-7fb0e02bb9d0-kube-api-access-kd27l\") pod \"keystone-bootstrap-s257b\" (UID: \"450091ac-d618-40d8-9f54-7fb0e02bb9d0\") " pod="openstack/keystone-bootstrap-s257b" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:10.641509 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-t85k4"] Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:10.642657 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-t85k4" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:10.645447 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-vsl7c" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:10.645739 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:10.651911 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-s257b" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:10.673930 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/132298e2-a2f4-4311-9f7a-3e4e08abe34b-config-data\") pod \"cinder-db-sync-lgk7s\" (UID: \"132298e2-a2f4-4311-9f7a-3e4e08abe34b\") " pod="openstack/cinder-db-sync-lgk7s" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:10.673983 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bzzrh\" (UniqueName: \"kubernetes.io/projected/7b81cab4-a1fe-438e-ae0e-c23a58cc16e7-kube-api-access-bzzrh\") pod \"horizon-6d58d86989-8cl5p\" (UID: \"7b81cab4-a1fe-438e-ae0e-c23a58cc16e7\") " pod="openstack/horizon-6d58d86989-8cl5p" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:10.674015 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/132298e2-a2f4-4311-9f7a-3e4e08abe34b-db-sync-config-data\") pod \"cinder-db-sync-lgk7s\" (UID: \"132298e2-a2f4-4311-9f7a-3e4e08abe34b\") " pod="openstack/cinder-db-sync-lgk7s" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:10.674036 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/132298e2-a2f4-4311-9f7a-3e4e08abe34b-scripts\") pod \"cinder-db-sync-lgk7s\" (UID: \"132298e2-a2f4-4311-9f7a-3e4e08abe34b\") " pod="openstack/cinder-db-sync-lgk7s" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:10.674059 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7b81cab4-a1fe-438e-ae0e-c23a58cc16e7-scripts\") pod \"horizon-6d58d86989-8cl5p\" (UID: \"7b81cab4-a1fe-438e-ae0e-c23a58cc16e7\") " pod="openstack/horizon-6d58d86989-8cl5p" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:10.674093 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/5d2634ed-f529-43dc-8a08-54f97ace0d73-logs\") pod \"horizon-69c8959d97-5f2wg\" (UID: \"5d2634ed-f529-43dc-8a08-54f97ace0d73\") " pod="openstack/horizon-69c8959d97-5f2wg" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:10.674119 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9s6r\" (UniqueName: \"kubernetes.io/projected/5d2634ed-f529-43dc-8a08-54f97ace0d73-kube-api-access-s9s6r\") pod \"horizon-69c8959d97-5f2wg\" (UID: \"5d2634ed-f529-43dc-8a08-54f97ace0d73\") " pod="openstack/horizon-69c8959d97-5f2wg" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:10.674133 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5d2634ed-f529-43dc-8a08-54f97ace0d73-scripts\") pod \"horizon-69c8959d97-5f2wg\" (UID: \"5d2634ed-f529-43dc-8a08-54f97ace0d73\") " pod="openstack/horizon-69c8959d97-5f2wg" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:10.674155 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hvtrh\" (UniqueName: \"kubernetes.io/projected/132298e2-a2f4-4311-9f7a-3e4e08abe34b-kube-api-access-hvtrh\") pod \"cinder-db-sync-lgk7s\" (UID: \"132298e2-a2f4-4311-9f7a-3e4e08abe34b\") " pod="openstack/cinder-db-sync-lgk7s" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:10.674185 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7b81cab4-a1fe-438e-ae0e-c23a58cc16e7-horizon-secret-key\") pod \"horizon-6d58d86989-8cl5p\" (UID: \"7b81cab4-a1fe-438e-ae0e-c23a58cc16e7\") " pod="openstack/horizon-6d58d86989-8cl5p" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:10.674212 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7b81cab4-a1fe-438e-ae0e-c23a58cc16e7-config-data\") pod \"horizon-6d58d86989-8cl5p\" (UID: \"7b81cab4-a1fe-438e-ae0e-c23a58cc16e7\") " pod="openstack/horizon-6d58d86989-8cl5p" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:10.674228 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/5d2634ed-f529-43dc-8a08-54f97ace0d73-horizon-secret-key\") pod \"horizon-69c8959d97-5f2wg\" (UID: \"5d2634ed-f529-43dc-8a08-54f97ace0d73\") " pod="openstack/horizon-69c8959d97-5f2wg" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:10.674253 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/132298e2-a2f4-4311-9f7a-3e4e08abe34b-etc-machine-id\") pod \"cinder-db-sync-lgk7s\" (UID: \"132298e2-a2f4-4311-9f7a-3e4e08abe34b\") " pod="openstack/cinder-db-sync-lgk7s" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:10.674275 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5d2634ed-f529-43dc-8a08-54f97ace0d73-config-data\") pod \"horizon-69c8959d97-5f2wg\" (UID: \"5d2634ed-f529-43dc-8a08-54f97ace0d73\") " pod="openstack/horizon-69c8959d97-5f2wg" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:10.674291 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7b81cab4-a1fe-438e-ae0e-c23a58cc16e7-logs\") 
pod \"horizon-6d58d86989-8cl5p\" (UID: \"7b81cab4-a1fe-438e-ae0e-c23a58cc16e7\") " pod="openstack/horizon-6d58d86989-8cl5p" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:10.674322 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/132298e2-a2f4-4311-9f7a-3e4e08abe34b-combined-ca-bundle\") pod \"cinder-db-sync-lgk7s\" (UID: \"132298e2-a2f4-4311-9f7a-3e4e08abe34b\") " pod="openstack/cinder-db-sync-lgk7s" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:10.678130 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/132298e2-a2f4-4311-9f7a-3e4e08abe34b-combined-ca-bundle\") pod \"cinder-db-sync-lgk7s\" (UID: \"132298e2-a2f4-4311-9f7a-3e4e08abe34b\") " pod="openstack/cinder-db-sync-lgk7s" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:10.690555 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/132298e2-a2f4-4311-9f7a-3e4e08abe34b-config-data\") pod \"cinder-db-sync-lgk7s\" (UID: \"132298e2-a2f4-4311-9f7a-3e4e08abe34b\") " pod="openstack/cinder-db-sync-lgk7s" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:10.690937 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7b81cab4-a1fe-438e-ae0e-c23a58cc16e7-horizon-secret-key\") pod \"horizon-6d58d86989-8cl5p\" (UID: \"7b81cab4-a1fe-438e-ae0e-c23a58cc16e7\") " pod="openstack/horizon-6d58d86989-8cl5p" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:10.691620 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7b81cab4-a1fe-438e-ae0e-c23a58cc16e7-config-data\") pod \"horizon-6d58d86989-8cl5p\" (UID: \"7b81cab4-a1fe-438e-ae0e-c23a58cc16e7\") " pod="openstack/horizon-6d58d86989-8cl5p" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:10.691716 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/132298e2-a2f4-4311-9f7a-3e4e08abe34b-etc-machine-id\") pod \"cinder-db-sync-lgk7s\" (UID: \"132298e2-a2f4-4311-9f7a-3e4e08abe34b\") " pod="openstack/cinder-db-sync-lgk7s" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:10.691941 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7b81cab4-a1fe-438e-ae0e-c23a58cc16e7-logs\") pod \"horizon-6d58d86989-8cl5p\" (UID: \"7b81cab4-a1fe-438e-ae0e-c23a58cc16e7\") " pod="openstack/horizon-6d58d86989-8cl5p" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:10.704157 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/132298e2-a2f4-4311-9f7a-3e4e08abe34b-scripts\") pod \"cinder-db-sync-lgk7s\" (UID: \"132298e2-a2f4-4311-9f7a-3e4e08abe34b\") " pod="openstack/cinder-db-sync-lgk7s" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:10.714119 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bzzrh\" (UniqueName: \"kubernetes.io/projected/7b81cab4-a1fe-438e-ae0e-c23a58cc16e7-kube-api-access-bzzrh\") pod \"horizon-6d58d86989-8cl5p\" (UID: \"7b81cab4-a1fe-438e-ae0e-c23a58cc16e7\") " pod="openstack/horizon-6d58d86989-8cl5p" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:10.714308 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/132298e2-a2f4-4311-9f7a-3e4e08abe34b-db-sync-config-data\") pod \"cinder-db-sync-lgk7s\" (UID: \"132298e2-a2f4-4311-9f7a-3e4e08abe34b\") " pod="openstack/cinder-db-sync-lgk7s" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:10.721443 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hvtrh\" (UniqueName: \"kubernetes.io/projected/132298e2-a2f4-4311-9f7a-3e4e08abe34b-kube-api-access-hvtrh\") pod \"cinder-db-sync-lgk7s\" (UID: \"132298e2-a2f4-4311-9f7a-3e4e08abe34b\") " pod="openstack/cinder-db-sync-lgk7s" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:10.746618 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-t85k4"] Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:10.766963 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7b81cab4-a1fe-438e-ae0e-c23a58cc16e7-scripts\") pod \"horizon-6d58d86989-8cl5p\" (UID: \"7b81cab4-a1fe-438e-ae0e-c23a58cc16e7\") " pod="openstack/horizon-6d58d86989-8cl5p" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:10.784448 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkhwh\" (UniqueName: \"kubernetes.io/projected/d0fc8471-ad65-44cf-bf03-1c037aafdf11-kube-api-access-tkhwh\") pod \"barbican-db-sync-t85k4\" (UID: \"d0fc8471-ad65-44cf-bf03-1c037aafdf11\") " pod="openstack/barbican-db-sync-t85k4" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:10.784570 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5d2634ed-f529-43dc-8a08-54f97ace0d73-logs\") pod \"horizon-69c8959d97-5f2wg\" (UID: \"5d2634ed-f529-43dc-8a08-54f97ace0d73\") " pod="openstack/horizon-69c8959d97-5f2wg" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:10.784599 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s9s6r\" (UniqueName: \"kubernetes.io/projected/5d2634ed-f529-43dc-8a08-54f97ace0d73-kube-api-access-s9s6r\") pod \"horizon-69c8959d97-5f2wg\" (UID: \"5d2634ed-f529-43dc-8a08-54f97ace0d73\") " pod="openstack/horizon-69c8959d97-5f2wg" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:10.784622 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5d2634ed-f529-43dc-8a08-54f97ace0d73-scripts\") pod \"horizon-69c8959d97-5f2wg\" (UID: \"5d2634ed-f529-43dc-8a08-54f97ace0d73\") " pod="openstack/horizon-69c8959d97-5f2wg" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:10.784651 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d0fc8471-ad65-44cf-bf03-1c037aafdf11-db-sync-config-data\") pod \"barbican-db-sync-t85k4\" (UID: \"d0fc8471-ad65-44cf-bf03-1c037aafdf11\") " pod="openstack/barbican-db-sync-t85k4" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:10.784681 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/5d2634ed-f529-43dc-8a08-54f97ace0d73-horizon-secret-key\") pod \"horizon-69c8959d97-5f2wg\" (UID: \"5d2634ed-f529-43dc-8a08-54f97ace0d73\") " pod="openstack/horizon-69c8959d97-5f2wg" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:10.784718 4881 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0fc8471-ad65-44cf-bf03-1c037aafdf11-combined-ca-bundle\") pod \"barbican-db-sync-t85k4\" (UID: \"d0fc8471-ad65-44cf-bf03-1c037aafdf11\") " pod="openstack/barbican-db-sync-t85k4" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:10.784741 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5d2634ed-f529-43dc-8a08-54f97ace0d73-config-data\") pod \"horizon-69c8959d97-5f2wg\" (UID: \"5d2634ed-f529-43dc-8a08-54f97ace0d73\") " pod="openstack/horizon-69c8959d97-5f2wg" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:10.786355 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5d2634ed-f529-43dc-8a08-54f97ace0d73-scripts\") pod \"horizon-69c8959d97-5f2wg\" (UID: \"5d2634ed-f529-43dc-8a08-54f97ace0d73\") " pod="openstack/horizon-69c8959d97-5f2wg" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:10.786609 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5d2634ed-f529-43dc-8a08-54f97ace0d73-logs\") pod \"horizon-69c8959d97-5f2wg\" (UID: \"5d2634ed-f529-43dc-8a08-54f97ace0d73\") " pod="openstack/horizon-69c8959d97-5f2wg" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:10.794620 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5d2634ed-f529-43dc-8a08-54f97ace0d73-config-data\") pod \"horizon-69c8959d97-5f2wg\" (UID: \"5d2634ed-f529-43dc-8a08-54f97ace0d73\") " pod="openstack/horizon-69c8959d97-5f2wg" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:10.805903 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/5d2634ed-f529-43dc-8a08-54f97ace0d73-horizon-secret-key\") pod \"horizon-69c8959d97-5f2wg\" (UID: \"5d2634ed-f529-43dc-8a08-54f97ace0d73\") " pod="openstack/horizon-69c8959d97-5f2wg" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:10.838642 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-lgk7s" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:10.859705 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s9s6r\" (UniqueName: \"kubernetes.io/projected/5d2634ed-f529-43dc-8a08-54f97ace0d73-kube-api-access-s9s6r\") pod \"horizon-69c8959d97-5f2wg\" (UID: \"5d2634ed-f529-43dc-8a08-54f97ace0d73\") " pod="openstack/horizon-69c8959d97-5f2wg" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:10.860146 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bb457dfc5-vk4hf"] Jan 26 12:59:11 crc kubenswrapper[4881]: E0126 12:59:10.860795 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-hmskp], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/dnsmasq-dns-5bb457dfc5-vk4hf" podUID="57ecf117-d780-4424-a2d7-8d30c280d0c3" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:10.886025 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tkhwh\" (UniqueName: \"kubernetes.io/projected/d0fc8471-ad65-44cf-bf03-1c037aafdf11-kube-api-access-tkhwh\") pod \"barbican-db-sync-t85k4\" (UID: \"d0fc8471-ad65-44cf-bf03-1c037aafdf11\") " pod="openstack/barbican-db-sync-t85k4" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:10.886106 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d0fc8471-ad65-44cf-bf03-1c037aafdf11-db-sync-config-data\") pod \"barbican-db-sync-t85k4\" (UID: \"d0fc8471-ad65-44cf-bf03-1c037aafdf11\") " pod="openstack/barbican-db-sync-t85k4" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:10.886149 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0fc8471-ad65-44cf-bf03-1c037aafdf11-combined-ca-bundle\") pod \"barbican-db-sync-t85k4\" (UID: \"d0fc8471-ad65-44cf-bf03-1c037aafdf11\") " pod="openstack/barbican-db-sync-t85k4" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:10.886805 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-tqpzh"] Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:10.898260 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0fc8471-ad65-44cf-bf03-1c037aafdf11-combined-ca-bundle\") pod \"barbican-db-sync-t85k4\" (UID: \"d0fc8471-ad65-44cf-bf03-1c037aafdf11\") " pod="openstack/barbican-db-sync-t85k4" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:10.900053 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-tqpzh" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:10.915981 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7fbb4d475f-pr4r7"] Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:10.917447 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fbb4d475f-pr4r7" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:10.923467 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-tqpzh"] Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:10.930978 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d0fc8471-ad65-44cf-bf03-1c037aafdf11-db-sync-config-data\") pod \"barbican-db-sync-t85k4\" (UID: \"d0fc8471-ad65-44cf-bf03-1c037aafdf11\") " pod="openstack/barbican-db-sync-t85k4" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:10.931668 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:10.931827 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-nlt5l" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:10.931982 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:10.942184 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tkhwh\" (UniqueName: \"kubernetes.io/projected/d0fc8471-ad65-44cf-bf03-1c037aafdf11-kube-api-access-tkhwh\") pod \"barbican-db-sync-t85k4\" (UID: \"d0fc8471-ad65-44cf-bf03-1c037aafdf11\") " pod="openstack/barbican-db-sync-t85k4" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:10.975924 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-69c8959d97-5f2wg" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:10.976136 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fbb4d475f-pr4r7"] Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:11.019268 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hmskp\" (UniqueName: \"kubernetes.io/projected/57ecf117-d780-4424-a2d7-8d30c280d0c3-kube-api-access-hmskp\") pod \"dnsmasq-dns-5bb457dfc5-vk4hf\" (UID: \"57ecf117-d780-4424-a2d7-8d30c280d0c3\") " pod="openstack/dnsmasq-dns-5bb457dfc5-vk4hf" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:11.048066 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6d58d86989-8cl5p" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:11.087938 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-t85k4" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:11.098979 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/741cf6ae-617a-440b-b6ec-63dc4e87ff4a-scripts\") pod \"placement-db-sync-tqpzh\" (UID: \"741cf6ae-617a-440b-b6ec-63dc4e87ff4a\") " pod="openstack/placement-db-sync-tqpzh" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:11.099020 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/741cf6ae-617a-440b-b6ec-63dc4e87ff4a-logs\") pod \"placement-db-sync-tqpzh\" (UID: \"741cf6ae-617a-440b-b6ec-63dc4e87ff4a\") " pod="openstack/placement-db-sync-tqpzh" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:11.099039 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9z7hk\" (UniqueName: \"kubernetes.io/projected/741cf6ae-617a-440b-b6ec-63dc4e87ff4a-kube-api-access-9z7hk\") pod \"placement-db-sync-tqpzh\" (UID: \"741cf6ae-617a-440b-b6ec-63dc4e87ff4a\") " pod="openstack/placement-db-sync-tqpzh" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:11.099092 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/75b1a7cd-6382-458b-8769-7f212bd59bf9-dns-svc\") pod \"dnsmasq-dns-7fbb4d475f-pr4r7\" (UID: \"75b1a7cd-6382-458b-8769-7f212bd59bf9\") " pod="openstack/dnsmasq-dns-7fbb4d475f-pr4r7" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:11.099106 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/75b1a7cd-6382-458b-8769-7f212bd59bf9-ovsdbserver-sb\") pod \"dnsmasq-dns-7fbb4d475f-pr4r7\" (UID: \"75b1a7cd-6382-458b-8769-7f212bd59bf9\") " pod="openstack/dnsmasq-dns-7fbb4d475f-pr4r7" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:11.099140 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/75b1a7cd-6382-458b-8769-7f212bd59bf9-dns-swift-storage-0\") pod \"dnsmasq-dns-7fbb4d475f-pr4r7\" (UID: \"75b1a7cd-6382-458b-8769-7f212bd59bf9\") " pod="openstack/dnsmasq-dns-7fbb4d475f-pr4r7" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:11.099169 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dpnm\" (UniqueName: \"kubernetes.io/projected/75b1a7cd-6382-458b-8769-7f212bd59bf9-kube-api-access-9dpnm\") pod \"dnsmasq-dns-7fbb4d475f-pr4r7\" (UID: \"75b1a7cd-6382-458b-8769-7f212bd59bf9\") " pod="openstack/dnsmasq-dns-7fbb4d475f-pr4r7" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:11.099193 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/741cf6ae-617a-440b-b6ec-63dc4e87ff4a-config-data\") pod \"placement-db-sync-tqpzh\" (UID: \"741cf6ae-617a-440b-b6ec-63dc4e87ff4a\") " pod="openstack/placement-db-sync-tqpzh" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:11.099209 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75b1a7cd-6382-458b-8769-7f212bd59bf9-config\") pod 
\"dnsmasq-dns-7fbb4d475f-pr4r7\" (UID: \"75b1a7cd-6382-458b-8769-7f212bd59bf9\") " pod="openstack/dnsmasq-dns-7fbb4d475f-pr4r7" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:11.099238 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/741cf6ae-617a-440b-b6ec-63dc4e87ff4a-combined-ca-bundle\") pod \"placement-db-sync-tqpzh\" (UID: \"741cf6ae-617a-440b-b6ec-63dc4e87ff4a\") " pod="openstack/placement-db-sync-tqpzh" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:11.099262 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/75b1a7cd-6382-458b-8769-7f212bd59bf9-ovsdbserver-nb\") pod \"dnsmasq-dns-7fbb4d475f-pr4r7\" (UID: \"75b1a7cd-6382-458b-8769-7f212bd59bf9\") " pod="openstack/dnsmasq-dns-7fbb4d475f-pr4r7" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:11.144164 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-ddrsw" event={"ID":"87493a10-28dd-468d-82df-d225543ffd0e","Type":"ContainerStarted","Data":"bebedd510aaa4fdc24d4500b04b59fb6022593d94c5a9ee0e91f7ad4de2bb9d1"} Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:11.153876 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-ad3b-account-create-update-cf2rd" event={"ID":"18188e05-15cd-421d-9ad4-a68243fa2d84","Type":"ContainerStarted","Data":"4c3c13247a0ecc4fdb4f19bec6e9968d0a020c4dad3ad125a98a30a0669473f2"} Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:11.153968 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bb457dfc5-vk4hf" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:11.183735 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-ddrsw" podStartSLOduration=3.183713632 podStartE2EDuration="3.183713632s" podCreationTimestamp="2026-01-26 12:59:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:59:11.168915052 +0000 UTC m=+1423.648225078" watchObservedRunningTime="2026-01-26 12:59:11.183713632 +0000 UTC m=+1423.663023658" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:11.187026 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5bb457dfc5-vk4hf" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:11.202240 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/741cf6ae-617a-440b-b6ec-63dc4e87ff4a-scripts\") pod \"placement-db-sync-tqpzh\" (UID: \"741cf6ae-617a-440b-b6ec-63dc4e87ff4a\") " pod="openstack/placement-db-sync-tqpzh" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:11.202312 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/741cf6ae-617a-440b-b6ec-63dc4e87ff4a-logs\") pod \"placement-db-sync-tqpzh\" (UID: \"741cf6ae-617a-440b-b6ec-63dc4e87ff4a\") " pod="openstack/placement-db-sync-tqpzh" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:11.202337 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9z7hk\" (UniqueName: \"kubernetes.io/projected/741cf6ae-617a-440b-b6ec-63dc4e87ff4a-kube-api-access-9z7hk\") pod \"placement-db-sync-tqpzh\" (UID: \"741cf6ae-617a-440b-b6ec-63dc4e87ff4a\") " pod="openstack/placement-db-sync-tqpzh" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:11.202403 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/75b1a7cd-6382-458b-8769-7f212bd59bf9-dns-svc\") pod \"dnsmasq-dns-7fbb4d475f-pr4r7\" (UID: \"75b1a7cd-6382-458b-8769-7f212bd59bf9\") " pod="openstack/dnsmasq-dns-7fbb4d475f-pr4r7" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:11.202418 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/75b1a7cd-6382-458b-8769-7f212bd59bf9-ovsdbserver-sb\") pod \"dnsmasq-dns-7fbb4d475f-pr4r7\" (UID: \"75b1a7cd-6382-458b-8769-7f212bd59bf9\") " pod="openstack/dnsmasq-dns-7fbb4d475f-pr4r7" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:11.202467 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/75b1a7cd-6382-458b-8769-7f212bd59bf9-dns-swift-storage-0\") pod \"dnsmasq-dns-7fbb4d475f-pr4r7\" (UID: \"75b1a7cd-6382-458b-8769-7f212bd59bf9\") " pod="openstack/dnsmasq-dns-7fbb4d475f-pr4r7" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:11.202505 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9dpnm\" (UniqueName: \"kubernetes.io/projected/75b1a7cd-6382-458b-8769-7f212bd59bf9-kube-api-access-9dpnm\") pod \"dnsmasq-dns-7fbb4d475f-pr4r7\" (UID: \"75b1a7cd-6382-458b-8769-7f212bd59bf9\") " pod="openstack/dnsmasq-dns-7fbb4d475f-pr4r7" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:11.202558 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/741cf6ae-617a-440b-b6ec-63dc4e87ff4a-config-data\") pod \"placement-db-sync-tqpzh\" (UID: \"741cf6ae-617a-440b-b6ec-63dc4e87ff4a\") " pod="openstack/placement-db-sync-tqpzh" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:11.202573 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75b1a7cd-6382-458b-8769-7f212bd59bf9-config\") pod \"dnsmasq-dns-7fbb4d475f-pr4r7\" (UID: \"75b1a7cd-6382-458b-8769-7f212bd59bf9\") " pod="openstack/dnsmasq-dns-7fbb4d475f-pr4r7" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 
12:59:11.202611 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/741cf6ae-617a-440b-b6ec-63dc4e87ff4a-combined-ca-bundle\") pod \"placement-db-sync-tqpzh\" (UID: \"741cf6ae-617a-440b-b6ec-63dc4e87ff4a\") " pod="openstack/placement-db-sync-tqpzh" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:11.202642 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/75b1a7cd-6382-458b-8769-7f212bd59bf9-ovsdbserver-nb\") pod \"dnsmasq-dns-7fbb4d475f-pr4r7\" (UID: \"75b1a7cd-6382-458b-8769-7f212bd59bf9\") " pod="openstack/dnsmasq-dns-7fbb4d475f-pr4r7" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:11.205396 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/75b1a7cd-6382-458b-8769-7f212bd59bf9-ovsdbserver-nb\") pod \"dnsmasq-dns-7fbb4d475f-pr4r7\" (UID: \"75b1a7cd-6382-458b-8769-7f212bd59bf9\") " pod="openstack/dnsmasq-dns-7fbb4d475f-pr4r7" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:11.206234 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/741cf6ae-617a-440b-b6ec-63dc4e87ff4a-logs\") pod \"placement-db-sync-tqpzh\" (UID: \"741cf6ae-617a-440b-b6ec-63dc4e87ff4a\") " pod="openstack/placement-db-sync-tqpzh" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:11.208804 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75b1a7cd-6382-458b-8769-7f212bd59bf9-config\") pod \"dnsmasq-dns-7fbb4d475f-pr4r7\" (UID: \"75b1a7cd-6382-458b-8769-7f212bd59bf9\") " pod="openstack/dnsmasq-dns-7fbb4d475f-pr4r7" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:11.209028 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/75b1a7cd-6382-458b-8769-7f212bd59bf9-dns-svc\") pod \"dnsmasq-dns-7fbb4d475f-pr4r7\" (UID: \"75b1a7cd-6382-458b-8769-7f212bd59bf9\") " pod="openstack/dnsmasq-dns-7fbb4d475f-pr4r7" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:11.209545 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/75b1a7cd-6382-458b-8769-7f212bd59bf9-ovsdbserver-sb\") pod \"dnsmasq-dns-7fbb4d475f-pr4r7\" (UID: \"75b1a7cd-6382-458b-8769-7f212bd59bf9\") " pod="openstack/dnsmasq-dns-7fbb4d475f-pr4r7" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:11.210035 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/75b1a7cd-6382-458b-8769-7f212bd59bf9-dns-swift-storage-0\") pod \"dnsmasq-dns-7fbb4d475f-pr4r7\" (UID: \"75b1a7cd-6382-458b-8769-7f212bd59bf9\") " pod="openstack/dnsmasq-dns-7fbb4d475f-pr4r7" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:11.212142 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-ad3b-account-create-update-cf2rd" podStartSLOduration=2.212127216 podStartE2EDuration="2.212127216s" podCreationTimestamp="2026-01-26 12:59:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:59:11.209890701 +0000 UTC m=+1423.689200727" watchObservedRunningTime="2026-01-26 12:59:11.212127216 +0000 UTC m=+1423.691437242" Jan 26 
12:59:11 crc kubenswrapper[4881]: I0126 12:59:11.215040 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/741cf6ae-617a-440b-b6ec-63dc4e87ff4a-scripts\") pod \"placement-db-sync-tqpzh\" (UID: \"741cf6ae-617a-440b-b6ec-63dc4e87ff4a\") " pod="openstack/placement-db-sync-tqpzh" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:11.217898 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/741cf6ae-617a-440b-b6ec-63dc4e87ff4a-config-data\") pod \"placement-db-sync-tqpzh\" (UID: \"741cf6ae-617a-440b-b6ec-63dc4e87ff4a\") " pod="openstack/placement-db-sync-tqpzh" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:11.218077 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/741cf6ae-617a-440b-b6ec-63dc4e87ff4a-combined-ca-bundle\") pod \"placement-db-sync-tqpzh\" (UID: \"741cf6ae-617a-440b-b6ec-63dc4e87ff4a\") " pod="openstack/placement-db-sync-tqpzh" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:11.226588 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9z7hk\" (UniqueName: \"kubernetes.io/projected/741cf6ae-617a-440b-b6ec-63dc4e87ff4a-kube-api-access-9z7hk\") pod \"placement-db-sync-tqpzh\" (UID: \"741cf6ae-617a-440b-b6ec-63dc4e87ff4a\") " pod="openstack/placement-db-sync-tqpzh" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:11.246790 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9dpnm\" (UniqueName: \"kubernetes.io/projected/75b1a7cd-6382-458b-8769-7f212bd59bf9-kube-api-access-9dpnm\") pod \"dnsmasq-dns-7fbb4d475f-pr4r7\" (UID: \"75b1a7cd-6382-458b-8769-7f212bd59bf9\") " pod="openstack/dnsmasq-dns-7fbb4d475f-pr4r7" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:11.264408 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-tqpzh" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:11.281959 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fbb4d475f-pr4r7" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:11.304169 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/57ecf117-d780-4424-a2d7-8d30c280d0c3-dns-swift-storage-0\") pod \"57ecf117-d780-4424-a2d7-8d30c280d0c3\" (UID: \"57ecf117-d780-4424-a2d7-8d30c280d0c3\") " Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:11.304279 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hmskp\" (UniqueName: \"kubernetes.io/projected/57ecf117-d780-4424-a2d7-8d30c280d0c3-kube-api-access-hmskp\") pod \"57ecf117-d780-4424-a2d7-8d30c280d0c3\" (UID: \"57ecf117-d780-4424-a2d7-8d30c280d0c3\") " Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:11.304351 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/57ecf117-d780-4424-a2d7-8d30c280d0c3-ovsdbserver-nb\") pod \"57ecf117-d780-4424-a2d7-8d30c280d0c3\" (UID: \"57ecf117-d780-4424-a2d7-8d30c280d0c3\") " Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:11.304410 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57ecf117-d780-4424-a2d7-8d30c280d0c3-config\") pod \"57ecf117-d780-4424-a2d7-8d30c280d0c3\" (UID: \"57ecf117-d780-4424-a2d7-8d30c280d0c3\") " Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:11.304453 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/57ecf117-d780-4424-a2d7-8d30c280d0c3-ovsdbserver-sb\") pod \"57ecf117-d780-4424-a2d7-8d30c280d0c3\" (UID: \"57ecf117-d780-4424-a2d7-8d30c280d0c3\") " Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:11.304534 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/57ecf117-d780-4424-a2d7-8d30c280d0c3-dns-svc\") pod \"57ecf117-d780-4424-a2d7-8d30c280d0c3\" (UID: \"57ecf117-d780-4424-a2d7-8d30c280d0c3\") " Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:11.305085 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/57ecf117-d780-4424-a2d7-8d30c280d0c3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "57ecf117-d780-4424-a2d7-8d30c280d0c3" (UID: "57ecf117-d780-4424-a2d7-8d30c280d0c3"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:11.305810 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/57ecf117-d780-4424-a2d7-8d30c280d0c3-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "57ecf117-d780-4424-a2d7-8d30c280d0c3" (UID: "57ecf117-d780-4424-a2d7-8d30c280d0c3"). InnerVolumeSpecName "dns-swift-storage-0".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:11.307063 4881 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/57ecf117-d780-4424-a2d7-8d30c280d0c3-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:11.307689 4881 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/57ecf117-d780-4424-a2d7-8d30c280d0c3-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:11.309696 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/57ecf117-d780-4424-a2d7-8d30c280d0c3-config" (OuterVolumeSpecName: "config") pod "57ecf117-d780-4424-a2d7-8d30c280d0c3" (UID: "57ecf117-d780-4424-a2d7-8d30c280d0c3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:11.309958 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/57ecf117-d780-4424-a2d7-8d30c280d0c3-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "57ecf117-d780-4424-a2d7-8d30c280d0c3" (UID: "57ecf117-d780-4424-a2d7-8d30c280d0c3"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:11.310213 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/57ecf117-d780-4424-a2d7-8d30c280d0c3-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "57ecf117-d780-4424-a2d7-8d30c280d0c3" (UID: "57ecf117-d780-4424-a2d7-8d30c280d0c3"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:11.327628 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57ecf117-d780-4424-a2d7-8d30c280d0c3-kube-api-access-hmskp" (OuterVolumeSpecName: "kube-api-access-hmskp") pod "57ecf117-d780-4424-a2d7-8d30c280d0c3" (UID: "57ecf117-d780-4424-a2d7-8d30c280d0c3"). InnerVolumeSpecName "kube-api-access-hmskp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:11.414141 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hmskp\" (UniqueName: \"kubernetes.io/projected/57ecf117-d780-4424-a2d7-8d30c280d0c3-kube-api-access-hmskp\") on node \"crc\" DevicePath \"\"" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:11.414164 4881 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/57ecf117-d780-4424-a2d7-8d30c280d0c3-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:11.414173 4881 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57ecf117-d780-4424-a2d7-8d30c280d0c3-config\") on node \"crc\" DevicePath \"\"" Jan 26 12:59:11 crc kubenswrapper[4881]: I0126 12:59:11.414181 4881 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/57ecf117-d780-4424-a2d7-8d30c280d0c3-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 12:59:12 crc kubenswrapper[4881]: I0126 12:59:12.164383 4881 generic.go:334] "Generic (PLEG): container finished" podID="87493a10-28dd-468d-82df-d225543ffd0e" containerID="bebedd510aaa4fdc24d4500b04b59fb6022593d94c5a9ee0e91f7ad4de2bb9d1" exitCode=0 Jan 26 12:59:12 crc kubenswrapper[4881]: I0126 12:59:12.164471 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-ddrsw" event={"ID":"87493a10-28dd-468d-82df-d225543ffd0e","Type":"ContainerDied","Data":"bebedd510aaa4fdc24d4500b04b59fb6022593d94c5a9ee0e91f7ad4de2bb9d1"} Jan 26 12:59:12 crc kubenswrapper[4881]: I0126 12:59:12.171276 4881 generic.go:334] "Generic (PLEG): container finished" podID="18188e05-15cd-421d-9ad4-a68243fa2d84" containerID="4c3c13247a0ecc4fdb4f19bec6e9968d0a020c4dad3ad125a98a30a0669473f2" exitCode=0 Jan 26 12:59:12 crc kubenswrapper[4881]: I0126 12:59:12.171365 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5bb457dfc5-vk4hf" Jan 26 12:59:12 crc kubenswrapper[4881]: I0126 12:59:12.171787 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-ad3b-account-create-update-cf2rd" event={"ID":"18188e05-15cd-421d-9ad4-a68243fa2d84","Type":"ContainerDied","Data":"4c3c13247a0ecc4fdb4f19bec6e9968d0a020c4dad3ad125a98a30a0669473f2"} Jan 26 12:59:12 crc kubenswrapper[4881]: I0126 12:59:12.220000 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6d58d86989-8cl5p"] Jan 26 12:59:12 crc kubenswrapper[4881]: I0126 12:59:12.227000 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-lgk7s"] Jan 26 12:59:12 crc kubenswrapper[4881]: I0126 12:59:12.237454 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bb457dfc5-vk4hf"] Jan 26 12:59:12 crc kubenswrapper[4881]: I0126 12:59:12.254705 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5bb457dfc5-vk4hf"] Jan 26 12:59:12 crc kubenswrapper[4881]: I0126 12:59:12.261867 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-69c8959d97-5f2wg"] Jan 26 12:59:12 crc kubenswrapper[4881]: I0126 12:59:12.268499 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fbb4d475f-pr4r7"] Jan 26 12:59:12 crc kubenswrapper[4881]: I0126 12:59:12.277761 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-445e-account-create-update-hgkn9"] Jan 26 12:59:12 crc kubenswrapper[4881]: I0126 12:59:12.283411 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-s257b"] Jan 26 12:59:12 crc kubenswrapper[4881]: I0126 12:59:12.288838 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-2sp7f"] Jan 26 12:59:12 crc kubenswrapper[4881]: W0126 12:59:12.293808 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod450091ac_d618_40d8_9f54_7fb0e02bb9d0.slice/crio-e170f7e2a1e77fa45d97ed2c710fc1d53db3cd4d27330b507e53054516c62a29 WatchSource:0}: Error finding container e170f7e2a1e77fa45d97ed2c710fc1d53db3cd4d27330b507e53054516c62a29: Status 404 returned error can't find the container with id e170f7e2a1e77fa45d97ed2c710fc1d53db3cd4d27330b507e53054516c62a29 Jan 26 12:59:12 crc kubenswrapper[4881]: I0126 12:59:12.296606 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-t85k4"] Jan 26 12:59:12 crc kubenswrapper[4881]: I0126 12:59:12.412634 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-tqpzh"] Jan 26 12:59:13 crc kubenswrapper[4881]: I0126 12:59:13.188785 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-t85k4" event={"ID":"d0fc8471-ad65-44cf-bf03-1c037aafdf11","Type":"ContainerStarted","Data":"f10927e932c0e8b17458e021c4b691abd26f8930a95218ba28585c089a71b3fe"} Jan 26 12:59:13 crc kubenswrapper[4881]: I0126 12:59:13.191069 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-lgk7s" event={"ID":"132298e2-a2f4-4311-9f7a-3e4e08abe34b","Type":"ContainerStarted","Data":"db4823523de0c3d06f30e2cfd0d67f4b5e21207b8acb0859ea7339fd0f469632"} Jan 26 12:59:13 crc kubenswrapper[4881]: I0126 12:59:13.197955 4881 generic.go:334] "Generic (PLEG): container finished" podID="75b1a7cd-6382-458b-8769-7f212bd59bf9" 
containerID="6a1c38c4b9acacbaed8099e6a11132e442fd80b20502374d87f5c65a3cf8a94a" exitCode=0 Jan 26 12:59:13 crc kubenswrapper[4881]: I0126 12:59:13.198015 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fbb4d475f-pr4r7" event={"ID":"75b1a7cd-6382-458b-8769-7f212bd59bf9","Type":"ContainerDied","Data":"6a1c38c4b9acacbaed8099e6a11132e442fd80b20502374d87f5c65a3cf8a94a"} Jan 26 12:59:13 crc kubenswrapper[4881]: I0126 12:59:13.198040 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fbb4d475f-pr4r7" event={"ID":"75b1a7cd-6382-458b-8769-7f212bd59bf9","Type":"ContainerStarted","Data":"774ff8e5d887b38ca8df3a9bf708edae90e80491219ee2ed5e163dd0100ad98c"} Jan 26 12:59:13 crc kubenswrapper[4881]: I0126 12:59:13.201663 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6d58d86989-8cl5p" event={"ID":"7b81cab4-a1fe-438e-ae0e-c23a58cc16e7","Type":"ContainerStarted","Data":"6a8fa892f7e5458b1232df3805018dca7db3ecebb6c71bd35c212a843bd8c641"} Jan 26 12:59:13 crc kubenswrapper[4881]: I0126 12:59:13.205789 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-s257b" event={"ID":"450091ac-d618-40d8-9f54-7fb0e02bb9d0","Type":"ContainerStarted","Data":"216a2a795a5284e3fbc1a9d169045b67d442a3e7252b4aec3690c2243f4dc0d9"} Jan 26 12:59:13 crc kubenswrapper[4881]: I0126 12:59:13.205828 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-s257b" event={"ID":"450091ac-d618-40d8-9f54-7fb0e02bb9d0","Type":"ContainerStarted","Data":"e170f7e2a1e77fa45d97ed2c710fc1d53db3cd4d27330b507e53054516c62a29"} Jan 26 12:59:13 crc kubenswrapper[4881]: I0126 12:59:13.218587 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-445e-account-create-update-hgkn9" event={"ID":"8395a76c-6569-43c6-ba18-438efdb98980","Type":"ContainerStarted","Data":"9e3689c99cb55c2a380e1c7d9d475b86b1b65c071da7533e6ea5cce0bb2bf71f"} Jan 26 12:59:13 crc kubenswrapper[4881]: I0126 12:59:13.218630 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-445e-account-create-update-hgkn9" event={"ID":"8395a76c-6569-43c6-ba18-438efdb98980","Type":"ContainerStarted","Data":"ce8ab695ccdba623a47792494e7de21e0ddbc6fecaad2a10ecd85883575457e5"} Jan 26 12:59:13 crc kubenswrapper[4881]: I0126 12:59:13.255122 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-69c8959d97-5f2wg" event={"ID":"5d2634ed-f529-43dc-8a08-54f97ace0d73","Type":"ContainerStarted","Data":"6935cd15743f6b8bf9aa47add14f8ee2bb949a8b58187a2e49a0d5d68bcf2ffc"} Jan 26 12:59:13 crc kubenswrapper[4881]: I0126 12:59:13.288052 4881 generic.go:334] "Generic (PLEG): container finished" podID="8a3edaef-3ff0-45b1-b037-ca545d1bd9af" containerID="41abfaf409a233b07eeecd7c94fc06887a401e865fdbbafd767a35924a9c1d2f" exitCode=0 Jan 26 12:59:13 crc kubenswrapper[4881]: I0126 12:59:13.288148 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-2sp7f" event={"ID":"8a3edaef-3ff0-45b1-b037-ca545d1bd9af","Type":"ContainerDied","Data":"41abfaf409a233b07eeecd7c94fc06887a401e865fdbbafd767a35924a9c1d2f"} Jan 26 12:59:13 crc kubenswrapper[4881]: I0126 12:59:13.288173 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-2sp7f" event={"ID":"8a3edaef-3ff0-45b1-b037-ca545d1bd9af","Type":"ContainerStarted","Data":"7fed1d6050e69b79fc0209a6841dd9f00718ec7c10c4ae24f0b9ac5c53cbb3a2"} Jan 26 12:59:13 crc kubenswrapper[4881]: I0126 12:59:13.319801 
4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-tqpzh" event={"ID":"741cf6ae-617a-440b-b6ec-63dc4e87ff4a","Type":"ContainerStarted","Data":"5e683037fe88a46637c1e0bac39224d753f0b51e3845ed2b5a3ba8f4fb444b44"} Jan 26 12:59:13 crc kubenswrapper[4881]: I0126 12:59:13.337560 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-445e-account-create-update-hgkn9" podStartSLOduration=4.337534669 podStartE2EDuration="4.337534669s" podCreationTimestamp="2026-01-26 12:59:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:59:13.289953339 +0000 UTC m=+1425.769263365" watchObservedRunningTime="2026-01-26 12:59:13.337534669 +0000 UTC m=+1425.816844695" Jan 26 12:59:13 crc kubenswrapper[4881]: I0126 12:59:13.348273 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-s257b" podStartSLOduration=3.34825494 podStartE2EDuration="3.34825494s" podCreationTimestamp="2026-01-26 12:59:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:59:13.33797191 +0000 UTC m=+1425.817281926" watchObservedRunningTime="2026-01-26 12:59:13.34825494 +0000 UTC m=+1425.827564966" Jan 26 12:59:13 crc kubenswrapper[4881]: I0126 12:59:13.661794 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 26 12:59:13 crc kubenswrapper[4881]: I0126 12:59:13.667890 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 12:59:13 crc kubenswrapper[4881]: I0126 12:59:13.672881 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 12:59:13 crc kubenswrapper[4881]: I0126 12:59:13.675177 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 26 12:59:13 crc kubenswrapper[4881]: I0126 12:59:13.675433 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 26 12:59:13 crc kubenswrapper[4881]: I0126 12:59:13.730291 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/75a85372-d728-4770-8639-fb6f93e44dab-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"75a85372-d728-4770-8639-fb6f93e44dab\") " pod="openstack/ceilometer-0" Jan 26 12:59:13 crc kubenswrapper[4881]: I0126 12:59:13.730784 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbqg4\" (UniqueName: \"kubernetes.io/projected/75a85372-d728-4770-8639-fb6f93e44dab-kube-api-access-fbqg4\") pod \"ceilometer-0\" (UID: \"75a85372-d728-4770-8639-fb6f93e44dab\") " pod="openstack/ceilometer-0" Jan 26 12:59:13 crc kubenswrapper[4881]: I0126 12:59:13.730908 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75a85372-d728-4770-8639-fb6f93e44dab-config-data\") pod \"ceilometer-0\" (UID: \"75a85372-d728-4770-8639-fb6f93e44dab\") " pod="openstack/ceilometer-0" Jan 26 12:59:13 crc kubenswrapper[4881]: I0126 12:59:13.731018 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/75a85372-d728-4770-8639-fb6f93e44dab-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"75a85372-d728-4770-8639-fb6f93e44dab\") " pod="openstack/ceilometer-0" Jan 26 12:59:13 crc kubenswrapper[4881]: I0126 12:59:13.731057 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75a85372-d728-4770-8639-fb6f93e44dab-scripts\") pod \"ceilometer-0\" (UID: \"75a85372-d728-4770-8639-fb6f93e44dab\") " pod="openstack/ceilometer-0" Jan 26 12:59:13 crc kubenswrapper[4881]: I0126 12:59:13.731173 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/75a85372-d728-4770-8639-fb6f93e44dab-run-httpd\") pod \"ceilometer-0\" (UID: \"75a85372-d728-4770-8639-fb6f93e44dab\") " pod="openstack/ceilometer-0" Jan 26 12:59:13 crc kubenswrapper[4881]: I0126 12:59:13.731199 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/75a85372-d728-4770-8639-fb6f93e44dab-log-httpd\") pod \"ceilometer-0\" (UID: \"75a85372-d728-4770-8639-fb6f93e44dab\") " pod="openstack/ceilometer-0" Jan 26 12:59:13 crc kubenswrapper[4881]: I0126 12:59:13.826182 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-ddrsw" Jan 26 12:59:13 crc kubenswrapper[4881]: I0126 12:59:13.831914 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/87493a10-28dd-468d-82df-d225543ffd0e-operator-scripts\") pod \"87493a10-28dd-468d-82df-d225543ffd0e\" (UID: \"87493a10-28dd-468d-82df-d225543ffd0e\") " Jan 26 12:59:13 crc kubenswrapper[4881]: I0126 12:59:13.832015 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gl9p6\" (UniqueName: \"kubernetes.io/projected/87493a10-28dd-468d-82df-d225543ffd0e-kube-api-access-gl9p6\") pod \"87493a10-28dd-468d-82df-d225543ffd0e\" (UID: \"87493a10-28dd-468d-82df-d225543ffd0e\") " Jan 26 12:59:13 crc kubenswrapper[4881]: I0126 12:59:13.832176 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75a85372-d728-4770-8639-fb6f93e44dab-config-data\") pod \"ceilometer-0\" (UID: \"75a85372-d728-4770-8639-fb6f93e44dab\") " pod="openstack/ceilometer-0" Jan 26 12:59:13 crc kubenswrapper[4881]: I0126 12:59:13.832229 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75a85372-d728-4770-8639-fb6f93e44dab-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"75a85372-d728-4770-8639-fb6f93e44dab\") " pod="openstack/ceilometer-0" Jan 26 12:59:13 crc kubenswrapper[4881]: I0126 12:59:13.832255 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75a85372-d728-4770-8639-fb6f93e44dab-scripts\") pod \"ceilometer-0\" (UID: \"75a85372-d728-4770-8639-fb6f93e44dab\") " pod="openstack/ceilometer-0" Jan 26 12:59:13 crc kubenswrapper[4881]: I0126 12:59:13.832310 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/75a85372-d728-4770-8639-fb6f93e44dab-run-httpd\") pod \"ceilometer-0\" (UID: \"75a85372-d728-4770-8639-fb6f93e44dab\") " 
pod="openstack/ceilometer-0" Jan 26 12:59:13 crc kubenswrapper[4881]: I0126 12:59:13.832331 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/75a85372-d728-4770-8639-fb6f93e44dab-log-httpd\") pod \"ceilometer-0\" (UID: \"75a85372-d728-4770-8639-fb6f93e44dab\") " pod="openstack/ceilometer-0" Jan 26 12:59:13 crc kubenswrapper[4881]: I0126 12:59:13.832361 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/75a85372-d728-4770-8639-fb6f93e44dab-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"75a85372-d728-4770-8639-fb6f93e44dab\") " pod="openstack/ceilometer-0" Jan 26 12:59:13 crc kubenswrapper[4881]: I0126 12:59:13.832414 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fbqg4\" (UniqueName: \"kubernetes.io/projected/75a85372-d728-4770-8639-fb6f93e44dab-kube-api-access-fbqg4\") pod \"ceilometer-0\" (UID: \"75a85372-d728-4770-8639-fb6f93e44dab\") " pod="openstack/ceilometer-0" Jan 26 12:59:13 crc kubenswrapper[4881]: I0126 12:59:13.832966 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/75a85372-d728-4770-8639-fb6f93e44dab-run-httpd\") pod \"ceilometer-0\" (UID: \"75a85372-d728-4770-8639-fb6f93e44dab\") " pod="openstack/ceilometer-0" Jan 26 12:59:13 crc kubenswrapper[4881]: I0126 12:59:13.833083 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/75a85372-d728-4770-8639-fb6f93e44dab-log-httpd\") pod \"ceilometer-0\" (UID: \"75a85372-d728-4770-8639-fb6f93e44dab\") " pod="openstack/ceilometer-0" Jan 26 12:59:13 crc kubenswrapper[4881]: I0126 12:59:13.833876 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87493a10-28dd-468d-82df-d225543ffd0e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "87493a10-28dd-468d-82df-d225543ffd0e" (UID: "87493a10-28dd-468d-82df-d225543ffd0e"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:59:13 crc kubenswrapper[4881]: I0126 12:59:13.839351 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75a85372-d728-4770-8639-fb6f93e44dab-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"75a85372-d728-4770-8639-fb6f93e44dab\") " pod="openstack/ceilometer-0" Jan 26 12:59:13 crc kubenswrapper[4881]: I0126 12:59:13.840139 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75a85372-d728-4770-8639-fb6f93e44dab-config-data\") pod \"ceilometer-0\" (UID: \"75a85372-d728-4770-8639-fb6f93e44dab\") " pod="openstack/ceilometer-0" Jan 26 12:59:13 crc kubenswrapper[4881]: I0126 12:59:13.843065 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/75a85372-d728-4770-8639-fb6f93e44dab-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"75a85372-d728-4770-8639-fb6f93e44dab\") " pod="openstack/ceilometer-0" Jan 26 12:59:13 crc kubenswrapper[4881]: I0126 12:59:13.870796 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fbqg4\" (UniqueName: \"kubernetes.io/projected/75a85372-d728-4770-8639-fb6f93e44dab-kube-api-access-fbqg4\") pod \"ceilometer-0\" (UID: \"75a85372-d728-4770-8639-fb6f93e44dab\") " pod="openstack/ceilometer-0" Jan 26 12:59:13 crc kubenswrapper[4881]: I0126 12:59:13.872379 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87493a10-28dd-468d-82df-d225543ffd0e-kube-api-access-gl9p6" (OuterVolumeSpecName: "kube-api-access-gl9p6") pod "87493a10-28dd-468d-82df-d225543ffd0e" (UID: "87493a10-28dd-468d-82df-d225543ffd0e"). InnerVolumeSpecName "kube-api-access-gl9p6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:59:13 crc kubenswrapper[4881]: I0126 12:59:13.876254 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75a85372-d728-4770-8639-fb6f93e44dab-scripts\") pod \"ceilometer-0\" (UID: \"75a85372-d728-4770-8639-fb6f93e44dab\") " pod="openstack/ceilometer-0" Jan 26 12:59:13 crc kubenswrapper[4881]: I0126 12:59:13.905132 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-ad3b-account-create-update-cf2rd" Jan 26 12:59:13 crc kubenswrapper[4881]: I0126 12:59:13.942481 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4bsw\" (UniqueName: \"kubernetes.io/projected/18188e05-15cd-421d-9ad4-a68243fa2d84-kube-api-access-s4bsw\") pod \"18188e05-15cd-421d-9ad4-a68243fa2d84\" (UID: \"18188e05-15cd-421d-9ad4-a68243fa2d84\") " Jan 26 12:59:13 crc kubenswrapper[4881]: I0126 12:59:13.943145 4881 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/87493a10-28dd-468d-82df-d225543ffd0e-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 12:59:13 crc kubenswrapper[4881]: I0126 12:59:13.943167 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gl9p6\" (UniqueName: \"kubernetes.io/projected/87493a10-28dd-468d-82df-d225543ffd0e-kube-api-access-gl9p6\") on node \"crc\" DevicePath \"\"" Jan 26 12:59:13 crc kubenswrapper[4881]: I0126 12:59:13.958872 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18188e05-15cd-421d-9ad4-a68243fa2d84-kube-api-access-s4bsw" (OuterVolumeSpecName: "kube-api-access-s4bsw") pod "18188e05-15cd-421d-9ad4-a68243fa2d84" (UID: "18188e05-15cd-421d-9ad4-a68243fa2d84"). InnerVolumeSpecName "kube-api-access-s4bsw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:59:14 crc kubenswrapper[4881]: I0126 12:59:14.001051 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-69c8959d97-5f2wg"] Jan 26 12:59:14 crc kubenswrapper[4881]: I0126 12:59:14.031275 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5d944b8f4f-tgqh4"] Jan 26 12:59:14 crc kubenswrapper[4881]: E0126 12:59:14.031719 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18188e05-15cd-421d-9ad4-a68243fa2d84" containerName="mariadb-account-create-update" Jan 26 12:59:14 crc kubenswrapper[4881]: I0126 12:59:14.031742 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="18188e05-15cd-421d-9ad4-a68243fa2d84" containerName="mariadb-account-create-update" Jan 26 12:59:14 crc kubenswrapper[4881]: E0126 12:59:14.031760 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87493a10-28dd-468d-82df-d225543ffd0e" containerName="mariadb-database-create" Jan 26 12:59:14 crc kubenswrapper[4881]: I0126 12:59:14.031768 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="87493a10-28dd-468d-82df-d225543ffd0e" containerName="mariadb-database-create" Jan 26 12:59:14 crc kubenswrapper[4881]: I0126 12:59:14.032026 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="87493a10-28dd-468d-82df-d225543ffd0e" containerName="mariadb-database-create" Jan 26 12:59:14 crc kubenswrapper[4881]: I0126 12:59:14.032041 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="18188e05-15cd-421d-9ad4-a68243fa2d84" containerName="mariadb-account-create-update" Jan 26 12:59:14 crc kubenswrapper[4881]: I0126 12:59:14.033138 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5d944b8f4f-tgqh4" Jan 26 12:59:14 crc kubenswrapper[4881]: I0126 12:59:14.051576 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5d944b8f4f-tgqh4"] Jan 26 12:59:14 crc kubenswrapper[4881]: I0126 12:59:14.052250 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/18188e05-15cd-421d-9ad4-a68243fa2d84-operator-scripts\") pod \"18188e05-15cd-421d-9ad4-a68243fa2d84\" (UID: \"18188e05-15cd-421d-9ad4-a68243fa2d84\") " Jan 26 12:59:14 crc kubenswrapper[4881]: I0126 12:59:14.052668 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4bsw\" (UniqueName: \"kubernetes.io/projected/18188e05-15cd-421d-9ad4-a68243fa2d84-kube-api-access-s4bsw\") on node \"crc\" DevicePath \"\"" Jan 26 12:59:14 crc kubenswrapper[4881]: I0126 12:59:14.053024 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18188e05-15cd-421d-9ad4-a68243fa2d84-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "18188e05-15cd-421d-9ad4-a68243fa2d84" (UID: "18188e05-15cd-421d-9ad4-a68243fa2d84"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:59:14 crc kubenswrapper[4881]: I0126 12:59:14.061658 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 12:59:14 crc kubenswrapper[4881]: I0126 12:59:14.062319 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 12:59:14 crc kubenswrapper[4881]: I0126 12:59:14.098658 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57ecf117-d780-4424-a2d7-8d30c280d0c3" path="/var/lib/kubelet/pods/57ecf117-d780-4424-a2d7-8d30c280d0c3/volumes" Jan 26 12:59:14 crc kubenswrapper[4881]: I0126 12:59:14.154584 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/46cbf3cf-ede5-44fd-9897-799299dfdcf6-horizon-secret-key\") pod \"horizon-5d944b8f4f-tgqh4\" (UID: \"46cbf3cf-ede5-44fd-9897-799299dfdcf6\") " pod="openstack/horizon-5d944b8f4f-tgqh4" Jan 26 12:59:14 crc kubenswrapper[4881]: I0126 12:59:14.154633 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/46cbf3cf-ede5-44fd-9897-799299dfdcf6-config-data\") pod \"horizon-5d944b8f4f-tgqh4\" (UID: \"46cbf3cf-ede5-44fd-9897-799299dfdcf6\") " pod="openstack/horizon-5d944b8f4f-tgqh4" Jan 26 12:59:14 crc kubenswrapper[4881]: I0126 12:59:14.154667 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/46cbf3cf-ede5-44fd-9897-799299dfdcf6-logs\") pod \"horizon-5d944b8f4f-tgqh4\" (UID: \"46cbf3cf-ede5-44fd-9897-799299dfdcf6\") " pod="openstack/horizon-5d944b8f4f-tgqh4" Jan 26 12:59:14 crc kubenswrapper[4881]: I0126 12:59:14.154712 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84pgz\" (UniqueName: \"kubernetes.io/projected/46cbf3cf-ede5-44fd-9897-799299dfdcf6-kube-api-access-84pgz\") pod \"horizon-5d944b8f4f-tgqh4\" (UID: \"46cbf3cf-ede5-44fd-9897-799299dfdcf6\") " pod="openstack/horizon-5d944b8f4f-tgqh4" Jan 26 12:59:14 crc kubenswrapper[4881]: I0126 12:59:14.154742 4881 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/46cbf3cf-ede5-44fd-9897-799299dfdcf6-scripts\") pod \"horizon-5d944b8f4f-tgqh4\" (UID: \"46cbf3cf-ede5-44fd-9897-799299dfdcf6\") " pod="openstack/horizon-5d944b8f4f-tgqh4" Jan 26 12:59:14 crc kubenswrapper[4881]: I0126 12:59:14.154885 4881 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/18188e05-15cd-421d-9ad4-a68243fa2d84-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 12:59:14 crc kubenswrapper[4881]: I0126 12:59:14.257189 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/46cbf3cf-ede5-44fd-9897-799299dfdcf6-config-data\") pod \"horizon-5d944b8f4f-tgqh4\" (UID: \"46cbf3cf-ede5-44fd-9897-799299dfdcf6\") " pod="openstack/horizon-5d944b8f4f-tgqh4" Jan 26 12:59:14 crc kubenswrapper[4881]: I0126 12:59:14.257253 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/46cbf3cf-ede5-44fd-9897-799299dfdcf6-logs\") pod \"horizon-5d944b8f4f-tgqh4\" (UID: \"46cbf3cf-ede5-44fd-9897-799299dfdcf6\") " pod="openstack/horizon-5d944b8f4f-tgqh4" Jan 26 12:59:14 crc kubenswrapper[4881]: I0126 12:59:14.257320 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-84pgz\" (UniqueName: \"kubernetes.io/projected/46cbf3cf-ede5-44fd-9897-799299dfdcf6-kube-api-access-84pgz\") pod \"horizon-5d944b8f4f-tgqh4\" (UID: \"46cbf3cf-ede5-44fd-9897-799299dfdcf6\") " pod="openstack/horizon-5d944b8f4f-tgqh4" Jan 26 12:59:14 crc kubenswrapper[4881]: I0126 12:59:14.257346 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/46cbf3cf-ede5-44fd-9897-799299dfdcf6-scripts\") pod \"horizon-5d944b8f4f-tgqh4\" (UID: \"46cbf3cf-ede5-44fd-9897-799299dfdcf6\") " pod="openstack/horizon-5d944b8f4f-tgqh4" Jan 26 12:59:14 crc kubenswrapper[4881]: I0126 12:59:14.257439 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/46cbf3cf-ede5-44fd-9897-799299dfdcf6-horizon-secret-key\") pod \"horizon-5d944b8f4f-tgqh4\" (UID: \"46cbf3cf-ede5-44fd-9897-799299dfdcf6\") " pod="openstack/horizon-5d944b8f4f-tgqh4" Jan 26 12:59:14 crc kubenswrapper[4881]: I0126 12:59:14.258190 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/46cbf3cf-ede5-44fd-9897-799299dfdcf6-logs\") pod \"horizon-5d944b8f4f-tgqh4\" (UID: \"46cbf3cf-ede5-44fd-9897-799299dfdcf6\") " pod="openstack/horizon-5d944b8f4f-tgqh4" Jan 26 12:59:14 crc kubenswrapper[4881]: I0126 12:59:14.262701 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/46cbf3cf-ede5-44fd-9897-799299dfdcf6-scripts\") pod \"horizon-5d944b8f4f-tgqh4\" (UID: \"46cbf3cf-ede5-44fd-9897-799299dfdcf6\") " pod="openstack/horizon-5d944b8f4f-tgqh4" Jan 26 12:59:14 crc kubenswrapper[4881]: I0126 12:59:14.264367 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/46cbf3cf-ede5-44fd-9897-799299dfdcf6-config-data\") pod \"horizon-5d944b8f4f-tgqh4\" (UID: \"46cbf3cf-ede5-44fd-9897-799299dfdcf6\") " pod="openstack/horizon-5d944b8f4f-tgqh4" 
Jan 26 12:59:14 crc kubenswrapper[4881]: I0126 12:59:14.274502 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/46cbf3cf-ede5-44fd-9897-799299dfdcf6-horizon-secret-key\") pod \"horizon-5d944b8f4f-tgqh4\" (UID: \"46cbf3cf-ede5-44fd-9897-799299dfdcf6\") " pod="openstack/horizon-5d944b8f4f-tgqh4" Jan 26 12:59:14 crc kubenswrapper[4881]: I0126 12:59:14.282132 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-84pgz\" (UniqueName: \"kubernetes.io/projected/46cbf3cf-ede5-44fd-9897-799299dfdcf6-kube-api-access-84pgz\") pod \"horizon-5d944b8f4f-tgqh4\" (UID: \"46cbf3cf-ede5-44fd-9897-799299dfdcf6\") " pod="openstack/horizon-5d944b8f4f-tgqh4" Jan 26 12:59:14 crc kubenswrapper[4881]: I0126 12:59:14.371416 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-ddrsw" event={"ID":"87493a10-28dd-468d-82df-d225543ffd0e","Type":"ContainerDied","Data":"2e9d05f5ed6b961c4b9d4321fcd14c4956a04194e9371595e0de619c2ea2bd64"} Jan 26 12:59:14 crc kubenswrapper[4881]: I0126 12:59:14.371465 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2e9d05f5ed6b961c4b9d4321fcd14c4956a04194e9371595e0de619c2ea2bd64" Jan 26 12:59:14 crc kubenswrapper[4881]: I0126 12:59:14.371577 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-ddrsw" Jan 26 12:59:14 crc kubenswrapper[4881]: I0126 12:59:14.382580 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-ad3b-account-create-update-cf2rd" Jan 26 12:59:14 crc kubenswrapper[4881]: I0126 12:59:14.382613 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-ad3b-account-create-update-cf2rd" event={"ID":"18188e05-15cd-421d-9ad4-a68243fa2d84","Type":"ContainerDied","Data":"6e38130e2a44b2648aaf0e6d5dfcf57d87abfb3fff53af8aa4752fbd434c4dcd"} Jan 26 12:59:14 crc kubenswrapper[4881]: I0126 12:59:14.382658 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e38130e2a44b2648aaf0e6d5dfcf57d87abfb3fff53af8aa4752fbd434c4dcd" Jan 26 12:59:14 crc kubenswrapper[4881]: I0126 12:59:14.400548 4881 generic.go:334] "Generic (PLEG): container finished" podID="8395a76c-6569-43c6-ba18-438efdb98980" containerID="9e3689c99cb55c2a380e1c7d9d475b86b1b65c071da7533e6ea5cce0bb2bf71f" exitCode=0 Jan 26 12:59:14 crc kubenswrapper[4881]: I0126 12:59:14.400640 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-445e-account-create-update-hgkn9" event={"ID":"8395a76c-6569-43c6-ba18-438efdb98980","Type":"ContainerDied","Data":"9e3689c99cb55c2a380e1c7d9d475b86b1b65c071da7533e6ea5cce0bb2bf71f"} Jan 26 12:59:14 crc kubenswrapper[4881]: I0126 12:59:14.407509 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fbb4d475f-pr4r7" event={"ID":"75b1a7cd-6382-458b-8769-7f212bd59bf9","Type":"ContainerStarted","Data":"585971640c82eeec51d4672a38687cebf4e71823e8caa145aebb5ea5be4ac0ac"} Jan 26 12:59:14 crc kubenswrapper[4881]: I0126 12:59:14.407674 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7fbb4d475f-pr4r7" Jan 26 12:59:14 crc kubenswrapper[4881]: I0126 12:59:14.427580 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5d944b8f4f-tgqh4" Jan 26 12:59:14 crc kubenswrapper[4881]: I0126 12:59:14.441064 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7fbb4d475f-pr4r7" podStartSLOduration=4.441046621 podStartE2EDuration="4.441046621s" podCreationTimestamp="2026-01-26 12:59:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:59:14.435157148 +0000 UTC m=+1426.914467194" watchObservedRunningTime="2026-01-26 12:59:14.441046621 +0000 UTC m=+1426.920356647" Jan 26 12:59:14 crc kubenswrapper[4881]: I0126 12:59:14.650704 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 12:59:14 crc kubenswrapper[4881]: I0126 12:59:14.749052 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-2sp7f" Jan 26 12:59:14 crc kubenswrapper[4881]: I0126 12:59:14.876260 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8a3edaef-3ff0-45b1-b037-ca545d1bd9af-operator-scripts\") pod \"8a3edaef-3ff0-45b1-b037-ca545d1bd9af\" (UID: \"8a3edaef-3ff0-45b1-b037-ca545d1bd9af\") " Jan 26 12:59:14 crc kubenswrapper[4881]: I0126 12:59:14.876935 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zvp6s\" (UniqueName: \"kubernetes.io/projected/8a3edaef-3ff0-45b1-b037-ca545d1bd9af-kube-api-access-zvp6s\") pod \"8a3edaef-3ff0-45b1-b037-ca545d1bd9af\" (UID: \"8a3edaef-3ff0-45b1-b037-ca545d1bd9af\") " Jan 26 12:59:14 crc kubenswrapper[4881]: I0126 12:59:14.877019 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a3edaef-3ff0-45b1-b037-ca545d1bd9af-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8a3edaef-3ff0-45b1-b037-ca545d1bd9af" (UID: "8a3edaef-3ff0-45b1-b037-ca545d1bd9af"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:59:14 crc kubenswrapper[4881]: I0126 12:59:14.877427 4881 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8a3edaef-3ff0-45b1-b037-ca545d1bd9af-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 12:59:14 crc kubenswrapper[4881]: I0126 12:59:14.882043 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a3edaef-3ff0-45b1-b037-ca545d1bd9af-kube-api-access-zvp6s" (OuterVolumeSpecName: "kube-api-access-zvp6s") pod "8a3edaef-3ff0-45b1-b037-ca545d1bd9af" (UID: "8a3edaef-3ff0-45b1-b037-ca545d1bd9af"). InnerVolumeSpecName "kube-api-access-zvp6s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:59:14 crc kubenswrapper[4881]: I0126 12:59:14.973173 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5d944b8f4f-tgqh4"] Jan 26 12:59:14 crc kubenswrapper[4881]: I0126 12:59:14.979021 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zvp6s\" (UniqueName: \"kubernetes.io/projected/8a3edaef-3ff0-45b1-b037-ca545d1bd9af-kube-api-access-zvp6s\") on node \"crc\" DevicePath \"\"" Jan 26 12:59:14 crc kubenswrapper[4881]: W0126 12:59:14.987122 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod46cbf3cf_ede5_44fd_9897_799299dfdcf6.slice/crio-13f472ee4f5c3825bbfd11b89b87ea8ead6c18463a13fed1283539cadde43504 WatchSource:0}: Error finding container 13f472ee4f5c3825bbfd11b89b87ea8ead6c18463a13fed1283539cadde43504: Status 404 returned error can't find the container with id 13f472ee4f5c3825bbfd11b89b87ea8ead6c18463a13fed1283539cadde43504 Jan 26 12:59:15 crc kubenswrapper[4881]: I0126 12:59:15.432834 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"75a85372-d728-4770-8639-fb6f93e44dab","Type":"ContainerStarted","Data":"504430f782e111a755db2193f6f9f67548cd0d103b8ffbf9239634848e993d2d"} Jan 26 12:59:15 crc kubenswrapper[4881]: I0126 12:59:15.445868 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-2sp7f" Jan 26 12:59:15 crc kubenswrapper[4881]: I0126 12:59:15.445867 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-2sp7f" event={"ID":"8a3edaef-3ff0-45b1-b037-ca545d1bd9af","Type":"ContainerDied","Data":"7fed1d6050e69b79fc0209a6841dd9f00718ec7c10c4ae24f0b9ac5c53cbb3a2"} Jan 26 12:59:15 crc kubenswrapper[4881]: I0126 12:59:15.446022 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7fed1d6050e69b79fc0209a6841dd9f00718ec7c10c4ae24f0b9ac5c53cbb3a2" Jan 26 12:59:15 crc kubenswrapper[4881]: I0126 12:59:15.447706 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5d944b8f4f-tgqh4" event={"ID":"46cbf3cf-ede5-44fd-9897-799299dfdcf6","Type":"ContainerStarted","Data":"13f472ee4f5c3825bbfd11b89b87ea8ead6c18463a13fed1283539cadde43504"} Jan 26 12:59:15 crc kubenswrapper[4881]: I0126 12:59:15.973436 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-445e-account-create-update-hgkn9" Jan 26 12:59:16 crc kubenswrapper[4881]: I0126 12:59:16.104401 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bvrxs\" (UniqueName: \"kubernetes.io/projected/8395a76c-6569-43c6-ba18-438efdb98980-kube-api-access-bvrxs\") pod \"8395a76c-6569-43c6-ba18-438efdb98980\" (UID: \"8395a76c-6569-43c6-ba18-438efdb98980\") " Jan 26 12:59:16 crc kubenswrapper[4881]: I0126 12:59:16.104468 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8395a76c-6569-43c6-ba18-438efdb98980-operator-scripts\") pod \"8395a76c-6569-43c6-ba18-438efdb98980\" (UID: \"8395a76c-6569-43c6-ba18-438efdb98980\") " Jan 26 12:59:16 crc kubenswrapper[4881]: I0126 12:59:16.105148 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8395a76c-6569-43c6-ba18-438efdb98980-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8395a76c-6569-43c6-ba18-438efdb98980" (UID: "8395a76c-6569-43c6-ba18-438efdb98980"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:59:16 crc kubenswrapper[4881]: I0126 12:59:16.110957 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8395a76c-6569-43c6-ba18-438efdb98980-kube-api-access-bvrxs" (OuterVolumeSpecName: "kube-api-access-bvrxs") pod "8395a76c-6569-43c6-ba18-438efdb98980" (UID: "8395a76c-6569-43c6-ba18-438efdb98980"). InnerVolumeSpecName "kube-api-access-bvrxs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:59:16 crc kubenswrapper[4881]: I0126 12:59:16.206666 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bvrxs\" (UniqueName: \"kubernetes.io/projected/8395a76c-6569-43c6-ba18-438efdb98980-kube-api-access-bvrxs\") on node \"crc\" DevicePath \"\"" Jan 26 12:59:16 crc kubenswrapper[4881]: I0126 12:59:16.206700 4881 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8395a76c-6569-43c6-ba18-438efdb98980-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 12:59:16 crc kubenswrapper[4881]: I0126 12:59:16.470843 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-445e-account-create-update-hgkn9" event={"ID":"8395a76c-6569-43c6-ba18-438efdb98980","Type":"ContainerDied","Data":"ce8ab695ccdba623a47792494e7de21e0ddbc6fecaad2a10ecd85883575457e5"} Jan 26 12:59:16 crc kubenswrapper[4881]: I0126 12:59:16.470890 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ce8ab695ccdba623a47792494e7de21e0ddbc6fecaad2a10ecd85883575457e5" Jan 26 12:59:16 crc kubenswrapper[4881]: I0126 12:59:16.470899 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-445e-account-create-update-hgkn9" Jan 26 12:59:18 crc kubenswrapper[4881]: I0126 12:59:18.498738 4881 generic.go:334] "Generic (PLEG): container finished" podID="450091ac-d618-40d8-9f54-7fb0e02bb9d0" containerID="216a2a795a5284e3fbc1a9d169045b67d442a3e7252b4aec3690c2243f4dc0d9" exitCode=0 Jan 26 12:59:18 crc kubenswrapper[4881]: I0126 12:59:18.498821 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-s257b" event={"ID":"450091ac-d618-40d8-9f54-7fb0e02bb9d0","Type":"ContainerDied","Data":"216a2a795a5284e3fbc1a9d169045b67d442a3e7252b4aec3690c2243f4dc0d9"} Jan 26 12:59:19 crc kubenswrapper[4881]: I0126 12:59:19.218585 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-nmggw"] Jan 26 12:59:19 crc kubenswrapper[4881]: E0126 12:59:19.219116 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a3edaef-3ff0-45b1-b037-ca545d1bd9af" containerName="mariadb-database-create" Jan 26 12:59:19 crc kubenswrapper[4881]: I0126 12:59:19.219133 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a3edaef-3ff0-45b1-b037-ca545d1bd9af" containerName="mariadb-database-create" Jan 26 12:59:19 crc kubenswrapper[4881]: E0126 12:59:19.219145 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8395a76c-6569-43c6-ba18-438efdb98980" containerName="mariadb-account-create-update" Jan 26 12:59:19 crc kubenswrapper[4881]: I0126 12:59:19.219151 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="8395a76c-6569-43c6-ba18-438efdb98980" containerName="mariadb-account-create-update" Jan 26 12:59:19 crc kubenswrapper[4881]: I0126 12:59:19.219319 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="8395a76c-6569-43c6-ba18-438efdb98980" containerName="mariadb-account-create-update" Jan 26 12:59:19 crc kubenswrapper[4881]: I0126 12:59:19.219338 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a3edaef-3ff0-45b1-b037-ca545d1bd9af" containerName="mariadb-database-create" Jan 26 12:59:19 crc kubenswrapper[4881]: I0126 12:59:19.220964 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-nmggw" Jan 26 12:59:19 crc kubenswrapper[4881]: I0126 12:59:19.225075 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Jan 26 12:59:19 crc kubenswrapper[4881]: I0126 12:59:19.225298 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-jjhvj" Jan 26 12:59:19 crc kubenswrapper[4881]: I0126 12:59:19.232489 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-nmggw"] Jan 26 12:59:19 crc kubenswrapper[4881]: I0126 12:59:19.371067 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf634913-5017-4a94-a3e7-0c337bb9fb4d-combined-ca-bundle\") pod \"glance-db-sync-nmggw\" (UID: \"cf634913-5017-4a94-a3e7-0c337bb9fb4d\") " pod="openstack/glance-db-sync-nmggw" Jan 26 12:59:19 crc kubenswrapper[4881]: I0126 12:59:19.371176 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/cf634913-5017-4a94-a3e7-0c337bb9fb4d-db-sync-config-data\") pod \"glance-db-sync-nmggw\" (UID: \"cf634913-5017-4a94-a3e7-0c337bb9fb4d\") " pod="openstack/glance-db-sync-nmggw" Jan 26 12:59:19 crc kubenswrapper[4881]: I0126 12:59:19.371240 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf634913-5017-4a94-a3e7-0c337bb9fb4d-config-data\") pod \"glance-db-sync-nmggw\" (UID: \"cf634913-5017-4a94-a3e7-0c337bb9fb4d\") " pod="openstack/glance-db-sync-nmggw" Jan 26 12:59:19 crc kubenswrapper[4881]: I0126 12:59:19.371268 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52rx4\" (UniqueName: \"kubernetes.io/projected/cf634913-5017-4a94-a3e7-0c337bb9fb4d-kube-api-access-52rx4\") pod \"glance-db-sync-nmggw\" (UID: \"cf634913-5017-4a94-a3e7-0c337bb9fb4d\") " pod="openstack/glance-db-sync-nmggw" Jan 26 12:59:19 crc kubenswrapper[4881]: I0126 12:59:19.476772 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/cf634913-5017-4a94-a3e7-0c337bb9fb4d-db-sync-config-data\") pod \"glance-db-sync-nmggw\" (UID: \"cf634913-5017-4a94-a3e7-0c337bb9fb4d\") " pod="openstack/glance-db-sync-nmggw" Jan 26 12:59:19 crc kubenswrapper[4881]: I0126 12:59:19.476853 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf634913-5017-4a94-a3e7-0c337bb9fb4d-config-data\") pod \"glance-db-sync-nmggw\" (UID: \"cf634913-5017-4a94-a3e7-0c337bb9fb4d\") " pod="openstack/glance-db-sync-nmggw" Jan 26 12:59:19 crc kubenswrapper[4881]: I0126 12:59:19.476883 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-52rx4\" (UniqueName: \"kubernetes.io/projected/cf634913-5017-4a94-a3e7-0c337bb9fb4d-kube-api-access-52rx4\") pod \"glance-db-sync-nmggw\" (UID: \"cf634913-5017-4a94-a3e7-0c337bb9fb4d\") " pod="openstack/glance-db-sync-nmggw" Jan 26 12:59:19 crc kubenswrapper[4881]: I0126 12:59:19.476951 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf634913-5017-4a94-a3e7-0c337bb9fb4d-combined-ca-bundle\") pod 
\"glance-db-sync-nmggw\" (UID: \"cf634913-5017-4a94-a3e7-0c337bb9fb4d\") " pod="openstack/glance-db-sync-nmggw" Jan 26 12:59:19 crc kubenswrapper[4881]: I0126 12:59:19.483089 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/cf634913-5017-4a94-a3e7-0c337bb9fb4d-db-sync-config-data\") pod \"glance-db-sync-nmggw\" (UID: \"cf634913-5017-4a94-a3e7-0c337bb9fb4d\") " pod="openstack/glance-db-sync-nmggw" Jan 26 12:59:19 crc kubenswrapper[4881]: I0126 12:59:19.486406 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf634913-5017-4a94-a3e7-0c337bb9fb4d-combined-ca-bundle\") pod \"glance-db-sync-nmggw\" (UID: \"cf634913-5017-4a94-a3e7-0c337bb9fb4d\") " pod="openstack/glance-db-sync-nmggw" Jan 26 12:59:19 crc kubenswrapper[4881]: I0126 12:59:19.486960 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf634913-5017-4a94-a3e7-0c337bb9fb4d-config-data\") pod \"glance-db-sync-nmggw\" (UID: \"cf634913-5017-4a94-a3e7-0c337bb9fb4d\") " pod="openstack/glance-db-sync-nmggw" Jan 26 12:59:19 crc kubenswrapper[4881]: I0126 12:59:19.508495 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-52rx4\" (UniqueName: \"kubernetes.io/projected/cf634913-5017-4a94-a3e7-0c337bb9fb4d-kube-api-access-52rx4\") pod \"glance-db-sync-nmggw\" (UID: \"cf634913-5017-4a94-a3e7-0c337bb9fb4d\") " pod="openstack/glance-db-sync-nmggw" Jan 26 12:59:19 crc kubenswrapper[4881]: I0126 12:59:19.511444 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6d58d86989-8cl5p"] Jan 26 12:59:19 crc kubenswrapper[4881]: I0126 12:59:19.544446 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-nmggw" Jan 26 12:59:19 crc kubenswrapper[4881]: I0126 12:59:19.579417 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-hmsfj"] Jan 26 12:59:19 crc kubenswrapper[4881]: I0126 12:59:19.587312 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-hmsfj" Jan 26 12:59:19 crc kubenswrapper[4881]: I0126 12:59:19.592029 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 26 12:59:19 crc kubenswrapper[4881]: I0126 12:59:19.592339 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 26 12:59:19 crc kubenswrapper[4881]: I0126 12:59:19.593420 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-c9mh4" Jan 26 12:59:19 crc kubenswrapper[4881]: I0126 12:59:19.595153 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5c67646cfd-kppgm"] Jan 26 12:59:19 crc kubenswrapper[4881]: I0126 12:59:19.597766 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5c67646cfd-kppgm" Jan 26 12:59:19 crc kubenswrapper[4881]: I0126 12:59:19.599468 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Jan 26 12:59:19 crc kubenswrapper[4881]: I0126 12:59:19.613623 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-hmsfj"] Jan 26 12:59:19 crc kubenswrapper[4881]: I0126 12:59:19.620954 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5c67646cfd-kppgm"] Jan 26 12:59:19 crc kubenswrapper[4881]: I0126 12:59:19.680980 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6b0cbe35-c0c9-4483-866a-eddf1fdced26-horizon-secret-key\") pod \"horizon-5c67646cfd-kppgm\" (UID: \"6b0cbe35-c0c9-4483-866a-eddf1fdced26\") " pod="openstack/horizon-5c67646cfd-kppgm" Jan 26 12:59:19 crc kubenswrapper[4881]: I0126 12:59:19.681036 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d87dr\" (UniqueName: \"kubernetes.io/projected/6b0cbe35-c0c9-4483-866a-eddf1fdced26-kube-api-access-d87dr\") pod \"horizon-5c67646cfd-kppgm\" (UID: \"6b0cbe35-c0c9-4483-866a-eddf1fdced26\") " pod="openstack/horizon-5c67646cfd-kppgm" Jan 26 12:59:19 crc kubenswrapper[4881]: I0126 12:59:19.681086 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b0cbe35-c0c9-4483-866a-eddf1fdced26-horizon-tls-certs\") pod \"horizon-5c67646cfd-kppgm\" (UID: \"6b0cbe35-c0c9-4483-866a-eddf1fdced26\") " pod="openstack/horizon-5c67646cfd-kppgm" Jan 26 12:59:19 crc kubenswrapper[4881]: I0126 12:59:19.681105 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6b0cbe35-c0c9-4483-866a-eddf1fdced26-scripts\") pod \"horizon-5c67646cfd-kppgm\" (UID: \"6b0cbe35-c0c9-4483-866a-eddf1fdced26\") " pod="openstack/horizon-5c67646cfd-kppgm" Jan 26 12:59:19 crc kubenswrapper[4881]: I0126 12:59:19.681133 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6b0cbe35-c0c9-4483-866a-eddf1fdced26-config-data\") pod \"horizon-5c67646cfd-kppgm\" (UID: \"6b0cbe35-c0c9-4483-866a-eddf1fdced26\") " pod="openstack/horizon-5c67646cfd-kppgm" Jan 26 12:59:19 crc kubenswrapper[4881]: I0126 12:59:19.681205 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6b0cbe35-c0c9-4483-866a-eddf1fdced26-logs\") pod \"horizon-5c67646cfd-kppgm\" (UID: \"6b0cbe35-c0c9-4483-866a-eddf1fdced26\") " pod="openstack/horizon-5c67646cfd-kppgm" Jan 26 12:59:19 crc kubenswrapper[4881]: I0126 12:59:19.681239 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b0cbe35-c0c9-4483-866a-eddf1fdced26-combined-ca-bundle\") pod \"horizon-5c67646cfd-kppgm\" (UID: \"6b0cbe35-c0c9-4483-866a-eddf1fdced26\") " pod="openstack/horizon-5c67646cfd-kppgm" Jan 26 12:59:19 crc kubenswrapper[4881]: I0126 12:59:19.681285 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5s7p\" (UniqueName: 
\"kubernetes.io/projected/a410393d-b0c5-45bf-b9f7-897ad16759d4-kube-api-access-n5s7p\") pod \"neutron-db-sync-hmsfj\" (UID: \"a410393d-b0c5-45bf-b9f7-897ad16759d4\") " pod="openstack/neutron-db-sync-hmsfj" Jan 26 12:59:19 crc kubenswrapper[4881]: I0126 12:59:19.681306 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a410393d-b0c5-45bf-b9f7-897ad16759d4-config\") pod \"neutron-db-sync-hmsfj\" (UID: \"a410393d-b0c5-45bf-b9f7-897ad16759d4\") " pod="openstack/neutron-db-sync-hmsfj" Jan 26 12:59:19 crc kubenswrapper[4881]: I0126 12:59:19.681329 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a410393d-b0c5-45bf-b9f7-897ad16759d4-combined-ca-bundle\") pod \"neutron-db-sync-hmsfj\" (UID: \"a410393d-b0c5-45bf-b9f7-897ad16759d4\") " pod="openstack/neutron-db-sync-hmsfj" Jan 26 12:59:19 crc kubenswrapper[4881]: I0126 12:59:19.730492 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5d944b8f4f-tgqh4"] Jan 26 12:59:19 crc kubenswrapper[4881]: I0126 12:59:19.740624 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-7bf7cc86f8-h94sx"] Jan 26 12:59:19 crc kubenswrapper[4881]: I0126 12:59:19.742158 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7bf7cc86f8-h94sx" Jan 26 12:59:19 crc kubenswrapper[4881]: I0126 12:59:19.749642 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7bf7cc86f8-h94sx"] Jan 26 12:59:19 crc kubenswrapper[4881]: I0126 12:59:19.784283 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b0cbe35-c0c9-4483-866a-eddf1fdced26-combined-ca-bundle\") pod \"horizon-5c67646cfd-kppgm\" (UID: \"6b0cbe35-c0c9-4483-866a-eddf1fdced26\") " pod="openstack/horizon-5c67646cfd-kppgm" Jan 26 12:59:19 crc kubenswrapper[4881]: I0126 12:59:19.784343 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n5s7p\" (UniqueName: \"kubernetes.io/projected/a410393d-b0c5-45bf-b9f7-897ad16759d4-kube-api-access-n5s7p\") pod \"neutron-db-sync-hmsfj\" (UID: \"a410393d-b0c5-45bf-b9f7-897ad16759d4\") " pod="openstack/neutron-db-sync-hmsfj" Jan 26 12:59:19 crc kubenswrapper[4881]: I0126 12:59:19.784365 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a410393d-b0c5-45bf-b9f7-897ad16759d4-config\") pod \"neutron-db-sync-hmsfj\" (UID: \"a410393d-b0c5-45bf-b9f7-897ad16759d4\") " pod="openstack/neutron-db-sync-hmsfj" Jan 26 12:59:19 crc kubenswrapper[4881]: I0126 12:59:19.784388 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a410393d-b0c5-45bf-b9f7-897ad16759d4-combined-ca-bundle\") pod \"neutron-db-sync-hmsfj\" (UID: \"a410393d-b0c5-45bf-b9f7-897ad16759d4\") " pod="openstack/neutron-db-sync-hmsfj" Jan 26 12:59:19 crc kubenswrapper[4881]: I0126 12:59:19.784424 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6b0cbe35-c0c9-4483-866a-eddf1fdced26-horizon-secret-key\") pod \"horizon-5c67646cfd-kppgm\" (UID: \"6b0cbe35-c0c9-4483-866a-eddf1fdced26\") " pod="openstack/horizon-5c67646cfd-kppgm" Jan 26 12:59:19 crc 
kubenswrapper[4881]: I0126 12:59:19.784449 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d87dr\" (UniqueName: \"kubernetes.io/projected/6b0cbe35-c0c9-4483-866a-eddf1fdced26-kube-api-access-d87dr\") pod \"horizon-5c67646cfd-kppgm\" (UID: \"6b0cbe35-c0c9-4483-866a-eddf1fdced26\") " pod="openstack/horizon-5c67646cfd-kppgm" Jan 26 12:59:19 crc kubenswrapper[4881]: I0126 12:59:19.784490 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6b0cbe35-c0c9-4483-866a-eddf1fdced26-scripts\") pod \"horizon-5c67646cfd-kppgm\" (UID: \"6b0cbe35-c0c9-4483-866a-eddf1fdced26\") " pod="openstack/horizon-5c67646cfd-kppgm" Jan 26 12:59:19 crc kubenswrapper[4881]: I0126 12:59:19.784504 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b0cbe35-c0c9-4483-866a-eddf1fdced26-horizon-tls-certs\") pod \"horizon-5c67646cfd-kppgm\" (UID: \"6b0cbe35-c0c9-4483-866a-eddf1fdced26\") " pod="openstack/horizon-5c67646cfd-kppgm" Jan 26 12:59:19 crc kubenswrapper[4881]: I0126 12:59:19.784540 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6b0cbe35-c0c9-4483-866a-eddf1fdced26-config-data\") pod \"horizon-5c67646cfd-kppgm\" (UID: \"6b0cbe35-c0c9-4483-866a-eddf1fdced26\") " pod="openstack/horizon-5c67646cfd-kppgm" Jan 26 12:59:19 crc kubenswrapper[4881]: I0126 12:59:19.784607 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6b0cbe35-c0c9-4483-866a-eddf1fdced26-logs\") pod \"horizon-5c67646cfd-kppgm\" (UID: \"6b0cbe35-c0c9-4483-866a-eddf1fdced26\") " pod="openstack/horizon-5c67646cfd-kppgm" Jan 26 12:59:19 crc kubenswrapper[4881]: I0126 12:59:19.784977 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6b0cbe35-c0c9-4483-866a-eddf1fdced26-logs\") pod \"horizon-5c67646cfd-kppgm\" (UID: \"6b0cbe35-c0c9-4483-866a-eddf1fdced26\") " pod="openstack/horizon-5c67646cfd-kppgm" Jan 26 12:59:19 crc kubenswrapper[4881]: I0126 12:59:19.786441 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6b0cbe35-c0c9-4483-866a-eddf1fdced26-scripts\") pod \"horizon-5c67646cfd-kppgm\" (UID: \"6b0cbe35-c0c9-4483-866a-eddf1fdced26\") " pod="openstack/horizon-5c67646cfd-kppgm" Jan 26 12:59:19 crc kubenswrapper[4881]: I0126 12:59:19.788072 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6b0cbe35-c0c9-4483-866a-eddf1fdced26-config-data\") pod \"horizon-5c67646cfd-kppgm\" (UID: \"6b0cbe35-c0c9-4483-866a-eddf1fdced26\") " pod="openstack/horizon-5c67646cfd-kppgm" Jan 26 12:59:19 crc kubenswrapper[4881]: I0126 12:59:19.790855 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a410393d-b0c5-45bf-b9f7-897ad16759d4-combined-ca-bundle\") pod \"neutron-db-sync-hmsfj\" (UID: \"a410393d-b0c5-45bf-b9f7-897ad16759d4\") " pod="openstack/neutron-db-sync-hmsfj" Jan 26 12:59:19 crc kubenswrapper[4881]: I0126 12:59:19.791266 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: 
\"kubernetes.io/secret/6b0cbe35-c0c9-4483-866a-eddf1fdced26-horizon-secret-key\") pod \"horizon-5c67646cfd-kppgm\" (UID: \"6b0cbe35-c0c9-4483-866a-eddf1fdced26\") " pod="openstack/horizon-5c67646cfd-kppgm" Jan 26 12:59:19 crc kubenswrapper[4881]: I0126 12:59:19.791453 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b0cbe35-c0c9-4483-866a-eddf1fdced26-combined-ca-bundle\") pod \"horizon-5c67646cfd-kppgm\" (UID: \"6b0cbe35-c0c9-4483-866a-eddf1fdced26\") " pod="openstack/horizon-5c67646cfd-kppgm" Jan 26 12:59:19 crc kubenswrapper[4881]: I0126 12:59:19.791832 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/a410393d-b0c5-45bf-b9f7-897ad16759d4-config\") pod \"neutron-db-sync-hmsfj\" (UID: \"a410393d-b0c5-45bf-b9f7-897ad16759d4\") " pod="openstack/neutron-db-sync-hmsfj" Jan 26 12:59:19 crc kubenswrapper[4881]: I0126 12:59:19.793997 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b0cbe35-c0c9-4483-866a-eddf1fdced26-horizon-tls-certs\") pod \"horizon-5c67646cfd-kppgm\" (UID: \"6b0cbe35-c0c9-4483-866a-eddf1fdced26\") " pod="openstack/horizon-5c67646cfd-kppgm" Jan 26 12:59:19 crc kubenswrapper[4881]: I0126 12:59:19.802405 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d87dr\" (UniqueName: \"kubernetes.io/projected/6b0cbe35-c0c9-4483-866a-eddf1fdced26-kube-api-access-d87dr\") pod \"horizon-5c67646cfd-kppgm\" (UID: \"6b0cbe35-c0c9-4483-866a-eddf1fdced26\") " pod="openstack/horizon-5c67646cfd-kppgm" Jan 26 12:59:19 crc kubenswrapper[4881]: I0126 12:59:19.804976 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n5s7p\" (UniqueName: \"kubernetes.io/projected/a410393d-b0c5-45bf-b9f7-897ad16759d4-kube-api-access-n5s7p\") pod \"neutron-db-sync-hmsfj\" (UID: \"a410393d-b0c5-45bf-b9f7-897ad16759d4\") " pod="openstack/neutron-db-sync-hmsfj" Jan 26 12:59:19 crc kubenswrapper[4881]: I0126 12:59:19.898462 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/92e11d67-ecbe-4005-849c-40a16f3d3faa-logs\") pod \"horizon-7bf7cc86f8-h94sx\" (UID: \"92e11d67-ecbe-4005-849c-40a16f3d3faa\") " pod="openstack/horizon-7bf7cc86f8-h94sx" Jan 26 12:59:19 crc kubenswrapper[4881]: I0126 12:59:19.898531 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-226cb\" (UniqueName: \"kubernetes.io/projected/92e11d67-ecbe-4005-849c-40a16f3d3faa-kube-api-access-226cb\") pod \"horizon-7bf7cc86f8-h94sx\" (UID: \"92e11d67-ecbe-4005-849c-40a16f3d3faa\") " pod="openstack/horizon-7bf7cc86f8-h94sx" Jan 26 12:59:19 crc kubenswrapper[4881]: I0126 12:59:19.898559 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/92e11d67-ecbe-4005-849c-40a16f3d3faa-horizon-secret-key\") pod \"horizon-7bf7cc86f8-h94sx\" (UID: \"92e11d67-ecbe-4005-849c-40a16f3d3faa\") " pod="openstack/horizon-7bf7cc86f8-h94sx" Jan 26 12:59:19 crc kubenswrapper[4881]: I0126 12:59:19.898643 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/92e11d67-ecbe-4005-849c-40a16f3d3faa-config-data\") pod 
\"horizon-7bf7cc86f8-h94sx\" (UID: \"92e11d67-ecbe-4005-849c-40a16f3d3faa\") " pod="openstack/horizon-7bf7cc86f8-h94sx" Jan 26 12:59:19 crc kubenswrapper[4881]: I0126 12:59:19.898685 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/92e11d67-ecbe-4005-849c-40a16f3d3faa-horizon-tls-certs\") pod \"horizon-7bf7cc86f8-h94sx\" (UID: \"92e11d67-ecbe-4005-849c-40a16f3d3faa\") " pod="openstack/horizon-7bf7cc86f8-h94sx" Jan 26 12:59:19 crc kubenswrapper[4881]: I0126 12:59:19.898717 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92e11d67-ecbe-4005-849c-40a16f3d3faa-combined-ca-bundle\") pod \"horizon-7bf7cc86f8-h94sx\" (UID: \"92e11d67-ecbe-4005-849c-40a16f3d3faa\") " pod="openstack/horizon-7bf7cc86f8-h94sx" Jan 26 12:59:19 crc kubenswrapper[4881]: I0126 12:59:19.898735 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/92e11d67-ecbe-4005-849c-40a16f3d3faa-scripts\") pod \"horizon-7bf7cc86f8-h94sx\" (UID: \"92e11d67-ecbe-4005-849c-40a16f3d3faa\") " pod="openstack/horizon-7bf7cc86f8-h94sx" Jan 26 12:59:19 crc kubenswrapper[4881]: I0126 12:59:19.918017 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-hmsfj" Jan 26 12:59:19 crc kubenswrapper[4881]: I0126 12:59:19.939470 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-s257b" Jan 26 12:59:19 crc kubenswrapper[4881]: I0126 12:59:19.950898 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5c67646cfd-kppgm" Jan 26 12:59:20 crc kubenswrapper[4881]: I0126 12:59:20.000712 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/92e11d67-ecbe-4005-849c-40a16f3d3faa-scripts\") pod \"horizon-7bf7cc86f8-h94sx\" (UID: \"92e11d67-ecbe-4005-849c-40a16f3d3faa\") " pod="openstack/horizon-7bf7cc86f8-h94sx" Jan 26 12:59:20 crc kubenswrapper[4881]: I0126 12:59:20.000820 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/92e11d67-ecbe-4005-849c-40a16f3d3faa-logs\") pod \"horizon-7bf7cc86f8-h94sx\" (UID: \"92e11d67-ecbe-4005-849c-40a16f3d3faa\") " pod="openstack/horizon-7bf7cc86f8-h94sx" Jan 26 12:59:20 crc kubenswrapper[4881]: I0126 12:59:20.000843 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-226cb\" (UniqueName: \"kubernetes.io/projected/92e11d67-ecbe-4005-849c-40a16f3d3faa-kube-api-access-226cb\") pod \"horizon-7bf7cc86f8-h94sx\" (UID: \"92e11d67-ecbe-4005-849c-40a16f3d3faa\") " pod="openstack/horizon-7bf7cc86f8-h94sx" Jan 26 12:59:20 crc kubenswrapper[4881]: I0126 12:59:20.000867 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/92e11d67-ecbe-4005-849c-40a16f3d3faa-horizon-secret-key\") pod \"horizon-7bf7cc86f8-h94sx\" (UID: \"92e11d67-ecbe-4005-849c-40a16f3d3faa\") " pod="openstack/horizon-7bf7cc86f8-h94sx" Jan 26 12:59:20 crc kubenswrapper[4881]: I0126 12:59:20.000919 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/92e11d67-ecbe-4005-849c-40a16f3d3faa-config-data\") pod \"horizon-7bf7cc86f8-h94sx\" (UID: \"92e11d67-ecbe-4005-849c-40a16f3d3faa\") " pod="openstack/horizon-7bf7cc86f8-h94sx" Jan 26 12:59:20 crc kubenswrapper[4881]: I0126 12:59:20.000946 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/92e11d67-ecbe-4005-849c-40a16f3d3faa-horizon-tls-certs\") pod \"horizon-7bf7cc86f8-h94sx\" (UID: \"92e11d67-ecbe-4005-849c-40a16f3d3faa\") " pod="openstack/horizon-7bf7cc86f8-h94sx" Jan 26 12:59:20 crc kubenswrapper[4881]: I0126 12:59:20.000974 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92e11d67-ecbe-4005-849c-40a16f3d3faa-combined-ca-bundle\") pod \"horizon-7bf7cc86f8-h94sx\" (UID: \"92e11d67-ecbe-4005-849c-40a16f3d3faa\") " pod="openstack/horizon-7bf7cc86f8-h94sx" Jan 26 12:59:20 crc kubenswrapper[4881]: I0126 12:59:20.002082 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/92e11d67-ecbe-4005-849c-40a16f3d3faa-scripts\") pod \"horizon-7bf7cc86f8-h94sx\" (UID: \"92e11d67-ecbe-4005-849c-40a16f3d3faa\") " pod="openstack/horizon-7bf7cc86f8-h94sx" Jan 26 12:59:20 crc kubenswrapper[4881]: I0126 12:59:20.002356 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/92e11d67-ecbe-4005-849c-40a16f3d3faa-logs\") pod \"horizon-7bf7cc86f8-h94sx\" (UID: \"92e11d67-ecbe-4005-849c-40a16f3d3faa\") " pod="openstack/horizon-7bf7cc86f8-h94sx" Jan 26 12:59:20 crc kubenswrapper[4881]: I0126 12:59:20.002505 4881 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/92e11d67-ecbe-4005-849c-40a16f3d3faa-config-data\") pod \"horizon-7bf7cc86f8-h94sx\" (UID: \"92e11d67-ecbe-4005-849c-40a16f3d3faa\") " pod="openstack/horizon-7bf7cc86f8-h94sx" Jan 26 12:59:20 crc kubenswrapper[4881]: I0126 12:59:20.006818 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/92e11d67-ecbe-4005-849c-40a16f3d3faa-horizon-tls-certs\") pod \"horizon-7bf7cc86f8-h94sx\" (UID: \"92e11d67-ecbe-4005-849c-40a16f3d3faa\") " pod="openstack/horizon-7bf7cc86f8-h94sx" Jan 26 12:59:20 crc kubenswrapper[4881]: I0126 12:59:20.008986 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/92e11d67-ecbe-4005-849c-40a16f3d3faa-horizon-secret-key\") pod \"horizon-7bf7cc86f8-h94sx\" (UID: \"92e11d67-ecbe-4005-849c-40a16f3d3faa\") " pod="openstack/horizon-7bf7cc86f8-h94sx" Jan 26 12:59:20 crc kubenswrapper[4881]: I0126 12:59:20.014328 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92e11d67-ecbe-4005-849c-40a16f3d3faa-combined-ca-bundle\") pod \"horizon-7bf7cc86f8-h94sx\" (UID: \"92e11d67-ecbe-4005-849c-40a16f3d3faa\") " pod="openstack/horizon-7bf7cc86f8-h94sx" Jan 26 12:59:20 crc kubenswrapper[4881]: I0126 12:59:20.018057 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-226cb\" (UniqueName: \"kubernetes.io/projected/92e11d67-ecbe-4005-849c-40a16f3d3faa-kube-api-access-226cb\") pod \"horizon-7bf7cc86f8-h94sx\" (UID: \"92e11d67-ecbe-4005-849c-40a16f3d3faa\") " pod="openstack/horizon-7bf7cc86f8-h94sx" Jan 26 12:59:20 crc kubenswrapper[4881]: I0126 12:59:20.064945 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7bf7cc86f8-h94sx" Jan 26 12:59:20 crc kubenswrapper[4881]: I0126 12:59:20.102099 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kd27l\" (UniqueName: \"kubernetes.io/projected/450091ac-d618-40d8-9f54-7fb0e02bb9d0-kube-api-access-kd27l\") pod \"450091ac-d618-40d8-9f54-7fb0e02bb9d0\" (UID: \"450091ac-d618-40d8-9f54-7fb0e02bb9d0\") " Jan 26 12:59:20 crc kubenswrapper[4881]: I0126 12:59:20.102177 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/450091ac-d618-40d8-9f54-7fb0e02bb9d0-credential-keys\") pod \"450091ac-d618-40d8-9f54-7fb0e02bb9d0\" (UID: \"450091ac-d618-40d8-9f54-7fb0e02bb9d0\") " Jan 26 12:59:20 crc kubenswrapper[4881]: I0126 12:59:20.102202 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/450091ac-d618-40d8-9f54-7fb0e02bb9d0-fernet-keys\") pod \"450091ac-d618-40d8-9f54-7fb0e02bb9d0\" (UID: \"450091ac-d618-40d8-9f54-7fb0e02bb9d0\") " Jan 26 12:59:20 crc kubenswrapper[4881]: I0126 12:59:20.102230 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/450091ac-d618-40d8-9f54-7fb0e02bb9d0-config-data\") pod \"450091ac-d618-40d8-9f54-7fb0e02bb9d0\" (UID: \"450091ac-d618-40d8-9f54-7fb0e02bb9d0\") " Jan 26 12:59:20 crc kubenswrapper[4881]: I0126 12:59:20.102311 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/450091ac-d618-40d8-9f54-7fb0e02bb9d0-combined-ca-bundle\") pod \"450091ac-d618-40d8-9f54-7fb0e02bb9d0\" (UID: \"450091ac-d618-40d8-9f54-7fb0e02bb9d0\") " Jan 26 12:59:20 crc kubenswrapper[4881]: I0126 12:59:20.102339 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/450091ac-d618-40d8-9f54-7fb0e02bb9d0-scripts\") pod \"450091ac-d618-40d8-9f54-7fb0e02bb9d0\" (UID: \"450091ac-d618-40d8-9f54-7fb0e02bb9d0\") " Jan 26 12:59:20 crc kubenswrapper[4881]: I0126 12:59:20.106855 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/450091ac-d618-40d8-9f54-7fb0e02bb9d0-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "450091ac-d618-40d8-9f54-7fb0e02bb9d0" (UID: "450091ac-d618-40d8-9f54-7fb0e02bb9d0"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:59:20 crc kubenswrapper[4881]: I0126 12:59:20.109047 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/450091ac-d618-40d8-9f54-7fb0e02bb9d0-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "450091ac-d618-40d8-9f54-7fb0e02bb9d0" (UID: "450091ac-d618-40d8-9f54-7fb0e02bb9d0"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:59:20 crc kubenswrapper[4881]: I0126 12:59:20.109114 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/450091ac-d618-40d8-9f54-7fb0e02bb9d0-kube-api-access-kd27l" (OuterVolumeSpecName: "kube-api-access-kd27l") pod "450091ac-d618-40d8-9f54-7fb0e02bb9d0" (UID: "450091ac-d618-40d8-9f54-7fb0e02bb9d0"). InnerVolumeSpecName "kube-api-access-kd27l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:59:20 crc kubenswrapper[4881]: I0126 12:59:20.110288 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/450091ac-d618-40d8-9f54-7fb0e02bb9d0-scripts" (OuterVolumeSpecName: "scripts") pod "450091ac-d618-40d8-9f54-7fb0e02bb9d0" (UID: "450091ac-d618-40d8-9f54-7fb0e02bb9d0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:59:20 crc kubenswrapper[4881]: I0126 12:59:20.132781 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/450091ac-d618-40d8-9f54-7fb0e02bb9d0-config-data" (OuterVolumeSpecName: "config-data") pod "450091ac-d618-40d8-9f54-7fb0e02bb9d0" (UID: "450091ac-d618-40d8-9f54-7fb0e02bb9d0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:59:20 crc kubenswrapper[4881]: I0126 12:59:20.137091 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/450091ac-d618-40d8-9f54-7fb0e02bb9d0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "450091ac-d618-40d8-9f54-7fb0e02bb9d0" (UID: "450091ac-d618-40d8-9f54-7fb0e02bb9d0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:59:20 crc kubenswrapper[4881]: I0126 12:59:20.204411 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kd27l\" (UniqueName: \"kubernetes.io/projected/450091ac-d618-40d8-9f54-7fb0e02bb9d0-kube-api-access-kd27l\") on node \"crc\" DevicePath \"\"" Jan 26 12:59:20 crc kubenswrapper[4881]: I0126 12:59:20.204438 4881 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/450091ac-d618-40d8-9f54-7fb0e02bb9d0-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 26 12:59:20 crc kubenswrapper[4881]: I0126 12:59:20.204447 4881 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/450091ac-d618-40d8-9f54-7fb0e02bb9d0-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 26 12:59:20 crc kubenswrapper[4881]: I0126 12:59:20.204457 4881 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/450091ac-d618-40d8-9f54-7fb0e02bb9d0-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 12:59:20 crc kubenswrapper[4881]: I0126 12:59:20.204466 4881 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/450091ac-d618-40d8-9f54-7fb0e02bb9d0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 12:59:20 crc kubenswrapper[4881]: I0126 12:59:20.204474 4881 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/450091ac-d618-40d8-9f54-7fb0e02bb9d0-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 12:59:20 crc kubenswrapper[4881]: I0126 12:59:20.547889 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-s257b" event={"ID":"450091ac-d618-40d8-9f54-7fb0e02bb9d0","Type":"ContainerDied","Data":"e170f7e2a1e77fa45d97ed2c710fc1d53db3cd4d27330b507e53054516c62a29"} Jan 26 12:59:20 crc kubenswrapper[4881]: I0126 12:59:20.547942 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e170f7e2a1e77fa45d97ed2c710fc1d53db3cd4d27330b507e53054516c62a29" Jan 26 12:59:20 crc kubenswrapper[4881]: I0126 12:59:20.547963 4881 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-s257b" Jan 26 12:59:20 crc kubenswrapper[4881]: I0126 12:59:20.602291 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-s257b"] Jan 26 12:59:20 crc kubenswrapper[4881]: I0126 12:59:20.615603 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-s257b"] Jan 26 12:59:20 crc kubenswrapper[4881]: I0126 12:59:20.692376 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-wmvz7"] Jan 26 12:59:20 crc kubenswrapper[4881]: E0126 12:59:20.692768 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="450091ac-d618-40d8-9f54-7fb0e02bb9d0" containerName="keystone-bootstrap" Jan 26 12:59:20 crc kubenswrapper[4881]: I0126 12:59:20.692786 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="450091ac-d618-40d8-9f54-7fb0e02bb9d0" containerName="keystone-bootstrap" Jan 26 12:59:20 crc kubenswrapper[4881]: I0126 12:59:20.692969 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="450091ac-d618-40d8-9f54-7fb0e02bb9d0" containerName="keystone-bootstrap" Jan 26 12:59:20 crc kubenswrapper[4881]: I0126 12:59:20.693569 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-wmvz7" Jan 26 12:59:20 crc kubenswrapper[4881]: I0126 12:59:20.697106 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 26 12:59:20 crc kubenswrapper[4881]: I0126 12:59:20.697130 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-p4c6x" Jan 26 12:59:20 crc kubenswrapper[4881]: I0126 12:59:20.697165 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 26 12:59:20 crc kubenswrapper[4881]: I0126 12:59:20.697336 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 26 12:59:20 crc kubenswrapper[4881]: I0126 12:59:20.698908 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 26 12:59:20 crc kubenswrapper[4881]: I0126 12:59:20.706017 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-wmvz7"] Jan 26 12:59:20 crc kubenswrapper[4881]: I0126 12:59:20.816559 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce1a489b-0795-4817-ad32-7fdf1ea68559-combined-ca-bundle\") pod \"keystone-bootstrap-wmvz7\" (UID: \"ce1a489b-0795-4817-ad32-7fdf1ea68559\") " pod="openstack/keystone-bootstrap-wmvz7" Jan 26 12:59:20 crc kubenswrapper[4881]: I0126 12:59:20.816666 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pf7mz\" (UniqueName: \"kubernetes.io/projected/ce1a489b-0795-4817-ad32-7fdf1ea68559-kube-api-access-pf7mz\") pod \"keystone-bootstrap-wmvz7\" (UID: \"ce1a489b-0795-4817-ad32-7fdf1ea68559\") " pod="openstack/keystone-bootstrap-wmvz7" Jan 26 12:59:20 crc kubenswrapper[4881]: I0126 12:59:20.816700 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ce1a489b-0795-4817-ad32-7fdf1ea68559-fernet-keys\") pod \"keystone-bootstrap-wmvz7\" (UID: \"ce1a489b-0795-4817-ad32-7fdf1ea68559\") " pod="openstack/keystone-bootstrap-wmvz7" Jan 26 
12:59:20 crc kubenswrapper[4881]: I0126 12:59:20.816788 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce1a489b-0795-4817-ad32-7fdf1ea68559-scripts\") pod \"keystone-bootstrap-wmvz7\" (UID: \"ce1a489b-0795-4817-ad32-7fdf1ea68559\") " pod="openstack/keystone-bootstrap-wmvz7" Jan 26 12:59:20 crc kubenswrapper[4881]: I0126 12:59:20.816820 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce1a489b-0795-4817-ad32-7fdf1ea68559-config-data\") pod \"keystone-bootstrap-wmvz7\" (UID: \"ce1a489b-0795-4817-ad32-7fdf1ea68559\") " pod="openstack/keystone-bootstrap-wmvz7" Jan 26 12:59:20 crc kubenswrapper[4881]: I0126 12:59:20.816844 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ce1a489b-0795-4817-ad32-7fdf1ea68559-credential-keys\") pod \"keystone-bootstrap-wmvz7\" (UID: \"ce1a489b-0795-4817-ad32-7fdf1ea68559\") " pod="openstack/keystone-bootstrap-wmvz7" Jan 26 12:59:20 crc kubenswrapper[4881]: I0126 12:59:20.918767 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pf7mz\" (UniqueName: \"kubernetes.io/projected/ce1a489b-0795-4817-ad32-7fdf1ea68559-kube-api-access-pf7mz\") pod \"keystone-bootstrap-wmvz7\" (UID: \"ce1a489b-0795-4817-ad32-7fdf1ea68559\") " pod="openstack/keystone-bootstrap-wmvz7" Jan 26 12:59:20 crc kubenswrapper[4881]: I0126 12:59:20.918901 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ce1a489b-0795-4817-ad32-7fdf1ea68559-fernet-keys\") pod \"keystone-bootstrap-wmvz7\" (UID: \"ce1a489b-0795-4817-ad32-7fdf1ea68559\") " pod="openstack/keystone-bootstrap-wmvz7" Jan 26 12:59:20 crc kubenswrapper[4881]: I0126 12:59:20.918977 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce1a489b-0795-4817-ad32-7fdf1ea68559-scripts\") pod \"keystone-bootstrap-wmvz7\" (UID: \"ce1a489b-0795-4817-ad32-7fdf1ea68559\") " pod="openstack/keystone-bootstrap-wmvz7" Jan 26 12:59:20 crc kubenswrapper[4881]: I0126 12:59:20.919002 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce1a489b-0795-4817-ad32-7fdf1ea68559-config-data\") pod \"keystone-bootstrap-wmvz7\" (UID: \"ce1a489b-0795-4817-ad32-7fdf1ea68559\") " pod="openstack/keystone-bootstrap-wmvz7" Jan 26 12:59:20 crc kubenswrapper[4881]: I0126 12:59:20.919030 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ce1a489b-0795-4817-ad32-7fdf1ea68559-credential-keys\") pod \"keystone-bootstrap-wmvz7\" (UID: \"ce1a489b-0795-4817-ad32-7fdf1ea68559\") " pod="openstack/keystone-bootstrap-wmvz7" Jan 26 12:59:20 crc kubenswrapper[4881]: I0126 12:59:20.919126 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce1a489b-0795-4817-ad32-7fdf1ea68559-combined-ca-bundle\") pod \"keystone-bootstrap-wmvz7\" (UID: \"ce1a489b-0795-4817-ad32-7fdf1ea68559\") " pod="openstack/keystone-bootstrap-wmvz7" Jan 26 12:59:20 crc kubenswrapper[4881]: I0126 12:59:20.923202 4881 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ce1a489b-0795-4817-ad32-7fdf1ea68559-credential-keys\") pod \"keystone-bootstrap-wmvz7\" (UID: \"ce1a489b-0795-4817-ad32-7fdf1ea68559\") " pod="openstack/keystone-bootstrap-wmvz7" Jan 26 12:59:20 crc kubenswrapper[4881]: I0126 12:59:20.924138 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce1a489b-0795-4817-ad32-7fdf1ea68559-config-data\") pod \"keystone-bootstrap-wmvz7\" (UID: \"ce1a489b-0795-4817-ad32-7fdf1ea68559\") " pod="openstack/keystone-bootstrap-wmvz7" Jan 26 12:59:20 crc kubenswrapper[4881]: I0126 12:59:20.926364 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ce1a489b-0795-4817-ad32-7fdf1ea68559-fernet-keys\") pod \"keystone-bootstrap-wmvz7\" (UID: \"ce1a489b-0795-4817-ad32-7fdf1ea68559\") " pod="openstack/keystone-bootstrap-wmvz7" Jan 26 12:59:20 crc kubenswrapper[4881]: I0126 12:59:20.939285 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce1a489b-0795-4817-ad32-7fdf1ea68559-combined-ca-bundle\") pod \"keystone-bootstrap-wmvz7\" (UID: \"ce1a489b-0795-4817-ad32-7fdf1ea68559\") " pod="openstack/keystone-bootstrap-wmvz7" Jan 26 12:59:20 crc kubenswrapper[4881]: I0126 12:59:20.939503 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce1a489b-0795-4817-ad32-7fdf1ea68559-scripts\") pod \"keystone-bootstrap-wmvz7\" (UID: \"ce1a489b-0795-4817-ad32-7fdf1ea68559\") " pod="openstack/keystone-bootstrap-wmvz7" Jan 26 12:59:20 crc kubenswrapper[4881]: I0126 12:59:20.943412 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pf7mz\" (UniqueName: \"kubernetes.io/projected/ce1a489b-0795-4817-ad32-7fdf1ea68559-kube-api-access-pf7mz\") pod \"keystone-bootstrap-wmvz7\" (UID: \"ce1a489b-0795-4817-ad32-7fdf1ea68559\") " pod="openstack/keystone-bootstrap-wmvz7" Jan 26 12:59:21 crc kubenswrapper[4881]: I0126 12:59:21.063261 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-wmvz7" Jan 26 12:59:21 crc kubenswrapper[4881]: I0126 12:59:21.284665 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7fbb4d475f-pr4r7" Jan 26 12:59:21 crc kubenswrapper[4881]: I0126 12:59:21.346939 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-757cc9679f-smrcs"] Jan 26 12:59:21 crc kubenswrapper[4881]: I0126 12:59:21.349028 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-757cc9679f-smrcs" podUID="c3adacf9-6f24-44e9-a3f7-03082a2159fd" containerName="dnsmasq-dns" containerID="cri-o://e2c6cb6b4e269432b3aa2a4e73ac2efd12af9da145fe3ce4731ea908064179fc" gracePeriod=10 Jan 26 12:59:21 crc kubenswrapper[4881]: I0126 12:59:21.559479 4881 generic.go:334] "Generic (PLEG): container finished" podID="c3adacf9-6f24-44e9-a3f7-03082a2159fd" containerID="e2c6cb6b4e269432b3aa2a4e73ac2efd12af9da145fe3ce4731ea908064179fc" exitCode=0 Jan 26 12:59:21 crc kubenswrapper[4881]: I0126 12:59:21.559548 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757cc9679f-smrcs" event={"ID":"c3adacf9-6f24-44e9-a3f7-03082a2159fd","Type":"ContainerDied","Data":"e2c6cb6b4e269432b3aa2a4e73ac2efd12af9da145fe3ce4731ea908064179fc"} Jan 26 12:59:22 crc kubenswrapper[4881]: I0126 12:59:22.094290 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="450091ac-d618-40d8-9f54-7fb0e02bb9d0" path="/var/lib/kubelet/pods/450091ac-d618-40d8-9f54-7fb0e02bb9d0/volumes" Jan 26 12:59:24 crc kubenswrapper[4881]: I0126 12:59:24.789919 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fwlbz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 12:59:24 crc kubenswrapper[4881]: I0126 12:59:24.790823 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 12:59:29 crc kubenswrapper[4881]: I0126 12:59:29.815892 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-757cc9679f-smrcs" podUID="c3adacf9-6f24-44e9-a3f7-03082a2159fd" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.134:5353: i/o timeout" Jan 26 12:59:34 crc kubenswrapper[4881]: I0126 12:59:34.817696 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-757cc9679f-smrcs" podUID="c3adacf9-6f24-44e9-a3f7-03082a2159fd" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.134:5353: i/o timeout" Jan 26 12:59:36 crc kubenswrapper[4881]: E0126 12:59:36.470159 4881 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.23:5001/podified-master-centos10/openstack-horizon:watcher_latest" Jan 26 12:59:36 crc kubenswrapper[4881]: E0126 12:59:36.470541 4881 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.23:5001/podified-master-centos10/openstack-horizon:watcher_latest" Jan 26 12:59:36 crc kubenswrapper[4881]: E0126 
12:59:36.470699 4881 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:38.102.83.23:5001/podified-master-centos10/openstack-horizon:watcher_latest,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nddhb4h687hd4h5c8h598h685h65bh664h6bh58h4h544h5cbh69h5d5h5b8h687h74h5b7h58dh598h679h58fh647h5bbh64h77h596h684h66fhc4q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:yes,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bzzrh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-6d58d86989-8cl5p_openstack(7b81cab4-a1fe-438e-ae0e-c23a58cc16e7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 12:59:36 crc kubenswrapper[4881]: E0126 12:59:36.473738 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.23:5001/podified-master-centos10/openstack-horizon:watcher_latest\\\"\"]" pod="openstack/horizon-6d58d86989-8cl5p" podUID="7b81cab4-a1fe-438e-ae0e-c23a58cc16e7" Jan 26 12:59:39 crc kubenswrapper[4881]: I0126 12:59:39.819476 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-757cc9679f-smrcs" podUID="c3adacf9-6f24-44e9-a3f7-03082a2159fd" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.134:5353: i/o timeout" Jan 26 12:59:39 crc kubenswrapper[4881]: I0126 12:59:39.820377 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-757cc9679f-smrcs" Jan 26 12:59:39 crc kubenswrapper[4881]: E0126 12:59:39.943333 4881 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.23:5001/podified-master-centos10/openstack-horizon:watcher_latest" Jan 26 12:59:39 crc kubenswrapper[4881]: E0126 12:59:39.943383 4881 
kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.23:5001/podified-master-centos10/openstack-horizon:watcher_latest" Jan 26 12:59:39 crc kubenswrapper[4881]: E0126 12:59:39.943551 4881 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:38.102.83.23:5001/podified-master-centos10/openstack-horizon:watcher_latest,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n9bh5c4hdh5ddh86h66ch9dh6dh694h5f9h98h589h649h68h9bh97h7fh596h67h66dh5bh5bdh78h56bh659h86h686h58fh9bh55ch5f8h9bq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:yes,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-84pgz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-5d944b8f4f-tgqh4_openstack(46cbf3cf-ede5-44fd-9897-799299dfdcf6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 12:59:39 crc kubenswrapper[4881]: E0126 12:59:39.945694 4881 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.23:5001/podified-master-centos10/openstack-horizon:watcher_latest" Jan 26 12:59:39 crc kubenswrapper[4881]: E0126 12:59:39.945720 4881 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.23:5001/podified-master-centos10/openstack-horizon:watcher_latest" Jan 26 12:59:39 crc kubenswrapper[4881]: E0126 12:59:39.945800 4881 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:38.102.83.23:5001/podified-master-centos10/openstack-horizon:watcher_latest,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nd5hcch565hfbh595h679h65fh598hd6h59chf6h98hfdh95h577h659h677h59h595h576h5cfhdch8bhdhf9h56ch589h644h56h55dh544h697q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:yes,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s9s6r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-69c8959d97-5f2wg_openstack(5d2634ed-f529-43dc-8a08-54f97ace0d73): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 12:59:39 crc kubenswrapper[4881]: E0126 12:59:39.948632 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.23:5001/podified-master-centos10/openstack-horizon:watcher_latest\\\"\"]" pod="openstack/horizon-5d944b8f4f-tgqh4" podUID="46cbf3cf-ede5-44fd-9897-799299dfdcf6" Jan 26 12:59:39 crc kubenswrapper[4881]: E0126 12:59:39.948780 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.23:5001/podified-master-centos10/openstack-horizon:watcher_latest\\\"\"]" pod="openstack/horizon-69c8959d97-5f2wg" podUID="5d2634ed-f529-43dc-8a08-54f97ace0d73" Jan 26 12:59:40 crc kubenswrapper[4881]: I0126 12:59:40.061578 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-757cc9679f-smrcs" Jan 26 12:59:40 crc kubenswrapper[4881]: I0126 12:59:40.155406 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c3adacf9-6f24-44e9-a3f7-03082a2159fd-dns-swift-storage-0\") pod \"c3adacf9-6f24-44e9-a3f7-03082a2159fd\" (UID: \"c3adacf9-6f24-44e9-a3f7-03082a2159fd\") " Jan 26 12:59:40 crc kubenswrapper[4881]: I0126 12:59:40.155490 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c3adacf9-6f24-44e9-a3f7-03082a2159fd-ovsdbserver-sb\") pod \"c3adacf9-6f24-44e9-a3f7-03082a2159fd\" (UID: \"c3adacf9-6f24-44e9-a3f7-03082a2159fd\") " Jan 26 12:59:40 crc kubenswrapper[4881]: I0126 12:59:40.156262 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xppx9\" (UniqueName: \"kubernetes.io/projected/c3adacf9-6f24-44e9-a3f7-03082a2159fd-kube-api-access-xppx9\") pod \"c3adacf9-6f24-44e9-a3f7-03082a2159fd\" (UID: \"c3adacf9-6f24-44e9-a3f7-03082a2159fd\") " Jan 26 12:59:40 crc kubenswrapper[4881]: I0126 12:59:40.156346 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c3adacf9-6f24-44e9-a3f7-03082a2159fd-ovsdbserver-nb\") pod \"c3adacf9-6f24-44e9-a3f7-03082a2159fd\" (UID: \"c3adacf9-6f24-44e9-a3f7-03082a2159fd\") " Jan 26 12:59:40 crc kubenswrapper[4881]: I0126 12:59:40.156444 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3adacf9-6f24-44e9-a3f7-03082a2159fd-config\") pod \"c3adacf9-6f24-44e9-a3f7-03082a2159fd\" (UID: \"c3adacf9-6f24-44e9-a3f7-03082a2159fd\") " Jan 26 12:59:40 crc kubenswrapper[4881]: I0126 12:59:40.156472 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c3adacf9-6f24-44e9-a3f7-03082a2159fd-dns-svc\") pod \"c3adacf9-6f24-44e9-a3f7-03082a2159fd\" (UID: \"c3adacf9-6f24-44e9-a3f7-03082a2159fd\") " Jan 26 12:59:40 crc kubenswrapper[4881]: I0126 12:59:40.170749 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3adacf9-6f24-44e9-a3f7-03082a2159fd-kube-api-access-xppx9" (OuterVolumeSpecName: "kube-api-access-xppx9") pod "c3adacf9-6f24-44e9-a3f7-03082a2159fd" (UID: "c3adacf9-6f24-44e9-a3f7-03082a2159fd"). InnerVolumeSpecName "kube-api-access-xppx9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:59:40 crc kubenswrapper[4881]: I0126 12:59:40.201925 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3adacf9-6f24-44e9-a3f7-03082a2159fd-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "c3adacf9-6f24-44e9-a3f7-03082a2159fd" (UID: "c3adacf9-6f24-44e9-a3f7-03082a2159fd"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:59:40 crc kubenswrapper[4881]: I0126 12:59:40.201932 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3adacf9-6f24-44e9-a3f7-03082a2159fd-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c3adacf9-6f24-44e9-a3f7-03082a2159fd" (UID: "c3adacf9-6f24-44e9-a3f7-03082a2159fd"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:59:40 crc kubenswrapper[4881]: I0126 12:59:40.215150 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3adacf9-6f24-44e9-a3f7-03082a2159fd-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c3adacf9-6f24-44e9-a3f7-03082a2159fd" (UID: "c3adacf9-6f24-44e9-a3f7-03082a2159fd"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:59:40 crc kubenswrapper[4881]: I0126 12:59:40.219547 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3adacf9-6f24-44e9-a3f7-03082a2159fd-config" (OuterVolumeSpecName: "config") pod "c3adacf9-6f24-44e9-a3f7-03082a2159fd" (UID: "c3adacf9-6f24-44e9-a3f7-03082a2159fd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:59:40 crc kubenswrapper[4881]: I0126 12:59:40.235496 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3adacf9-6f24-44e9-a3f7-03082a2159fd-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c3adacf9-6f24-44e9-a3f7-03082a2159fd" (UID: "c3adacf9-6f24-44e9-a3f7-03082a2159fd"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:59:40 crc kubenswrapper[4881]: I0126 12:59:40.258922 4881 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c3adacf9-6f24-44e9-a3f7-03082a2159fd-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 12:59:40 crc kubenswrapper[4881]: I0126 12:59:40.258953 4881 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3adacf9-6f24-44e9-a3f7-03082a2159fd-config\") on node \"crc\" DevicePath \"\"" Jan 26 12:59:40 crc kubenswrapper[4881]: I0126 12:59:40.258964 4881 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c3adacf9-6f24-44e9-a3f7-03082a2159fd-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 12:59:40 crc kubenswrapper[4881]: I0126 12:59:40.258972 4881 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c3adacf9-6f24-44e9-a3f7-03082a2159fd-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 26 12:59:40 crc kubenswrapper[4881]: I0126 12:59:40.258981 4881 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c3adacf9-6f24-44e9-a3f7-03082a2159fd-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 12:59:40 crc kubenswrapper[4881]: I0126 12:59:40.258989 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xppx9\" (UniqueName: \"kubernetes.io/projected/c3adacf9-6f24-44e9-a3f7-03082a2159fd-kube-api-access-xppx9\") on node \"crc\" DevicePath \"\"" Jan 26 12:59:40 crc kubenswrapper[4881]: I0126 12:59:40.818430 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-757cc9679f-smrcs" Jan 26 12:59:40 crc kubenswrapper[4881]: I0126 12:59:40.822504 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757cc9679f-smrcs" event={"ID":"c3adacf9-6f24-44e9-a3f7-03082a2159fd","Type":"ContainerDied","Data":"178a1aa52049d490578b7f7ba98ec625d5199960ab9b40e900a7b644672c70a5"} Jan 26 12:59:40 crc kubenswrapper[4881]: I0126 12:59:40.822592 4881 scope.go:117] "RemoveContainer" containerID="e2c6cb6b4e269432b3aa2a4e73ac2efd12af9da145fe3ce4731ea908064179fc" Jan 26 12:59:40 crc kubenswrapper[4881]: I0126 12:59:40.928637 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-757cc9679f-smrcs"] Jan 26 12:59:40 crc kubenswrapper[4881]: I0126 12:59:40.992016 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-757cc9679f-smrcs"] Jan 26 12:59:42 crc kubenswrapper[4881]: I0126 12:59:42.100968 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3adacf9-6f24-44e9-a3f7-03082a2159fd" path="/var/lib/kubelet/pods/c3adacf9-6f24-44e9-a3f7-03082a2159fd/volumes" Jan 26 12:59:44 crc kubenswrapper[4881]: I0126 12:59:44.820586 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-757cc9679f-smrcs" podUID="c3adacf9-6f24-44e9-a3f7-03082a2159fd" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.134:5353: i/o timeout" Jan 26 12:59:49 crc kubenswrapper[4881]: E0126 12:59:49.088895 4881 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.23:5001/podified-master-centos10/openstack-ceilometer-central:watcher_latest" Jan 26 12:59:49 crc kubenswrapper[4881]: E0126 12:59:49.089225 4881 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.23:5001/podified-master-centos10/openstack-ceilometer-central:watcher_latest" Jan 26 12:59:49 crc kubenswrapper[4881]: E0126 12:59:49.089380 4881 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:38.102.83.23:5001/podified-master-centos10/openstack-ceilometer-central:watcher_latest,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n547hd7h689hfdh59h695h99h564hfch656h5b4hcfh56bh689h6fh57bh66dhf9hb7h696h5b6hb9h95h665h596hffh57dhc5h66fhcbh558h665q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fbqg4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(75a85372-d728-4770-8639-fb6f93e44dab): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 12:59:49 crc kubenswrapper[4881]: I0126 12:59:49.175186 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6d58d86989-8cl5p" Jan 26 12:59:49 crc kubenswrapper[4881]: I0126 12:59:49.263373 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7b81cab4-a1fe-438e-ae0e-c23a58cc16e7-logs\") pod \"7b81cab4-a1fe-438e-ae0e-c23a58cc16e7\" (UID: \"7b81cab4-a1fe-438e-ae0e-c23a58cc16e7\") " Jan 26 12:59:49 crc kubenswrapper[4881]: I0126 12:59:49.263503 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7b81cab4-a1fe-438e-ae0e-c23a58cc16e7-horizon-secret-key\") pod \"7b81cab4-a1fe-438e-ae0e-c23a58cc16e7\" (UID: \"7b81cab4-a1fe-438e-ae0e-c23a58cc16e7\") " Jan 26 12:59:49 crc kubenswrapper[4881]: I0126 12:59:49.264226 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b81cab4-a1fe-438e-ae0e-c23a58cc16e7-logs" (OuterVolumeSpecName: "logs") pod "7b81cab4-a1fe-438e-ae0e-c23a58cc16e7" (UID: "7b81cab4-a1fe-438e-ae0e-c23a58cc16e7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 12:59:49 crc kubenswrapper[4881]: I0126 12:59:49.268961 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b81cab4-a1fe-438e-ae0e-c23a58cc16e7-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "7b81cab4-a1fe-438e-ae0e-c23a58cc16e7" (UID: "7b81cab4-a1fe-438e-ae0e-c23a58cc16e7"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:59:49 crc kubenswrapper[4881]: I0126 12:59:49.364709 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bzzrh\" (UniqueName: \"kubernetes.io/projected/7b81cab4-a1fe-438e-ae0e-c23a58cc16e7-kube-api-access-bzzrh\") pod \"7b81cab4-a1fe-438e-ae0e-c23a58cc16e7\" (UID: \"7b81cab4-a1fe-438e-ae0e-c23a58cc16e7\") " Jan 26 12:59:49 crc kubenswrapper[4881]: I0126 12:59:49.364764 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7b81cab4-a1fe-438e-ae0e-c23a58cc16e7-scripts\") pod \"7b81cab4-a1fe-438e-ae0e-c23a58cc16e7\" (UID: \"7b81cab4-a1fe-438e-ae0e-c23a58cc16e7\") " Jan 26 12:59:49 crc kubenswrapper[4881]: I0126 12:59:49.364833 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7b81cab4-a1fe-438e-ae0e-c23a58cc16e7-config-data\") pod \"7b81cab4-a1fe-438e-ae0e-c23a58cc16e7\" (UID: \"7b81cab4-a1fe-438e-ae0e-c23a58cc16e7\") " Jan 26 12:59:49 crc kubenswrapper[4881]: I0126 12:59:49.365373 4881 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7b81cab4-a1fe-438e-ae0e-c23a58cc16e7-logs\") on node \"crc\" DevicePath \"\"" Jan 26 12:59:49 crc kubenswrapper[4881]: I0126 12:59:49.365391 4881 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7b81cab4-a1fe-438e-ae0e-c23a58cc16e7-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 26 12:59:49 crc kubenswrapper[4881]: I0126 12:59:49.366106 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b81cab4-a1fe-438e-ae0e-c23a58cc16e7-scripts" (OuterVolumeSpecName: "scripts") pod "7b81cab4-a1fe-438e-ae0e-c23a58cc16e7" (UID: "7b81cab4-a1fe-438e-ae0e-c23a58cc16e7"). 
InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:59:49 crc kubenswrapper[4881]: I0126 12:59:49.366270 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b81cab4-a1fe-438e-ae0e-c23a58cc16e7-config-data" (OuterVolumeSpecName: "config-data") pod "7b81cab4-a1fe-438e-ae0e-c23a58cc16e7" (UID: "7b81cab4-a1fe-438e-ae0e-c23a58cc16e7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:59:49 crc kubenswrapper[4881]: I0126 12:59:49.369214 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b81cab4-a1fe-438e-ae0e-c23a58cc16e7-kube-api-access-bzzrh" (OuterVolumeSpecName: "kube-api-access-bzzrh") pod "7b81cab4-a1fe-438e-ae0e-c23a58cc16e7" (UID: "7b81cab4-a1fe-438e-ae0e-c23a58cc16e7"). InnerVolumeSpecName "kube-api-access-bzzrh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:59:49 crc kubenswrapper[4881]: I0126 12:59:49.466988 4881 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7b81cab4-a1fe-438e-ae0e-c23a58cc16e7-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 12:59:49 crc kubenswrapper[4881]: I0126 12:59:49.467034 4881 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7b81cab4-a1fe-438e-ae0e-c23a58cc16e7-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 12:59:49 crc kubenswrapper[4881]: I0126 12:59:49.467046 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bzzrh\" (UniqueName: \"kubernetes.io/projected/7b81cab4-a1fe-438e-ae0e-c23a58cc16e7-kube-api-access-bzzrh\") on node \"crc\" DevicePath \"\"" Jan 26 12:59:49 crc kubenswrapper[4881]: E0126 12:59:49.595584 4881 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.23:5001/podified-master-centos10/openstack-watcher-api:watcher_latest" Jan 26 12:59:49 crc kubenswrapper[4881]: E0126 12:59:49.595641 4881 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.23:5001/podified-master-centos10/openstack-watcher-api:watcher_latest" Jan 26 12:59:49 crc kubenswrapper[4881]: E0126 12:59:49.595768 4881 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:watcher-db-sync,Image:38.102.83.23:5001/podified-master-centos10/openstack-watcher-api:watcher_latest,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/watcher/watcher.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:watcher-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2prlp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-db-sync-5pg9p_openstack(adf01549-e1d0-46a7-a141-bdc0f5c81458): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 12:59:49 crc kubenswrapper[4881]: E0126 12:59:49.596983 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/watcher-db-sync-5pg9p" podUID="adf01549-e1d0-46a7-a141-bdc0f5c81458" Jan 26 12:59:49 crc kubenswrapper[4881]: I0126 12:59:49.655217 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5d944b8f4f-tgqh4" Jan 26 12:59:49 crc kubenswrapper[4881]: I0126 12:59:49.660891 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-69c8959d97-5f2wg" Jan 26 12:59:49 crc kubenswrapper[4881]: I0126 12:59:49.771612 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5d2634ed-f529-43dc-8a08-54f97ace0d73-logs\") pod \"5d2634ed-f529-43dc-8a08-54f97ace0d73\" (UID: \"5d2634ed-f529-43dc-8a08-54f97ace0d73\") " Jan 26 12:59:49 crc kubenswrapper[4881]: I0126 12:59:49.771929 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/46cbf3cf-ede5-44fd-9897-799299dfdcf6-horizon-secret-key\") pod \"46cbf3cf-ede5-44fd-9897-799299dfdcf6\" (UID: \"46cbf3cf-ede5-44fd-9897-799299dfdcf6\") " Jan 26 12:59:49 crc kubenswrapper[4881]: I0126 12:59:49.772008 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5d2634ed-f529-43dc-8a08-54f97ace0d73-scripts\") pod \"5d2634ed-f529-43dc-8a08-54f97ace0d73\" (UID: \"5d2634ed-f529-43dc-8a08-54f97ace0d73\") " Jan 26 12:59:49 crc kubenswrapper[4881]: I0126 12:59:49.772037 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5d2634ed-f529-43dc-8a08-54f97ace0d73-config-data\") pod \"5d2634ed-f529-43dc-8a08-54f97ace0d73\" (UID: \"5d2634ed-f529-43dc-8a08-54f97ace0d73\") " Jan 26 12:59:49 crc kubenswrapper[4881]: I0126 12:59:49.772103 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/5d2634ed-f529-43dc-8a08-54f97ace0d73-horizon-secret-key\") pod \"5d2634ed-f529-43dc-8a08-54f97ace0d73\" (UID: \"5d2634ed-f529-43dc-8a08-54f97ace0d73\") " Jan 26 12:59:49 crc kubenswrapper[4881]: I0126 12:59:49.772130 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/46cbf3cf-ede5-44fd-9897-799299dfdcf6-logs\") pod \"46cbf3cf-ede5-44fd-9897-799299dfdcf6\" (UID: \"46cbf3cf-ede5-44fd-9897-799299dfdcf6\") " Jan 26 12:59:49 crc kubenswrapper[4881]: I0126 12:59:49.772199 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/46cbf3cf-ede5-44fd-9897-799299dfdcf6-scripts\") pod \"46cbf3cf-ede5-44fd-9897-799299dfdcf6\" (UID: \"46cbf3cf-ede5-44fd-9897-799299dfdcf6\") " Jan 26 12:59:49 crc kubenswrapper[4881]: I0126 12:59:49.772227 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/46cbf3cf-ede5-44fd-9897-799299dfdcf6-config-data\") pod \"46cbf3cf-ede5-44fd-9897-799299dfdcf6\" (UID: \"46cbf3cf-ede5-44fd-9897-799299dfdcf6\") " Jan 26 12:59:49 crc kubenswrapper[4881]: I0126 12:59:49.772279 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-84pgz\" (UniqueName: \"kubernetes.io/projected/46cbf3cf-ede5-44fd-9897-799299dfdcf6-kube-api-access-84pgz\") pod \"46cbf3cf-ede5-44fd-9897-799299dfdcf6\" (UID: \"46cbf3cf-ede5-44fd-9897-799299dfdcf6\") " Jan 26 12:59:49 crc kubenswrapper[4881]: I0126 12:59:49.772374 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s9s6r\" (UniqueName: \"kubernetes.io/projected/5d2634ed-f529-43dc-8a08-54f97ace0d73-kube-api-access-s9s6r\") pod \"5d2634ed-f529-43dc-8a08-54f97ace0d73\" (UID: 
\"5d2634ed-f529-43dc-8a08-54f97ace0d73\") " Jan 26 12:59:49 crc kubenswrapper[4881]: I0126 12:59:49.773647 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5d2634ed-f529-43dc-8a08-54f97ace0d73-logs" (OuterVolumeSpecName: "logs") pod "5d2634ed-f529-43dc-8a08-54f97ace0d73" (UID: "5d2634ed-f529-43dc-8a08-54f97ace0d73"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 12:59:49 crc kubenswrapper[4881]: I0126 12:59:49.774192 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/46cbf3cf-ede5-44fd-9897-799299dfdcf6-logs" (OuterVolumeSpecName: "logs") pod "46cbf3cf-ede5-44fd-9897-799299dfdcf6" (UID: "46cbf3cf-ede5-44fd-9897-799299dfdcf6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 12:59:49 crc kubenswrapper[4881]: I0126 12:59:49.774691 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46cbf3cf-ede5-44fd-9897-799299dfdcf6-scripts" (OuterVolumeSpecName: "scripts") pod "46cbf3cf-ede5-44fd-9897-799299dfdcf6" (UID: "46cbf3cf-ede5-44fd-9897-799299dfdcf6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:59:49 crc kubenswrapper[4881]: I0126 12:59:49.775035 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d2634ed-f529-43dc-8a08-54f97ace0d73-scripts" (OuterVolumeSpecName: "scripts") pod "5d2634ed-f529-43dc-8a08-54f97ace0d73" (UID: "5d2634ed-f529-43dc-8a08-54f97ace0d73"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:59:49 crc kubenswrapper[4881]: I0126 12:59:49.775137 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46cbf3cf-ede5-44fd-9897-799299dfdcf6-config-data" (OuterVolumeSpecName: "config-data") pod "46cbf3cf-ede5-44fd-9897-799299dfdcf6" (UID: "46cbf3cf-ede5-44fd-9897-799299dfdcf6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:59:49 crc kubenswrapper[4881]: I0126 12:59:49.774737 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d2634ed-f529-43dc-8a08-54f97ace0d73-config-data" (OuterVolumeSpecName: "config-data") pod "5d2634ed-f529-43dc-8a08-54f97ace0d73" (UID: "5d2634ed-f529-43dc-8a08-54f97ace0d73"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:59:49 crc kubenswrapper[4881]: I0126 12:59:49.777663 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46cbf3cf-ede5-44fd-9897-799299dfdcf6-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "46cbf3cf-ede5-44fd-9897-799299dfdcf6" (UID: "46cbf3cf-ede5-44fd-9897-799299dfdcf6"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:59:49 crc kubenswrapper[4881]: I0126 12:59:49.777789 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d2634ed-f529-43dc-8a08-54f97ace0d73-kube-api-access-s9s6r" (OuterVolumeSpecName: "kube-api-access-s9s6r") pod "5d2634ed-f529-43dc-8a08-54f97ace0d73" (UID: "5d2634ed-f529-43dc-8a08-54f97ace0d73"). InnerVolumeSpecName "kube-api-access-s9s6r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:59:49 crc kubenswrapper[4881]: I0126 12:59:49.778605 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46cbf3cf-ede5-44fd-9897-799299dfdcf6-kube-api-access-84pgz" (OuterVolumeSpecName: "kube-api-access-84pgz") pod "46cbf3cf-ede5-44fd-9897-799299dfdcf6" (UID: "46cbf3cf-ede5-44fd-9897-799299dfdcf6"). InnerVolumeSpecName "kube-api-access-84pgz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:59:49 crc kubenswrapper[4881]: I0126 12:59:49.778615 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d2634ed-f529-43dc-8a08-54f97ace0d73-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "5d2634ed-f529-43dc-8a08-54f97ace0d73" (UID: "5d2634ed-f529-43dc-8a08-54f97ace0d73"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:59:49 crc kubenswrapper[4881]: I0126 12:59:49.874140 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s9s6r\" (UniqueName: \"kubernetes.io/projected/5d2634ed-f529-43dc-8a08-54f97ace0d73-kube-api-access-s9s6r\") on node \"crc\" DevicePath \"\"" Jan 26 12:59:49 crc kubenswrapper[4881]: I0126 12:59:49.874175 4881 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5d2634ed-f529-43dc-8a08-54f97ace0d73-logs\") on node \"crc\" DevicePath \"\"" Jan 26 12:59:49 crc kubenswrapper[4881]: I0126 12:59:49.874188 4881 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/46cbf3cf-ede5-44fd-9897-799299dfdcf6-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 26 12:59:49 crc kubenswrapper[4881]: I0126 12:59:49.874201 4881 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5d2634ed-f529-43dc-8a08-54f97ace0d73-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 12:59:49 crc kubenswrapper[4881]: I0126 12:59:49.874210 4881 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5d2634ed-f529-43dc-8a08-54f97ace0d73-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 12:59:49 crc kubenswrapper[4881]: I0126 12:59:49.874222 4881 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/5d2634ed-f529-43dc-8a08-54f97ace0d73-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 26 12:59:49 crc kubenswrapper[4881]: I0126 12:59:49.874233 4881 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/46cbf3cf-ede5-44fd-9897-799299dfdcf6-logs\") on node \"crc\" DevicePath \"\"" Jan 26 12:59:49 crc kubenswrapper[4881]: I0126 12:59:49.874242 4881 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/46cbf3cf-ede5-44fd-9897-799299dfdcf6-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 12:59:49 crc kubenswrapper[4881]: I0126 12:59:49.874283 4881 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/46cbf3cf-ede5-44fd-9897-799299dfdcf6-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 12:59:49 crc kubenswrapper[4881]: I0126 12:59:49.874294 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-84pgz\" (UniqueName: 
\"kubernetes.io/projected/46cbf3cf-ede5-44fd-9897-799299dfdcf6-kube-api-access-84pgz\") on node \"crc\" DevicePath \"\"" Jan 26 12:59:49 crc kubenswrapper[4881]: I0126 12:59:49.911735 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6d58d86989-8cl5p" event={"ID":"7b81cab4-a1fe-438e-ae0e-c23a58cc16e7","Type":"ContainerDied","Data":"6a8fa892f7e5458b1232df3805018dca7db3ecebb6c71bd35c212a843bd8c641"} Jan 26 12:59:49 crc kubenswrapper[4881]: I0126 12:59:49.911751 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6d58d86989-8cl5p" Jan 26 12:59:49 crc kubenswrapper[4881]: I0126 12:59:49.913243 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5d944b8f4f-tgqh4" event={"ID":"46cbf3cf-ede5-44fd-9897-799299dfdcf6","Type":"ContainerDied","Data":"13f472ee4f5c3825bbfd11b89b87ea8ead6c18463a13fed1283539cadde43504"} Jan 26 12:59:49 crc kubenswrapper[4881]: I0126 12:59:49.913243 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5d944b8f4f-tgqh4" Jan 26 12:59:49 crc kubenswrapper[4881]: I0126 12:59:49.917026 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-69c8959d97-5f2wg" Jan 26 12:59:49 crc kubenswrapper[4881]: I0126 12:59:49.917164 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-69c8959d97-5f2wg" event={"ID":"5d2634ed-f529-43dc-8a08-54f97ace0d73","Type":"ContainerDied","Data":"6935cd15743f6b8bf9aa47add14f8ee2bb949a8b58187a2e49a0d5d68bcf2ffc"} Jan 26 12:59:49 crc kubenswrapper[4881]: E0126 12:59:49.918834 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.23:5001/podified-master-centos10/openstack-watcher-api:watcher_latest\\\"\"" pod="openstack/watcher-db-sync-5pg9p" podUID="adf01549-e1d0-46a7-a141-bdc0f5c81458" Jan 26 12:59:50 crc kubenswrapper[4881]: I0126 12:59:50.017687 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5d944b8f4f-tgqh4"] Jan 26 12:59:50 crc kubenswrapper[4881]: I0126 12:59:50.048102 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-5d944b8f4f-tgqh4"] Jan 26 12:59:50 crc kubenswrapper[4881]: I0126 12:59:50.076979 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6d58d86989-8cl5p"] Jan 26 12:59:50 crc kubenswrapper[4881]: I0126 12:59:50.094103 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46cbf3cf-ede5-44fd-9897-799299dfdcf6" path="/var/lib/kubelet/pods/46cbf3cf-ede5-44fd-9897-799299dfdcf6/volumes" Jan 26 12:59:50 crc kubenswrapper[4881]: I0126 12:59:50.094652 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-6d58d86989-8cl5p"] Jan 26 12:59:50 crc kubenswrapper[4881]: I0126 12:59:50.120870 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-69c8959d97-5f2wg"] Jan 26 12:59:50 crc kubenswrapper[4881]: I0126 12:59:50.128329 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-69c8959d97-5f2wg"] Jan 26 12:59:50 crc kubenswrapper[4881]: I0126 12:59:50.774902 4881 scope.go:117] "RemoveContainer" containerID="cb6ff48452ddc510a6d0103310f496fdab02d2226d184214408ef16096f16a93" Jan 26 12:59:50 crc kubenswrapper[4881]: E0126 12:59:50.775630 4881 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = 
copying config: context canceled" image="38.102.83.23:5001/podified-master-centos10/openstack-cinder-api:watcher_latest" Jan 26 12:59:50 crc kubenswrapper[4881]: E0126 12:59:50.775680 4881 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.23:5001/podified-master-centos10/openstack-cinder-api:watcher_latest" Jan 26 12:59:50 crc kubenswrapper[4881]: E0126 12:59:50.775863 4881 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:38.102.83.23:5001/podified-master-centos10/openstack-cinder-api:watcher_latest,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hvtrh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-lgk7s_openstack(132298e2-a2f4-4311-9f7a-3e4e08abe34b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 12:59:50 crc kubenswrapper[4881]: E0126 12:59:50.777285 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-lgk7s" podUID="132298e2-a2f4-4311-9f7a-3e4e08abe34b" Jan 26 12:59:50 crc kubenswrapper[4881]: E0126 12:59:50.944548 4881 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.23:5001/podified-master-centos10/openstack-cinder-api:watcher_latest\\\"\"" pod="openstack/cinder-db-sync-lgk7s" podUID="132298e2-a2f4-4311-9f7a-3e4e08abe34b" Jan 26 12:59:51 crc kubenswrapper[4881]: I0126 12:59:51.257367 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-hmsfj"] Jan 26 12:59:51 crc kubenswrapper[4881]: I0126 12:59:51.309776 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7bf7cc86f8-h94sx"] Jan 26 12:59:51 crc kubenswrapper[4881]: W0126 12:59:51.317058 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod92e11d67_ecbe_4005_849c_40a16f3d3faa.slice/crio-469af244645ee7336440084673ad3fce632a99420687a598899b9fbf6028e398 WatchSource:0}: Error finding container 469af244645ee7336440084673ad3fce632a99420687a598899b9fbf6028e398: Status 404 returned error can't find the container with id 469af244645ee7336440084673ad3fce632a99420687a598899b9fbf6028e398 Jan 26 12:59:51 crc kubenswrapper[4881]: I0126 12:59:51.366588 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-wmvz7"] Jan 26 12:59:51 crc kubenswrapper[4881]: I0126 12:59:51.376049 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5c67646cfd-kppgm"] Jan 26 12:59:51 crc kubenswrapper[4881]: I0126 12:59:51.423591 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 26 12:59:51 crc kubenswrapper[4881]: I0126 12:59:51.502449 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-nmggw"] Jan 26 12:59:51 crc kubenswrapper[4881]: W0126 12:59:51.504570 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcf634913_5017_4a94_a3e7_0c337bb9fb4d.slice/crio-d60e5c74057cff4ee905dda55d676a653c80700c5387ae3805159402bcc5cd54 WatchSource:0}: Error finding container d60e5c74057cff4ee905dda55d676a653c80700c5387ae3805159402bcc5cd54: Status 404 returned error can't find the container with id d60e5c74057cff4ee905dda55d676a653c80700c5387ae3805159402bcc5cd54 Jan 26 12:59:51 crc kubenswrapper[4881]: I0126 12:59:51.951240 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-wmvz7" event={"ID":"ce1a489b-0795-4817-ad32-7fdf1ea68559","Type":"ContainerStarted","Data":"22a5fcab15992d9e4368d136726c23e8a26a70e1fe25bd5a0a19b101d471cdff"} Jan 26 12:59:51 crc kubenswrapper[4881]: I0126 12:59:51.951294 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-wmvz7" event={"ID":"ce1a489b-0795-4817-ad32-7fdf1ea68559","Type":"ContainerStarted","Data":"5f3d296d91a3a45e5e8505721d3a1e34051a54d210cacbd3b6a3fae98bfc7b59"} Jan 26 12:59:51 crc kubenswrapper[4881]: I0126 12:59:51.954074 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-hmsfj" event={"ID":"a410393d-b0c5-45bf-b9f7-897ad16759d4","Type":"ContainerStarted","Data":"575479e07039a478f0162256f3814334444e55407e26307b5894052039080d79"} Jan 26 12:59:51 crc kubenswrapper[4881]: I0126 12:59:51.954114 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-hmsfj" 
event={"ID":"a410393d-b0c5-45bf-b9f7-897ad16759d4","Type":"ContainerStarted","Data":"50bd311ce112bd808a0845fe5808398be542a8c52d271a140c20a298477d5526"} Jan 26 12:59:51 crc kubenswrapper[4881]: I0126 12:59:51.965549 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-t85k4" event={"ID":"d0fc8471-ad65-44cf-bf03-1c037aafdf11","Type":"ContainerStarted","Data":"1571eab50c624a5e28dcac81856d0e028f5740073bc1caf0ff053add137d39b6"} Jan 26 12:59:51 crc kubenswrapper[4881]: I0126 12:59:51.968689 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5c67646cfd-kppgm" event={"ID":"6b0cbe35-c0c9-4483-866a-eddf1fdced26","Type":"ContainerStarted","Data":"11becbe66e90b27a1d407833f793333eb538d4d8b813396d1a62681ea0806353"} Jan 26 12:59:51 crc kubenswrapper[4881]: I0126 12:59:51.968752 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5c67646cfd-kppgm" event={"ID":"6b0cbe35-c0c9-4483-866a-eddf1fdced26","Type":"ContainerStarted","Data":"785ba06ccaceb90529fed9d4bfe616ece3f2212b557578bbd30e62ec93faabac"} Jan 26 12:59:51 crc kubenswrapper[4881]: I0126 12:59:51.968764 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5c67646cfd-kppgm" event={"ID":"6b0cbe35-c0c9-4483-866a-eddf1fdced26","Type":"ContainerStarted","Data":"963091c9cc5b00a2295a02f06b5dea81bb771527d4a2d572fc6637569b900d59"} Jan 26 12:59:51 crc kubenswrapper[4881]: I0126 12:59:51.994211 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7bf7cc86f8-h94sx" event={"ID":"92e11d67-ecbe-4005-849c-40a16f3d3faa","Type":"ContainerStarted","Data":"1a9c3da19ec5c6fbc405f925e4adec97dc74dfc66c32615a49fa2652276229ec"} Jan 26 12:59:51 crc kubenswrapper[4881]: I0126 12:59:51.994287 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7bf7cc86f8-h94sx" event={"ID":"92e11d67-ecbe-4005-849c-40a16f3d3faa","Type":"ContainerStarted","Data":"3de797f4dac9c5c639696037129a900787c29c8afa53199d23cd292de571682e"} Jan 26 12:59:51 crc kubenswrapper[4881]: I0126 12:59:51.994302 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7bf7cc86f8-h94sx" event={"ID":"92e11d67-ecbe-4005-849c-40a16f3d3faa","Type":"ContainerStarted","Data":"469af244645ee7336440084673ad3fce632a99420687a598899b9fbf6028e398"} Jan 26 12:59:52 crc kubenswrapper[4881]: I0126 12:59:51.997577 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-nmggw" event={"ID":"cf634913-5017-4a94-a3e7-0c337bb9fb4d","Type":"ContainerStarted","Data":"d60e5c74057cff4ee905dda55d676a653c80700c5387ae3805159402bcc5cd54"} Jan 26 12:59:52 crc kubenswrapper[4881]: I0126 12:59:51.998487 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-wmvz7" podStartSLOduration=31.998460963 podStartE2EDuration="31.998460963s" podCreationTimestamp="2026-01-26 12:59:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:59:51.97494158 +0000 UTC m=+1464.454251606" watchObservedRunningTime="2026-01-26 12:59:51.998460963 +0000 UTC m=+1464.477770989" Jan 26 12:59:52 crc kubenswrapper[4881]: I0126 12:59:52.001555 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-tqpzh" event={"ID":"741cf6ae-617a-440b-b6ec-63dc4e87ff4a","Type":"ContainerStarted","Data":"41785da4333665e17bd7fbffdd5da2933857073e2ad8332d4ec8db10278f0fac"} Jan 26 12:59:52 crc kubenswrapper[4881]: 
I0126 12:59:52.006384 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-hmsfj" podStartSLOduration=33.006358495 podStartE2EDuration="33.006358495s" podCreationTimestamp="2026-01-26 12:59:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:59:51.994479226 +0000 UTC m=+1464.473789252" watchObservedRunningTime="2026-01-26 12:59:52.006358495 +0000 UTC m=+1464.485668521" Jan 26 12:59:52 crc kubenswrapper[4881]: I0126 12:59:52.007664 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"75a85372-d728-4770-8639-fb6f93e44dab","Type":"ContainerStarted","Data":"d46f7cf7161199441d7c9c8933fc21a648a05c708a708f826b848e0af651a0d7"} Jan 26 12:59:52 crc kubenswrapper[4881]: I0126 12:59:52.029939 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-t85k4" podStartSLOduration=5.234221783 podStartE2EDuration="42.02992607s" podCreationTimestamp="2026-01-26 12:59:10 +0000 UTC" firstStartedPulling="2026-01-26 12:59:12.294342768 +0000 UTC m=+1424.773652804" lastFinishedPulling="2026-01-26 12:59:49.090047065 +0000 UTC m=+1461.569357091" observedRunningTime="2026-01-26 12:59:52.014928535 +0000 UTC m=+1464.494238561" watchObservedRunningTime="2026-01-26 12:59:52.02992607 +0000 UTC m=+1464.509236096" Jan 26 12:59:52 crc kubenswrapper[4881]: I0126 12:59:52.050856 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-5c67646cfd-kppgm" podStartSLOduration=33.05083169 podStartE2EDuration="33.05083169s" podCreationTimestamp="2026-01-26 12:59:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:59:52.042722992 +0000 UTC m=+1464.522033018" watchObservedRunningTime="2026-01-26 12:59:52.05083169 +0000 UTC m=+1464.530141716" Jan 26 12:59:52 crc kubenswrapper[4881]: I0126 12:59:52.068572 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-tqpzh" podStartSLOduration=4.863538595 podStartE2EDuration="42.068554453s" podCreationTimestamp="2026-01-26 12:59:10 +0000 UTC" firstStartedPulling="2026-01-26 12:59:12.358827701 +0000 UTC m=+1424.838137727" lastFinishedPulling="2026-01-26 12:59:49.563843549 +0000 UTC m=+1462.043153585" observedRunningTime="2026-01-26 12:59:52.058428436 +0000 UTC m=+1464.537738482" watchObservedRunningTime="2026-01-26 12:59:52.068554453 +0000 UTC m=+1464.547864479" Jan 26 12:59:52 crc kubenswrapper[4881]: I0126 12:59:52.084087 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-7bf7cc86f8-h94sx" podStartSLOduration=32.991871792 podStartE2EDuration="33.084075341s" podCreationTimestamp="2026-01-26 12:59:19 +0000 UTC" firstStartedPulling="2026-01-26 12:59:51.328290749 +0000 UTC m=+1463.807600775" lastFinishedPulling="2026-01-26 12:59:51.420494308 +0000 UTC m=+1463.899804324" observedRunningTime="2026-01-26 12:59:52.083476697 +0000 UTC m=+1464.562786723" watchObservedRunningTime="2026-01-26 12:59:52.084075341 +0000 UTC m=+1464.563385367" Jan 26 12:59:52 crc kubenswrapper[4881]: I0126 12:59:52.106578 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d2634ed-f529-43dc-8a08-54f97ace0d73" path="/var/lib/kubelet/pods/5d2634ed-f529-43dc-8a08-54f97ace0d73/volumes" Jan 26 12:59:52 crc kubenswrapper[4881]: I0126 
12:59:52.106996 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b81cab4-a1fe-438e-ae0e-c23a58cc16e7" path="/var/lib/kubelet/pods/7b81cab4-a1fe-438e-ae0e-c23a58cc16e7/volumes" Jan 26 12:59:54 crc kubenswrapper[4881]: I0126 12:59:54.789368 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fwlbz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 12:59:54 crc kubenswrapper[4881]: I0126 12:59:54.789763 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 12:59:58 crc kubenswrapper[4881]: I0126 12:59:58.063568 4881 generic.go:334] "Generic (PLEG): container finished" podID="ce1a489b-0795-4817-ad32-7fdf1ea68559" containerID="22a5fcab15992d9e4368d136726c23e8a26a70e1fe25bd5a0a19b101d471cdff" exitCode=0 Jan 26 12:59:58 crc kubenswrapper[4881]: I0126 12:59:58.063652 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-wmvz7" event={"ID":"ce1a489b-0795-4817-ad32-7fdf1ea68559","Type":"ContainerDied","Data":"22a5fcab15992d9e4368d136726c23e8a26a70e1fe25bd5a0a19b101d471cdff"} Jan 26 12:59:59 crc kubenswrapper[4881]: I0126 12:59:59.951656 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-5c67646cfd-kppgm" Jan 26 12:59:59 crc kubenswrapper[4881]: I0126 12:59:59.952091 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5c67646cfd-kppgm" Jan 26 13:00:00 crc kubenswrapper[4881]: I0126 13:00:00.066027 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7bf7cc86f8-h94sx" Jan 26 13:00:00 crc kubenswrapper[4881]: I0126 13:00:00.066077 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7bf7cc86f8-h94sx" Jan 26 13:00:00 crc kubenswrapper[4881]: I0126 13:00:00.083379 4881 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 13:00:00 crc kubenswrapper[4881]: I0126 13:00:00.084230 4881 generic.go:334] "Generic (PLEG): container finished" podID="741cf6ae-617a-440b-b6ec-63dc4e87ff4a" containerID="41785da4333665e17bd7fbffdd5da2933857073e2ad8332d4ec8db10278f0fac" exitCode=0 Jan 26 13:00:00 crc kubenswrapper[4881]: I0126 13:00:00.103453 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-tqpzh" event={"ID":"741cf6ae-617a-440b-b6ec-63dc4e87ff4a","Type":"ContainerDied","Data":"41785da4333665e17bd7fbffdd5da2933857073e2ad8332d4ec8db10278f0fac"} Jan 26 13:00:00 crc kubenswrapper[4881]: I0126 13:00:00.147035 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490540-xpj9f"] Jan 26 13:00:00 crc kubenswrapper[4881]: E0126 13:00:00.147417 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3adacf9-6f24-44e9-a3f7-03082a2159fd" containerName="init" Jan 26 13:00:00 crc kubenswrapper[4881]: I0126 13:00:00.147433 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3adacf9-6f24-44e9-a3f7-03082a2159fd" containerName="init" Jan 26 13:00:00 crc kubenswrapper[4881]: 
E0126 13:00:00.147445 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3adacf9-6f24-44e9-a3f7-03082a2159fd" containerName="dnsmasq-dns" Jan 26 13:00:00 crc kubenswrapper[4881]: I0126 13:00:00.147451 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3adacf9-6f24-44e9-a3f7-03082a2159fd" containerName="dnsmasq-dns" Jan 26 13:00:00 crc kubenswrapper[4881]: I0126 13:00:00.147669 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3adacf9-6f24-44e9-a3f7-03082a2159fd" containerName="dnsmasq-dns" Jan 26 13:00:00 crc kubenswrapper[4881]: I0126 13:00:00.148272 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490540-xpj9f" Jan 26 13:00:00 crc kubenswrapper[4881]: I0126 13:00:00.150472 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 26 13:00:00 crc kubenswrapper[4881]: I0126 13:00:00.151016 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 26 13:00:00 crc kubenswrapper[4881]: I0126 13:00:00.185967 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490540-xpj9f"] Jan 26 13:00:00 crc kubenswrapper[4881]: I0126 13:00:00.288524 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6d409655-cc9c-41d5-81b5-c93d256f63a7-secret-volume\") pod \"collect-profiles-29490540-xpj9f\" (UID: \"6d409655-cc9c-41d5-81b5-c93d256f63a7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490540-xpj9f" Jan 26 13:00:00 crc kubenswrapper[4881]: I0126 13:00:00.288639 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6d409655-cc9c-41d5-81b5-c93d256f63a7-config-volume\") pod \"collect-profiles-29490540-xpj9f\" (UID: \"6d409655-cc9c-41d5-81b5-c93d256f63a7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490540-xpj9f" Jan 26 13:00:00 crc kubenswrapper[4881]: I0126 13:00:00.290185 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k89v8\" (UniqueName: \"kubernetes.io/projected/6d409655-cc9c-41d5-81b5-c93d256f63a7-kube-api-access-k89v8\") pod \"collect-profiles-29490540-xpj9f\" (UID: \"6d409655-cc9c-41d5-81b5-c93d256f63a7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490540-xpj9f" Jan 26 13:00:00 crc kubenswrapper[4881]: I0126 13:00:00.391546 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k89v8\" (UniqueName: \"kubernetes.io/projected/6d409655-cc9c-41d5-81b5-c93d256f63a7-kube-api-access-k89v8\") pod \"collect-profiles-29490540-xpj9f\" (UID: \"6d409655-cc9c-41d5-81b5-c93d256f63a7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490540-xpj9f" Jan 26 13:00:00 crc kubenswrapper[4881]: I0126 13:00:00.391666 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6d409655-cc9c-41d5-81b5-c93d256f63a7-secret-volume\") pod \"collect-profiles-29490540-xpj9f\" (UID: \"6d409655-cc9c-41d5-81b5-c93d256f63a7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490540-xpj9f" Jan 26 
13:00:00 crc kubenswrapper[4881]: I0126 13:00:00.391691 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6d409655-cc9c-41d5-81b5-c93d256f63a7-config-volume\") pod \"collect-profiles-29490540-xpj9f\" (UID: \"6d409655-cc9c-41d5-81b5-c93d256f63a7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490540-xpj9f" Jan 26 13:00:00 crc kubenswrapper[4881]: I0126 13:00:00.392443 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6d409655-cc9c-41d5-81b5-c93d256f63a7-config-volume\") pod \"collect-profiles-29490540-xpj9f\" (UID: \"6d409655-cc9c-41d5-81b5-c93d256f63a7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490540-xpj9f" Jan 26 13:00:00 crc kubenswrapper[4881]: I0126 13:00:00.398736 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6d409655-cc9c-41d5-81b5-c93d256f63a7-secret-volume\") pod \"collect-profiles-29490540-xpj9f\" (UID: \"6d409655-cc9c-41d5-81b5-c93d256f63a7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490540-xpj9f" Jan 26 13:00:00 crc kubenswrapper[4881]: I0126 13:00:00.415253 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k89v8\" (UniqueName: \"kubernetes.io/projected/6d409655-cc9c-41d5-81b5-c93d256f63a7-kube-api-access-k89v8\") pod \"collect-profiles-29490540-xpj9f\" (UID: \"6d409655-cc9c-41d5-81b5-c93d256f63a7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490540-xpj9f" Jan 26 13:00:00 crc kubenswrapper[4881]: I0126 13:00:00.510848 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490540-xpj9f" Jan 26 13:00:01 crc kubenswrapper[4881]: I0126 13:00:01.094860 4881 generic.go:334] "Generic (PLEG): container finished" podID="d0fc8471-ad65-44cf-bf03-1c037aafdf11" containerID="1571eab50c624a5e28dcac81856d0e028f5740073bc1caf0ff053add137d39b6" exitCode=0 Jan 26 13:00:01 crc kubenswrapper[4881]: I0126 13:00:01.095012 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-t85k4" event={"ID":"d0fc8471-ad65-44cf-bf03-1c037aafdf11","Type":"ContainerDied","Data":"1571eab50c624a5e28dcac81856d0e028f5740073bc1caf0ff053add137d39b6"} Jan 26 13:00:07 crc kubenswrapper[4881]: E0126 13:00:07.882404 4881 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.23:5001/podified-master-centos10/openstack-glance-api:watcher_latest" Jan 26 13:00:07 crc kubenswrapper[4881]: E0126 13:00:07.883055 4881 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.23:5001/podified-master-centos10/openstack-glance-api:watcher_latest" Jan 26 13:00:07 crc kubenswrapper[4881]: E0126 13:00:07.883262 4881 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:38.102.83.23:5001/podified-master-centos10/openstack-glance-api:watcher_latest,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-52rx4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-nmggw_openstack(cf634913-5017-4a94-a3e7-0c337bb9fb4d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 13:00:07 crc kubenswrapper[4881]: E0126 13:00:07.884476 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-nmggw" podUID="cf634913-5017-4a94-a3e7-0c337bb9fb4d" Jan 26 13:00:08 crc kubenswrapper[4881]: I0126 13:00:08.156423 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-wmvz7" Jan 26 13:00:08 crc kubenswrapper[4881]: I0126 13:00:08.157087 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-tqpzh" Jan 26 13:00:08 crc kubenswrapper[4881]: I0126 13:00:08.168337 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-t85k4" Jan 26 13:00:08 crc kubenswrapper[4881]: I0126 13:00:08.215217 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-t85k4" event={"ID":"d0fc8471-ad65-44cf-bf03-1c037aafdf11","Type":"ContainerDied","Data":"f10927e932c0e8b17458e021c4b691abd26f8930a95218ba28585c089a71b3fe"} Jan 26 13:00:08 crc kubenswrapper[4881]: I0126 13:00:08.215257 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f10927e932c0e8b17458e021c4b691abd26f8930a95218ba28585c089a71b3fe" Jan 26 13:00:08 crc kubenswrapper[4881]: I0126 13:00:08.215324 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-t85k4" Jan 26 13:00:08 crc kubenswrapper[4881]: I0126 13:00:08.222824 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-tqpzh" Jan 26 13:00:08 crc kubenswrapper[4881]: I0126 13:00:08.222823 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-tqpzh" event={"ID":"741cf6ae-617a-440b-b6ec-63dc4e87ff4a","Type":"ContainerDied","Data":"5e683037fe88a46637c1e0bac39224d753f0b51e3845ed2b5a3ba8f4fb444b44"} Jan 26 13:00:08 crc kubenswrapper[4881]: I0126 13:00:08.223061 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5e683037fe88a46637c1e0bac39224d753f0b51e3845ed2b5a3ba8f4fb444b44" Jan 26 13:00:08 crc kubenswrapper[4881]: I0126 13:00:08.240997 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-wmvz7" Jan 26 13:00:08 crc kubenswrapper[4881]: I0126 13:00:08.241140 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-wmvz7" event={"ID":"ce1a489b-0795-4817-ad32-7fdf1ea68559","Type":"ContainerDied","Data":"5f3d296d91a3a45e5e8505721d3a1e34051a54d210cacbd3b6a3fae98bfc7b59"} Jan 26 13:00:08 crc kubenswrapper[4881]: I0126 13:00:08.241395 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5f3d296d91a3a45e5e8505721d3a1e34051a54d210cacbd3b6a3fae98bfc7b59" Jan 26 13:00:08 crc kubenswrapper[4881]: I0126 13:00:08.245873 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9z7hk\" (UniqueName: \"kubernetes.io/projected/741cf6ae-617a-440b-b6ec-63dc4e87ff4a-kube-api-access-9z7hk\") pod \"741cf6ae-617a-440b-b6ec-63dc4e87ff4a\" (UID: \"741cf6ae-617a-440b-b6ec-63dc4e87ff4a\") " Jan 26 13:00:08 crc kubenswrapper[4881]: I0126 13:00:08.245955 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/741cf6ae-617a-440b-b6ec-63dc4e87ff4a-config-data\") pod \"741cf6ae-617a-440b-b6ec-63dc4e87ff4a\" (UID: \"741cf6ae-617a-440b-b6ec-63dc4e87ff4a\") " Jan 26 13:00:08 crc kubenswrapper[4881]: I0126 13:00:08.246029 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/741cf6ae-617a-440b-b6ec-63dc4e87ff4a-logs\") pod \"741cf6ae-617a-440b-b6ec-63dc4e87ff4a\" (UID: \"741cf6ae-617a-440b-b6ec-63dc4e87ff4a\") " Jan 26 13:00:08 crc kubenswrapper[4881]: I0126 13:00:08.246196 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d0fc8471-ad65-44cf-bf03-1c037aafdf11-db-sync-config-data\") pod \"d0fc8471-ad65-44cf-bf03-1c037aafdf11\" 
(UID: \"d0fc8471-ad65-44cf-bf03-1c037aafdf11\") " Jan 26 13:00:08 crc kubenswrapper[4881]: I0126 13:00:08.246250 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/741cf6ae-617a-440b-b6ec-63dc4e87ff4a-combined-ca-bundle\") pod \"741cf6ae-617a-440b-b6ec-63dc4e87ff4a\" (UID: \"741cf6ae-617a-440b-b6ec-63dc4e87ff4a\") " Jan 26 13:00:08 crc kubenswrapper[4881]: I0126 13:00:08.246321 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tkhwh\" (UniqueName: \"kubernetes.io/projected/d0fc8471-ad65-44cf-bf03-1c037aafdf11-kube-api-access-tkhwh\") pod \"d0fc8471-ad65-44cf-bf03-1c037aafdf11\" (UID: \"d0fc8471-ad65-44cf-bf03-1c037aafdf11\") " Jan 26 13:00:08 crc kubenswrapper[4881]: I0126 13:00:08.246360 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0fc8471-ad65-44cf-bf03-1c037aafdf11-combined-ca-bundle\") pod \"d0fc8471-ad65-44cf-bf03-1c037aafdf11\" (UID: \"d0fc8471-ad65-44cf-bf03-1c037aafdf11\") " Jan 26 13:00:08 crc kubenswrapper[4881]: I0126 13:00:08.246394 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pf7mz\" (UniqueName: \"kubernetes.io/projected/ce1a489b-0795-4817-ad32-7fdf1ea68559-kube-api-access-pf7mz\") pod \"ce1a489b-0795-4817-ad32-7fdf1ea68559\" (UID: \"ce1a489b-0795-4817-ad32-7fdf1ea68559\") " Jan 26 13:00:08 crc kubenswrapper[4881]: I0126 13:00:08.246443 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce1a489b-0795-4817-ad32-7fdf1ea68559-config-data\") pod \"ce1a489b-0795-4817-ad32-7fdf1ea68559\" (UID: \"ce1a489b-0795-4817-ad32-7fdf1ea68559\") " Jan 26 13:00:08 crc kubenswrapper[4881]: I0126 13:00:08.246814 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce1a489b-0795-4817-ad32-7fdf1ea68559-scripts\") pod \"ce1a489b-0795-4817-ad32-7fdf1ea68559\" (UID: \"ce1a489b-0795-4817-ad32-7fdf1ea68559\") " Jan 26 13:00:08 crc kubenswrapper[4881]: I0126 13:00:08.246851 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ce1a489b-0795-4817-ad32-7fdf1ea68559-credential-keys\") pod \"ce1a489b-0795-4817-ad32-7fdf1ea68559\" (UID: \"ce1a489b-0795-4817-ad32-7fdf1ea68559\") " Jan 26 13:00:08 crc kubenswrapper[4881]: I0126 13:00:08.246892 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce1a489b-0795-4817-ad32-7fdf1ea68559-combined-ca-bundle\") pod \"ce1a489b-0795-4817-ad32-7fdf1ea68559\" (UID: \"ce1a489b-0795-4817-ad32-7fdf1ea68559\") " Jan 26 13:00:08 crc kubenswrapper[4881]: I0126 13:00:08.247215 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/741cf6ae-617a-440b-b6ec-63dc4e87ff4a-scripts\") pod \"741cf6ae-617a-440b-b6ec-63dc4e87ff4a\" (UID: \"741cf6ae-617a-440b-b6ec-63dc4e87ff4a\") " Jan 26 13:00:08 crc kubenswrapper[4881]: I0126 13:00:08.247273 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ce1a489b-0795-4817-ad32-7fdf1ea68559-fernet-keys\") pod \"ce1a489b-0795-4817-ad32-7fdf1ea68559\" (UID: 
\"ce1a489b-0795-4817-ad32-7fdf1ea68559\") " Jan 26 13:00:08 crc kubenswrapper[4881]: I0126 13:00:08.249570 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/741cf6ae-617a-440b-b6ec-63dc4e87ff4a-logs" (OuterVolumeSpecName: "logs") pod "741cf6ae-617a-440b-b6ec-63dc4e87ff4a" (UID: "741cf6ae-617a-440b-b6ec-63dc4e87ff4a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:00:08 crc kubenswrapper[4881]: I0126 13:00:08.252254 4881 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/741cf6ae-617a-440b-b6ec-63dc4e87ff4a-logs\") on node \"crc\" DevicePath \"\"" Jan 26 13:00:08 crc kubenswrapper[4881]: E0126 13:00:08.266176 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.23:5001/podified-master-centos10/openstack-glance-api:watcher_latest\\\"\"" pod="openstack/glance-db-sync-nmggw" podUID="cf634913-5017-4a94-a3e7-0c337bb9fb4d" Jan 26 13:00:08 crc kubenswrapper[4881]: I0126 13:00:08.271981 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0fc8471-ad65-44cf-bf03-1c037aafdf11-kube-api-access-tkhwh" (OuterVolumeSpecName: "kube-api-access-tkhwh") pod "d0fc8471-ad65-44cf-bf03-1c037aafdf11" (UID: "d0fc8471-ad65-44cf-bf03-1c037aafdf11"). InnerVolumeSpecName "kube-api-access-tkhwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:00:08 crc kubenswrapper[4881]: I0126 13:00:08.272137 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0fc8471-ad65-44cf-bf03-1c037aafdf11-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "d0fc8471-ad65-44cf-bf03-1c037aafdf11" (UID: "d0fc8471-ad65-44cf-bf03-1c037aafdf11"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:00:08 crc kubenswrapper[4881]: I0126 13:00:08.272753 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce1a489b-0795-4817-ad32-7fdf1ea68559-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "ce1a489b-0795-4817-ad32-7fdf1ea68559" (UID: "ce1a489b-0795-4817-ad32-7fdf1ea68559"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:00:08 crc kubenswrapper[4881]: I0126 13:00:08.273348 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/741cf6ae-617a-440b-b6ec-63dc4e87ff4a-kube-api-access-9z7hk" (OuterVolumeSpecName: "kube-api-access-9z7hk") pod "741cf6ae-617a-440b-b6ec-63dc4e87ff4a" (UID: "741cf6ae-617a-440b-b6ec-63dc4e87ff4a"). InnerVolumeSpecName "kube-api-access-9z7hk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:00:08 crc kubenswrapper[4881]: I0126 13:00:08.283963 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce1a489b-0795-4817-ad32-7fdf1ea68559-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "ce1a489b-0795-4817-ad32-7fdf1ea68559" (UID: "ce1a489b-0795-4817-ad32-7fdf1ea68559"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:00:08 crc kubenswrapper[4881]: I0126 13:00:08.291021 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce1a489b-0795-4817-ad32-7fdf1ea68559-scripts" (OuterVolumeSpecName: "scripts") pod "ce1a489b-0795-4817-ad32-7fdf1ea68559" (UID: "ce1a489b-0795-4817-ad32-7fdf1ea68559"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:00:08 crc kubenswrapper[4881]: I0126 13:00:08.298971 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce1a489b-0795-4817-ad32-7fdf1ea68559-kube-api-access-pf7mz" (OuterVolumeSpecName: "kube-api-access-pf7mz") pod "ce1a489b-0795-4817-ad32-7fdf1ea68559" (UID: "ce1a489b-0795-4817-ad32-7fdf1ea68559"). InnerVolumeSpecName "kube-api-access-pf7mz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:00:08 crc kubenswrapper[4881]: I0126 13:00:08.299923 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/741cf6ae-617a-440b-b6ec-63dc4e87ff4a-scripts" (OuterVolumeSpecName: "scripts") pod "741cf6ae-617a-440b-b6ec-63dc4e87ff4a" (UID: "741cf6ae-617a-440b-b6ec-63dc4e87ff4a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:00:08 crc kubenswrapper[4881]: I0126 13:00:08.319841 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0fc8471-ad65-44cf-bf03-1c037aafdf11-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d0fc8471-ad65-44cf-bf03-1c037aafdf11" (UID: "d0fc8471-ad65-44cf-bf03-1c037aafdf11"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:00:08 crc kubenswrapper[4881]: I0126 13:00:08.322510 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/741cf6ae-617a-440b-b6ec-63dc4e87ff4a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "741cf6ae-617a-440b-b6ec-63dc4e87ff4a" (UID: "741cf6ae-617a-440b-b6ec-63dc4e87ff4a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:00:08 crc kubenswrapper[4881]: I0126 13:00:08.324558 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/741cf6ae-617a-440b-b6ec-63dc4e87ff4a-config-data" (OuterVolumeSpecName: "config-data") pod "741cf6ae-617a-440b-b6ec-63dc4e87ff4a" (UID: "741cf6ae-617a-440b-b6ec-63dc4e87ff4a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:00:08 crc kubenswrapper[4881]: I0126 13:00:08.335916 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce1a489b-0795-4817-ad32-7fdf1ea68559-config-data" (OuterVolumeSpecName: "config-data") pod "ce1a489b-0795-4817-ad32-7fdf1ea68559" (UID: "ce1a489b-0795-4817-ad32-7fdf1ea68559"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:00:08 crc kubenswrapper[4881]: I0126 13:00:08.336632 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce1a489b-0795-4817-ad32-7fdf1ea68559-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ce1a489b-0795-4817-ad32-7fdf1ea68559" (UID: "ce1a489b-0795-4817-ad32-7fdf1ea68559"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:00:08 crc kubenswrapper[4881]: I0126 13:00:08.354192 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tkhwh\" (UniqueName: \"kubernetes.io/projected/d0fc8471-ad65-44cf-bf03-1c037aafdf11-kube-api-access-tkhwh\") on node \"crc\" DevicePath \"\"" Jan 26 13:00:08 crc kubenswrapper[4881]: I0126 13:00:08.354430 4881 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0fc8471-ad65-44cf-bf03-1c037aafdf11-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 13:00:08 crc kubenswrapper[4881]: I0126 13:00:08.354507 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pf7mz\" (UniqueName: \"kubernetes.io/projected/ce1a489b-0795-4817-ad32-7fdf1ea68559-kube-api-access-pf7mz\") on node \"crc\" DevicePath \"\"" Jan 26 13:00:08 crc kubenswrapper[4881]: I0126 13:00:08.354609 4881 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce1a489b-0795-4817-ad32-7fdf1ea68559-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 13:00:08 crc kubenswrapper[4881]: I0126 13:00:08.354677 4881 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce1a489b-0795-4817-ad32-7fdf1ea68559-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 13:00:08 crc kubenswrapper[4881]: I0126 13:00:08.354741 4881 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ce1a489b-0795-4817-ad32-7fdf1ea68559-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 26 13:00:08 crc kubenswrapper[4881]: I0126 13:00:08.354808 4881 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce1a489b-0795-4817-ad32-7fdf1ea68559-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 13:00:08 crc kubenswrapper[4881]: I0126 13:00:08.354868 4881 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/741cf6ae-617a-440b-b6ec-63dc4e87ff4a-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 13:00:08 crc kubenswrapper[4881]: I0126 13:00:08.354926 4881 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ce1a489b-0795-4817-ad32-7fdf1ea68559-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 26 13:00:08 crc kubenswrapper[4881]: I0126 13:00:08.354992 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9z7hk\" (UniqueName: \"kubernetes.io/projected/741cf6ae-617a-440b-b6ec-63dc4e87ff4a-kube-api-access-9z7hk\") on node \"crc\" DevicePath \"\"" Jan 26 13:00:08 crc kubenswrapper[4881]: I0126 13:00:08.355054 4881 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/741cf6ae-617a-440b-b6ec-63dc4e87ff4a-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 13:00:08 crc kubenswrapper[4881]: I0126 13:00:08.355118 4881 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d0fc8471-ad65-44cf-bf03-1c037aafdf11-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 13:00:08 crc kubenswrapper[4881]: I0126 13:00:08.355182 4881 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/741cf6ae-617a-440b-b6ec-63dc4e87ff4a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 13:00:08 
crc kubenswrapper[4881]: I0126 13:00:08.561860 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490540-xpj9f"] Jan 26 13:00:08 crc kubenswrapper[4881]: W0126 13:00:08.565210 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6d409655_cc9c_41d5_81b5_c93d256f63a7.slice/crio-1ddf1d6cc3d023ce155b57e6b26d48d4eac0672dda0e1c94853a0d3e0afb3803 WatchSource:0}: Error finding container 1ddf1d6cc3d023ce155b57e6b26d48d4eac0672dda0e1c94853a0d3e0afb3803: Status 404 returned error can't find the container with id 1ddf1d6cc3d023ce155b57e6b26d48d4eac0672dda0e1c94853a0d3e0afb3803 Jan 26 13:00:09 crc kubenswrapper[4881]: I0126 13:00:09.245587 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-5pg9p" event={"ID":"adf01549-e1d0-46a7-a141-bdc0f5c81458","Type":"ContainerStarted","Data":"bd0fbc078d487b6644022e19f21a266beb30fb96b50007588c33cb0891afcb86"} Jan 26 13:00:09 crc kubenswrapper[4881]: I0126 13:00:09.248172 4881 generic.go:334] "Generic (PLEG): container finished" podID="6d409655-cc9c-41d5-81b5-c93d256f63a7" containerID="b859685fedd089c047b2df9e5034658f73a5946765d70777f13a64d73a906bdf" exitCode=0 Jan 26 13:00:09 crc kubenswrapper[4881]: I0126 13:00:09.248267 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490540-xpj9f" event={"ID":"6d409655-cc9c-41d5-81b5-c93d256f63a7","Type":"ContainerDied","Data":"b859685fedd089c047b2df9e5034658f73a5946765d70777f13a64d73a906bdf"} Jan 26 13:00:09 crc kubenswrapper[4881]: I0126 13:00:09.248308 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490540-xpj9f" event={"ID":"6d409655-cc9c-41d5-81b5-c93d256f63a7","Type":"ContainerStarted","Data":"1ddf1d6cc3d023ce155b57e6b26d48d4eac0672dda0e1c94853a0d3e0afb3803"} Jan 26 13:00:09 crc kubenswrapper[4881]: I0126 13:00:09.252715 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-lgk7s" event={"ID":"132298e2-a2f4-4311-9f7a-3e4e08abe34b","Type":"ContainerStarted","Data":"7c4f86fa4c3c9b13b85178d0fb4974e800c168588a5a7177759de691d923a06a"} Jan 26 13:00:09 crc kubenswrapper[4881]: I0126 13:00:09.254563 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"75a85372-d728-4770-8639-fb6f93e44dab","Type":"ContainerStarted","Data":"13316d63f536bb24e8da99d8b4549245de5750c939335d805eca8680e924a8b9"} Jan 26 13:00:09 crc kubenswrapper[4881]: I0126 13:00:09.281805 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-db-sync-5pg9p" podStartSLOduration=2.174758304 podStartE2EDuration="1m0.281786241s" podCreationTimestamp="2026-01-26 12:59:09 +0000 UTC" firstStartedPulling="2026-01-26 12:59:09.897791672 +0000 UTC m=+1422.377101698" lastFinishedPulling="2026-01-26 13:00:08.004819609 +0000 UTC m=+1480.484129635" observedRunningTime="2026-01-26 13:00:09.277918327 +0000 UTC m=+1481.757228353" watchObservedRunningTime="2026-01-26 13:00:09.281786241 +0000 UTC m=+1481.761096267" Jan 26 13:00:09 crc kubenswrapper[4881]: I0126 13:00:09.302573 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-76cf66855-bgjld"] Jan 26 13:00:09 crc kubenswrapper[4881]: I0126 13:00:09.307652 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-lgk7s" 
podStartSLOduration=3.552101631 podStartE2EDuration="59.307630911s" podCreationTimestamp="2026-01-26 12:59:10 +0000 UTC" firstStartedPulling="2026-01-26 12:59:12.249260358 +0000 UTC m=+1424.728570384" lastFinishedPulling="2026-01-26 13:00:08.004789638 +0000 UTC m=+1480.484099664" observedRunningTime="2026-01-26 13:00:09.303843619 +0000 UTC m=+1481.783153635" watchObservedRunningTime="2026-01-26 13:00:09.307630911 +0000 UTC m=+1481.786940927" Jan 26 13:00:09 crc kubenswrapper[4881]: E0126 13:00:09.309913 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="741cf6ae-617a-440b-b6ec-63dc4e87ff4a" containerName="placement-db-sync" Jan 26 13:00:09 crc kubenswrapper[4881]: I0126 13:00:09.309961 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="741cf6ae-617a-440b-b6ec-63dc4e87ff4a" containerName="placement-db-sync" Jan 26 13:00:09 crc kubenswrapper[4881]: E0126 13:00:09.309981 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce1a489b-0795-4817-ad32-7fdf1ea68559" containerName="keystone-bootstrap" Jan 26 13:00:09 crc kubenswrapper[4881]: I0126 13:00:09.309989 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce1a489b-0795-4817-ad32-7fdf1ea68559" containerName="keystone-bootstrap" Jan 26 13:00:09 crc kubenswrapper[4881]: E0126 13:00:09.310016 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0fc8471-ad65-44cf-bf03-1c037aafdf11" containerName="barbican-db-sync" Jan 26 13:00:09 crc kubenswrapper[4881]: I0126 13:00:09.310024 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0fc8471-ad65-44cf-bf03-1c037aafdf11" containerName="barbican-db-sync" Jan 26 13:00:09 crc kubenswrapper[4881]: I0126 13:00:09.310229 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce1a489b-0795-4817-ad32-7fdf1ea68559" containerName="keystone-bootstrap" Jan 26 13:00:09 crc kubenswrapper[4881]: I0126 13:00:09.310270 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="741cf6ae-617a-440b-b6ec-63dc4e87ff4a" containerName="placement-db-sync" Jan 26 13:00:09 crc kubenswrapper[4881]: I0126 13:00:09.310295 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0fc8471-ad65-44cf-bf03-1c037aafdf11" containerName="barbican-db-sync" Jan 26 13:00:09 crc kubenswrapper[4881]: I0126 13:00:09.311190 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-76cf66855-bgjld" Jan 26 13:00:09 crc kubenswrapper[4881]: I0126 13:00:09.314191 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Jan 26 13:00:09 crc kubenswrapper[4881]: I0126 13:00:09.314379 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-p4c6x" Jan 26 13:00:09 crc kubenswrapper[4881]: I0126 13:00:09.317219 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Jan 26 13:00:09 crc kubenswrapper[4881]: I0126 13:00:09.317350 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 26 13:00:09 crc kubenswrapper[4881]: I0126 13:00:09.317536 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 26 13:00:09 crc kubenswrapper[4881]: I0126 13:00:09.317660 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 26 13:00:09 crc kubenswrapper[4881]: I0126 13:00:09.345264 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-76cf66855-bgjld"] Jan 26 13:00:09 crc kubenswrapper[4881]: I0126 13:00:09.424755 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-54766b76bb-mkjc2"] Jan 26 13:00:09 crc kubenswrapper[4881]: I0126 13:00:09.427017 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-54766b76bb-mkjc2" Jan 26 13:00:09 crc kubenswrapper[4881]: I0126 13:00:09.437947 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 26 13:00:09 crc kubenswrapper[4881]: I0126 13:00:09.438126 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-nlt5l" Jan 26 13:00:09 crc kubenswrapper[4881]: I0126 13:00:09.438235 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Jan 26 13:00:09 crc kubenswrapper[4881]: I0126 13:00:09.438342 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 26 13:00:09 crc kubenswrapper[4881]: I0126 13:00:09.438439 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Jan 26 13:00:09 crc kubenswrapper[4881]: I0126 13:00:09.450931 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-54766b76bb-mkjc2"] Jan 26 13:00:09 crc kubenswrapper[4881]: I0126 13:00:09.479424 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bda630e6-c611-4029-9a8a-b347189d2fab-scripts\") pod \"keystone-76cf66855-bgjld\" (UID: \"bda630e6-c611-4029-9a8a-b347189d2fab\") " pod="openstack/keystone-76cf66855-bgjld" Jan 26 13:00:09 crc kubenswrapper[4881]: I0126 13:00:09.479470 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bda630e6-c611-4029-9a8a-b347189d2fab-config-data\") pod \"keystone-76cf66855-bgjld\" (UID: \"bda630e6-c611-4029-9a8a-b347189d2fab\") " pod="openstack/keystone-76cf66855-bgjld" Jan 26 13:00:09 crc kubenswrapper[4881]: I0126 13:00:09.479494 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/bda630e6-c611-4029-9a8a-b347189d2fab-fernet-keys\") pod \"keystone-76cf66855-bgjld\" (UID: \"bda630e6-c611-4029-9a8a-b347189d2fab\") " pod="openstack/keystone-76cf66855-bgjld" Jan 26 13:00:09 crc kubenswrapper[4881]: I0126 13:00:09.479541 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bda630e6-c611-4029-9a8a-b347189d2fab-combined-ca-bundle\") pod \"keystone-76cf66855-bgjld\" (UID: \"bda630e6-c611-4029-9a8a-b347189d2fab\") " pod="openstack/keystone-76cf66855-bgjld" Jan 26 13:00:09 crc kubenswrapper[4881]: I0126 13:00:09.479565 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/bda630e6-c611-4029-9a8a-b347189d2fab-credential-keys\") pod \"keystone-76cf66855-bgjld\" (UID: \"bda630e6-c611-4029-9a8a-b347189d2fab\") " pod="openstack/keystone-76cf66855-bgjld" Jan 26 13:00:09 crc kubenswrapper[4881]: I0126 13:00:09.479624 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bda630e6-c611-4029-9a8a-b347189d2fab-internal-tls-certs\") pod \"keystone-76cf66855-bgjld\" (UID: \"bda630e6-c611-4029-9a8a-b347189d2fab\") " pod="openstack/keystone-76cf66855-bgjld" Jan 26 13:00:09 crc kubenswrapper[4881]: I0126 13:00:09.479657 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bda630e6-c611-4029-9a8a-b347189d2fab-public-tls-certs\") pod \"keystone-76cf66855-bgjld\" (UID: \"bda630e6-c611-4029-9a8a-b347189d2fab\") " pod="openstack/keystone-76cf66855-bgjld" Jan 26 13:00:09 crc kubenswrapper[4881]: I0126 13:00:09.479673 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lmq9\" (UniqueName: \"kubernetes.io/projected/bda630e6-c611-4029-9a8a-b347189d2fab-kube-api-access-6lmq9\") pod \"keystone-76cf66855-bgjld\" (UID: \"bda630e6-c611-4029-9a8a-b347189d2fab\") " pod="openstack/keystone-76cf66855-bgjld" Jan 26 13:00:09 crc kubenswrapper[4881]: I0126 13:00:09.503638 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-7b9f5df6bf-2dqqf"] Jan 26 13:00:09 crc kubenswrapper[4881]: I0126 13:00:09.505158 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-7b9f5df6bf-2dqqf" Jan 26 13:00:09 crc kubenswrapper[4881]: I0126 13:00:09.510116 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-vsl7c" Jan 26 13:00:09 crc kubenswrapper[4881]: I0126 13:00:09.510373 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Jan 26 13:00:09 crc kubenswrapper[4881]: I0126 13:00:09.510385 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 26 13:00:09 crc kubenswrapper[4881]: I0126 13:00:09.516605 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-7b9f5df6bf-2dqqf"] Jan 26 13:00:09 crc kubenswrapper[4881]: I0126 13:00:09.533968 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-57cc97484d-kbk4q"] Jan 26 13:00:09 crc kubenswrapper[4881]: I0126 13:00:09.535459 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-57cc97484d-kbk4q" Jan 26 13:00:09 crc kubenswrapper[4881]: I0126 13:00:09.540653 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Jan 26 13:00:09 crc kubenswrapper[4881]: I0126 13:00:09.550008 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-57cc97484d-kbk4q"] Jan 26 13:00:09 crc kubenswrapper[4881]: I0126 13:00:09.581674 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bda630e6-c611-4029-9a8a-b347189d2fab-fernet-keys\") pod \"keystone-76cf66855-bgjld\" (UID: \"bda630e6-c611-4029-9a8a-b347189d2fab\") " pod="openstack/keystone-76cf66855-bgjld" Jan 26 13:00:09 crc kubenswrapper[4881]: I0126 13:00:09.581739 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bda630e6-c611-4029-9a8a-b347189d2fab-combined-ca-bundle\") pod \"keystone-76cf66855-bgjld\" (UID: \"bda630e6-c611-4029-9a8a-b347189d2fab\") " pod="openstack/keystone-76cf66855-bgjld" Jan 26 13:00:09 crc kubenswrapper[4881]: I0126 13:00:09.581777 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/34699df9-2dd8-4eee-9f19-e5af28cfa84d-logs\") pod \"placement-54766b76bb-mkjc2\" (UID: \"34699df9-2dd8-4eee-9f19-e5af28cfa84d\") " pod="openstack/placement-54766b76bb-mkjc2" Jan 26 13:00:09 crc kubenswrapper[4881]: I0126 13:00:09.581796 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34699df9-2dd8-4eee-9f19-e5af28cfa84d-combined-ca-bundle\") pod \"placement-54766b76bb-mkjc2\" (UID: \"34699df9-2dd8-4eee-9f19-e5af28cfa84d\") " pod="openstack/placement-54766b76bb-mkjc2" Jan 26 13:00:09 crc kubenswrapper[4881]: I0126 13:00:09.581817 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/bda630e6-c611-4029-9a8a-b347189d2fab-credential-keys\") pod \"keystone-76cf66855-bgjld\" (UID: \"bda630e6-c611-4029-9a8a-b347189d2fab\") " pod="openstack/keystone-76cf66855-bgjld" Jan 26 13:00:09 crc kubenswrapper[4881]: I0126 13:00:09.581875 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpvqs\" (UniqueName: \"kubernetes.io/projected/34699df9-2dd8-4eee-9f19-e5af28cfa84d-kube-api-access-fpvqs\") pod \"placement-54766b76bb-mkjc2\" (UID: \"34699df9-2dd8-4eee-9f19-e5af28cfa84d\") " pod="openstack/placement-54766b76bb-mkjc2" Jan 26 13:00:09 crc kubenswrapper[4881]: I0126 13:00:09.581904 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bda630e6-c611-4029-9a8a-b347189d2fab-internal-tls-certs\") pod \"keystone-76cf66855-bgjld\" (UID: \"bda630e6-c611-4029-9a8a-b347189d2fab\") " pod="openstack/keystone-76cf66855-bgjld" Jan 26 13:00:09 crc kubenswrapper[4881]: I0126 13:00:09.581947 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bda630e6-c611-4029-9a8a-b347189d2fab-public-tls-certs\") pod \"keystone-76cf66855-bgjld\" (UID: \"bda630e6-c611-4029-9a8a-b347189d2fab\") " 
pod="openstack/keystone-76cf66855-bgjld" Jan 26 13:00:09 crc kubenswrapper[4881]: I0126 13:00:09.581968 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/34699df9-2dd8-4eee-9f19-e5af28cfa84d-public-tls-certs\") pod \"placement-54766b76bb-mkjc2\" (UID: \"34699df9-2dd8-4eee-9f19-e5af28cfa84d\") " pod="openstack/placement-54766b76bb-mkjc2" Jan 26 13:00:09 crc kubenswrapper[4881]: I0126 13:00:09.581989 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6lmq9\" (UniqueName: \"kubernetes.io/projected/bda630e6-c611-4029-9a8a-b347189d2fab-kube-api-access-6lmq9\") pod \"keystone-76cf66855-bgjld\" (UID: \"bda630e6-c611-4029-9a8a-b347189d2fab\") " pod="openstack/keystone-76cf66855-bgjld" Jan 26 13:00:09 crc kubenswrapper[4881]: I0126 13:00:09.582009 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34699df9-2dd8-4eee-9f19-e5af28cfa84d-config-data\") pod \"placement-54766b76bb-mkjc2\" (UID: \"34699df9-2dd8-4eee-9f19-e5af28cfa84d\") " pod="openstack/placement-54766b76bb-mkjc2" Jan 26 13:00:09 crc kubenswrapper[4881]: I0126 13:00:09.582034 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34699df9-2dd8-4eee-9f19-e5af28cfa84d-scripts\") pod \"placement-54766b76bb-mkjc2\" (UID: \"34699df9-2dd8-4eee-9f19-e5af28cfa84d\") " pod="openstack/placement-54766b76bb-mkjc2" Jan 26 13:00:09 crc kubenswrapper[4881]: I0126 13:00:09.582055 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/34699df9-2dd8-4eee-9f19-e5af28cfa84d-internal-tls-certs\") pod \"placement-54766b76bb-mkjc2\" (UID: \"34699df9-2dd8-4eee-9f19-e5af28cfa84d\") " pod="openstack/placement-54766b76bb-mkjc2" Jan 26 13:00:09 crc kubenswrapper[4881]: I0126 13:00:09.582094 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bda630e6-c611-4029-9a8a-b347189d2fab-scripts\") pod \"keystone-76cf66855-bgjld\" (UID: \"bda630e6-c611-4029-9a8a-b347189d2fab\") " pod="openstack/keystone-76cf66855-bgjld" Jan 26 13:00:09 crc kubenswrapper[4881]: I0126 13:00:09.582110 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bda630e6-c611-4029-9a8a-b347189d2fab-config-data\") pod \"keystone-76cf66855-bgjld\" (UID: \"bda630e6-c611-4029-9a8a-b347189d2fab\") " pod="openstack/keystone-76cf66855-bgjld" Jan 26 13:00:09 crc kubenswrapper[4881]: I0126 13:00:09.592597 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bda630e6-c611-4029-9a8a-b347189d2fab-config-data\") pod \"keystone-76cf66855-bgjld\" (UID: \"bda630e6-c611-4029-9a8a-b347189d2fab\") " pod="openstack/keystone-76cf66855-bgjld" Jan 26 13:00:09 crc kubenswrapper[4881]: I0126 13:00:09.594165 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bda630e6-c611-4029-9a8a-b347189d2fab-public-tls-certs\") pod \"keystone-76cf66855-bgjld\" (UID: \"bda630e6-c611-4029-9a8a-b347189d2fab\") " pod="openstack/keystone-76cf66855-bgjld" Jan 26 13:00:09 crc kubenswrapper[4881]: I0126 
13:00:09.603784 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bda630e6-c611-4029-9a8a-b347189d2fab-internal-tls-certs\") pod \"keystone-76cf66855-bgjld\" (UID: \"bda630e6-c611-4029-9a8a-b347189d2fab\") " pod="openstack/keystone-76cf66855-bgjld" Jan 26 13:00:09 crc kubenswrapper[4881]: I0126 13:00:09.604538 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bda630e6-c611-4029-9a8a-b347189d2fab-fernet-keys\") pod \"keystone-76cf66855-bgjld\" (UID: \"bda630e6-c611-4029-9a8a-b347189d2fab\") " pod="openstack/keystone-76cf66855-bgjld" Jan 26 13:00:09 crc kubenswrapper[4881]: I0126 13:00:09.605109 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/bda630e6-c611-4029-9a8a-b347189d2fab-credential-keys\") pod \"keystone-76cf66855-bgjld\" (UID: \"bda630e6-c611-4029-9a8a-b347189d2fab\") " pod="openstack/keystone-76cf66855-bgjld" Jan 26 13:00:09 crc kubenswrapper[4881]: I0126 13:00:09.613172 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bda630e6-c611-4029-9a8a-b347189d2fab-combined-ca-bundle\") pod \"keystone-76cf66855-bgjld\" (UID: \"bda630e6-c611-4029-9a8a-b347189d2fab\") " pod="openstack/keystone-76cf66855-bgjld" Jan 26 13:00:09 crc kubenswrapper[4881]: I0126 13:00:09.627925 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6lmq9\" (UniqueName: \"kubernetes.io/projected/bda630e6-c611-4029-9a8a-b347189d2fab-kube-api-access-6lmq9\") pod \"keystone-76cf66855-bgjld\" (UID: \"bda630e6-c611-4029-9a8a-b347189d2fab\") " pod="openstack/keystone-76cf66855-bgjld" Jan 26 13:00:09 crc kubenswrapper[4881]: I0126 13:00:09.632716 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bda630e6-c611-4029-9a8a-b347189d2fab-scripts\") pod \"keystone-76cf66855-bgjld\" (UID: \"bda630e6-c611-4029-9a8a-b347189d2fab\") " pod="openstack/keystone-76cf66855-bgjld" Jan 26 13:00:09 crc kubenswrapper[4881]: I0126 13:00:09.674584 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-b5f886498-f6c5n"] Jan 26 13:00:09 crc kubenswrapper[4881]: I0126 13:00:09.676137 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-b5f886498-f6c5n" Jan 26 13:00:09 crc kubenswrapper[4881]: I0126 13:00:09.685673 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/256bb8f2-ab6b-4904-bfda-793cd640e966-config-data\") pod \"barbican-worker-7b9f5df6bf-2dqqf\" (UID: \"256bb8f2-ab6b-4904-bfda-793cd640e966\") " pod="openstack/barbican-worker-7b9f5df6bf-2dqqf" Jan 26 13:00:09 crc kubenswrapper[4881]: I0126 13:00:09.685731 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c2c1ff4a-7d5a-461f-8e63-67288e6a68e5-logs\") pod \"barbican-keystone-listener-57cc97484d-kbk4q\" (UID: \"c2c1ff4a-7d5a-461f-8e63-67288e6a68e5\") " pod="openstack/barbican-keystone-listener-57cc97484d-kbk4q" Jan 26 13:00:09 crc kubenswrapper[4881]: I0126 13:00:09.685773 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fpvqs\" (UniqueName: \"kubernetes.io/projected/34699df9-2dd8-4eee-9f19-e5af28cfa84d-kube-api-access-fpvqs\") pod \"placement-54766b76bb-mkjc2\" (UID: \"34699df9-2dd8-4eee-9f19-e5af28cfa84d\") " pod="openstack/placement-54766b76bb-mkjc2" Jan 26 13:00:09 crc kubenswrapper[4881]: I0126 13:00:09.685793 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/256bb8f2-ab6b-4904-bfda-793cd640e966-logs\") pod \"barbican-worker-7b9f5df6bf-2dqqf\" (UID: \"256bb8f2-ab6b-4904-bfda-793cd640e966\") " pod="openstack/barbican-worker-7b9f5df6bf-2dqqf" Jan 26 13:00:09 crc kubenswrapper[4881]: I0126 13:00:09.685833 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/34699df9-2dd8-4eee-9f19-e5af28cfa84d-public-tls-certs\") pod \"placement-54766b76bb-mkjc2\" (UID: \"34699df9-2dd8-4eee-9f19-e5af28cfa84d\") " pod="openstack/placement-54766b76bb-mkjc2" Jan 26 13:00:09 crc kubenswrapper[4881]: I0126 13:00:09.685854 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34699df9-2dd8-4eee-9f19-e5af28cfa84d-config-data\") pod \"placement-54766b76bb-mkjc2\" (UID: \"34699df9-2dd8-4eee-9f19-e5af28cfa84d\") " pod="openstack/placement-54766b76bb-mkjc2" Jan 26 13:00:09 crc kubenswrapper[4881]: I0126 13:00:09.685870 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfg9q\" (UniqueName: \"kubernetes.io/projected/256bb8f2-ab6b-4904-bfda-793cd640e966-kube-api-access-nfg9q\") pod \"barbican-worker-7b9f5df6bf-2dqqf\" (UID: \"256bb8f2-ab6b-4904-bfda-793cd640e966\") " pod="openstack/barbican-worker-7b9f5df6bf-2dqqf" Jan 26 13:00:09 crc kubenswrapper[4881]: I0126 13:00:09.685898 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34699df9-2dd8-4eee-9f19-e5af28cfa84d-scripts\") pod \"placement-54766b76bb-mkjc2\" (UID: \"34699df9-2dd8-4eee-9f19-e5af28cfa84d\") " pod="openstack/placement-54766b76bb-mkjc2" Jan 26 13:00:09 crc kubenswrapper[4881]: I0126 13:00:09.685914 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2c1ff4a-7d5a-461f-8e63-67288e6a68e5-config-data\") pod 
\"barbican-keystone-listener-57cc97484d-kbk4q\" (UID: \"c2c1ff4a-7d5a-461f-8e63-67288e6a68e5\") " pod="openstack/barbican-keystone-listener-57cc97484d-kbk4q" Jan 26 13:00:09 crc kubenswrapper[4881]: I0126 13:00:09.685932 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/34699df9-2dd8-4eee-9f19-e5af28cfa84d-internal-tls-certs\") pod \"placement-54766b76bb-mkjc2\" (UID: \"34699df9-2dd8-4eee-9f19-e5af28cfa84d\") " pod="openstack/placement-54766b76bb-mkjc2" Jan 26 13:00:09 crc kubenswrapper[4881]: I0126 13:00:09.685975 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/256bb8f2-ab6b-4904-bfda-793cd640e966-combined-ca-bundle\") pod \"barbican-worker-7b9f5df6bf-2dqqf\" (UID: \"256bb8f2-ab6b-4904-bfda-793cd640e966\") " pod="openstack/barbican-worker-7b9f5df6bf-2dqqf" Jan 26 13:00:09 crc kubenswrapper[4881]: I0126 13:00:09.686008 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/256bb8f2-ab6b-4904-bfda-793cd640e966-config-data-custom\") pod \"barbican-worker-7b9f5df6bf-2dqqf\" (UID: \"256bb8f2-ab6b-4904-bfda-793cd640e966\") " pod="openstack/barbican-worker-7b9f5df6bf-2dqqf" Jan 26 13:00:09 crc kubenswrapper[4881]: I0126 13:00:09.686026 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7fqt\" (UniqueName: \"kubernetes.io/projected/c2c1ff4a-7d5a-461f-8e63-67288e6a68e5-kube-api-access-b7fqt\") pod \"barbican-keystone-listener-57cc97484d-kbk4q\" (UID: \"c2c1ff4a-7d5a-461f-8e63-67288e6a68e5\") " pod="openstack/barbican-keystone-listener-57cc97484d-kbk4q" Jan 26 13:00:09 crc kubenswrapper[4881]: I0126 13:00:09.686041 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/34699df9-2dd8-4eee-9f19-e5af28cfa84d-logs\") pod \"placement-54766b76bb-mkjc2\" (UID: \"34699df9-2dd8-4eee-9f19-e5af28cfa84d\") " pod="openstack/placement-54766b76bb-mkjc2" Jan 26 13:00:09 crc kubenswrapper[4881]: I0126 13:00:09.686059 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34699df9-2dd8-4eee-9f19-e5af28cfa84d-combined-ca-bundle\") pod \"placement-54766b76bb-mkjc2\" (UID: \"34699df9-2dd8-4eee-9f19-e5af28cfa84d\") " pod="openstack/placement-54766b76bb-mkjc2" Jan 26 13:00:09 crc kubenswrapper[4881]: I0126 13:00:09.686087 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c2c1ff4a-7d5a-461f-8e63-67288e6a68e5-config-data-custom\") pod \"barbican-keystone-listener-57cc97484d-kbk4q\" (UID: \"c2c1ff4a-7d5a-461f-8e63-67288e6a68e5\") " pod="openstack/barbican-keystone-listener-57cc97484d-kbk4q" Jan 26 13:00:09 crc kubenswrapper[4881]: I0126 13:00:09.686106 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2c1ff4a-7d5a-461f-8e63-67288e6a68e5-combined-ca-bundle\") pod \"barbican-keystone-listener-57cc97484d-kbk4q\" (UID: \"c2c1ff4a-7d5a-461f-8e63-67288e6a68e5\") " pod="openstack/barbican-keystone-listener-57cc97484d-kbk4q" Jan 26 13:00:09 crc kubenswrapper[4881]: I0126 13:00:09.686918 4881 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/34699df9-2dd8-4eee-9f19-e5af28cfa84d-logs\") pod \"placement-54766b76bb-mkjc2\" (UID: \"34699df9-2dd8-4eee-9f19-e5af28cfa84d\") " pod="openstack/placement-54766b76bb-mkjc2" Jan 26 13:00:09 crc kubenswrapper[4881]: I0126 13:00:09.699496 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/34699df9-2dd8-4eee-9f19-e5af28cfa84d-internal-tls-certs\") pod \"placement-54766b76bb-mkjc2\" (UID: \"34699df9-2dd8-4eee-9f19-e5af28cfa84d\") " pod="openstack/placement-54766b76bb-mkjc2" Jan 26 13:00:09 crc kubenswrapper[4881]: I0126 13:00:09.702070 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34699df9-2dd8-4eee-9f19-e5af28cfa84d-scripts\") pod \"placement-54766b76bb-mkjc2\" (UID: \"34699df9-2dd8-4eee-9f19-e5af28cfa84d\") " pod="openstack/placement-54766b76bb-mkjc2" Jan 26 13:00:09 crc kubenswrapper[4881]: I0126 13:00:09.712919 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34699df9-2dd8-4eee-9f19-e5af28cfa84d-combined-ca-bundle\") pod \"placement-54766b76bb-mkjc2\" (UID: \"34699df9-2dd8-4eee-9f19-e5af28cfa84d\") " pod="openstack/placement-54766b76bb-mkjc2" Jan 26 13:00:09 crc kubenswrapper[4881]: I0126 13:00:09.715320 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/34699df9-2dd8-4eee-9f19-e5af28cfa84d-public-tls-certs\") pod \"placement-54766b76bb-mkjc2\" (UID: \"34699df9-2dd8-4eee-9f19-e5af28cfa84d\") " pod="openstack/placement-54766b76bb-mkjc2" Jan 26 13:00:09 crc kubenswrapper[4881]: I0126 13:00:09.715894 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34699df9-2dd8-4eee-9f19-e5af28cfa84d-config-data\") pod \"placement-54766b76bb-mkjc2\" (UID: \"34699df9-2dd8-4eee-9f19-e5af28cfa84d\") " pod="openstack/placement-54766b76bb-mkjc2" Jan 26 13:00:09 crc kubenswrapper[4881]: I0126 13:00:09.716637 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-794958b545-pcbpb"] Jan 26 13:00:09 crc kubenswrapper[4881]: I0126 13:00:09.718087 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-794958b545-pcbpb" Jan 26 13:00:09 crc kubenswrapper[4881]: I0126 13:00:09.746295 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fpvqs\" (UniqueName: \"kubernetes.io/projected/34699df9-2dd8-4eee-9f19-e5af28cfa84d-kube-api-access-fpvqs\") pod \"placement-54766b76bb-mkjc2\" (UID: \"34699df9-2dd8-4eee-9f19-e5af28cfa84d\") " pod="openstack/placement-54766b76bb-mkjc2" Jan 26 13:00:09 crc kubenswrapper[4881]: I0126 13:00:09.751784 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-54766b76bb-mkjc2" Jan 26 13:00:09 crc kubenswrapper[4881]: I0126 13:00:09.752691 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-b5f886498-f6c5n"] Jan 26 13:00:09 crc kubenswrapper[4881]: I0126 13:00:09.788705 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6dc855bcb7-bhvs2"] Jan 26 13:00:09 crc kubenswrapper[4881]: I0126 13:00:09.825510 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c982f75-f27c-4915-a75d-07f2fc53cf19-combined-ca-bundle\") pod \"barbican-keystone-listener-b5f886498-f6c5n\" (UID: \"2c982f75-f27c-4915-a75d-07f2fc53cf19\") " pod="openstack/barbican-keystone-listener-b5f886498-f6c5n" Jan 26 13:00:09 crc kubenswrapper[4881]: I0126 13:00:09.825628 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nfg9q\" (UniqueName: \"kubernetes.io/projected/256bb8f2-ab6b-4904-bfda-793cd640e966-kube-api-access-nfg9q\") pod \"barbican-worker-7b9f5df6bf-2dqqf\" (UID: \"256bb8f2-ab6b-4904-bfda-793cd640e966\") " pod="openstack/barbican-worker-7b9f5df6bf-2dqqf" Jan 26 13:00:09 crc kubenswrapper[4881]: I0126 13:00:09.825689 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c982f75-f27c-4915-a75d-07f2fc53cf19-config-data\") pod \"barbican-keystone-listener-b5f886498-f6c5n\" (UID: \"2c982f75-f27c-4915-a75d-07f2fc53cf19\") " pod="openstack/barbican-keystone-listener-b5f886498-f6c5n" Jan 26 13:00:09 crc kubenswrapper[4881]: I0126 13:00:09.825797 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2c1ff4a-7d5a-461f-8e63-67288e6a68e5-config-data\") pod \"barbican-keystone-listener-57cc97484d-kbk4q\" (UID: \"c2c1ff4a-7d5a-461f-8e63-67288e6a68e5\") " pod="openstack/barbican-keystone-listener-57cc97484d-kbk4q" Jan 26 13:00:09 crc kubenswrapper[4881]: I0126 13:00:09.825910 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/256bb8f2-ab6b-4904-bfda-793cd640e966-combined-ca-bundle\") pod \"barbican-worker-7b9f5df6bf-2dqqf\" (UID: \"256bb8f2-ab6b-4904-bfda-793cd640e966\") " pod="openstack/barbican-worker-7b9f5df6bf-2dqqf" Jan 26 13:00:09 crc kubenswrapper[4881]: I0126 13:00:09.825995 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/256bb8f2-ab6b-4904-bfda-793cd640e966-config-data-custom\") pod \"barbican-worker-7b9f5df6bf-2dqqf\" (UID: \"256bb8f2-ab6b-4904-bfda-793cd640e966\") " pod="openstack/barbican-worker-7b9f5df6bf-2dqqf" Jan 26 13:00:09 crc kubenswrapper[4881]: I0126 13:00:09.826016 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b7fqt\" (UniqueName: \"kubernetes.io/projected/c2c1ff4a-7d5a-461f-8e63-67288e6a68e5-kube-api-access-b7fqt\") pod \"barbican-keystone-listener-57cc97484d-kbk4q\" (UID: \"c2c1ff4a-7d5a-461f-8e63-67288e6a68e5\") " pod="openstack/barbican-keystone-listener-57cc97484d-kbk4q" Jan 26 13:00:09 crc kubenswrapper[4881]: I0126 13:00:09.826066 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/c2c1ff4a-7d5a-461f-8e63-67288e6a68e5-config-data-custom\") pod \"barbican-keystone-listener-57cc97484d-kbk4q\" (UID: \"c2c1ff4a-7d5a-461f-8e63-67288e6a68e5\") " pod="openstack/barbican-keystone-listener-57cc97484d-kbk4q" Jan 26 13:00:09 crc kubenswrapper[4881]: I0126 13:00:09.826093 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2c1ff4a-7d5a-461f-8e63-67288e6a68e5-combined-ca-bundle\") pod \"barbican-keystone-listener-57cc97484d-kbk4q\" (UID: \"c2c1ff4a-7d5a-461f-8e63-67288e6a68e5\") " pod="openstack/barbican-keystone-listener-57cc97484d-kbk4q" Jan 26 13:00:09 crc kubenswrapper[4881]: I0126 13:00:09.826132 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2c982f75-f27c-4915-a75d-07f2fc53cf19-config-data-custom\") pod \"barbican-keystone-listener-b5f886498-f6c5n\" (UID: \"2c982f75-f27c-4915-a75d-07f2fc53cf19\") " pod="openstack/barbican-keystone-listener-b5f886498-f6c5n" Jan 26 13:00:09 crc kubenswrapper[4881]: I0126 13:00:09.826210 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/256bb8f2-ab6b-4904-bfda-793cd640e966-config-data\") pod \"barbican-worker-7b9f5df6bf-2dqqf\" (UID: \"256bb8f2-ab6b-4904-bfda-793cd640e966\") " pod="openstack/barbican-worker-7b9f5df6bf-2dqqf" Jan 26 13:00:09 crc kubenswrapper[4881]: I0126 13:00:09.826243 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c2c1ff4a-7d5a-461f-8e63-67288e6a68e5-logs\") pod \"barbican-keystone-listener-57cc97484d-kbk4q\" (UID: \"c2c1ff4a-7d5a-461f-8e63-67288e6a68e5\") " pod="openstack/barbican-keystone-listener-57cc97484d-kbk4q" Jan 26 13:00:09 crc kubenswrapper[4881]: I0126 13:00:09.826276 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sm8df\" (UniqueName: \"kubernetes.io/projected/2c982f75-f27c-4915-a75d-07f2fc53cf19-kube-api-access-sm8df\") pod \"barbican-keystone-listener-b5f886498-f6c5n\" (UID: \"2c982f75-f27c-4915-a75d-07f2fc53cf19\") " pod="openstack/barbican-keystone-listener-b5f886498-f6c5n" Jan 26 13:00:09 crc kubenswrapper[4881]: I0126 13:00:09.826307 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/256bb8f2-ab6b-4904-bfda-793cd640e966-logs\") pod \"barbican-worker-7b9f5df6bf-2dqqf\" (UID: \"256bb8f2-ab6b-4904-bfda-793cd640e966\") " pod="openstack/barbican-worker-7b9f5df6bf-2dqqf" Jan 26 13:00:09 crc kubenswrapper[4881]: I0126 13:00:09.826325 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2c982f75-f27c-4915-a75d-07f2fc53cf19-logs\") pod \"barbican-keystone-listener-b5f886498-f6c5n\" (UID: \"2c982f75-f27c-4915-a75d-07f2fc53cf19\") " pod="openstack/barbican-keystone-listener-b5f886498-f6c5n" Jan 26 13:00:09 crc kubenswrapper[4881]: I0126 13:00:09.827952 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c2c1ff4a-7d5a-461f-8e63-67288e6a68e5-logs\") pod \"barbican-keystone-listener-57cc97484d-kbk4q\" (UID: \"c2c1ff4a-7d5a-461f-8e63-67288e6a68e5\") " pod="openstack/barbican-keystone-listener-57cc97484d-kbk4q" Jan 26 13:00:09 crc 
kubenswrapper[4881]: I0126 13:00:09.829698 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6dc855bcb7-bhvs2" Jan 26 13:00:09 crc kubenswrapper[4881]: I0126 13:00:09.831168 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c2c1ff4a-7d5a-461f-8e63-67288e6a68e5-config-data-custom\") pod \"barbican-keystone-listener-57cc97484d-kbk4q\" (UID: \"c2c1ff4a-7d5a-461f-8e63-67288e6a68e5\") " pod="openstack/barbican-keystone-listener-57cc97484d-kbk4q" Jan 26 13:00:09 crc kubenswrapper[4881]: I0126 13:00:09.833929 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/256bb8f2-ab6b-4904-bfda-793cd640e966-config-data\") pod \"barbican-worker-7b9f5df6bf-2dqqf\" (UID: \"256bb8f2-ab6b-4904-bfda-793cd640e966\") " pod="openstack/barbican-worker-7b9f5df6bf-2dqqf" Jan 26 13:00:09 crc kubenswrapper[4881]: I0126 13:00:09.836259 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/256bb8f2-ab6b-4904-bfda-793cd640e966-combined-ca-bundle\") pod \"barbican-worker-7b9f5df6bf-2dqqf\" (UID: \"256bb8f2-ab6b-4904-bfda-793cd640e966\") " pod="openstack/barbican-worker-7b9f5df6bf-2dqqf" Jan 26 13:00:09 crc kubenswrapper[4881]: I0126 13:00:09.836433 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/256bb8f2-ab6b-4904-bfda-793cd640e966-config-data-custom\") pod \"barbican-worker-7b9f5df6bf-2dqqf\" (UID: \"256bb8f2-ab6b-4904-bfda-793cd640e966\") " pod="openstack/barbican-worker-7b9f5df6bf-2dqqf" Jan 26 13:00:09 crc kubenswrapper[4881]: I0126 13:00:09.841117 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/256bb8f2-ab6b-4904-bfda-793cd640e966-logs\") pod \"barbican-worker-7b9f5df6bf-2dqqf\" (UID: \"256bb8f2-ab6b-4904-bfda-793cd640e966\") " pod="openstack/barbican-worker-7b9f5df6bf-2dqqf" Jan 26 13:00:09 crc kubenswrapper[4881]: I0126 13:00:09.863910 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-794958b545-pcbpb"] Jan 26 13:00:09 crc kubenswrapper[4881]: I0126 13:00:09.893699 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2c1ff4a-7d5a-461f-8e63-67288e6a68e5-config-data\") pod \"barbican-keystone-listener-57cc97484d-kbk4q\" (UID: \"c2c1ff4a-7d5a-461f-8e63-67288e6a68e5\") " pod="openstack/barbican-keystone-listener-57cc97484d-kbk4q" Jan 26 13:00:09 crc kubenswrapper[4881]: I0126 13:00:09.898338 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nfg9q\" (UniqueName: \"kubernetes.io/projected/256bb8f2-ab6b-4904-bfda-793cd640e966-kube-api-access-nfg9q\") pod \"barbican-worker-7b9f5df6bf-2dqqf\" (UID: \"256bb8f2-ab6b-4904-bfda-793cd640e966\") " pod="openstack/barbican-worker-7b9f5df6bf-2dqqf" Jan 26 13:00:09 crc kubenswrapper[4881]: I0126 13:00:09.900989 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b7fqt\" (UniqueName: \"kubernetes.io/projected/c2c1ff4a-7d5a-461f-8e63-67288e6a68e5-kube-api-access-b7fqt\") pod \"barbican-keystone-listener-57cc97484d-kbk4q\" (UID: \"c2c1ff4a-7d5a-461f-8e63-67288e6a68e5\") " pod="openstack/barbican-keystone-listener-57cc97484d-kbk4q" Jan 26 13:00:09 crc 
kubenswrapper[4881]: I0126 13:00:09.904540 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2c1ff4a-7d5a-461f-8e63-67288e6a68e5-combined-ca-bundle\") pod \"barbican-keystone-listener-57cc97484d-kbk4q\" (UID: \"c2c1ff4a-7d5a-461f-8e63-67288e6a68e5\") " pod="openstack/barbican-keystone-listener-57cc97484d-kbk4q" Jan 26 13:00:09 crc kubenswrapper[4881]: I0126 13:00:09.907630 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6dc855bcb7-bhvs2"] Jan 26 13:00:09 crc kubenswrapper[4881]: I0126 13:00:09.939251 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2c982f75-f27c-4915-a75d-07f2fc53cf19-logs\") pod \"barbican-keystone-listener-b5f886498-f6c5n\" (UID: \"2c982f75-f27c-4915-a75d-07f2fc53cf19\") " pod="openstack/barbican-keystone-listener-b5f886498-f6c5n" Jan 26 13:00:09 crc kubenswrapper[4881]: I0126 13:00:09.939305 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/275be453-0a36-451c-8a70-714b958c9625-config-data-custom\") pod \"barbican-worker-794958b545-pcbpb\" (UID: \"275be453-0a36-451c-8a70-714b958c9625\") " pod="openstack/barbican-worker-794958b545-pcbpb" Jan 26 13:00:09 crc kubenswrapper[4881]: I0126 13:00:09.939335 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c982f75-f27c-4915-a75d-07f2fc53cf19-combined-ca-bundle\") pod \"barbican-keystone-listener-b5f886498-f6c5n\" (UID: \"2c982f75-f27c-4915-a75d-07f2fc53cf19\") " pod="openstack/barbican-keystone-listener-b5f886498-f6c5n" Jan 26 13:00:09 crc kubenswrapper[4881]: I0126 13:00:09.939363 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/275be453-0a36-451c-8a70-714b958c9625-config-data\") pod \"barbican-worker-794958b545-pcbpb\" (UID: \"275be453-0a36-451c-8a70-714b958c9625\") " pod="openstack/barbican-worker-794958b545-pcbpb" Jan 26 13:00:09 crc kubenswrapper[4881]: I0126 13:00:09.939393 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c982f75-f27c-4915-a75d-07f2fc53cf19-config-data\") pod \"barbican-keystone-listener-b5f886498-f6c5n\" (UID: \"2c982f75-f27c-4915-a75d-07f2fc53cf19\") " pod="openstack/barbican-keystone-listener-b5f886498-f6c5n" Jan 26 13:00:09 crc kubenswrapper[4881]: I0126 13:00:09.939408 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/275be453-0a36-451c-8a70-714b958c9625-combined-ca-bundle\") pod \"barbican-worker-794958b545-pcbpb\" (UID: \"275be453-0a36-451c-8a70-714b958c9625\") " pod="openstack/barbican-worker-794958b545-pcbpb" Jan 26 13:00:09 crc kubenswrapper[4881]: I0126 13:00:09.939433 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rntfh\" (UniqueName: \"kubernetes.io/projected/275be453-0a36-451c-8a70-714b958c9625-kube-api-access-rntfh\") pod \"barbican-worker-794958b545-pcbpb\" (UID: \"275be453-0a36-451c-8a70-714b958c9625\") " pod="openstack/barbican-worker-794958b545-pcbpb" Jan 26 13:00:09 crc kubenswrapper[4881]: I0126 13:00:09.939501 4881 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/275be453-0a36-451c-8a70-714b958c9625-logs\") pod \"barbican-worker-794958b545-pcbpb\" (UID: \"275be453-0a36-451c-8a70-714b958c9625\") " pod="openstack/barbican-worker-794958b545-pcbpb" Jan 26 13:00:09 crc kubenswrapper[4881]: I0126 13:00:09.939721 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2c982f75-f27c-4915-a75d-07f2fc53cf19-config-data-custom\") pod \"barbican-keystone-listener-b5f886498-f6c5n\" (UID: \"2c982f75-f27c-4915-a75d-07f2fc53cf19\") " pod="openstack/barbican-keystone-listener-b5f886498-f6c5n" Jan 26 13:00:09 crc kubenswrapper[4881]: I0126 13:00:09.939767 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sm8df\" (UniqueName: \"kubernetes.io/projected/2c982f75-f27c-4915-a75d-07f2fc53cf19-kube-api-access-sm8df\") pod \"barbican-keystone-listener-b5f886498-f6c5n\" (UID: \"2c982f75-f27c-4915-a75d-07f2fc53cf19\") " pod="openstack/barbican-keystone-listener-b5f886498-f6c5n" Jan 26 13:00:09 crc kubenswrapper[4881]: I0126 13:00:09.941039 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-76cf66855-bgjld" Jan 26 13:00:09 crc kubenswrapper[4881]: I0126 13:00:09.943167 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2c982f75-f27c-4915-a75d-07f2fc53cf19-logs\") pod \"barbican-keystone-listener-b5f886498-f6c5n\" (UID: \"2c982f75-f27c-4915-a75d-07f2fc53cf19\") " pod="openstack/barbican-keystone-listener-b5f886498-f6c5n" Jan 26 13:00:09 crc kubenswrapper[4881]: I0126 13:00:09.950995 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c982f75-f27c-4915-a75d-07f2fc53cf19-config-data\") pod \"barbican-keystone-listener-b5f886498-f6c5n\" (UID: \"2c982f75-f27c-4915-a75d-07f2fc53cf19\") " pod="openstack/barbican-keystone-listener-b5f886498-f6c5n" Jan 26 13:00:09 crc kubenswrapper[4881]: I0126 13:00:09.965419 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sm8df\" (UniqueName: \"kubernetes.io/projected/2c982f75-f27c-4915-a75d-07f2fc53cf19-kube-api-access-sm8df\") pod \"barbican-keystone-listener-b5f886498-f6c5n\" (UID: \"2c982f75-f27c-4915-a75d-07f2fc53cf19\") " pod="openstack/barbican-keystone-listener-b5f886498-f6c5n" Jan 26 13:00:09 crc kubenswrapper[4881]: I0126 13:00:09.965621 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2c982f75-f27c-4915-a75d-07f2fc53cf19-config-data-custom\") pod \"barbican-keystone-listener-b5f886498-f6c5n\" (UID: \"2c982f75-f27c-4915-a75d-07f2fc53cf19\") " pod="openstack/barbican-keystone-listener-b5f886498-f6c5n" Jan 26 13:00:09 crc kubenswrapper[4881]: I0126 13:00:09.995889 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c982f75-f27c-4915-a75d-07f2fc53cf19-combined-ca-bundle\") pod \"barbican-keystone-listener-b5f886498-f6c5n\" (UID: \"2c982f75-f27c-4915-a75d-07f2fc53cf19\") " pod="openstack/barbican-keystone-listener-b5f886498-f6c5n" Jan 26 13:00:09 crc kubenswrapper[4881]: I0126 13:00:09.995969 4881 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/barbican-api-75d7497f7d-ksr89"] Jan 26 13:00:10 crc kubenswrapper[4881]: I0126 13:00:10.019231 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-75d7497f7d-ksr89" Jan 26 13:00:10 crc kubenswrapper[4881]: I0126 13:00:10.019119 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-75d7497f7d-ksr89"] Jan 26 13:00:10 crc kubenswrapper[4881]: I0126 13:00:10.021268 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Jan 26 13:00:10 crc kubenswrapper[4881]: I0126 13:00:10.043015 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/275be453-0a36-451c-8a70-714b958c9625-combined-ca-bundle\") pod \"barbican-worker-794958b545-pcbpb\" (UID: \"275be453-0a36-451c-8a70-714b958c9625\") " pod="openstack/barbican-worker-794958b545-pcbpb" Jan 26 13:00:10 crc kubenswrapper[4881]: I0126 13:00:10.043065 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rntfh\" (UniqueName: \"kubernetes.io/projected/275be453-0a36-451c-8a70-714b958c9625-kube-api-access-rntfh\") pod \"barbican-worker-794958b545-pcbpb\" (UID: \"275be453-0a36-451c-8a70-714b958c9625\") " pod="openstack/barbican-worker-794958b545-pcbpb" Jan 26 13:00:10 crc kubenswrapper[4881]: I0126 13:00:10.043139 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/580e9345-d40f-4136-a7b3-9d5dc370cea7-dns-svc\") pod \"dnsmasq-dns-6dc855bcb7-bhvs2\" (UID: \"580e9345-d40f-4136-a7b3-9d5dc370cea7\") " pod="openstack/dnsmasq-dns-6dc855bcb7-bhvs2" Jan 26 13:00:10 crc kubenswrapper[4881]: I0126 13:00:10.043188 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/275be453-0a36-451c-8a70-714b958c9625-logs\") pod \"barbican-worker-794958b545-pcbpb\" (UID: \"275be453-0a36-451c-8a70-714b958c9625\") " pod="openstack/barbican-worker-794958b545-pcbpb" Jan 26 13:00:10 crc kubenswrapper[4881]: I0126 13:00:10.043220 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/580e9345-d40f-4136-a7b3-9d5dc370cea7-dns-swift-storage-0\") pod \"dnsmasq-dns-6dc855bcb7-bhvs2\" (UID: \"580e9345-d40f-4136-a7b3-9d5dc370cea7\") " pod="openstack/dnsmasq-dns-6dc855bcb7-bhvs2" Jan 26 13:00:10 crc kubenswrapper[4881]: I0126 13:00:10.043242 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/580e9345-d40f-4136-a7b3-9d5dc370cea7-ovsdbserver-sb\") pod \"dnsmasq-dns-6dc855bcb7-bhvs2\" (UID: \"580e9345-d40f-4136-a7b3-9d5dc370cea7\") " pod="openstack/dnsmasq-dns-6dc855bcb7-bhvs2" Jan 26 13:00:10 crc kubenswrapper[4881]: I0126 13:00:10.043260 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/580e9345-d40f-4136-a7b3-9d5dc370cea7-config\") pod \"dnsmasq-dns-6dc855bcb7-bhvs2\" (UID: \"580e9345-d40f-4136-a7b3-9d5dc370cea7\") " pod="openstack/dnsmasq-dns-6dc855bcb7-bhvs2" Jan 26 13:00:10 crc kubenswrapper[4881]: I0126 13:00:10.043286 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-9cwq6\" (UniqueName: \"kubernetes.io/projected/580e9345-d40f-4136-a7b3-9d5dc370cea7-kube-api-access-9cwq6\") pod \"dnsmasq-dns-6dc855bcb7-bhvs2\" (UID: \"580e9345-d40f-4136-a7b3-9d5dc370cea7\") " pod="openstack/dnsmasq-dns-6dc855bcb7-bhvs2" Jan 26 13:00:10 crc kubenswrapper[4881]: I0126 13:00:10.043318 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/275be453-0a36-451c-8a70-714b958c9625-config-data-custom\") pod \"barbican-worker-794958b545-pcbpb\" (UID: \"275be453-0a36-451c-8a70-714b958c9625\") " pod="openstack/barbican-worker-794958b545-pcbpb" Jan 26 13:00:10 crc kubenswrapper[4881]: I0126 13:00:10.043341 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/580e9345-d40f-4136-a7b3-9d5dc370cea7-ovsdbserver-nb\") pod \"dnsmasq-dns-6dc855bcb7-bhvs2\" (UID: \"580e9345-d40f-4136-a7b3-9d5dc370cea7\") " pod="openstack/dnsmasq-dns-6dc855bcb7-bhvs2" Jan 26 13:00:10 crc kubenswrapper[4881]: I0126 13:00:10.043369 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/275be453-0a36-451c-8a70-714b958c9625-config-data\") pod \"barbican-worker-794958b545-pcbpb\" (UID: \"275be453-0a36-451c-8a70-714b958c9625\") " pod="openstack/barbican-worker-794958b545-pcbpb" Jan 26 13:00:10 crc kubenswrapper[4881]: I0126 13:00:10.043956 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/275be453-0a36-451c-8a70-714b958c9625-logs\") pod \"barbican-worker-794958b545-pcbpb\" (UID: \"275be453-0a36-451c-8a70-714b958c9625\") " pod="openstack/barbican-worker-794958b545-pcbpb" Jan 26 13:00:10 crc kubenswrapper[4881]: I0126 13:00:10.060879 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/275be453-0a36-451c-8a70-714b958c9625-combined-ca-bundle\") pod \"barbican-worker-794958b545-pcbpb\" (UID: \"275be453-0a36-451c-8a70-714b958c9625\") " pod="openstack/barbican-worker-794958b545-pcbpb" Jan 26 13:00:10 crc kubenswrapper[4881]: I0126 13:00:10.062506 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/275be453-0a36-451c-8a70-714b958c9625-config-data\") pod \"barbican-worker-794958b545-pcbpb\" (UID: \"275be453-0a36-451c-8a70-714b958c9625\") " pod="openstack/barbican-worker-794958b545-pcbpb" Jan 26 13:00:10 crc kubenswrapper[4881]: I0126 13:00:10.064669 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/275be453-0a36-451c-8a70-714b958c9625-config-data-custom\") pod \"barbican-worker-794958b545-pcbpb\" (UID: \"275be453-0a36-451c-8a70-714b958c9625\") " pod="openstack/barbican-worker-794958b545-pcbpb" Jan 26 13:00:10 crc kubenswrapper[4881]: I0126 13:00:10.078484 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rntfh\" (UniqueName: \"kubernetes.io/projected/275be453-0a36-451c-8a70-714b958c9625-kube-api-access-rntfh\") pod \"barbican-worker-794958b545-pcbpb\" (UID: \"275be453-0a36-451c-8a70-714b958c9625\") " pod="openstack/barbican-worker-794958b545-pcbpb" Jan 26 13:00:10 crc kubenswrapper[4881]: I0126 13:00:10.135246 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-b5f886498-f6c5n" Jan 26 13:00:10 crc kubenswrapper[4881]: I0126 13:00:10.139624 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-7b9f5df6bf-2dqqf" Jan 26 13:00:10 crc kubenswrapper[4881]: I0126 13:00:10.144191 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/580e9345-d40f-4136-a7b3-9d5dc370cea7-dns-svc\") pod \"dnsmasq-dns-6dc855bcb7-bhvs2\" (UID: \"580e9345-d40f-4136-a7b3-9d5dc370cea7\") " pod="openstack/dnsmasq-dns-6dc855bcb7-bhvs2" Jan 26 13:00:10 crc kubenswrapper[4881]: I0126 13:00:10.144267 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/748ff944-3c37-434b-8ce3-6cff7e627ea0-combined-ca-bundle\") pod \"barbican-api-75d7497f7d-ksr89\" (UID: \"748ff944-3c37-434b-8ce3-6cff7e627ea0\") " pod="openstack/barbican-api-75d7497f7d-ksr89" Jan 26 13:00:10 crc kubenswrapper[4881]: I0126 13:00:10.144300 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/580e9345-d40f-4136-a7b3-9d5dc370cea7-dns-swift-storage-0\") pod \"dnsmasq-dns-6dc855bcb7-bhvs2\" (UID: \"580e9345-d40f-4136-a7b3-9d5dc370cea7\") " pod="openstack/dnsmasq-dns-6dc855bcb7-bhvs2" Jan 26 13:00:10 crc kubenswrapper[4881]: I0126 13:00:10.144328 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/580e9345-d40f-4136-a7b3-9d5dc370cea7-ovsdbserver-sb\") pod \"dnsmasq-dns-6dc855bcb7-bhvs2\" (UID: \"580e9345-d40f-4136-a7b3-9d5dc370cea7\") " pod="openstack/dnsmasq-dns-6dc855bcb7-bhvs2" Jan 26 13:00:10 crc kubenswrapper[4881]: I0126 13:00:10.144347 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/580e9345-d40f-4136-a7b3-9d5dc370cea7-config\") pod \"dnsmasq-dns-6dc855bcb7-bhvs2\" (UID: \"580e9345-d40f-4136-a7b3-9d5dc370cea7\") " pod="openstack/dnsmasq-dns-6dc855bcb7-bhvs2" Jan 26 13:00:10 crc kubenswrapper[4881]: I0126 13:00:10.144370 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9cwq6\" (UniqueName: \"kubernetes.io/projected/580e9345-d40f-4136-a7b3-9d5dc370cea7-kube-api-access-9cwq6\") pod \"dnsmasq-dns-6dc855bcb7-bhvs2\" (UID: \"580e9345-d40f-4136-a7b3-9d5dc370cea7\") " pod="openstack/dnsmasq-dns-6dc855bcb7-bhvs2" Jan 26 13:00:10 crc kubenswrapper[4881]: I0126 13:00:10.144394 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/748ff944-3c37-434b-8ce3-6cff7e627ea0-config-data-custom\") pod \"barbican-api-75d7497f7d-ksr89\" (UID: \"748ff944-3c37-434b-8ce3-6cff7e627ea0\") " pod="openstack/barbican-api-75d7497f7d-ksr89" Jan 26 13:00:10 crc kubenswrapper[4881]: I0126 13:00:10.144423 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/580e9345-d40f-4136-a7b3-9d5dc370cea7-ovsdbserver-nb\") pod \"dnsmasq-dns-6dc855bcb7-bhvs2\" (UID: \"580e9345-d40f-4136-a7b3-9d5dc370cea7\") " pod="openstack/dnsmasq-dns-6dc855bcb7-bhvs2" Jan 26 13:00:10 crc kubenswrapper[4881]: I0126 13:00:10.144439 4881 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/748ff944-3c37-434b-8ce3-6cff7e627ea0-config-data\") pod \"barbican-api-75d7497f7d-ksr89\" (UID: \"748ff944-3c37-434b-8ce3-6cff7e627ea0\") " pod="openstack/barbican-api-75d7497f7d-ksr89" Jan 26 13:00:10 crc kubenswrapper[4881]: I0126 13:00:10.144458 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/748ff944-3c37-434b-8ce3-6cff7e627ea0-logs\") pod \"barbican-api-75d7497f7d-ksr89\" (UID: \"748ff944-3c37-434b-8ce3-6cff7e627ea0\") " pod="openstack/barbican-api-75d7497f7d-ksr89" Jan 26 13:00:10 crc kubenswrapper[4881]: I0126 13:00:10.144491 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcgfv\" (UniqueName: \"kubernetes.io/projected/748ff944-3c37-434b-8ce3-6cff7e627ea0-kube-api-access-fcgfv\") pod \"barbican-api-75d7497f7d-ksr89\" (UID: \"748ff944-3c37-434b-8ce3-6cff7e627ea0\") " pod="openstack/barbican-api-75d7497f7d-ksr89" Jan 26 13:00:10 crc kubenswrapper[4881]: I0126 13:00:10.145322 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/580e9345-d40f-4136-a7b3-9d5dc370cea7-dns-svc\") pod \"dnsmasq-dns-6dc855bcb7-bhvs2\" (UID: \"580e9345-d40f-4136-a7b3-9d5dc370cea7\") " pod="openstack/dnsmasq-dns-6dc855bcb7-bhvs2" Jan 26 13:00:10 crc kubenswrapper[4881]: I0126 13:00:10.145867 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/580e9345-d40f-4136-a7b3-9d5dc370cea7-dns-swift-storage-0\") pod \"dnsmasq-dns-6dc855bcb7-bhvs2\" (UID: \"580e9345-d40f-4136-a7b3-9d5dc370cea7\") " pod="openstack/dnsmasq-dns-6dc855bcb7-bhvs2" Jan 26 13:00:10 crc kubenswrapper[4881]: I0126 13:00:10.146403 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/580e9345-d40f-4136-a7b3-9d5dc370cea7-ovsdbserver-sb\") pod \"dnsmasq-dns-6dc855bcb7-bhvs2\" (UID: \"580e9345-d40f-4136-a7b3-9d5dc370cea7\") " pod="openstack/dnsmasq-dns-6dc855bcb7-bhvs2" Jan 26 13:00:10 crc kubenswrapper[4881]: I0126 13:00:10.146920 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/580e9345-d40f-4136-a7b3-9d5dc370cea7-config\") pod \"dnsmasq-dns-6dc855bcb7-bhvs2\" (UID: \"580e9345-d40f-4136-a7b3-9d5dc370cea7\") " pod="openstack/dnsmasq-dns-6dc855bcb7-bhvs2" Jan 26 13:00:10 crc kubenswrapper[4881]: I0126 13:00:10.148572 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/580e9345-d40f-4136-a7b3-9d5dc370cea7-ovsdbserver-nb\") pod \"dnsmasq-dns-6dc855bcb7-bhvs2\" (UID: \"580e9345-d40f-4136-a7b3-9d5dc370cea7\") " pod="openstack/dnsmasq-dns-6dc855bcb7-bhvs2" Jan 26 13:00:10 crc kubenswrapper[4881]: I0126 13:00:10.156910 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-57cc97484d-kbk4q" Jan 26 13:00:10 crc kubenswrapper[4881]: I0126 13:00:10.163381 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9cwq6\" (UniqueName: \"kubernetes.io/projected/580e9345-d40f-4136-a7b3-9d5dc370cea7-kube-api-access-9cwq6\") pod \"dnsmasq-dns-6dc855bcb7-bhvs2\" (UID: \"580e9345-d40f-4136-a7b3-9d5dc370cea7\") " pod="openstack/dnsmasq-dns-6dc855bcb7-bhvs2" Jan 26 13:00:10 crc kubenswrapper[4881]: I0126 13:00:10.250753 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/748ff944-3c37-434b-8ce3-6cff7e627ea0-config-data-custom\") pod \"barbican-api-75d7497f7d-ksr89\" (UID: \"748ff944-3c37-434b-8ce3-6cff7e627ea0\") " pod="openstack/barbican-api-75d7497f7d-ksr89" Jan 26 13:00:10 crc kubenswrapper[4881]: I0126 13:00:10.250805 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/748ff944-3c37-434b-8ce3-6cff7e627ea0-config-data\") pod \"barbican-api-75d7497f7d-ksr89\" (UID: \"748ff944-3c37-434b-8ce3-6cff7e627ea0\") " pod="openstack/barbican-api-75d7497f7d-ksr89" Jan 26 13:00:10 crc kubenswrapper[4881]: I0126 13:00:10.250832 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/748ff944-3c37-434b-8ce3-6cff7e627ea0-logs\") pod \"barbican-api-75d7497f7d-ksr89\" (UID: \"748ff944-3c37-434b-8ce3-6cff7e627ea0\") " pod="openstack/barbican-api-75d7497f7d-ksr89" Jan 26 13:00:10 crc kubenswrapper[4881]: I0126 13:00:10.250868 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fcgfv\" (UniqueName: \"kubernetes.io/projected/748ff944-3c37-434b-8ce3-6cff7e627ea0-kube-api-access-fcgfv\") pod \"barbican-api-75d7497f7d-ksr89\" (UID: \"748ff944-3c37-434b-8ce3-6cff7e627ea0\") " pod="openstack/barbican-api-75d7497f7d-ksr89" Jan 26 13:00:10 crc kubenswrapper[4881]: I0126 13:00:10.251626 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/748ff944-3c37-434b-8ce3-6cff7e627ea0-combined-ca-bundle\") pod \"barbican-api-75d7497f7d-ksr89\" (UID: \"748ff944-3c37-434b-8ce3-6cff7e627ea0\") " pod="openstack/barbican-api-75d7497f7d-ksr89" Jan 26 13:00:10 crc kubenswrapper[4881]: I0126 13:00:10.251972 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/748ff944-3c37-434b-8ce3-6cff7e627ea0-logs\") pod \"barbican-api-75d7497f7d-ksr89\" (UID: \"748ff944-3c37-434b-8ce3-6cff7e627ea0\") " pod="openstack/barbican-api-75d7497f7d-ksr89" Jan 26 13:00:10 crc kubenswrapper[4881]: I0126 13:00:10.262142 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/748ff944-3c37-434b-8ce3-6cff7e627ea0-config-data-custom\") pod \"barbican-api-75d7497f7d-ksr89\" (UID: \"748ff944-3c37-434b-8ce3-6cff7e627ea0\") " pod="openstack/barbican-api-75d7497f7d-ksr89" Jan 26 13:00:10 crc kubenswrapper[4881]: I0126 13:00:10.270731 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/748ff944-3c37-434b-8ce3-6cff7e627ea0-config-data\") pod \"barbican-api-75d7497f7d-ksr89\" (UID: \"748ff944-3c37-434b-8ce3-6cff7e627ea0\") " pod="openstack/barbican-api-75d7497f7d-ksr89" 
Jan 26 13:00:10 crc kubenswrapper[4881]: I0126 13:00:10.271610 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/748ff944-3c37-434b-8ce3-6cff7e627ea0-combined-ca-bundle\") pod \"barbican-api-75d7497f7d-ksr89\" (UID: \"748ff944-3c37-434b-8ce3-6cff7e627ea0\") " pod="openstack/barbican-api-75d7497f7d-ksr89"
Jan 26 13:00:10 crc kubenswrapper[4881]: I0126 13:00:10.277877 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fcgfv\" (UniqueName: \"kubernetes.io/projected/748ff944-3c37-434b-8ce3-6cff7e627ea0-kube-api-access-fcgfv\") pod \"barbican-api-75d7497f7d-ksr89\" (UID: \"748ff944-3c37-434b-8ce3-6cff7e627ea0\") " pod="openstack/barbican-api-75d7497f7d-ksr89"
Jan 26 13:00:10 crc kubenswrapper[4881]: I0126 13:00:10.310476 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-794958b545-pcbpb"
Jan 26 13:00:10 crc kubenswrapper[4881]: I0126 13:00:10.317567 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6dc855bcb7-bhvs2"
Jan 26 13:00:10 crc kubenswrapper[4881]: I0126 13:00:10.343853 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-75d7497f7d-ksr89"
Jan 26 13:00:10 crc kubenswrapper[4881]: I0126 13:00:10.389157 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-54766b76bb-mkjc2"]
Jan 26 13:00:10 crc kubenswrapper[4881]: W0126 13:00:10.493116 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod34699df9_2dd8_4eee_9f19_e5af28cfa84d.slice/crio-e01768502a7573c73f0eb92fe53f9fbc86e8e2e38aac27cef7d5567cd2faa63b WatchSource:0}: Error finding container e01768502a7573c73f0eb92fe53f9fbc86e8e2e38aac27cef7d5567cd2faa63b: Status 404 returned error can't find the container with id e01768502a7573c73f0eb92fe53f9fbc86e8e2e38aac27cef7d5567cd2faa63b
Jan 26 13:00:10 crc kubenswrapper[4881]: I0126 13:00:10.656491 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-76cf66855-bgjld"]
Jan 26 13:00:10 crc kubenswrapper[4881]: E0126 13:00:10.987284 4881 kubelet_node_status.go:756] "Failed to set some node status fields" err="failed to validate nodeIP: route ip+net: no such network interface" node="crc"
Jan 26 13:00:11 crc kubenswrapper[4881]: I0126 13:00:11.303617 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490540-xpj9f"
Jan 26 13:00:11 crc kubenswrapper[4881]: I0126 13:00:11.324161 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-57cc97484d-kbk4q"]
Jan 26 13:00:11 crc kubenswrapper[4881]: I0126 13:00:11.357194 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490540-xpj9f" event={"ID":"6d409655-cc9c-41d5-81b5-c93d256f63a7","Type":"ContainerDied","Data":"1ddf1d6cc3d023ce155b57e6b26d48d4eac0672dda0e1c94853a0d3e0afb3803"}
Jan 26 13:00:11 crc kubenswrapper[4881]: I0126 13:00:11.357480 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1ddf1d6cc3d023ce155b57e6b26d48d4eac0672dda0e1c94853a0d3e0afb3803"
Jan 26 13:00:11 crc kubenswrapper[4881]: I0126 13:00:11.357550 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490540-xpj9f"
Jan 26 13:00:11 crc kubenswrapper[4881]: I0126 13:00:11.387234 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-76cf66855-bgjld" event={"ID":"bda630e6-c611-4029-9a8a-b347189d2fab","Type":"ContainerStarted","Data":"52db7881066929bea36d92c7ee63f6ed5aad0197e27e210e6cc49a2d631a770a"}
Jan 26 13:00:11 crc kubenswrapper[4881]: I0126 13:00:11.388365 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k89v8\" (UniqueName: \"kubernetes.io/projected/6d409655-cc9c-41d5-81b5-c93d256f63a7-kube-api-access-k89v8\") pod \"6d409655-cc9c-41d5-81b5-c93d256f63a7\" (UID: \"6d409655-cc9c-41d5-81b5-c93d256f63a7\") "
Jan 26 13:00:11 crc kubenswrapper[4881]: I0126 13:00:11.388500 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6d409655-cc9c-41d5-81b5-c93d256f63a7-secret-volume\") pod \"6d409655-cc9c-41d5-81b5-c93d256f63a7\" (UID: \"6d409655-cc9c-41d5-81b5-c93d256f63a7\") "
Jan 26 13:00:11 crc kubenswrapper[4881]: I0126 13:00:11.393066 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6d409655-cc9c-41d5-81b5-c93d256f63a7-config-volume\") pod \"6d409655-cc9c-41d5-81b5-c93d256f63a7\" (UID: \"6d409655-cc9c-41d5-81b5-c93d256f63a7\") "
Jan 26 13:00:11 crc kubenswrapper[4881]: I0126 13:00:11.394478 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d409655-cc9c-41d5-81b5-c93d256f63a7-config-volume" (OuterVolumeSpecName: "config-volume") pod "6d409655-cc9c-41d5-81b5-c93d256f63a7" (UID: "6d409655-cc9c-41d5-81b5-c93d256f63a7"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 13:00:11 crc kubenswrapper[4881]: I0126 13:00:11.395677 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d409655-cc9c-41d5-81b5-c93d256f63a7-kube-api-access-k89v8" (OuterVolumeSpecName: "kube-api-access-k89v8") pod "6d409655-cc9c-41d5-81b5-c93d256f63a7" (UID: "6d409655-cc9c-41d5-81b5-c93d256f63a7"). InnerVolumeSpecName "kube-api-access-k89v8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 13:00:11 crc kubenswrapper[4881]: I0126 13:00:11.400625 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d409655-cc9c-41d5-81b5-c93d256f63a7-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "6d409655-cc9c-41d5-81b5-c93d256f63a7" (UID: "6d409655-cc9c-41d5-81b5-c93d256f63a7"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 13:00:11 crc kubenswrapper[4881]: I0126 13:00:11.408980 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-54766b76bb-mkjc2" event={"ID":"34699df9-2dd8-4eee-9f19-e5af28cfa84d","Type":"ContainerStarted","Data":"e01768502a7573c73f0eb92fe53f9fbc86e8e2e38aac27cef7d5567cd2faa63b"}
Jan 26 13:00:11 crc kubenswrapper[4881]: I0126 13:00:11.508211 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k89v8\" (UniqueName: \"kubernetes.io/projected/6d409655-cc9c-41d5-81b5-c93d256f63a7-kube-api-access-k89v8\") on node \"crc\" DevicePath \"\""
Jan 26 13:00:11 crc kubenswrapper[4881]: I0126 13:00:11.508236 4881 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6d409655-cc9c-41d5-81b5-c93d256f63a7-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 26 13:00:11 crc kubenswrapper[4881]: I0126 13:00:11.508245 4881 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6d409655-cc9c-41d5-81b5-c93d256f63a7-config-volume\") on node \"crc\" DevicePath \"\""
Jan 26 13:00:11 crc kubenswrapper[4881]: I0126 13:00:11.721418 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-75d7497f7d-ksr89"]
Jan 26 13:00:11 crc kubenswrapper[4881]: I0126 13:00:11.744565 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-b5f886498-f6c5n"]
Jan 26 13:00:11 crc kubenswrapper[4881]: I0126 13:00:11.761633 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-7b9f5df6bf-2dqqf"]
Jan 26 13:00:11 crc kubenswrapper[4881]: I0126 13:00:11.795136 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6dc855bcb7-bhvs2"]
Jan 26 13:00:11 crc kubenswrapper[4881]: W0126 13:00:11.806822 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod580e9345_d40f_4136_a7b3_9d5dc370cea7.slice/crio-89aaad3d5fe744ce77573e07232705391948150ad5dc14b08d432f9599af7fff WatchSource:0}: Error finding container 89aaad3d5fe744ce77573e07232705391948150ad5dc14b08d432f9599af7fff: Status 404 returned error can't find the container with id 89aaad3d5fe744ce77573e07232705391948150ad5dc14b08d432f9599af7fff
Jan 26 13:00:11 crc kubenswrapper[4881]: I0126 13:00:11.838824 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-794958b545-pcbpb"]
Jan 26 13:00:12 crc kubenswrapper[4881]: I0126 13:00:12.429120 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7b9f5df6bf-2dqqf" event={"ID":"256bb8f2-ab6b-4904-bfda-793cd640e966","Type":"ContainerStarted","Data":"80ef46edf778391f784bce86ee073f10a5e661f08ea3dc2209b973b1e7782427"}
Jan 26 13:00:12 crc kubenswrapper[4881]: I0126 13:00:12.431049 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-b5f886498-f6c5n" event={"ID":"2c982f75-f27c-4915-a75d-07f2fc53cf19","Type":"ContainerStarted","Data":"d05940e7d28404975e2921015919a26fc28f6d624ba7dd6b8e159d0dcda70556"}
Jan 26 13:00:12 crc kubenswrapper[4881]: I0126 13:00:12.434081 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-76cf66855-bgjld" event={"ID":"bda630e6-c611-4029-9a8a-b347189d2fab","Type":"ContainerStarted","Data":"23850fc9e5ba57f9821b7168e493c50e758daea18988996876c3bfbdf0d38f4f"}
Jan 26 13:00:12 crc kubenswrapper[4881]: I0126 13:00:12.434570 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-76cf66855-bgjld"
Jan 26 13:00:12 crc kubenswrapper[4881]: I0126 13:00:12.437917 4881 generic.go:334] "Generic (PLEG): container finished" podID="580e9345-d40f-4136-a7b3-9d5dc370cea7" containerID="50ca60c947435e94c14f8b5967447038192cbc61fb3fe4b420c7c4a1b8f16964" exitCode=0
Jan 26 13:00:12 crc kubenswrapper[4881]: I0126 13:00:12.438841 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6dc855bcb7-bhvs2" event={"ID":"580e9345-d40f-4136-a7b3-9d5dc370cea7","Type":"ContainerDied","Data":"50ca60c947435e94c14f8b5967447038192cbc61fb3fe4b420c7c4a1b8f16964"}
Jan 26 13:00:12 crc kubenswrapper[4881]: I0126 13:00:12.438864 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6dc855bcb7-bhvs2" event={"ID":"580e9345-d40f-4136-a7b3-9d5dc370cea7","Type":"ContainerStarted","Data":"89aaad3d5fe744ce77573e07232705391948150ad5dc14b08d432f9599af7fff"}
Jan 26 13:00:12 crc kubenswrapper[4881]: I0126 13:00:12.439968 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-57cc97484d-kbk4q" event={"ID":"c2c1ff4a-7d5a-461f-8e63-67288e6a68e5","Type":"ContainerStarted","Data":"0e7b7a1fe99ebac0ec03e1907fcee25d845c0512ef39966b85cddab6119832ac"}
Jan 26 13:00:12 crc kubenswrapper[4881]: I0126 13:00:12.445167 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-794958b545-pcbpb" event={"ID":"275be453-0a36-451c-8a70-714b958c9625","Type":"ContainerStarted","Data":"93a6354a0affb1eb5c9f325d5ff915807165e68abfbab6152302d4f77b24a9c6"}
Jan 26 13:00:12 crc kubenswrapper[4881]: I0126 13:00:12.455074 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-76cf66855-bgjld" podStartSLOduration=3.455062099 podStartE2EDuration="3.455062099s" podCreationTimestamp="2026-01-26 13:00:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 13:00:12.452593028 +0000 UTC m=+1484.931903074" watchObservedRunningTime="2026-01-26 13:00:12.455062099 +0000 UTC m=+1484.934372125"
Jan 26 13:00:12 crc kubenswrapper[4881]: I0126 13:00:12.468405 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-54766b76bb-mkjc2" event={"ID":"34699df9-2dd8-4eee-9f19-e5af28cfa84d","Type":"ContainerStarted","Data":"c6783da1225dea5401d91a0c0ea92bdba6d6d65c8bcd1270020b70e92a5fe804"}
Jan 26 13:00:12 crc kubenswrapper[4881]: I0126 13:00:12.468645 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-54766b76bb-mkjc2" event={"ID":"34699df9-2dd8-4eee-9f19-e5af28cfa84d","Type":"ContainerStarted","Data":"351317c4db3847475e9bce8133ff151ee3565c3a079f88d78a763ed86e03cd1a"}
Jan 26 13:00:12 crc kubenswrapper[4881]: I0126 13:00:12.468736 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-54766b76bb-mkjc2"
Jan 26 13:00:12 crc kubenswrapper[4881]: I0126 13:00:12.471380 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-75d7497f7d-ksr89" event={"ID":"748ff944-3c37-434b-8ce3-6cff7e627ea0","Type":"ContainerStarted","Data":"17ea56bfaf9eee4f0c46a3aac998e2f0028dfd076af4019d7427646f1e00eb45"}
Jan 26 13:00:12 crc kubenswrapper[4881]: I0126 13:00:12.473550 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-75d7497f7d-ksr89"
Jan 26 13:00:12 crc kubenswrapper[4881]: I0126 13:00:12.473573 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-75d7497f7d-ksr89"
Jan 26 13:00:12 crc kubenswrapper[4881]: I0126 13:00:12.473583 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-75d7497f7d-ksr89" event={"ID":"748ff944-3c37-434b-8ce3-6cff7e627ea0","Type":"ContainerStarted","Data":"24f9334e9a81034c438de52da23d1d45b8e1eca242201060c3f3699815ecd6d2"}
Jan 26 13:00:12 crc kubenswrapper[4881]: I0126 13:00:12.473598 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-75d7497f7d-ksr89" event={"ID":"748ff944-3c37-434b-8ce3-6cff7e627ea0","Type":"ContainerStarted","Data":"a565aa7425ac9eabb2ed694239f00364f4b62097a58afb17231d592be47671e4"}
Jan 26 13:00:12 crc kubenswrapper[4881]: I0126 13:00:12.519679 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-54766b76bb-mkjc2" podStartSLOduration=3.519659764 podStartE2EDuration="3.519659764s" podCreationTimestamp="2026-01-26 13:00:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 13:00:12.509593269 +0000 UTC m=+1484.988903295" watchObservedRunningTime="2026-01-26 13:00:12.519659764 +0000 UTC m=+1484.998969790"
Jan 26 13:00:12 crc kubenswrapper[4881]: I0126 13:00:12.552430 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-75d7497f7d-ksr89" podStartSLOduration=3.5524130940000003 podStartE2EDuration="3.552413094s" podCreationTimestamp="2026-01-26 13:00:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 13:00:12.538843132 +0000 UTC m=+1485.018153158" watchObservedRunningTime="2026-01-26 13:00:12.552413094 +0000 UTC m=+1485.031723120"
Jan 26 13:00:12 crc kubenswrapper[4881]: I0126 13:00:12.916843 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-5b8956874b-lrmsj"]
Jan 26 13:00:12 crc kubenswrapper[4881]: E0126 13:00:12.917594 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d409655-cc9c-41d5-81b5-c93d256f63a7" containerName="collect-profiles"
Jan 26 13:00:12 crc kubenswrapper[4881]: I0126 13:00:12.917667 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d409655-cc9c-41d5-81b5-c93d256f63a7" containerName="collect-profiles"
Jan 26 13:00:12 crc kubenswrapper[4881]: I0126 13:00:12.917887 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d409655-cc9c-41d5-81b5-c93d256f63a7" containerName="collect-profiles"
Jan 26 13:00:12 crc kubenswrapper[4881]: I0126 13:00:12.918966 4881 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/barbican-api-5b8956874b-lrmsj" Jan 26 13:00:12 crc kubenswrapper[4881]: I0126 13:00:12.924479 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Jan 26 13:00:12 crc kubenswrapper[4881]: I0126 13:00:12.925161 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Jan 26 13:00:12 crc kubenswrapper[4881]: I0126 13:00:12.935968 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5b8956874b-lrmsj"] Jan 26 13:00:12 crc kubenswrapper[4881]: I0126 13:00:12.945379 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a3ad04d4-bcff-4ed6-8648-be146e3ce20a-public-tls-certs\") pod \"barbican-api-5b8956874b-lrmsj\" (UID: \"a3ad04d4-bcff-4ed6-8648-be146e3ce20a\") " pod="openstack/barbican-api-5b8956874b-lrmsj" Jan 26 13:00:12 crc kubenswrapper[4881]: I0126 13:00:12.945468 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a3ad04d4-bcff-4ed6-8648-be146e3ce20a-internal-tls-certs\") pod \"barbican-api-5b8956874b-lrmsj\" (UID: \"a3ad04d4-bcff-4ed6-8648-be146e3ce20a\") " pod="openstack/barbican-api-5b8956874b-lrmsj" Jan 26 13:00:12 crc kubenswrapper[4881]: I0126 13:00:12.945494 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a3ad04d4-bcff-4ed6-8648-be146e3ce20a-logs\") pod \"barbican-api-5b8956874b-lrmsj\" (UID: \"a3ad04d4-bcff-4ed6-8648-be146e3ce20a\") " pod="openstack/barbican-api-5b8956874b-lrmsj" Jan 26 13:00:12 crc kubenswrapper[4881]: I0126 13:00:12.945524 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3ad04d4-bcff-4ed6-8648-be146e3ce20a-combined-ca-bundle\") pod \"barbican-api-5b8956874b-lrmsj\" (UID: \"a3ad04d4-bcff-4ed6-8648-be146e3ce20a\") " pod="openstack/barbican-api-5b8956874b-lrmsj" Jan 26 13:00:12 crc kubenswrapper[4881]: I0126 13:00:12.945587 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3ad04d4-bcff-4ed6-8648-be146e3ce20a-config-data\") pod \"barbican-api-5b8956874b-lrmsj\" (UID: \"a3ad04d4-bcff-4ed6-8648-be146e3ce20a\") " pod="openstack/barbican-api-5b8956874b-lrmsj" Jan 26 13:00:12 crc kubenswrapper[4881]: I0126 13:00:12.945605 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a3ad04d4-bcff-4ed6-8648-be146e3ce20a-config-data-custom\") pod \"barbican-api-5b8956874b-lrmsj\" (UID: \"a3ad04d4-bcff-4ed6-8648-be146e3ce20a\") " pod="openstack/barbican-api-5b8956874b-lrmsj" Jan 26 13:00:12 crc kubenswrapper[4881]: I0126 13:00:12.945637 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zphjg\" (UniqueName: \"kubernetes.io/projected/a3ad04d4-bcff-4ed6-8648-be146e3ce20a-kube-api-access-zphjg\") pod \"barbican-api-5b8956874b-lrmsj\" (UID: \"a3ad04d4-bcff-4ed6-8648-be146e3ce20a\") " pod="openstack/barbican-api-5b8956874b-lrmsj" Jan 26 13:00:13 crc kubenswrapper[4881]: I0126 13:00:13.047248 4881 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3ad04d4-bcff-4ed6-8648-be146e3ce20a-config-data\") pod \"barbican-api-5b8956874b-lrmsj\" (UID: \"a3ad04d4-bcff-4ed6-8648-be146e3ce20a\") " pod="openstack/barbican-api-5b8956874b-lrmsj" Jan 26 13:00:13 crc kubenswrapper[4881]: I0126 13:00:13.048292 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a3ad04d4-bcff-4ed6-8648-be146e3ce20a-config-data-custom\") pod \"barbican-api-5b8956874b-lrmsj\" (UID: \"a3ad04d4-bcff-4ed6-8648-be146e3ce20a\") " pod="openstack/barbican-api-5b8956874b-lrmsj" Jan 26 13:00:13 crc kubenswrapper[4881]: I0126 13:00:13.048440 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zphjg\" (UniqueName: \"kubernetes.io/projected/a3ad04d4-bcff-4ed6-8648-be146e3ce20a-kube-api-access-zphjg\") pod \"barbican-api-5b8956874b-lrmsj\" (UID: \"a3ad04d4-bcff-4ed6-8648-be146e3ce20a\") " pod="openstack/barbican-api-5b8956874b-lrmsj" Jan 26 13:00:13 crc kubenswrapper[4881]: I0126 13:00:13.048543 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a3ad04d4-bcff-4ed6-8648-be146e3ce20a-public-tls-certs\") pod \"barbican-api-5b8956874b-lrmsj\" (UID: \"a3ad04d4-bcff-4ed6-8648-be146e3ce20a\") " pod="openstack/barbican-api-5b8956874b-lrmsj" Jan 26 13:00:13 crc kubenswrapper[4881]: I0126 13:00:13.048711 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a3ad04d4-bcff-4ed6-8648-be146e3ce20a-internal-tls-certs\") pod \"barbican-api-5b8956874b-lrmsj\" (UID: \"a3ad04d4-bcff-4ed6-8648-be146e3ce20a\") " pod="openstack/barbican-api-5b8956874b-lrmsj" Jan 26 13:00:13 crc kubenswrapper[4881]: I0126 13:00:13.048749 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a3ad04d4-bcff-4ed6-8648-be146e3ce20a-logs\") pod \"barbican-api-5b8956874b-lrmsj\" (UID: \"a3ad04d4-bcff-4ed6-8648-be146e3ce20a\") " pod="openstack/barbican-api-5b8956874b-lrmsj" Jan 26 13:00:13 crc kubenswrapper[4881]: I0126 13:00:13.048776 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3ad04d4-bcff-4ed6-8648-be146e3ce20a-combined-ca-bundle\") pod \"barbican-api-5b8956874b-lrmsj\" (UID: \"a3ad04d4-bcff-4ed6-8648-be146e3ce20a\") " pod="openstack/barbican-api-5b8956874b-lrmsj" Jan 26 13:00:13 crc kubenswrapper[4881]: I0126 13:00:13.052273 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a3ad04d4-bcff-4ed6-8648-be146e3ce20a-public-tls-certs\") pod \"barbican-api-5b8956874b-lrmsj\" (UID: \"a3ad04d4-bcff-4ed6-8648-be146e3ce20a\") " pod="openstack/barbican-api-5b8956874b-lrmsj" Jan 26 13:00:13 crc kubenswrapper[4881]: I0126 13:00:13.052441 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a3ad04d4-bcff-4ed6-8648-be146e3ce20a-config-data-custom\") pod \"barbican-api-5b8956874b-lrmsj\" (UID: \"a3ad04d4-bcff-4ed6-8648-be146e3ce20a\") " pod="openstack/barbican-api-5b8956874b-lrmsj" Jan 26 13:00:13 crc kubenswrapper[4881]: I0126 13:00:13.052847 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"logs\" (UniqueName: \"kubernetes.io/empty-dir/a3ad04d4-bcff-4ed6-8648-be146e3ce20a-logs\") pod \"barbican-api-5b8956874b-lrmsj\" (UID: \"a3ad04d4-bcff-4ed6-8648-be146e3ce20a\") " pod="openstack/barbican-api-5b8956874b-lrmsj" Jan 26 13:00:13 crc kubenswrapper[4881]: I0126 13:00:13.054992 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a3ad04d4-bcff-4ed6-8648-be146e3ce20a-internal-tls-certs\") pod \"barbican-api-5b8956874b-lrmsj\" (UID: \"a3ad04d4-bcff-4ed6-8648-be146e3ce20a\") " pod="openstack/barbican-api-5b8956874b-lrmsj" Jan 26 13:00:13 crc kubenswrapper[4881]: I0126 13:00:13.055608 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3ad04d4-bcff-4ed6-8648-be146e3ce20a-config-data\") pod \"barbican-api-5b8956874b-lrmsj\" (UID: \"a3ad04d4-bcff-4ed6-8648-be146e3ce20a\") " pod="openstack/barbican-api-5b8956874b-lrmsj" Jan 26 13:00:13 crc kubenswrapper[4881]: I0126 13:00:13.074859 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3ad04d4-bcff-4ed6-8648-be146e3ce20a-combined-ca-bundle\") pod \"barbican-api-5b8956874b-lrmsj\" (UID: \"a3ad04d4-bcff-4ed6-8648-be146e3ce20a\") " pod="openstack/barbican-api-5b8956874b-lrmsj" Jan 26 13:00:13 crc kubenswrapper[4881]: I0126 13:00:13.083814 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zphjg\" (UniqueName: \"kubernetes.io/projected/a3ad04d4-bcff-4ed6-8648-be146e3ce20a-kube-api-access-zphjg\") pod \"barbican-api-5b8956874b-lrmsj\" (UID: \"a3ad04d4-bcff-4ed6-8648-be146e3ce20a\") " pod="openstack/barbican-api-5b8956874b-lrmsj" Jan 26 13:00:13 crc kubenswrapper[4881]: I0126 13:00:13.136128 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-7bf7cc86f8-h94sx" Jan 26 13:00:13 crc kubenswrapper[4881]: I0126 13:00:13.262723 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-5c67646cfd-kppgm" Jan 26 13:00:13 crc kubenswrapper[4881]: I0126 13:00:13.271206 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-5b8956874b-lrmsj" Jan 26 13:00:13 crc kubenswrapper[4881]: I0126 13:00:13.485867 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6dc855bcb7-bhvs2" event={"ID":"580e9345-d40f-4136-a7b3-9d5dc370cea7","Type":"ContainerStarted","Data":"9dfecc08f42ddbe09db9e1e13bafa58158ead8389ee932b570c14bf036ffa0b0"} Jan 26 13:00:13 crc kubenswrapper[4881]: I0126 13:00:13.486604 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-54766b76bb-mkjc2" Jan 26 13:00:13 crc kubenswrapper[4881]: I0126 13:00:13.486644 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6dc855bcb7-bhvs2" Jan 26 13:00:13 crc kubenswrapper[4881]: I0126 13:00:13.507114 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6dc855bcb7-bhvs2" podStartSLOduration=4.507090466 podStartE2EDuration="4.507090466s" podCreationTimestamp="2026-01-26 13:00:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 13:00:13.50275023 +0000 UTC m=+1485.982060256" watchObservedRunningTime="2026-01-26 13:00:13.507090466 +0000 UTC m=+1485.986400492" Jan 26 13:00:14 crc kubenswrapper[4881]: I0126 13:00:14.931202 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-7bf7cc86f8-h94sx" Jan 26 13:00:14 crc kubenswrapper[4881]: I0126 13:00:14.991472 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5c67646cfd-kppgm"] Jan 26 13:00:14 crc kubenswrapper[4881]: I0126 13:00:14.991921 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5c67646cfd-kppgm" podUID="6b0cbe35-c0c9-4483-866a-eddf1fdced26" containerName="horizon-log" containerID="cri-o://785ba06ccaceb90529fed9d4bfe616ece3f2212b557578bbd30e62ec93faabac" gracePeriod=30 Jan 26 13:00:14 crc kubenswrapper[4881]: I0126 13:00:14.992371 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5c67646cfd-kppgm" podUID="6b0cbe35-c0c9-4483-866a-eddf1fdced26" containerName="horizon" containerID="cri-o://11becbe66e90b27a1d407833f793333eb538d4d8b813396d1a62681ea0806353" gracePeriod=30 Jan 26 13:00:15 crc kubenswrapper[4881]: I0126 13:00:15.005805 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-5c67646cfd-kppgm" podUID="6b0cbe35-c0c9-4483-866a-eddf1fdced26" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.157:8443/dashboard/auth/login/?next=/dashboard/\": EOF" Jan 26 13:00:15 crc kubenswrapper[4881]: I0126 13:00:15.007615 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-5c67646cfd-kppgm" podUID="6b0cbe35-c0c9-4483-866a-eddf1fdced26" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.157:8443/dashboard/auth/login/?next=/dashboard/\": read tcp 10.217.0.2:45306->10.217.0.157:8443: read: connection reset by peer" Jan 26 13:00:16 crc kubenswrapper[4881]: I0126 13:00:16.513826 4881 generic.go:334] "Generic (PLEG): container finished" podID="6b0cbe35-c0c9-4483-866a-eddf1fdced26" containerID="11becbe66e90b27a1d407833f793333eb538d4d8b813396d1a62681ea0806353" exitCode=0 Jan 26 13:00:16 crc kubenswrapper[4881]: I0126 13:00:16.513895 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5c67646cfd-kppgm" 
event={"ID":"6b0cbe35-c0c9-4483-866a-eddf1fdced26","Type":"ContainerDied","Data":"11becbe66e90b27a1d407833f793333eb538d4d8b813396d1a62681ea0806353"} Jan 26 13:00:17 crc kubenswrapper[4881]: I0126 13:00:17.526705 4881 generic.go:334] "Generic (PLEG): container finished" podID="adf01549-e1d0-46a7-a141-bdc0f5c81458" containerID="bd0fbc078d487b6644022e19f21a266beb30fb96b50007588c33cb0891afcb86" exitCode=0 Jan 26 13:00:17 crc kubenswrapper[4881]: I0126 13:00:17.526746 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-5pg9p" event={"ID":"adf01549-e1d0-46a7-a141-bdc0f5c81458","Type":"ContainerDied","Data":"bd0fbc078d487b6644022e19f21a266beb30fb96b50007588c33cb0891afcb86"} Jan 26 13:00:19 crc kubenswrapper[4881]: I0126 13:00:19.180944 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-sync-5pg9p" Jan 26 13:00:19 crc kubenswrapper[4881]: I0126 13:00:19.279242 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/adf01549-e1d0-46a7-a141-bdc0f5c81458-config-data\") pod \"adf01549-e1d0-46a7-a141-bdc0f5c81458\" (UID: \"adf01549-e1d0-46a7-a141-bdc0f5c81458\") " Jan 26 13:00:19 crc kubenswrapper[4881]: I0126 13:00:19.279319 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/adf01549-e1d0-46a7-a141-bdc0f5c81458-combined-ca-bundle\") pod \"adf01549-e1d0-46a7-a141-bdc0f5c81458\" (UID: \"adf01549-e1d0-46a7-a141-bdc0f5c81458\") " Jan 26 13:00:19 crc kubenswrapper[4881]: I0126 13:00:19.279460 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2prlp\" (UniqueName: \"kubernetes.io/projected/adf01549-e1d0-46a7-a141-bdc0f5c81458-kube-api-access-2prlp\") pod \"adf01549-e1d0-46a7-a141-bdc0f5c81458\" (UID: \"adf01549-e1d0-46a7-a141-bdc0f5c81458\") " Jan 26 13:00:19 crc kubenswrapper[4881]: I0126 13:00:19.279596 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/adf01549-e1d0-46a7-a141-bdc0f5c81458-db-sync-config-data\") pod \"adf01549-e1d0-46a7-a141-bdc0f5c81458\" (UID: \"adf01549-e1d0-46a7-a141-bdc0f5c81458\") " Jan 26 13:00:19 crc kubenswrapper[4881]: I0126 13:00:19.285917 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/adf01549-e1d0-46a7-a141-bdc0f5c81458-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "adf01549-e1d0-46a7-a141-bdc0f5c81458" (UID: "adf01549-e1d0-46a7-a141-bdc0f5c81458"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:00:19 crc kubenswrapper[4881]: I0126 13:00:19.285942 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/adf01549-e1d0-46a7-a141-bdc0f5c81458-kube-api-access-2prlp" (OuterVolumeSpecName: "kube-api-access-2prlp") pod "adf01549-e1d0-46a7-a141-bdc0f5c81458" (UID: "adf01549-e1d0-46a7-a141-bdc0f5c81458"). InnerVolumeSpecName "kube-api-access-2prlp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:00:19 crc kubenswrapper[4881]: I0126 13:00:19.318592 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/adf01549-e1d0-46a7-a141-bdc0f5c81458-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "adf01549-e1d0-46a7-a141-bdc0f5c81458" (UID: "adf01549-e1d0-46a7-a141-bdc0f5c81458"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:00:19 crc kubenswrapper[4881]: I0126 13:00:19.328835 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/adf01549-e1d0-46a7-a141-bdc0f5c81458-config-data" (OuterVolumeSpecName: "config-data") pod "adf01549-e1d0-46a7-a141-bdc0f5c81458" (UID: "adf01549-e1d0-46a7-a141-bdc0f5c81458"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:00:19 crc kubenswrapper[4881]: I0126 13:00:19.381896 4881 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/adf01549-e1d0-46a7-a141-bdc0f5c81458-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 13:00:19 crc kubenswrapper[4881]: I0126 13:00:19.381928 4881 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/adf01549-e1d0-46a7-a141-bdc0f5c81458-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 13:00:19 crc kubenswrapper[4881]: I0126 13:00:19.381936 4881 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/adf01549-e1d0-46a7-a141-bdc0f5c81458-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 13:00:19 crc kubenswrapper[4881]: I0126 13:00:19.381946 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2prlp\" (UniqueName: \"kubernetes.io/projected/adf01549-e1d0-46a7-a141-bdc0f5c81458-kube-api-access-2prlp\") on node \"crc\" DevicePath \"\"" Jan 26 13:00:19 crc kubenswrapper[4881]: I0126 13:00:19.547632 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-5pg9p" event={"ID":"adf01549-e1d0-46a7-a141-bdc0f5c81458","Type":"ContainerDied","Data":"279d38e7433d24020184e4b6f315442fc2afbe8630e715d7e2a20de0ac3836a3"} Jan 26 13:00:19 crc kubenswrapper[4881]: I0126 13:00:19.547670 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="279d38e7433d24020184e4b6f315442fc2afbe8630e715d7e2a20de0ac3836a3" Jan 26 13:00:19 crc kubenswrapper[4881]: I0126 13:00:19.547697 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-db-sync-5pg9p" Jan 26 13:00:19 crc kubenswrapper[4881]: I0126 13:00:19.826987 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-applier-0"] Jan 26 13:00:19 crc kubenswrapper[4881]: E0126 13:00:19.827368 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adf01549-e1d0-46a7-a141-bdc0f5c81458" containerName="watcher-db-sync" Jan 26 13:00:19 crc kubenswrapper[4881]: I0126 13:00:19.827389 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="adf01549-e1d0-46a7-a141-bdc0f5c81458" containerName="watcher-db-sync" Jan 26 13:00:19 crc kubenswrapper[4881]: I0126 13:00:19.827615 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="adf01549-e1d0-46a7-a141-bdc0f5c81458" containerName="watcher-db-sync" Jan 26 13:00:19 crc kubenswrapper[4881]: I0126 13:00:19.828248 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-applier-0" Jan 26 13:00:19 crc kubenswrapper[4881]: I0126 13:00:19.832159 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-watcher-dockercfg-55qnx" Jan 26 13:00:19 crc kubenswrapper[4881]: I0126 13:00:19.832361 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-applier-config-data" Jan 26 13:00:19 crc kubenswrapper[4881]: I0126 13:00:19.844611 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"] Jan 26 13:00:19 crc kubenswrapper[4881]: I0126 13:00:19.859565 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-api-0"] Jan 26 13:00:19 crc kubenswrapper[4881]: I0126 13:00:19.861279 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Jan 26 13:00:19 crc kubenswrapper[4881]: I0126 13:00:19.861406 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Jan 26 13:00:19 crc kubenswrapper[4881]: I0126 13:00:19.864858 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-api-config-data" Jan 26 13:00:19 crc kubenswrapper[4881]: I0126 13:00:19.891311 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b363415-cc56-4505-91f6-f9700b378625-config-data\") pod \"watcher-applier-0\" (UID: \"1b363415-cc56-4505-91f6-f9700b378625\") " pod="openstack/watcher-applier-0" Jan 26 13:00:19 crc kubenswrapper[4881]: I0126 13:00:19.891373 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59wsd\" (UniqueName: \"kubernetes.io/projected/94aabe79-a699-4980-a344-43c629e34627-kube-api-access-59wsd\") pod \"watcher-api-0\" (UID: \"94aabe79-a699-4980-a344-43c629e34627\") " pod="openstack/watcher-api-0" Jan 26 13:00:19 crc kubenswrapper[4881]: I0126 13:00:19.891425 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94aabe79-a699-4980-a344-43c629e34627-config-data\") pod \"watcher-api-0\" (UID: \"94aabe79-a699-4980-a344-43c629e34627\") " pod="openstack/watcher-api-0" Jan 26 13:00:19 crc kubenswrapper[4881]: I0126 13:00:19.891456 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b363415-cc56-4505-91f6-f9700b378625-combined-ca-bundle\") pod 
\"watcher-applier-0\" (UID: \"1b363415-cc56-4505-91f6-f9700b378625\") " pod="openstack/watcher-applier-0" Jan 26 13:00:19 crc kubenswrapper[4881]: I0126 13:00:19.891471 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hr7xt\" (UniqueName: \"kubernetes.io/projected/1b363415-cc56-4505-91f6-f9700b378625-kube-api-access-hr7xt\") pod \"watcher-applier-0\" (UID: \"1b363415-cc56-4505-91f6-f9700b378625\") " pod="openstack/watcher-applier-0" Jan 26 13:00:19 crc kubenswrapper[4881]: I0126 13:00:19.891506 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1b363415-cc56-4505-91f6-f9700b378625-logs\") pod \"watcher-applier-0\" (UID: \"1b363415-cc56-4505-91f6-f9700b378625\") " pod="openstack/watcher-applier-0" Jan 26 13:00:19 crc kubenswrapper[4881]: I0126 13:00:19.891554 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/94aabe79-a699-4980-a344-43c629e34627-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"94aabe79-a699-4980-a344-43c629e34627\") " pod="openstack/watcher-api-0" Jan 26 13:00:19 crc kubenswrapper[4881]: I0126 13:00:19.891631 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/94aabe79-a699-4980-a344-43c629e34627-logs\") pod \"watcher-api-0\" (UID: \"94aabe79-a699-4980-a344-43c629e34627\") " pod="openstack/watcher-api-0" Jan 26 13:00:19 crc kubenswrapper[4881]: I0126 13:00:19.891679 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94aabe79-a699-4980-a344-43c629e34627-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"94aabe79-a699-4980-a344-43c629e34627\") " pod="openstack/watcher-api-0" Jan 26 13:00:19 crc kubenswrapper[4881]: I0126 13:00:19.898980 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 26 13:00:19 crc kubenswrapper[4881]: I0126 13:00:19.900164 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-decision-engine-0" Jan 26 13:00:19 crc kubenswrapper[4881]: I0126 13:00:19.903353 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-decision-engine-config-data" Jan 26 13:00:19 crc kubenswrapper[4881]: I0126 13:00:19.952948 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 26 13:00:19 crc kubenswrapper[4881]: I0126 13:00:19.953056 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-5c67646cfd-kppgm" podUID="6b0cbe35-c0c9-4483-866a-eddf1fdced26" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.157:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.157:8443: connect: connection refused" Jan 26 13:00:19 crc kubenswrapper[4881]: I0126 13:00:19.992907 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58m7d\" (UniqueName: \"kubernetes.io/projected/25d5f80d-b0a4-4b8f-a8b7-12f0b7296801-kube-api-access-58m7d\") pod \"watcher-decision-engine-0\" (UID: \"25d5f80d-b0a4-4b8f-a8b7-12f0b7296801\") " pod="openstack/watcher-decision-engine-0" Jan 26 13:00:19 crc kubenswrapper[4881]: I0126 13:00:19.992955 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1b363415-cc56-4505-91f6-f9700b378625-logs\") pod \"watcher-applier-0\" (UID: \"1b363415-cc56-4505-91f6-f9700b378625\") " pod="openstack/watcher-applier-0" Jan 26 13:00:19 crc kubenswrapper[4881]: I0126 13:00:19.992989 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/25d5f80d-b0a4-4b8f-a8b7-12f0b7296801-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"25d5f80d-b0a4-4b8f-a8b7-12f0b7296801\") " pod="openstack/watcher-decision-engine-0" Jan 26 13:00:19 crc kubenswrapper[4881]: I0126 13:00:19.993012 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/94aabe79-a699-4980-a344-43c629e34627-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"94aabe79-a699-4980-a344-43c629e34627\") " pod="openstack/watcher-api-0" Jan 26 13:00:19 crc kubenswrapper[4881]: I0126 13:00:19.993033 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/94aabe79-a699-4980-a344-43c629e34627-logs\") pod \"watcher-api-0\" (UID: \"94aabe79-a699-4980-a344-43c629e34627\") " pod="openstack/watcher-api-0" Jan 26 13:00:19 crc kubenswrapper[4881]: I0126 13:00:19.993047 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94aabe79-a699-4980-a344-43c629e34627-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"94aabe79-a699-4980-a344-43c629e34627\") " pod="openstack/watcher-api-0" Jan 26 13:00:19 crc kubenswrapper[4881]: I0126 13:00:19.993162 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25d5f80d-b0a4-4b8f-a8b7-12f0b7296801-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"25d5f80d-b0a4-4b8f-a8b7-12f0b7296801\") " pod="openstack/watcher-decision-engine-0" Jan 26 13:00:19 crc kubenswrapper[4881]: I0126 13:00:19.993195 4881 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b363415-cc56-4505-91f6-f9700b378625-config-data\") pod \"watcher-applier-0\" (UID: \"1b363415-cc56-4505-91f6-f9700b378625\") " pod="openstack/watcher-applier-0" Jan 26 13:00:19 crc kubenswrapper[4881]: I0126 13:00:19.993223 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-59wsd\" (UniqueName: \"kubernetes.io/projected/94aabe79-a699-4980-a344-43c629e34627-kube-api-access-59wsd\") pod \"watcher-api-0\" (UID: \"94aabe79-a699-4980-a344-43c629e34627\") " pod="openstack/watcher-api-0" Jan 26 13:00:19 crc kubenswrapper[4881]: I0126 13:00:19.993263 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/25d5f80d-b0a4-4b8f-a8b7-12f0b7296801-logs\") pod \"watcher-decision-engine-0\" (UID: \"25d5f80d-b0a4-4b8f-a8b7-12f0b7296801\") " pod="openstack/watcher-decision-engine-0" Jan 26 13:00:19 crc kubenswrapper[4881]: I0126 13:00:19.993279 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94aabe79-a699-4980-a344-43c629e34627-config-data\") pod \"watcher-api-0\" (UID: \"94aabe79-a699-4980-a344-43c629e34627\") " pod="openstack/watcher-api-0" Jan 26 13:00:19 crc kubenswrapper[4881]: I0126 13:00:19.993629 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b363415-cc56-4505-91f6-f9700b378625-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"1b363415-cc56-4505-91f6-f9700b378625\") " pod="openstack/watcher-applier-0" Jan 26 13:00:19 crc kubenswrapper[4881]: I0126 13:00:19.993660 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hr7xt\" (UniqueName: \"kubernetes.io/projected/1b363415-cc56-4505-91f6-f9700b378625-kube-api-access-hr7xt\") pod \"watcher-applier-0\" (UID: \"1b363415-cc56-4505-91f6-f9700b378625\") " pod="openstack/watcher-applier-0" Jan 26 13:00:19 crc kubenswrapper[4881]: I0126 13:00:19.993682 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25d5f80d-b0a4-4b8f-a8b7-12f0b7296801-config-data\") pod \"watcher-decision-engine-0\" (UID: \"25d5f80d-b0a4-4b8f-a8b7-12f0b7296801\") " pod="openstack/watcher-decision-engine-0" Jan 26 13:00:19 crc kubenswrapper[4881]: I0126 13:00:19.994966 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1b363415-cc56-4505-91f6-f9700b378625-logs\") pod \"watcher-applier-0\" (UID: \"1b363415-cc56-4505-91f6-f9700b378625\") " pod="openstack/watcher-applier-0" Jan 26 13:00:19 crc kubenswrapper[4881]: I0126 13:00:19.999061 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/94aabe79-a699-4980-a344-43c629e34627-logs\") pod \"watcher-api-0\" (UID: \"94aabe79-a699-4980-a344-43c629e34627\") " pod="openstack/watcher-api-0" Jan 26 13:00:20 crc kubenswrapper[4881]: I0126 13:00:20.003132 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/94aabe79-a699-4980-a344-43c629e34627-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"94aabe79-a699-4980-a344-43c629e34627\") " pod="openstack/watcher-api-0" Jan 26 
13:00:20 crc kubenswrapper[4881]: I0126 13:00:20.003346 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94aabe79-a699-4980-a344-43c629e34627-config-data\") pod \"watcher-api-0\" (UID: \"94aabe79-a699-4980-a344-43c629e34627\") " pod="openstack/watcher-api-0" Jan 26 13:00:20 crc kubenswrapper[4881]: I0126 13:00:20.003398 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b363415-cc56-4505-91f6-f9700b378625-config-data\") pod \"watcher-applier-0\" (UID: \"1b363415-cc56-4505-91f6-f9700b378625\") " pod="openstack/watcher-applier-0" Jan 26 13:00:20 crc kubenswrapper[4881]: I0126 13:00:20.009110 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b363415-cc56-4505-91f6-f9700b378625-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"1b363415-cc56-4505-91f6-f9700b378625\") " pod="openstack/watcher-applier-0" Jan 26 13:00:20 crc kubenswrapper[4881]: I0126 13:00:20.009744 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94aabe79-a699-4980-a344-43c629e34627-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"94aabe79-a699-4980-a344-43c629e34627\") " pod="openstack/watcher-api-0" Jan 26 13:00:20 crc kubenswrapper[4881]: I0126 13:00:20.020180 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-59wsd\" (UniqueName: \"kubernetes.io/projected/94aabe79-a699-4980-a344-43c629e34627-kube-api-access-59wsd\") pod \"watcher-api-0\" (UID: \"94aabe79-a699-4980-a344-43c629e34627\") " pod="openstack/watcher-api-0" Jan 26 13:00:20 crc kubenswrapper[4881]: I0126 13:00:20.038092 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hr7xt\" (UniqueName: \"kubernetes.io/projected/1b363415-cc56-4505-91f6-f9700b378625-kube-api-access-hr7xt\") pod \"watcher-applier-0\" (UID: \"1b363415-cc56-4505-91f6-f9700b378625\") " pod="openstack/watcher-applier-0" Jan 26 13:00:20 crc kubenswrapper[4881]: I0126 13:00:20.094659 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/25d5f80d-b0a4-4b8f-a8b7-12f0b7296801-logs\") pod \"watcher-decision-engine-0\" (UID: \"25d5f80d-b0a4-4b8f-a8b7-12f0b7296801\") " pod="openstack/watcher-decision-engine-0" Jan 26 13:00:20 crc kubenswrapper[4881]: I0126 13:00:20.094711 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25d5f80d-b0a4-4b8f-a8b7-12f0b7296801-config-data\") pod \"watcher-decision-engine-0\" (UID: \"25d5f80d-b0a4-4b8f-a8b7-12f0b7296801\") " pod="openstack/watcher-decision-engine-0" Jan 26 13:00:20 crc kubenswrapper[4881]: I0126 13:00:20.094745 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58m7d\" (UniqueName: \"kubernetes.io/projected/25d5f80d-b0a4-4b8f-a8b7-12f0b7296801-kube-api-access-58m7d\") pod \"watcher-decision-engine-0\" (UID: \"25d5f80d-b0a4-4b8f-a8b7-12f0b7296801\") " pod="openstack/watcher-decision-engine-0" Jan 26 13:00:20 crc kubenswrapper[4881]: I0126 13:00:20.094775 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/25d5f80d-b0a4-4b8f-a8b7-12f0b7296801-custom-prometheus-ca\") pod 
\"watcher-decision-engine-0\" (UID: \"25d5f80d-b0a4-4b8f-a8b7-12f0b7296801\") " pod="openstack/watcher-decision-engine-0" Jan 26 13:00:20 crc kubenswrapper[4881]: I0126 13:00:20.094822 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25d5f80d-b0a4-4b8f-a8b7-12f0b7296801-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"25d5f80d-b0a4-4b8f-a8b7-12f0b7296801\") " pod="openstack/watcher-decision-engine-0" Jan 26 13:00:20 crc kubenswrapper[4881]: I0126 13:00:20.095017 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/25d5f80d-b0a4-4b8f-a8b7-12f0b7296801-logs\") pod \"watcher-decision-engine-0\" (UID: \"25d5f80d-b0a4-4b8f-a8b7-12f0b7296801\") " pod="openstack/watcher-decision-engine-0" Jan 26 13:00:20 crc kubenswrapper[4881]: I0126 13:00:20.101830 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/25d5f80d-b0a4-4b8f-a8b7-12f0b7296801-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"25d5f80d-b0a4-4b8f-a8b7-12f0b7296801\") " pod="openstack/watcher-decision-engine-0" Jan 26 13:00:20 crc kubenswrapper[4881]: I0126 13:00:20.105753 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25d5f80d-b0a4-4b8f-a8b7-12f0b7296801-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"25d5f80d-b0a4-4b8f-a8b7-12f0b7296801\") " pod="openstack/watcher-decision-engine-0" Jan 26 13:00:20 crc kubenswrapper[4881]: I0126 13:00:20.109132 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25d5f80d-b0a4-4b8f-a8b7-12f0b7296801-config-data\") pod \"watcher-decision-engine-0\" (UID: \"25d5f80d-b0a4-4b8f-a8b7-12f0b7296801\") " pod="openstack/watcher-decision-engine-0" Jan 26 13:00:20 crc kubenswrapper[4881]: I0126 13:00:20.132442 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58m7d\" (UniqueName: \"kubernetes.io/projected/25d5f80d-b0a4-4b8f-a8b7-12f0b7296801-kube-api-access-58m7d\") pod \"watcher-decision-engine-0\" (UID: \"25d5f80d-b0a4-4b8f-a8b7-12f0b7296801\") " pod="openstack/watcher-decision-engine-0" Jan 26 13:00:20 crc kubenswrapper[4881]: I0126 13:00:20.204346 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-applier-0" Jan 26 13:00:20 crc kubenswrapper[4881]: I0126 13:00:20.216675 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Jan 26 13:00:20 crc kubenswrapper[4881]: I0126 13:00:20.233166 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-decision-engine-0" Jan 26 13:00:20 crc kubenswrapper[4881]: I0126 13:00:20.318704 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6dc855bcb7-bhvs2" Jan 26 13:00:20 crc kubenswrapper[4881]: I0126 13:00:20.377650 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fbb4d475f-pr4r7"] Jan 26 13:00:20 crc kubenswrapper[4881]: I0126 13:00:20.377896 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7fbb4d475f-pr4r7" podUID="75b1a7cd-6382-458b-8769-7f212bd59bf9" containerName="dnsmasq-dns" containerID="cri-o://585971640c82eeec51d4672a38687cebf4e71823e8caa145aebb5ea5be4ac0ac" gracePeriod=10 Jan 26 13:00:20 crc kubenswrapper[4881]: I0126 13:00:20.583276 4881 generic.go:334] "Generic (PLEG): container finished" podID="132298e2-a2f4-4311-9f7a-3e4e08abe34b" containerID="7c4f86fa4c3c9b13b85178d0fb4974e800c168588a5a7177759de691d923a06a" exitCode=0 Jan 26 13:00:20 crc kubenswrapper[4881]: I0126 13:00:20.583363 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-lgk7s" event={"ID":"132298e2-a2f4-4311-9f7a-3e4e08abe34b","Type":"ContainerDied","Data":"7c4f86fa4c3c9b13b85178d0fb4974e800c168588a5a7177759de691d923a06a"} Jan 26 13:00:20 crc kubenswrapper[4881]: I0126 13:00:20.595178 4881 generic.go:334] "Generic (PLEG): container finished" podID="75b1a7cd-6382-458b-8769-7f212bd59bf9" containerID="585971640c82eeec51d4672a38687cebf4e71823e8caa145aebb5ea5be4ac0ac" exitCode=0 Jan 26 13:00:20 crc kubenswrapper[4881]: I0126 13:00:20.595232 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fbb4d475f-pr4r7" event={"ID":"75b1a7cd-6382-458b-8769-7f212bd59bf9","Type":"ContainerDied","Data":"585971640c82eeec51d4672a38687cebf4e71823e8caa145aebb5ea5be4ac0ac"} Jan 26 13:00:21 crc kubenswrapper[4881]: I0126 13:00:21.761535 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-75d7497f7d-ksr89" Jan 26 13:00:21 crc kubenswrapper[4881]: I0126 13:00:21.953829 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-75d7497f7d-ksr89" Jan 26 13:00:23 crc kubenswrapper[4881]: I0126 13:00:23.627374 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fbb4d475f-pr4r7" event={"ID":"75b1a7cd-6382-458b-8769-7f212bd59bf9","Type":"ContainerDied","Data":"774ff8e5d887b38ca8df3a9bf708edae90e80491219ee2ed5e163dd0100ad98c"} Jan 26 13:00:23 crc kubenswrapper[4881]: I0126 13:00:23.628714 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="774ff8e5d887b38ca8df3a9bf708edae90e80491219ee2ed5e163dd0100ad98c" Jan 26 13:00:23 crc kubenswrapper[4881]: I0126 13:00:23.632448 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-lgk7s" event={"ID":"132298e2-a2f4-4311-9f7a-3e4e08abe34b","Type":"ContainerDied","Data":"db4823523de0c3d06f30e2cfd0d67f4b5e21207b8acb0859ea7339fd0f469632"} Jan 26 13:00:23 crc kubenswrapper[4881]: I0126 13:00:23.632490 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="db4823523de0c3d06f30e2cfd0d67f4b5e21207b8acb0859ea7339fd0f469632" Jan 26 13:00:23 crc kubenswrapper[4881]: I0126 13:00:23.718498 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-lgk7s" Jan 26 13:00:23 crc kubenswrapper[4881]: I0126 13:00:23.736298 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fbb4d475f-pr4r7" Jan 26 13:00:23 crc kubenswrapper[4881]: I0126 13:00:23.908117 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/75b1a7cd-6382-458b-8769-7f212bd59bf9-ovsdbserver-nb\") pod \"75b1a7cd-6382-458b-8769-7f212bd59bf9\" (UID: \"75b1a7cd-6382-458b-8769-7f212bd59bf9\") " Jan 26 13:00:23 crc kubenswrapper[4881]: I0126 13:00:23.908209 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/132298e2-a2f4-4311-9f7a-3e4e08abe34b-combined-ca-bundle\") pod \"132298e2-a2f4-4311-9f7a-3e4e08abe34b\" (UID: \"132298e2-a2f4-4311-9f7a-3e4e08abe34b\") " Jan 26 13:00:23 crc kubenswrapper[4881]: I0126 13:00:23.908242 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/75b1a7cd-6382-458b-8769-7f212bd59bf9-dns-swift-storage-0\") pod \"75b1a7cd-6382-458b-8769-7f212bd59bf9\" (UID: \"75b1a7cd-6382-458b-8769-7f212bd59bf9\") " Jan 26 13:00:23 crc kubenswrapper[4881]: I0126 13:00:23.908304 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/132298e2-a2f4-4311-9f7a-3e4e08abe34b-config-data\") pod \"132298e2-a2f4-4311-9f7a-3e4e08abe34b\" (UID: \"132298e2-a2f4-4311-9f7a-3e4e08abe34b\") " Jan 26 13:00:23 crc kubenswrapper[4881]: I0126 13:00:23.908376 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9dpnm\" (UniqueName: \"kubernetes.io/projected/75b1a7cd-6382-458b-8769-7f212bd59bf9-kube-api-access-9dpnm\") pod \"75b1a7cd-6382-458b-8769-7f212bd59bf9\" (UID: \"75b1a7cd-6382-458b-8769-7f212bd59bf9\") " Jan 26 13:00:23 crc kubenswrapper[4881]: I0126 13:00:23.908434 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/75b1a7cd-6382-458b-8769-7f212bd59bf9-dns-svc\") pod \"75b1a7cd-6382-458b-8769-7f212bd59bf9\" (UID: \"75b1a7cd-6382-458b-8769-7f212bd59bf9\") " Jan 26 13:00:23 crc kubenswrapper[4881]: I0126 13:00:23.908466 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/132298e2-a2f4-4311-9f7a-3e4e08abe34b-scripts\") pod \"132298e2-a2f4-4311-9f7a-3e4e08abe34b\" (UID: \"132298e2-a2f4-4311-9f7a-3e4e08abe34b\") " Jan 26 13:00:23 crc kubenswrapper[4881]: I0126 13:00:23.908490 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/75b1a7cd-6382-458b-8769-7f212bd59bf9-ovsdbserver-sb\") pod \"75b1a7cd-6382-458b-8769-7f212bd59bf9\" (UID: \"75b1a7cd-6382-458b-8769-7f212bd59bf9\") " Jan 26 13:00:23 crc kubenswrapper[4881]: I0126 13:00:23.908548 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/132298e2-a2f4-4311-9f7a-3e4e08abe34b-etc-machine-id\") pod \"132298e2-a2f4-4311-9f7a-3e4e08abe34b\" (UID: \"132298e2-a2f4-4311-9f7a-3e4e08abe34b\") " Jan 26 13:00:23 crc kubenswrapper[4881]: I0126 13:00:23.908606 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/132298e2-a2f4-4311-9f7a-3e4e08abe34b-db-sync-config-data\") pod \"132298e2-a2f4-4311-9f7a-3e4e08abe34b\" (UID: \"132298e2-a2f4-4311-9f7a-3e4e08abe34b\") " Jan 26 13:00:23 crc kubenswrapper[4881]: I0126 13:00:23.908622 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75b1a7cd-6382-458b-8769-7f212bd59bf9-config\") pod \"75b1a7cd-6382-458b-8769-7f212bd59bf9\" (UID: \"75b1a7cd-6382-458b-8769-7f212bd59bf9\") " Jan 26 13:00:23 crc kubenswrapper[4881]: I0126 13:00:23.908642 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hvtrh\" (UniqueName: \"kubernetes.io/projected/132298e2-a2f4-4311-9f7a-3e4e08abe34b-kube-api-access-hvtrh\") pod \"132298e2-a2f4-4311-9f7a-3e4e08abe34b\" (UID: \"132298e2-a2f4-4311-9f7a-3e4e08abe34b\") " Jan 26 13:00:23 crc kubenswrapper[4881]: I0126 13:00:23.908980 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/132298e2-a2f4-4311-9f7a-3e4e08abe34b-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "132298e2-a2f4-4311-9f7a-3e4e08abe34b" (UID: "132298e2-a2f4-4311-9f7a-3e4e08abe34b"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 13:00:23 crc kubenswrapper[4881]: I0126 13:00:23.957879 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5b8956874b-lrmsj"] Jan 26 13:00:23 crc kubenswrapper[4881]: I0126 13:00:23.961434 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/132298e2-a2f4-4311-9f7a-3e4e08abe34b-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "132298e2-a2f4-4311-9f7a-3e4e08abe34b" (UID: "132298e2-a2f4-4311-9f7a-3e4e08abe34b"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:00:23 crc kubenswrapper[4881]: I0126 13:00:23.962280 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/132298e2-a2f4-4311-9f7a-3e4e08abe34b-kube-api-access-hvtrh" (OuterVolumeSpecName: "kube-api-access-hvtrh") pod "132298e2-a2f4-4311-9f7a-3e4e08abe34b" (UID: "132298e2-a2f4-4311-9f7a-3e4e08abe34b"). InnerVolumeSpecName "kube-api-access-hvtrh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:00:23 crc kubenswrapper[4881]: I0126 13:00:23.962358 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75b1a7cd-6382-458b-8769-7f212bd59bf9-kube-api-access-9dpnm" (OuterVolumeSpecName: "kube-api-access-9dpnm") pod "75b1a7cd-6382-458b-8769-7f212bd59bf9" (UID: "75b1a7cd-6382-458b-8769-7f212bd59bf9"). InnerVolumeSpecName "kube-api-access-9dpnm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:00:23 crc kubenswrapper[4881]: I0126 13:00:23.964913 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/132298e2-a2f4-4311-9f7a-3e4e08abe34b-scripts" (OuterVolumeSpecName: "scripts") pod "132298e2-a2f4-4311-9f7a-3e4e08abe34b" (UID: "132298e2-a2f4-4311-9f7a-3e4e08abe34b"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:00:23 crc kubenswrapper[4881]: W0126 13:00:23.967908 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda3ad04d4_bcff_4ed6_8648_be146e3ce20a.slice/crio-312a570c23c3bca543578a5ca05f8c4930be8b43854e6f7e2fa7e6f859b984f8 WatchSource:0}: Error finding container 312a570c23c3bca543578a5ca05f8c4930be8b43854e6f7e2fa7e6f859b984f8: Status 404 returned error can't find the container with id 312a570c23c3bca543578a5ca05f8c4930be8b43854e6f7e2fa7e6f859b984f8 Jan 26 13:00:24 crc kubenswrapper[4881]: I0126 13:00:24.011292 4881 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/132298e2-a2f4-4311-9f7a-3e4e08abe34b-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 26 13:00:24 crc kubenswrapper[4881]: I0126 13:00:24.011324 4881 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/132298e2-a2f4-4311-9f7a-3e4e08abe34b-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 13:00:24 crc kubenswrapper[4881]: I0126 13:00:24.011338 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hvtrh\" (UniqueName: \"kubernetes.io/projected/132298e2-a2f4-4311-9f7a-3e4e08abe34b-kube-api-access-hvtrh\") on node \"crc\" DevicePath \"\"" Jan 26 13:00:24 crc kubenswrapper[4881]: I0126 13:00:24.011351 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9dpnm\" (UniqueName: \"kubernetes.io/projected/75b1a7cd-6382-458b-8769-7f212bd59bf9-kube-api-access-9dpnm\") on node \"crc\" DevicePath \"\"" Jan 26 13:00:24 crc kubenswrapper[4881]: I0126 13:00:24.011361 4881 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/132298e2-a2f4-4311-9f7a-3e4e08abe34b-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 13:00:24 crc kubenswrapper[4881]: W0126 13:00:24.099109 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1b363415_cc56_4505_91f6_f9700b378625.slice/crio-d5001d6378bb632e103973bd25a3b1a7803ddef48c6a05e17c995eaa3a45b8ef WatchSource:0}: Error finding container d5001d6378bb632e103973bd25a3b1a7803ddef48c6a05e17c995eaa3a45b8ef: Status 404 returned error can't find the container with id d5001d6378bb632e103973bd25a3b1a7803ddef48c6a05e17c995eaa3a45b8ef Jan 26 13:00:24 crc kubenswrapper[4881]: I0126 13:00:24.102769 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"] Jan 26 13:00:24 crc kubenswrapper[4881]: I0126 13:00:24.102859 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 26 13:00:24 crc kubenswrapper[4881]: W0126 13:00:24.106621 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod25d5f80d_b0a4_4b8f_a8b7_12f0b7296801.slice/crio-b982f0f12961dd95857b60f4b7aea85726ec88affe7251bf794adc2cf5acb312 WatchSource:0}: Error finding container b982f0f12961dd95857b60f4b7aea85726ec88affe7251bf794adc2cf5acb312: Status 404 returned error can't find the container with id b982f0f12961dd95857b60f4b7aea85726ec88affe7251bf794adc2cf5acb312 Jan 26 13:00:24 crc kubenswrapper[4881]: I0126 13:00:24.134951 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/132298e2-a2f4-4311-9f7a-3e4e08abe34b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "132298e2-a2f4-4311-9f7a-3e4e08abe34b" (UID: "132298e2-a2f4-4311-9f7a-3e4e08abe34b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:00:24 crc kubenswrapper[4881]: E0126 13:00:24.146430 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="75a85372-d728-4770-8639-fb6f93e44dab" Jan 26 13:00:24 crc kubenswrapper[4881]: I0126 13:00:24.203374 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75b1a7cd-6382-458b-8769-7f212bd59bf9-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "75b1a7cd-6382-458b-8769-7f212bd59bf9" (UID: "75b1a7cd-6382-458b-8769-7f212bd59bf9"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:00:24 crc kubenswrapper[4881]: I0126 13:00:24.208857 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75b1a7cd-6382-458b-8769-7f212bd59bf9-config" (OuterVolumeSpecName: "config") pod "75b1a7cd-6382-458b-8769-7f212bd59bf9" (UID: "75b1a7cd-6382-458b-8769-7f212bd59bf9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:00:24 crc kubenswrapper[4881]: I0126 13:00:24.220324 4881 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75b1a7cd-6382-458b-8769-7f212bd59bf9-config\") on node \"crc\" DevicePath \"\"" Jan 26 13:00:24 crc kubenswrapper[4881]: I0126 13:00:24.220364 4881 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/132298e2-a2f4-4311-9f7a-3e4e08abe34b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 13:00:24 crc kubenswrapper[4881]: I0126 13:00:24.220378 4881 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/75b1a7cd-6382-458b-8769-7f212bd59bf9-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 26 13:00:24 crc kubenswrapper[4881]: I0126 13:00:24.224469 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/132298e2-a2f4-4311-9f7a-3e4e08abe34b-config-data" (OuterVolumeSpecName: "config-data") pod "132298e2-a2f4-4311-9f7a-3e4e08abe34b" (UID: "132298e2-a2f4-4311-9f7a-3e4e08abe34b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:00:24 crc kubenswrapper[4881]: I0126 13:00:24.228268 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Jan 26 13:00:24 crc kubenswrapper[4881]: I0126 13:00:24.236350 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75b1a7cd-6382-458b-8769-7f212bd59bf9-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "75b1a7cd-6382-458b-8769-7f212bd59bf9" (UID: "75b1a7cd-6382-458b-8769-7f212bd59bf9"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:00:24 crc kubenswrapper[4881]: I0126 13:00:24.258805 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75b1a7cd-6382-458b-8769-7f212bd59bf9-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "75b1a7cd-6382-458b-8769-7f212bd59bf9" (UID: "75b1a7cd-6382-458b-8769-7f212bd59bf9"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:00:24 crc kubenswrapper[4881]: I0126 13:00:24.263870 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75b1a7cd-6382-458b-8769-7f212bd59bf9-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "75b1a7cd-6382-458b-8769-7f212bd59bf9" (UID: "75b1a7cd-6382-458b-8769-7f212bd59bf9"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:00:24 crc kubenswrapper[4881]: I0126 13:00:24.322006 4881 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/75b1a7cd-6382-458b-8769-7f212bd59bf9-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 13:00:24 crc kubenswrapper[4881]: I0126 13:00:24.322049 4881 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/75b1a7cd-6382-458b-8769-7f212bd59bf9-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 13:00:24 crc kubenswrapper[4881]: I0126 13:00:24.322063 4881 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/75b1a7cd-6382-458b-8769-7f212bd59bf9-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 13:00:24 crc kubenswrapper[4881]: I0126 13:00:24.322077 4881 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/132298e2-a2f4-4311-9f7a-3e4e08abe34b-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 13:00:24 crc kubenswrapper[4881]: I0126 13:00:24.644169 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"75a85372-d728-4770-8639-fb6f93e44dab","Type":"ContainerStarted","Data":"0b739c791d31690d29f756ab788a47c3852d330dd974ae370c2c5c0a65a3205e"} Jan 26 13:00:24 crc kubenswrapper[4881]: I0126 13:00:24.644785 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="75a85372-d728-4770-8639-fb6f93e44dab" containerName="ceilometer-notification-agent" containerID="cri-o://d46f7cf7161199441d7c9c8933fc21a648a05c708a708f826b848e0af651a0d7" gracePeriod=30 Jan 26 13:00:24 crc kubenswrapper[4881]: I0126 13:00:24.644900 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 26 13:00:24 crc kubenswrapper[4881]: I0126 13:00:24.645309 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="75a85372-d728-4770-8639-fb6f93e44dab" containerName="sg-core" containerID="cri-o://13316d63f536bb24e8da99d8b4549245de5750c939335d805eca8680e924a8b9" gracePeriod=30 Jan 26 13:00:24 crc kubenswrapper[4881]: I0126 13:00:24.645425 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="75a85372-d728-4770-8639-fb6f93e44dab" containerName="proxy-httpd" containerID="cri-o://0b739c791d31690d29f756ab788a47c3852d330dd974ae370c2c5c0a65a3205e" gracePeriod=30 Jan 26 13:00:24 crc kubenswrapper[4881]: I0126 13:00:24.653642 4881 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"94aabe79-a699-4980-a344-43c629e34627","Type":"ContainerStarted","Data":"90918704e3a8ac596751ec6087be0c5e3cdd9f52ee6f366afb89c77c1e4cc088"} Jan 26 13:00:24 crc kubenswrapper[4881]: I0126 13:00:24.653683 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"94aabe79-a699-4980-a344-43c629e34627","Type":"ContainerStarted","Data":"3066037a5c9feb20b8b56a184c12c2fb962dfbba6fe3cb0e754e8b429e459a57"} Jan 26 13:00:24 crc kubenswrapper[4881]: I0126 13:00:24.655423 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-57cc97484d-kbk4q" event={"ID":"c2c1ff4a-7d5a-461f-8e63-67288e6a68e5","Type":"ContainerStarted","Data":"9ba8c0c45d43343a09aa32e2ee05907001d1f5c9fc7daaca5f1fc0d063fc6987"} Jan 26 13:00:24 crc kubenswrapper[4881]: I0126 13:00:24.655440 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-57cc97484d-kbk4q" event={"ID":"c2c1ff4a-7d5a-461f-8e63-67288e6a68e5","Type":"ContainerStarted","Data":"815f9099b39b94d03c53db1eb1b112bbee889143ea1b04b45e1bb8f3ef1de312"} Jan 26 13:00:24 crc kubenswrapper[4881]: I0126 13:00:24.661440 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"25d5f80d-b0a4-4b8f-a8b7-12f0b7296801","Type":"ContainerStarted","Data":"b982f0f12961dd95857b60f4b7aea85726ec88affe7251bf794adc2cf5acb312"} Jan 26 13:00:24 crc kubenswrapper[4881]: I0126 13:00:24.671826 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-nmggw" event={"ID":"cf634913-5017-4a94-a3e7-0c337bb9fb4d","Type":"ContainerStarted","Data":"19567a9db0844a952d781ac403219913683d6c92d32d850768773e9cc2919707"} Jan 26 13:00:24 crc kubenswrapper[4881]: I0126 13:00:24.676785 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-b5f886498-f6c5n" event={"ID":"2c982f75-f27c-4915-a75d-07f2fc53cf19","Type":"ContainerStarted","Data":"b613f892452c0cdc55ef54a9500292fa84a5e061d598e6c5301e7ccf9335348f"} Jan 26 13:00:24 crc kubenswrapper[4881]: I0126 13:00:24.676825 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-b5f886498-f6c5n" event={"ID":"2c982f75-f27c-4915-a75d-07f2fc53cf19","Type":"ContainerStarted","Data":"933a597cf35e91256fec3bcff5564b859d2a583e4eed6b23a600364b49399098"} Jan 26 13:00:24 crc kubenswrapper[4881]: I0126 13:00:24.678813 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"1b363415-cc56-4505-91f6-f9700b378625","Type":"ContainerStarted","Data":"d5001d6378bb632e103973bd25a3b1a7803ddef48c6a05e17c995eaa3a45b8ef"} Jan 26 13:00:24 crc kubenswrapper[4881]: I0126 13:00:24.681465 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-794958b545-pcbpb" event={"ID":"275be453-0a36-451c-8a70-714b958c9625","Type":"ContainerStarted","Data":"d6bce82a970e51b7c1efb0d5a9d3ca1269d4bb774d78917ced2ecfab4f027bd0"} Jan 26 13:00:24 crc kubenswrapper[4881]: I0126 13:00:24.681498 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-794958b545-pcbpb" event={"ID":"275be453-0a36-451c-8a70-714b958c9625","Type":"ContainerStarted","Data":"71087e04b813566aea960c11c3cc785dbaf87a22c2267a4aa18c732bd80255bc"} Jan 26 13:00:24 crc kubenswrapper[4881]: I0126 13:00:24.696318 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/barbican-api-5b8956874b-lrmsj" event={"ID":"a3ad04d4-bcff-4ed6-8648-be146e3ce20a","Type":"ContainerStarted","Data":"22473c0b808c382347b0ec80c44ff70b3a88729829332dff890c345d03617872"} Jan 26 13:00:24 crc kubenswrapper[4881]: I0126 13:00:24.696363 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5b8956874b-lrmsj" event={"ID":"a3ad04d4-bcff-4ed6-8648-be146e3ce20a","Type":"ContainerStarted","Data":"312a570c23c3bca543578a5ca05f8c4930be8b43854e6f7e2fa7e6f859b984f8"} Jan 26 13:00:24 crc kubenswrapper[4881]: I0126 13:00:24.698445 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-57cc97484d-kbk4q" podStartSLOduration=3.899316433 podStartE2EDuration="15.698426827s" podCreationTimestamp="2026-01-26 13:00:09 +0000 UTC" firstStartedPulling="2026-01-26 13:00:11.364973865 +0000 UTC m=+1483.844283891" lastFinishedPulling="2026-01-26 13:00:23.164084249 +0000 UTC m=+1495.643394285" observedRunningTime="2026-01-26 13:00:24.683012415 +0000 UTC m=+1497.162322441" watchObservedRunningTime="2026-01-26 13:00:24.698426827 +0000 UTC m=+1497.177736853" Jan 26 13:00:24 crc kubenswrapper[4881]: I0126 13:00:24.702683 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fbb4d475f-pr4r7" Jan 26 13:00:24 crc kubenswrapper[4881]: I0126 13:00:24.706171 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7b9f5df6bf-2dqqf" event={"ID":"256bb8f2-ab6b-4904-bfda-793cd640e966","Type":"ContainerStarted","Data":"c3ca1ec76a11b35765ff128d491bc9e5511933564fd375c31d76e332f7aed4ab"} Jan 26 13:00:24 crc kubenswrapper[4881]: I0126 13:00:24.706233 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7b9f5df6bf-2dqqf" event={"ID":"256bb8f2-ab6b-4904-bfda-793cd640e966","Type":"ContainerStarted","Data":"f3b942970b170d5c4847e247b14c8b132cdf7adfa28a96ffd9a0478dc60a3194"} Jan 26 13:00:24 crc kubenswrapper[4881]: I0126 13:00:24.706290 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-lgk7s" Jan 26 13:00:24 crc kubenswrapper[4881]: I0126 13:00:24.719354 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-b5f886498-f6c5n" podStartSLOduration=4.313960351 podStartE2EDuration="15.719324092s" podCreationTimestamp="2026-01-26 13:00:09 +0000 UTC" firstStartedPulling="2026-01-26 13:00:11.760764597 +0000 UTC m=+1484.240074633" lastFinishedPulling="2026-01-26 13:00:23.166128338 +0000 UTC m=+1495.645438374" observedRunningTime="2026-01-26 13:00:24.706054711 +0000 UTC m=+1497.185364727" watchObservedRunningTime="2026-01-26 13:00:24.719324092 +0000 UTC m=+1497.198634138" Jan 26 13:00:24 crc kubenswrapper[4881]: I0126 13:00:24.764794 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-794958b545-pcbpb" podStartSLOduration=4.364516188 podStartE2EDuration="15.76477436s" podCreationTimestamp="2026-01-26 13:00:09 +0000 UTC" firstStartedPulling="2026-01-26 13:00:11.8744474 +0000 UTC m=+1484.353757426" lastFinishedPulling="2026-01-26 13:00:23.274705562 +0000 UTC m=+1495.754015598" observedRunningTime="2026-01-26 13:00:24.740920264 +0000 UTC m=+1497.220230290" watchObservedRunningTime="2026-01-26 13:00:24.76477436 +0000 UTC m=+1497.244084386" Jan 26 13:00:24 crc kubenswrapper[4881]: I0126 13:00:24.791481 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fwlbz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 13:00:24 crc kubenswrapper[4881]: I0126 13:00:24.791564 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 13:00:24 crc kubenswrapper[4881]: I0126 13:00:24.791611 4881 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" Jan 26 13:00:24 crc kubenswrapper[4881]: I0126 13:00:24.792261 4881 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7c058347a35682b737f4fed8273f3335b15404e9e17abbca4140d7f0cbd3f241"} pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 13:00:24 crc kubenswrapper[4881]: I0126 13:00:24.792311 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" containerName="machine-config-daemon" containerID="cri-o://7c058347a35682b737f4fed8273f3335b15404e9e17abbca4140d7f0cbd3f241" gracePeriod=600 Jan 26 13:00:24 crc kubenswrapper[4881]: I0126 13:00:24.796304 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-57cc97484d-kbk4q"] Jan 26 13:00:24 crc kubenswrapper[4881]: I0126 13:00:24.803150 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-nmggw" podStartSLOduration=33.899057858 podStartE2EDuration="1m5.803133397s" 
podCreationTimestamp="2026-01-26 12:59:19 +0000 UTC" firstStartedPulling="2026-01-26 12:59:51.506920066 +0000 UTC m=+1463.986230092" lastFinishedPulling="2026-01-26 13:00:23.410995605 +0000 UTC m=+1495.890305631" observedRunningTime="2026-01-26 13:00:24.775073119 +0000 UTC m=+1497.254383145" watchObservedRunningTime="2026-01-26 13:00:24.803133397 +0000 UTC m=+1497.282443423" Jan 26 13:00:24 crc kubenswrapper[4881]: I0126 13:00:24.823167 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-7b9f5df6bf-2dqqf"] Jan 26 13:00:24 crc kubenswrapper[4881]: I0126 13:00:24.830638 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fbb4d475f-pr4r7"] Jan 26 13:00:24 crc kubenswrapper[4881]: I0126 13:00:24.842438 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7fbb4d475f-pr4r7"] Jan 26 13:00:24 crc kubenswrapper[4881]: I0126 13:00:24.847354 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-7b9f5df6bf-2dqqf" podStartSLOduration=4.478385931 podStartE2EDuration="15.847337395s" podCreationTimestamp="2026-01-26 13:00:09 +0000 UTC" firstStartedPulling="2026-01-26 13:00:11.796436637 +0000 UTC m=+1484.275746663" lastFinishedPulling="2026-01-26 13:00:23.165388081 +0000 UTC m=+1495.644698127" observedRunningTime="2026-01-26 13:00:24.827080916 +0000 UTC m=+1497.306390942" watchObservedRunningTime="2026-01-26 13:00:24.847337395 +0000 UTC m=+1497.326647421" Jan 26 13:00:25 crc kubenswrapper[4881]: I0126 13:00:25.011572 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 26 13:00:25 crc kubenswrapper[4881]: E0126 13:00:25.021177 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="132298e2-a2f4-4311-9f7a-3e4e08abe34b" containerName="cinder-db-sync" Jan 26 13:00:25 crc kubenswrapper[4881]: I0126 13:00:25.021210 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="132298e2-a2f4-4311-9f7a-3e4e08abe34b" containerName="cinder-db-sync" Jan 26 13:00:25 crc kubenswrapper[4881]: E0126 13:00:25.021235 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75b1a7cd-6382-458b-8769-7f212bd59bf9" containerName="init" Jan 26 13:00:25 crc kubenswrapper[4881]: I0126 13:00:25.021241 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="75b1a7cd-6382-458b-8769-7f212bd59bf9" containerName="init" Jan 26 13:00:25 crc kubenswrapper[4881]: E0126 13:00:25.021274 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75b1a7cd-6382-458b-8769-7f212bd59bf9" containerName="dnsmasq-dns" Jan 26 13:00:25 crc kubenswrapper[4881]: I0126 13:00:25.021280 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="75b1a7cd-6382-458b-8769-7f212bd59bf9" containerName="dnsmasq-dns" Jan 26 13:00:25 crc kubenswrapper[4881]: I0126 13:00:25.021576 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="75b1a7cd-6382-458b-8769-7f212bd59bf9" containerName="dnsmasq-dns" Jan 26 13:00:25 crc kubenswrapper[4881]: I0126 13:00:25.021590 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="132298e2-a2f4-4311-9f7a-3e4e08abe34b" containerName="cinder-db-sync" Jan 26 13:00:25 crc kubenswrapper[4881]: I0126 13:00:25.022534 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 26 13:00:25 crc kubenswrapper[4881]: I0126 13:00:25.028158 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-2m6g5" Jan 26 13:00:25 crc kubenswrapper[4881]: I0126 13:00:25.028324 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 26 13:00:25 crc kubenswrapper[4881]: I0126 13:00:25.028454 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 26 13:00:25 crc kubenswrapper[4881]: I0126 13:00:25.028579 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 26 13:00:25 crc kubenswrapper[4881]: I0126 13:00:25.036589 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 26 13:00:25 crc kubenswrapper[4881]: I0126 13:00:25.065064 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dbdf91bd-1981-4d81-bc94-26b6a156aa9e-scripts\") pod \"cinder-scheduler-0\" (UID: \"dbdf91bd-1981-4d81-bc94-26b6a156aa9e\") " pod="openstack/cinder-scheduler-0" Jan 26 13:00:25 crc kubenswrapper[4881]: I0126 13:00:25.065123 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krbjp\" (UniqueName: \"kubernetes.io/projected/dbdf91bd-1981-4d81-bc94-26b6a156aa9e-kube-api-access-krbjp\") pod \"cinder-scheduler-0\" (UID: \"dbdf91bd-1981-4d81-bc94-26b6a156aa9e\") " pod="openstack/cinder-scheduler-0" Jan 26 13:00:25 crc kubenswrapper[4881]: I0126 13:00:25.065166 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/dbdf91bd-1981-4d81-bc94-26b6a156aa9e-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"dbdf91bd-1981-4d81-bc94-26b6a156aa9e\") " pod="openstack/cinder-scheduler-0" Jan 26 13:00:25 crc kubenswrapper[4881]: I0126 13:00:25.065233 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbdf91bd-1981-4d81-bc94-26b6a156aa9e-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"dbdf91bd-1981-4d81-bc94-26b6a156aa9e\") " pod="openstack/cinder-scheduler-0" Jan 26 13:00:25 crc kubenswrapper[4881]: I0126 13:00:25.065301 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbdf91bd-1981-4d81-bc94-26b6a156aa9e-config-data\") pod \"cinder-scheduler-0\" (UID: \"dbdf91bd-1981-4d81-bc94-26b6a156aa9e\") " pod="openstack/cinder-scheduler-0" Jan 26 13:00:25 crc kubenswrapper[4881]: I0126 13:00:25.065435 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dbdf91bd-1981-4d81-bc94-26b6a156aa9e-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"dbdf91bd-1981-4d81-bc94-26b6a156aa9e\") " pod="openstack/cinder-scheduler-0" Jan 26 13:00:25 crc kubenswrapper[4881]: I0126 13:00:25.131596 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-76ddf7d98c-5pqnx"] Jan 26 13:00:25 crc kubenswrapper[4881]: I0126 13:00:25.133365 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-76ddf7d98c-5pqnx" Jan 26 13:00:25 crc kubenswrapper[4881]: I0126 13:00:25.147261 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-76ddf7d98c-5pqnx"] Jan 26 13:00:25 crc kubenswrapper[4881]: I0126 13:00:25.168478 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbdf91bd-1981-4d81-bc94-26b6a156aa9e-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"dbdf91bd-1981-4d81-bc94-26b6a156aa9e\") " pod="openstack/cinder-scheduler-0" Jan 26 13:00:25 crc kubenswrapper[4881]: I0126 13:00:25.168554 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbdf91bd-1981-4d81-bc94-26b6a156aa9e-config-data\") pod \"cinder-scheduler-0\" (UID: \"dbdf91bd-1981-4d81-bc94-26b6a156aa9e\") " pod="openstack/cinder-scheduler-0" Jan 26 13:00:25 crc kubenswrapper[4881]: I0126 13:00:25.168637 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dbdf91bd-1981-4d81-bc94-26b6a156aa9e-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"dbdf91bd-1981-4d81-bc94-26b6a156aa9e\") " pod="openstack/cinder-scheduler-0" Jan 26 13:00:25 crc kubenswrapper[4881]: I0126 13:00:25.168685 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dbdf91bd-1981-4d81-bc94-26b6a156aa9e-scripts\") pod \"cinder-scheduler-0\" (UID: \"dbdf91bd-1981-4d81-bc94-26b6a156aa9e\") " pod="openstack/cinder-scheduler-0" Jan 26 13:00:25 crc kubenswrapper[4881]: I0126 13:00:25.168704 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krbjp\" (UniqueName: \"kubernetes.io/projected/dbdf91bd-1981-4d81-bc94-26b6a156aa9e-kube-api-access-krbjp\") pod \"cinder-scheduler-0\" (UID: \"dbdf91bd-1981-4d81-bc94-26b6a156aa9e\") " pod="openstack/cinder-scheduler-0" Jan 26 13:00:25 crc kubenswrapper[4881]: I0126 13:00:25.168726 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/dbdf91bd-1981-4d81-bc94-26b6a156aa9e-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"dbdf91bd-1981-4d81-bc94-26b6a156aa9e\") " pod="openstack/cinder-scheduler-0" Jan 26 13:00:25 crc kubenswrapper[4881]: I0126 13:00:25.168804 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/dbdf91bd-1981-4d81-bc94-26b6a156aa9e-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"dbdf91bd-1981-4d81-bc94-26b6a156aa9e\") " pod="openstack/cinder-scheduler-0" Jan 26 13:00:25 crc kubenswrapper[4881]: I0126 13:00:25.177414 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbdf91bd-1981-4d81-bc94-26b6a156aa9e-config-data\") pod \"cinder-scheduler-0\" (UID: \"dbdf91bd-1981-4d81-bc94-26b6a156aa9e\") " pod="openstack/cinder-scheduler-0" Jan 26 13:00:25 crc kubenswrapper[4881]: I0126 13:00:25.195107 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dbdf91bd-1981-4d81-bc94-26b6a156aa9e-scripts\") pod \"cinder-scheduler-0\" (UID: \"dbdf91bd-1981-4d81-bc94-26b6a156aa9e\") " pod="openstack/cinder-scheduler-0" Jan 26 13:00:25 crc kubenswrapper[4881]: I0126 
13:00:25.195231 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dbdf91bd-1981-4d81-bc94-26b6a156aa9e-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"dbdf91bd-1981-4d81-bc94-26b6a156aa9e\") " pod="openstack/cinder-scheduler-0" Jan 26 13:00:25 crc kubenswrapper[4881]: I0126 13:00:25.198149 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbdf91bd-1981-4d81-bc94-26b6a156aa9e-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"dbdf91bd-1981-4d81-bc94-26b6a156aa9e\") " pod="openstack/cinder-scheduler-0" Jan 26 13:00:25 crc kubenswrapper[4881]: I0126 13:00:25.200753 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-krbjp\" (UniqueName: \"kubernetes.io/projected/dbdf91bd-1981-4d81-bc94-26b6a156aa9e-kube-api-access-krbjp\") pod \"cinder-scheduler-0\" (UID: \"dbdf91bd-1981-4d81-bc94-26b6a156aa9e\") " pod="openstack/cinder-scheduler-0" Jan 26 13:00:25 crc kubenswrapper[4881]: I0126 13:00:25.270173 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0172226c-65c1-4419-a039-aa7a84642c0e-ovsdbserver-nb\") pod \"dnsmasq-dns-76ddf7d98c-5pqnx\" (UID: \"0172226c-65c1-4419-a039-aa7a84642c0e\") " pod="openstack/dnsmasq-dns-76ddf7d98c-5pqnx" Jan 26 13:00:25 crc kubenswrapper[4881]: I0126 13:00:25.270265 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0172226c-65c1-4419-a039-aa7a84642c0e-config\") pod \"dnsmasq-dns-76ddf7d98c-5pqnx\" (UID: \"0172226c-65c1-4419-a039-aa7a84642c0e\") " pod="openstack/dnsmasq-dns-76ddf7d98c-5pqnx" Jan 26 13:00:25 crc kubenswrapper[4881]: I0126 13:00:25.270353 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0172226c-65c1-4419-a039-aa7a84642c0e-dns-swift-storage-0\") pod \"dnsmasq-dns-76ddf7d98c-5pqnx\" (UID: \"0172226c-65c1-4419-a039-aa7a84642c0e\") " pod="openstack/dnsmasq-dns-76ddf7d98c-5pqnx" Jan 26 13:00:25 crc kubenswrapper[4881]: I0126 13:00:25.270383 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0172226c-65c1-4419-a039-aa7a84642c0e-dns-svc\") pod \"dnsmasq-dns-76ddf7d98c-5pqnx\" (UID: \"0172226c-65c1-4419-a039-aa7a84642c0e\") " pod="openstack/dnsmasq-dns-76ddf7d98c-5pqnx" Jan 26 13:00:25 crc kubenswrapper[4881]: I0126 13:00:25.270438 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0172226c-65c1-4419-a039-aa7a84642c0e-ovsdbserver-sb\") pod \"dnsmasq-dns-76ddf7d98c-5pqnx\" (UID: \"0172226c-65c1-4419-a039-aa7a84642c0e\") " pod="openstack/dnsmasq-dns-76ddf7d98c-5pqnx" Jan 26 13:00:25 crc kubenswrapper[4881]: I0126 13:00:25.270457 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhhq9\" (UniqueName: \"kubernetes.io/projected/0172226c-65c1-4419-a039-aa7a84642c0e-kube-api-access-vhhq9\") pod \"dnsmasq-dns-76ddf7d98c-5pqnx\" (UID: \"0172226c-65c1-4419-a039-aa7a84642c0e\") " pod="openstack/dnsmasq-dns-76ddf7d98c-5pqnx" Jan 26 13:00:25 crc kubenswrapper[4881]: I0126 
13:00:25.281547 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 26 13:00:25 crc kubenswrapper[4881]: I0126 13:00:25.283231 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 26 13:00:25 crc kubenswrapper[4881]: I0126 13:00:25.290071 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 26 13:00:25 crc kubenswrapper[4881]: I0126 13:00:25.309808 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 26 13:00:25 crc kubenswrapper[4881]: I0126 13:00:25.369948 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 26 13:00:25 crc kubenswrapper[4881]: I0126 13:00:25.374607 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0172226c-65c1-4419-a039-aa7a84642c0e-config\") pod \"dnsmasq-dns-76ddf7d98c-5pqnx\" (UID: \"0172226c-65c1-4419-a039-aa7a84642c0e\") " pod="openstack/dnsmasq-dns-76ddf7d98c-5pqnx" Jan 26 13:00:25 crc kubenswrapper[4881]: I0126 13:00:25.374701 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0172226c-65c1-4419-a039-aa7a84642c0e-dns-swift-storage-0\") pod \"dnsmasq-dns-76ddf7d98c-5pqnx\" (UID: \"0172226c-65c1-4419-a039-aa7a84642c0e\") " pod="openstack/dnsmasq-dns-76ddf7d98c-5pqnx" Jan 26 13:00:25 crc kubenswrapper[4881]: I0126 13:00:25.374737 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0172226c-65c1-4419-a039-aa7a84642c0e-dns-svc\") pod \"dnsmasq-dns-76ddf7d98c-5pqnx\" (UID: \"0172226c-65c1-4419-a039-aa7a84642c0e\") " pod="openstack/dnsmasq-dns-76ddf7d98c-5pqnx" Jan 26 13:00:25 crc kubenswrapper[4881]: I0126 13:00:25.374795 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0172226c-65c1-4419-a039-aa7a84642c0e-ovsdbserver-sb\") pod \"dnsmasq-dns-76ddf7d98c-5pqnx\" (UID: \"0172226c-65c1-4419-a039-aa7a84642c0e\") " pod="openstack/dnsmasq-dns-76ddf7d98c-5pqnx" Jan 26 13:00:25 crc kubenswrapper[4881]: I0126 13:00:25.374813 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vhhq9\" (UniqueName: \"kubernetes.io/projected/0172226c-65c1-4419-a039-aa7a84642c0e-kube-api-access-vhhq9\") pod \"dnsmasq-dns-76ddf7d98c-5pqnx\" (UID: \"0172226c-65c1-4419-a039-aa7a84642c0e\") " pod="openstack/dnsmasq-dns-76ddf7d98c-5pqnx" Jan 26 13:00:25 crc kubenswrapper[4881]: I0126 13:00:25.374830 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0172226c-65c1-4419-a039-aa7a84642c0e-ovsdbserver-nb\") pod \"dnsmasq-dns-76ddf7d98c-5pqnx\" (UID: \"0172226c-65c1-4419-a039-aa7a84642c0e\") " pod="openstack/dnsmasq-dns-76ddf7d98c-5pqnx" Jan 26 13:00:25 crc kubenswrapper[4881]: I0126 13:00:25.375567 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0172226c-65c1-4419-a039-aa7a84642c0e-ovsdbserver-nb\") pod \"dnsmasq-dns-76ddf7d98c-5pqnx\" (UID: \"0172226c-65c1-4419-a039-aa7a84642c0e\") " pod="openstack/dnsmasq-dns-76ddf7d98c-5pqnx" Jan 26 13:00:25 crc kubenswrapper[4881]: I0126 13:00:25.376060 4881 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0172226c-65c1-4419-a039-aa7a84642c0e-config\") pod \"dnsmasq-dns-76ddf7d98c-5pqnx\" (UID: \"0172226c-65c1-4419-a039-aa7a84642c0e\") " pod="openstack/dnsmasq-dns-76ddf7d98c-5pqnx" Jan 26 13:00:25 crc kubenswrapper[4881]: I0126 13:00:25.376552 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0172226c-65c1-4419-a039-aa7a84642c0e-dns-swift-storage-0\") pod \"dnsmasq-dns-76ddf7d98c-5pqnx\" (UID: \"0172226c-65c1-4419-a039-aa7a84642c0e\") " pod="openstack/dnsmasq-dns-76ddf7d98c-5pqnx" Jan 26 13:00:25 crc kubenswrapper[4881]: I0126 13:00:25.377085 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0172226c-65c1-4419-a039-aa7a84642c0e-dns-svc\") pod \"dnsmasq-dns-76ddf7d98c-5pqnx\" (UID: \"0172226c-65c1-4419-a039-aa7a84642c0e\") " pod="openstack/dnsmasq-dns-76ddf7d98c-5pqnx" Jan 26 13:00:25 crc kubenswrapper[4881]: I0126 13:00:25.377812 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0172226c-65c1-4419-a039-aa7a84642c0e-ovsdbserver-sb\") pod \"dnsmasq-dns-76ddf7d98c-5pqnx\" (UID: \"0172226c-65c1-4419-a039-aa7a84642c0e\") " pod="openstack/dnsmasq-dns-76ddf7d98c-5pqnx" Jan 26 13:00:25 crc kubenswrapper[4881]: I0126 13:00:25.413752 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vhhq9\" (UniqueName: \"kubernetes.io/projected/0172226c-65c1-4419-a039-aa7a84642c0e-kube-api-access-vhhq9\") pod \"dnsmasq-dns-76ddf7d98c-5pqnx\" (UID: \"0172226c-65c1-4419-a039-aa7a84642c0e\") " pod="openstack/dnsmasq-dns-76ddf7d98c-5pqnx" Jan 26 13:00:25 crc kubenswrapper[4881]: I0126 13:00:25.476201 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7c9354d9-107d-401f-acb1-a80971c2adcc-etc-machine-id\") pod \"cinder-api-0\" (UID: \"7c9354d9-107d-401f-acb1-a80971c2adcc\") " pod="openstack/cinder-api-0" Jan 26 13:00:25 crc kubenswrapper[4881]: I0126 13:00:25.476265 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c9354d9-107d-401f-acb1-a80971c2adcc-config-data\") pod \"cinder-api-0\" (UID: \"7c9354d9-107d-401f-acb1-a80971c2adcc\") " pod="openstack/cinder-api-0" Jan 26 13:00:25 crc kubenswrapper[4881]: I0126 13:00:25.476316 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7c9354d9-107d-401f-acb1-a80971c2adcc-config-data-custom\") pod \"cinder-api-0\" (UID: \"7c9354d9-107d-401f-acb1-a80971c2adcc\") " pod="openstack/cinder-api-0" Jan 26 13:00:25 crc kubenswrapper[4881]: I0126 13:00:25.476338 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7c9354d9-107d-401f-acb1-a80971c2adcc-scripts\") pod \"cinder-api-0\" (UID: \"7c9354d9-107d-401f-acb1-a80971c2adcc\") " pod="openstack/cinder-api-0" Jan 26 13:00:25 crc kubenswrapper[4881]: I0126 13:00:25.476357 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/7c9354d9-107d-401f-acb1-a80971c2adcc-logs\") pod \"cinder-api-0\" (UID: \"7c9354d9-107d-401f-acb1-a80971c2adcc\") " pod="openstack/cinder-api-0" Jan 26 13:00:25 crc kubenswrapper[4881]: I0126 13:00:25.476378 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vk5v\" (UniqueName: \"kubernetes.io/projected/7c9354d9-107d-401f-acb1-a80971c2adcc-kube-api-access-2vk5v\") pod \"cinder-api-0\" (UID: \"7c9354d9-107d-401f-acb1-a80971c2adcc\") " pod="openstack/cinder-api-0" Jan 26 13:00:25 crc kubenswrapper[4881]: I0126 13:00:25.476449 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c9354d9-107d-401f-acb1-a80971c2adcc-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"7c9354d9-107d-401f-acb1-a80971c2adcc\") " pod="openstack/cinder-api-0" Jan 26 13:00:25 crc kubenswrapper[4881]: I0126 13:00:25.576246 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-76ddf7d98c-5pqnx" Jan 26 13:00:25 crc kubenswrapper[4881]: I0126 13:00:25.577482 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7c9354d9-107d-401f-acb1-a80971c2adcc-etc-machine-id\") pod \"cinder-api-0\" (UID: \"7c9354d9-107d-401f-acb1-a80971c2adcc\") " pod="openstack/cinder-api-0" Jan 26 13:00:25 crc kubenswrapper[4881]: I0126 13:00:25.577529 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c9354d9-107d-401f-acb1-a80971c2adcc-config-data\") pod \"cinder-api-0\" (UID: \"7c9354d9-107d-401f-acb1-a80971c2adcc\") " pod="openstack/cinder-api-0" Jan 26 13:00:25 crc kubenswrapper[4881]: I0126 13:00:25.577559 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7c9354d9-107d-401f-acb1-a80971c2adcc-config-data-custom\") pod \"cinder-api-0\" (UID: \"7c9354d9-107d-401f-acb1-a80971c2adcc\") " pod="openstack/cinder-api-0" Jan 26 13:00:25 crc kubenswrapper[4881]: I0126 13:00:25.577580 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7c9354d9-107d-401f-acb1-a80971c2adcc-scripts\") pod \"cinder-api-0\" (UID: \"7c9354d9-107d-401f-acb1-a80971c2adcc\") " pod="openstack/cinder-api-0" Jan 26 13:00:25 crc kubenswrapper[4881]: I0126 13:00:25.577596 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7c9354d9-107d-401f-acb1-a80971c2adcc-logs\") pod \"cinder-api-0\" (UID: \"7c9354d9-107d-401f-acb1-a80971c2adcc\") " pod="openstack/cinder-api-0" Jan 26 13:00:25 crc kubenswrapper[4881]: I0126 13:00:25.577612 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vk5v\" (UniqueName: \"kubernetes.io/projected/7c9354d9-107d-401f-acb1-a80971c2adcc-kube-api-access-2vk5v\") pod \"cinder-api-0\" (UID: \"7c9354d9-107d-401f-acb1-a80971c2adcc\") " pod="openstack/cinder-api-0" Jan 26 13:00:25 crc kubenswrapper[4881]: I0126 13:00:25.577665 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c9354d9-107d-401f-acb1-a80971c2adcc-combined-ca-bundle\") pod \"cinder-api-0\" (UID: 
\"7c9354d9-107d-401f-acb1-a80971c2adcc\") " pod="openstack/cinder-api-0" Jan 26 13:00:25 crc kubenswrapper[4881]: I0126 13:00:25.578466 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7c9354d9-107d-401f-acb1-a80971c2adcc-logs\") pod \"cinder-api-0\" (UID: \"7c9354d9-107d-401f-acb1-a80971c2adcc\") " pod="openstack/cinder-api-0" Jan 26 13:00:25 crc kubenswrapper[4881]: I0126 13:00:25.578824 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7c9354d9-107d-401f-acb1-a80971c2adcc-etc-machine-id\") pod \"cinder-api-0\" (UID: \"7c9354d9-107d-401f-acb1-a80971c2adcc\") " pod="openstack/cinder-api-0" Jan 26 13:00:25 crc kubenswrapper[4881]: I0126 13:00:25.582322 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7c9354d9-107d-401f-acb1-a80971c2adcc-config-data-custom\") pod \"cinder-api-0\" (UID: \"7c9354d9-107d-401f-acb1-a80971c2adcc\") " pod="openstack/cinder-api-0" Jan 26 13:00:25 crc kubenswrapper[4881]: I0126 13:00:25.584492 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c9354d9-107d-401f-acb1-a80971c2adcc-config-data\") pod \"cinder-api-0\" (UID: \"7c9354d9-107d-401f-acb1-a80971c2adcc\") " pod="openstack/cinder-api-0" Jan 26 13:00:25 crc kubenswrapper[4881]: I0126 13:00:25.587767 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7c9354d9-107d-401f-acb1-a80971c2adcc-scripts\") pod \"cinder-api-0\" (UID: \"7c9354d9-107d-401f-acb1-a80971c2adcc\") " pod="openstack/cinder-api-0" Jan 26 13:00:25 crc kubenswrapper[4881]: I0126 13:00:25.587880 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c9354d9-107d-401f-acb1-a80971c2adcc-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"7c9354d9-107d-401f-acb1-a80971c2adcc\") " pod="openstack/cinder-api-0" Jan 26 13:00:25 crc kubenswrapper[4881]: I0126 13:00:25.609905 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vk5v\" (UniqueName: \"kubernetes.io/projected/7c9354d9-107d-401f-acb1-a80971c2adcc-kube-api-access-2vk5v\") pod \"cinder-api-0\" (UID: \"7c9354d9-107d-401f-acb1-a80971c2adcc\") " pod="openstack/cinder-api-0" Jan 26 13:00:25 crc kubenswrapper[4881]: I0126 13:00:25.625843 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 26 13:00:25 crc kubenswrapper[4881]: I0126 13:00:25.711928 4881 generic.go:334] "Generic (PLEG): container finished" podID="75a85372-d728-4770-8639-fb6f93e44dab" containerID="0b739c791d31690d29f756ab788a47c3852d330dd974ae370c2c5c0a65a3205e" exitCode=0 Jan 26 13:00:25 crc kubenswrapper[4881]: I0126 13:00:25.712155 4881 generic.go:334] "Generic (PLEG): container finished" podID="75a85372-d728-4770-8639-fb6f93e44dab" containerID="13316d63f536bb24e8da99d8b4549245de5750c939335d805eca8680e924a8b9" exitCode=2 Jan 26 13:00:25 crc kubenswrapper[4881]: I0126 13:00:25.712108 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"75a85372-d728-4770-8639-fb6f93e44dab","Type":"ContainerDied","Data":"0b739c791d31690d29f756ab788a47c3852d330dd974ae370c2c5c0a65a3205e"} Jan 26 13:00:25 crc kubenswrapper[4881]: I0126 13:00:25.712220 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"75a85372-d728-4770-8639-fb6f93e44dab","Type":"ContainerDied","Data":"13316d63f536bb24e8da99d8b4549245de5750c939335d805eca8680e924a8b9"} Jan 26 13:00:25 crc kubenswrapper[4881]: I0126 13:00:25.715742 4881 generic.go:334] "Generic (PLEG): container finished" podID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" containerID="7c058347a35682b737f4fed8273f3335b15404e9e17abbca4140d7f0cbd3f241" exitCode=0 Jan 26 13:00:25 crc kubenswrapper[4881]: I0126 13:00:25.715850 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" event={"ID":"ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19","Type":"ContainerDied","Data":"7c058347a35682b737f4fed8273f3335b15404e9e17abbca4140d7f0cbd3f241"} Jan 26 13:00:25 crc kubenswrapper[4881]: I0126 13:00:25.715904 4881 scope.go:117] "RemoveContainer" containerID="765ce5b5bd8e274c5be74ed55cfedb20d522de35e2e3c381e48dd1db3daac475" Jan 26 13:00:26 crc kubenswrapper[4881]: I0126 13:00:26.101770 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75b1a7cd-6382-458b-8769-7f212bd59bf9" path="/var/lib/kubelet/pods/75b1a7cd-6382-458b-8769-7f212bd59bf9/volumes" Jan 26 13:00:26 crc kubenswrapper[4881]: I0126 13:00:26.282829 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-7fbb4d475f-pr4r7" podUID="75b1a7cd-6382-458b-8769-7f212bd59bf9" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.152:5353: i/o timeout" Jan 26 13:00:26 crc kubenswrapper[4881]: I0126 13:00:26.723458 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-7b9f5df6bf-2dqqf" podUID="256bb8f2-ab6b-4904-bfda-793cd640e966" containerName="barbican-worker-log" containerID="cri-o://f3b942970b170d5c4847e247b14c8b132cdf7adfa28a96ffd9a0478dc60a3194" gracePeriod=30 Jan 26 13:00:26 crc kubenswrapper[4881]: I0126 13:00:26.723586 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-57cc97484d-kbk4q" podUID="c2c1ff4a-7d5a-461f-8e63-67288e6a68e5" containerName="barbican-keystone-listener-log" containerID="cri-o://815f9099b39b94d03c53db1eb1b112bbee889143ea1b04b45e1bb8f3ef1de312" gracePeriod=30 Jan 26 13:00:26 crc kubenswrapper[4881]: I0126 13:00:26.723640 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-57cc97484d-kbk4q" podUID="c2c1ff4a-7d5a-461f-8e63-67288e6a68e5" containerName="barbican-keystone-listener" 
containerID="cri-o://9ba8c0c45d43343a09aa32e2ee05907001d1f5c9fc7daaca5f1fc0d063fc6987" gracePeriod=30 Jan 26 13:00:26 crc kubenswrapper[4881]: I0126 13:00:26.723658 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-7b9f5df6bf-2dqqf" podUID="256bb8f2-ab6b-4904-bfda-793cd640e966" containerName="barbican-worker" containerID="cri-o://c3ca1ec76a11b35765ff128d491bc9e5511933564fd375c31d76e332f7aed4ab" gracePeriod=30 Jan 26 13:00:27 crc kubenswrapper[4881]: I0126 13:00:27.517904 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 13:00:27 crc kubenswrapper[4881]: I0126 13:00:27.524951 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fbqg4\" (UniqueName: \"kubernetes.io/projected/75a85372-d728-4770-8639-fb6f93e44dab-kube-api-access-fbqg4\") pod \"75a85372-d728-4770-8639-fb6f93e44dab\" (UID: \"75a85372-d728-4770-8639-fb6f93e44dab\") " Jan 26 13:00:27 crc kubenswrapper[4881]: I0126 13:00:27.525024 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/75a85372-d728-4770-8639-fb6f93e44dab-run-httpd\") pod \"75a85372-d728-4770-8639-fb6f93e44dab\" (UID: \"75a85372-d728-4770-8639-fb6f93e44dab\") " Jan 26 13:00:27 crc kubenswrapper[4881]: I0126 13:00:27.525061 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75a85372-d728-4770-8639-fb6f93e44dab-scripts\") pod \"75a85372-d728-4770-8639-fb6f93e44dab\" (UID: \"75a85372-d728-4770-8639-fb6f93e44dab\") " Jan 26 13:00:27 crc kubenswrapper[4881]: I0126 13:00:27.525125 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75a85372-d728-4770-8639-fb6f93e44dab-combined-ca-bundle\") pod \"75a85372-d728-4770-8639-fb6f93e44dab\" (UID: \"75a85372-d728-4770-8639-fb6f93e44dab\") " Jan 26 13:00:27 crc kubenswrapper[4881]: I0126 13:00:27.525158 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/75a85372-d728-4770-8639-fb6f93e44dab-log-httpd\") pod \"75a85372-d728-4770-8639-fb6f93e44dab\" (UID: \"75a85372-d728-4770-8639-fb6f93e44dab\") " Jan 26 13:00:27 crc kubenswrapper[4881]: I0126 13:00:27.525189 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/75a85372-d728-4770-8639-fb6f93e44dab-sg-core-conf-yaml\") pod \"75a85372-d728-4770-8639-fb6f93e44dab\" (UID: \"75a85372-d728-4770-8639-fb6f93e44dab\") " Jan 26 13:00:27 crc kubenswrapper[4881]: I0126 13:00:27.525285 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75a85372-d728-4770-8639-fb6f93e44dab-config-data\") pod \"75a85372-d728-4770-8639-fb6f93e44dab\" (UID: \"75a85372-d728-4770-8639-fb6f93e44dab\") " Jan 26 13:00:27 crc kubenswrapper[4881]: I0126 13:00:27.528188 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/75a85372-d728-4770-8639-fb6f93e44dab-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "75a85372-d728-4770-8639-fb6f93e44dab" (UID: "75a85372-d728-4770-8639-fb6f93e44dab"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:00:27 crc kubenswrapper[4881]: I0126 13:00:27.529163 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/75a85372-d728-4770-8639-fb6f93e44dab-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "75a85372-d728-4770-8639-fb6f93e44dab" (UID: "75a85372-d728-4770-8639-fb6f93e44dab"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:00:27 crc kubenswrapper[4881]: I0126 13:00:27.540294 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75a85372-d728-4770-8639-fb6f93e44dab-kube-api-access-fbqg4" (OuterVolumeSpecName: "kube-api-access-fbqg4") pod "75a85372-d728-4770-8639-fb6f93e44dab" (UID: "75a85372-d728-4770-8639-fb6f93e44dab"). InnerVolumeSpecName "kube-api-access-fbqg4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:00:27 crc kubenswrapper[4881]: I0126 13:00:27.557586 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75a85372-d728-4770-8639-fb6f93e44dab-scripts" (OuterVolumeSpecName: "scripts") pod "75a85372-d728-4770-8639-fb6f93e44dab" (UID: "75a85372-d728-4770-8639-fb6f93e44dab"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:00:27 crc kubenswrapper[4881]: I0126 13:00:27.557649 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-76ddf7d98c-5pqnx"] Jan 26 13:00:27 crc kubenswrapper[4881]: I0126 13:00:27.627136 4881 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/75a85372-d728-4770-8639-fb6f93e44dab-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 13:00:27 crc kubenswrapper[4881]: I0126 13:00:27.627164 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fbqg4\" (UniqueName: \"kubernetes.io/projected/75a85372-d728-4770-8639-fb6f93e44dab-kube-api-access-fbqg4\") on node \"crc\" DevicePath \"\"" Jan 26 13:00:27 crc kubenswrapper[4881]: I0126 13:00:27.627175 4881 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/75a85372-d728-4770-8639-fb6f93e44dab-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 13:00:27 crc kubenswrapper[4881]: I0126 13:00:27.627184 4881 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75a85372-d728-4770-8639-fb6f93e44dab-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 13:00:27 crc kubenswrapper[4881]: I0126 13:00:27.636774 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75a85372-d728-4770-8639-fb6f93e44dab-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "75a85372-d728-4770-8639-fb6f93e44dab" (UID: "75a85372-d728-4770-8639-fb6f93e44dab"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:00:27 crc kubenswrapper[4881]: I0126 13:00:27.730337 4881 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/75a85372-d728-4770-8639-fb6f93e44dab-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 26 13:00:27 crc kubenswrapper[4881]: I0126 13:00:27.741353 4881 generic.go:334] "Generic (PLEG): container finished" podID="75a85372-d728-4770-8639-fb6f93e44dab" containerID="d46f7cf7161199441d7c9c8933fc21a648a05c708a708f826b848e0af651a0d7" exitCode=0 Jan 26 13:00:27 crc kubenswrapper[4881]: I0126 13:00:27.747241 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 13:00:27 crc kubenswrapper[4881]: I0126 13:00:27.748433 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"75a85372-d728-4770-8639-fb6f93e44dab","Type":"ContainerDied","Data":"d46f7cf7161199441d7c9c8933fc21a648a05c708a708f826b848e0af651a0d7"} Jan 26 13:00:27 crc kubenswrapper[4881]: I0126 13:00:27.748470 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"75a85372-d728-4770-8639-fb6f93e44dab","Type":"ContainerDied","Data":"504430f782e111a755db2193f6f9f67548cd0d103b8ffbf9239634848e993d2d"} Jan 26 13:00:27 crc kubenswrapper[4881]: I0126 13:00:27.748487 4881 scope.go:117] "RemoveContainer" containerID="0b739c791d31690d29f756ab788a47c3852d330dd974ae370c2c5c0a65a3205e" Jan 26 13:00:27 crc kubenswrapper[4881]: I0126 13:00:27.756691 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76ddf7d98c-5pqnx" event={"ID":"0172226c-65c1-4419-a039-aa7a84642c0e","Type":"ContainerStarted","Data":"dac955abb9be3fa1e11a2b83cc7991fac0fe6f9197065aa06d2f05634b262dd7"} Jan 26 13:00:27 crc kubenswrapper[4881]: I0126 13:00:27.758893 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 26 13:00:27 crc kubenswrapper[4881]: I0126 13:00:27.765438 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 26 13:00:27 crc kubenswrapper[4881]: I0126 13:00:27.766132 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" event={"ID":"ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19","Type":"ContainerStarted","Data":"a161c3072a6f7e76b03b9d4b408880fa8105ae8ab9bede27b8975663150557e3"} Jan 26 13:00:27 crc kubenswrapper[4881]: I0126 13:00:27.773449 4881 generic.go:334] "Generic (PLEG): container finished" podID="c2c1ff4a-7d5a-461f-8e63-67288e6a68e5" containerID="9ba8c0c45d43343a09aa32e2ee05907001d1f5c9fc7daaca5f1fc0d063fc6987" exitCode=0 Jan 26 13:00:27 crc kubenswrapper[4881]: I0126 13:00:27.773470 4881 generic.go:334] "Generic (PLEG): container finished" podID="c2c1ff4a-7d5a-461f-8e63-67288e6a68e5" containerID="815f9099b39b94d03c53db1eb1b112bbee889143ea1b04b45e1bb8f3ef1de312" exitCode=143 Jan 26 13:00:27 crc kubenswrapper[4881]: I0126 13:00:27.773533 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-57cc97484d-kbk4q" event={"ID":"c2c1ff4a-7d5a-461f-8e63-67288e6a68e5","Type":"ContainerDied","Data":"9ba8c0c45d43343a09aa32e2ee05907001d1f5c9fc7daaca5f1fc0d063fc6987"} Jan 26 13:00:27 crc kubenswrapper[4881]: I0126 13:00:27.773552 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-57cc97484d-kbk4q" 
event={"ID":"c2c1ff4a-7d5a-461f-8e63-67288e6a68e5","Type":"ContainerDied","Data":"815f9099b39b94d03c53db1eb1b112bbee889143ea1b04b45e1bb8f3ef1de312"} Jan 26 13:00:27 crc kubenswrapper[4881]: W0126 13:00:27.773617 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7c9354d9_107d_401f_acb1_a80971c2adcc.slice/crio-572d147f3fb2888d7ed0c41f790c3fdbe5fdd45f7662460a4c91c7b996b3c844 WatchSource:0}: Error finding container 572d147f3fb2888d7ed0c41f790c3fdbe5fdd45f7662460a4c91c7b996b3c844: Status 404 returned error can't find the container with id 572d147f3fb2888d7ed0c41f790c3fdbe5fdd45f7662460a4c91c7b996b3c844 Jan 26 13:00:27 crc kubenswrapper[4881]: I0126 13:00:27.775045 4881 generic.go:334] "Generic (PLEG): container finished" podID="256bb8f2-ab6b-4904-bfda-793cd640e966" containerID="c3ca1ec76a11b35765ff128d491bc9e5511933564fd375c31d76e332f7aed4ab" exitCode=0 Jan 26 13:00:27 crc kubenswrapper[4881]: I0126 13:00:27.775062 4881 generic.go:334] "Generic (PLEG): container finished" podID="256bb8f2-ab6b-4904-bfda-793cd640e966" containerID="f3b942970b170d5c4847e247b14c8b132cdf7adfa28a96ffd9a0478dc60a3194" exitCode=143 Jan 26 13:00:27 crc kubenswrapper[4881]: I0126 13:00:27.775076 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7b9f5df6bf-2dqqf" event={"ID":"256bb8f2-ab6b-4904-bfda-793cd640e966","Type":"ContainerDied","Data":"c3ca1ec76a11b35765ff128d491bc9e5511933564fd375c31d76e332f7aed4ab"} Jan 26 13:00:27 crc kubenswrapper[4881]: I0126 13:00:27.775089 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7b9f5df6bf-2dqqf" event={"ID":"256bb8f2-ab6b-4904-bfda-793cd640e966","Type":"ContainerDied","Data":"f3b942970b170d5c4847e247b14c8b132cdf7adfa28a96ffd9a0478dc60a3194"} Jan 26 13:00:27 crc kubenswrapper[4881]: I0126 13:00:27.781279 4881 scope.go:117] "RemoveContainer" containerID="13316d63f536bb24e8da99d8b4549245de5750c939335d805eca8680e924a8b9" Jan 26 13:00:27 crc kubenswrapper[4881]: I0126 13:00:27.845880 4881 scope.go:117] "RemoveContainer" containerID="d46f7cf7161199441d7c9c8933fc21a648a05c708a708f826b848e0af651a0d7" Jan 26 13:00:27 crc kubenswrapper[4881]: I0126 13:00:27.875050 4881 scope.go:117] "RemoveContainer" containerID="0b739c791d31690d29f756ab788a47c3852d330dd974ae370c2c5c0a65a3205e" Jan 26 13:00:27 crc kubenswrapper[4881]: I0126 13:00:27.881821 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75a85372-d728-4770-8639-fb6f93e44dab-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "75a85372-d728-4770-8639-fb6f93e44dab" (UID: "75a85372-d728-4770-8639-fb6f93e44dab"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:00:27 crc kubenswrapper[4881]: E0126 13:00:27.881917 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b739c791d31690d29f756ab788a47c3852d330dd974ae370c2c5c0a65a3205e\": container with ID starting with 0b739c791d31690d29f756ab788a47c3852d330dd974ae370c2c5c0a65a3205e not found: ID does not exist" containerID="0b739c791d31690d29f756ab788a47c3852d330dd974ae370c2c5c0a65a3205e" Jan 26 13:00:27 crc kubenswrapper[4881]: I0126 13:00:27.882029 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b739c791d31690d29f756ab788a47c3852d330dd974ae370c2c5c0a65a3205e"} err="failed to get container status \"0b739c791d31690d29f756ab788a47c3852d330dd974ae370c2c5c0a65a3205e\": rpc error: code = NotFound desc = could not find container \"0b739c791d31690d29f756ab788a47c3852d330dd974ae370c2c5c0a65a3205e\": container with ID starting with 0b739c791d31690d29f756ab788a47c3852d330dd974ae370c2c5c0a65a3205e not found: ID does not exist" Jan 26 13:00:27 crc kubenswrapper[4881]: I0126 13:00:27.882059 4881 scope.go:117] "RemoveContainer" containerID="13316d63f536bb24e8da99d8b4549245de5750c939335d805eca8680e924a8b9" Jan 26 13:00:27 crc kubenswrapper[4881]: E0126 13:00:27.886212 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"13316d63f536bb24e8da99d8b4549245de5750c939335d805eca8680e924a8b9\": container with ID starting with 13316d63f536bb24e8da99d8b4549245de5750c939335d805eca8680e924a8b9 not found: ID does not exist" containerID="13316d63f536bb24e8da99d8b4549245de5750c939335d805eca8680e924a8b9" Jan 26 13:00:27 crc kubenswrapper[4881]: I0126 13:00:27.886246 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13316d63f536bb24e8da99d8b4549245de5750c939335d805eca8680e924a8b9"} err="failed to get container status \"13316d63f536bb24e8da99d8b4549245de5750c939335d805eca8680e924a8b9\": rpc error: code = NotFound desc = could not find container \"13316d63f536bb24e8da99d8b4549245de5750c939335d805eca8680e924a8b9\": container with ID starting with 13316d63f536bb24e8da99d8b4549245de5750c939335d805eca8680e924a8b9 not found: ID does not exist" Jan 26 13:00:27 crc kubenswrapper[4881]: I0126 13:00:27.886269 4881 scope.go:117] "RemoveContainer" containerID="d46f7cf7161199441d7c9c8933fc21a648a05c708a708f826b848e0af651a0d7" Jan 26 13:00:27 crc kubenswrapper[4881]: E0126 13:00:27.893684 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d46f7cf7161199441d7c9c8933fc21a648a05c708a708f826b848e0af651a0d7\": container with ID starting with d46f7cf7161199441d7c9c8933fc21a648a05c708a708f826b848e0af651a0d7 not found: ID does not exist" containerID="d46f7cf7161199441d7c9c8933fc21a648a05c708a708f826b848e0af651a0d7" Jan 26 13:00:27 crc kubenswrapper[4881]: I0126 13:00:27.893726 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d46f7cf7161199441d7c9c8933fc21a648a05c708a708f826b848e0af651a0d7"} err="failed to get container status \"d46f7cf7161199441d7c9c8933fc21a648a05c708a708f826b848e0af651a0d7\": rpc error: code = NotFound desc = could not find container \"d46f7cf7161199441d7c9c8933fc21a648a05c708a708f826b848e0af651a0d7\": container with ID starting with d46f7cf7161199441d7c9c8933fc21a648a05c708a708f826b848e0af651a0d7 
not found: ID does not exist" Jan 26 13:00:27 crc kubenswrapper[4881]: I0126 13:00:27.907246 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-7b9f5df6bf-2dqqf" Jan 26 13:00:27 crc kubenswrapper[4881]: I0126 13:00:27.917365 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75a85372-d728-4770-8639-fb6f93e44dab-config-data" (OuterVolumeSpecName: "config-data") pod "75a85372-d728-4770-8639-fb6f93e44dab" (UID: "75a85372-d728-4770-8639-fb6f93e44dab"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:00:27 crc kubenswrapper[4881]: I0126 13:00:27.936148 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nfg9q\" (UniqueName: \"kubernetes.io/projected/256bb8f2-ab6b-4904-bfda-793cd640e966-kube-api-access-nfg9q\") pod \"256bb8f2-ab6b-4904-bfda-793cd640e966\" (UID: \"256bb8f2-ab6b-4904-bfda-793cd640e966\") " Jan 26 13:00:27 crc kubenswrapper[4881]: I0126 13:00:27.936272 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/256bb8f2-ab6b-4904-bfda-793cd640e966-logs\") pod \"256bb8f2-ab6b-4904-bfda-793cd640e966\" (UID: \"256bb8f2-ab6b-4904-bfda-793cd640e966\") " Jan 26 13:00:27 crc kubenswrapper[4881]: I0126 13:00:27.936341 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/256bb8f2-ab6b-4904-bfda-793cd640e966-config-data\") pod \"256bb8f2-ab6b-4904-bfda-793cd640e966\" (UID: \"256bb8f2-ab6b-4904-bfda-793cd640e966\") " Jan 26 13:00:27 crc kubenswrapper[4881]: I0126 13:00:27.936365 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/256bb8f2-ab6b-4904-bfda-793cd640e966-combined-ca-bundle\") pod \"256bb8f2-ab6b-4904-bfda-793cd640e966\" (UID: \"256bb8f2-ab6b-4904-bfda-793cd640e966\") " Jan 26 13:00:27 crc kubenswrapper[4881]: I0126 13:00:27.936426 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/256bb8f2-ab6b-4904-bfda-793cd640e966-config-data-custom\") pod \"256bb8f2-ab6b-4904-bfda-793cd640e966\" (UID: \"256bb8f2-ab6b-4904-bfda-793cd640e966\") " Jan 26 13:00:27 crc kubenswrapper[4881]: I0126 13:00:27.937194 4881 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75a85372-d728-4770-8639-fb6f93e44dab-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 13:00:27 crc kubenswrapper[4881]: I0126 13:00:27.937223 4881 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75a85372-d728-4770-8639-fb6f93e44dab-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 13:00:27 crc kubenswrapper[4881]: I0126 13:00:27.938780 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/256bb8f2-ab6b-4904-bfda-793cd640e966-logs" (OuterVolumeSpecName: "logs") pod "256bb8f2-ab6b-4904-bfda-793cd640e966" (UID: "256bb8f2-ab6b-4904-bfda-793cd640e966"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:00:27 crc kubenswrapper[4881]: I0126 13:00:27.941572 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/256bb8f2-ab6b-4904-bfda-793cd640e966-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "256bb8f2-ab6b-4904-bfda-793cd640e966" (UID: "256bb8f2-ab6b-4904-bfda-793cd640e966"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:00:27 crc kubenswrapper[4881]: I0126 13:00:27.956000 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/256bb8f2-ab6b-4904-bfda-793cd640e966-kube-api-access-nfg9q" (OuterVolumeSpecName: "kube-api-access-nfg9q") pod "256bb8f2-ab6b-4904-bfda-793cd640e966" (UID: "256bb8f2-ab6b-4904-bfda-793cd640e966"). InnerVolumeSpecName "kube-api-access-nfg9q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:00:28 crc kubenswrapper[4881]: I0126 13:00:28.017549 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/256bb8f2-ab6b-4904-bfda-793cd640e966-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "256bb8f2-ab6b-4904-bfda-793cd640e966" (UID: "256bb8f2-ab6b-4904-bfda-793cd640e966"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:00:28 crc kubenswrapper[4881]: I0126 13:00:28.039563 4881 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/256bb8f2-ab6b-4904-bfda-793cd640e966-logs\") on node \"crc\" DevicePath \"\"" Jan 26 13:00:28 crc kubenswrapper[4881]: I0126 13:00:28.039597 4881 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/256bb8f2-ab6b-4904-bfda-793cd640e966-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 13:00:28 crc kubenswrapper[4881]: I0126 13:00:28.039609 4881 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/256bb8f2-ab6b-4904-bfda-793cd640e966-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 26 13:00:28 crc kubenswrapper[4881]: I0126 13:00:28.039618 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nfg9q\" (UniqueName: \"kubernetes.io/projected/256bb8f2-ab6b-4904-bfda-793cd640e966-kube-api-access-nfg9q\") on node \"crc\" DevicePath \"\"" Jan 26 13:00:28 crc kubenswrapper[4881]: I0126 13:00:28.065730 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/256bb8f2-ab6b-4904-bfda-793cd640e966-config-data" (OuterVolumeSpecName: "config-data") pod "256bb8f2-ab6b-4904-bfda-793cd640e966" (UID: "256bb8f2-ab6b-4904-bfda-793cd640e966"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:00:28 crc kubenswrapper[4881]: I0126 13:00:28.069382 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-57cc97484d-kbk4q" Jan 26 13:00:28 crc kubenswrapper[4881]: I0126 13:00:28.167186 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 26 13:00:28 crc kubenswrapper[4881]: I0126 13:00:28.167570 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2c1ff4a-7d5a-461f-8e63-67288e6a68e5-combined-ca-bundle\") pod \"c2c1ff4a-7d5a-461f-8e63-67288e6a68e5\" (UID: \"c2c1ff4a-7d5a-461f-8e63-67288e6a68e5\") " Jan 26 13:00:28 crc kubenswrapper[4881]: I0126 13:00:28.167683 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c2c1ff4a-7d5a-461f-8e63-67288e6a68e5-logs\") pod \"c2c1ff4a-7d5a-461f-8e63-67288e6a68e5\" (UID: \"c2c1ff4a-7d5a-461f-8e63-67288e6a68e5\") " Jan 26 13:00:28 crc kubenswrapper[4881]: I0126 13:00:28.167791 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c2c1ff4a-7d5a-461f-8e63-67288e6a68e5-config-data-custom\") pod \"c2c1ff4a-7d5a-461f-8e63-67288e6a68e5\" (UID: \"c2c1ff4a-7d5a-461f-8e63-67288e6a68e5\") " Jan 26 13:00:28 crc kubenswrapper[4881]: I0126 13:00:28.168006 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2c1ff4a-7d5a-461f-8e63-67288e6a68e5-config-data\") pod \"c2c1ff4a-7d5a-461f-8e63-67288e6a68e5\" (UID: \"c2c1ff4a-7d5a-461f-8e63-67288e6a68e5\") " Jan 26 13:00:28 crc kubenswrapper[4881]: I0126 13:00:28.168141 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b7fqt\" (UniqueName: \"kubernetes.io/projected/c2c1ff4a-7d5a-461f-8e63-67288e6a68e5-kube-api-access-b7fqt\") pod \"c2c1ff4a-7d5a-461f-8e63-67288e6a68e5\" (UID: \"c2c1ff4a-7d5a-461f-8e63-67288e6a68e5\") " Jan 26 13:00:28 crc kubenswrapper[4881]: I0126 13:00:28.169106 4881 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/256bb8f2-ab6b-4904-bfda-793cd640e966-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 13:00:28 crc kubenswrapper[4881]: I0126 13:00:28.170398 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c2c1ff4a-7d5a-461f-8e63-67288e6a68e5-logs" (OuterVolumeSpecName: "logs") pod "c2c1ff4a-7d5a-461f-8e63-67288e6a68e5" (UID: "c2c1ff4a-7d5a-461f-8e63-67288e6a68e5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:00:28 crc kubenswrapper[4881]: I0126 13:00:28.196872 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2c1ff4a-7d5a-461f-8e63-67288e6a68e5-kube-api-access-b7fqt" (OuterVolumeSpecName: "kube-api-access-b7fqt") pod "c2c1ff4a-7d5a-461f-8e63-67288e6a68e5" (UID: "c2c1ff4a-7d5a-461f-8e63-67288e6a68e5"). InnerVolumeSpecName "kube-api-access-b7fqt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:00:28 crc kubenswrapper[4881]: I0126 13:00:28.274574 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 13:00:28 crc kubenswrapper[4881]: I0126 13:00:28.276244 4881 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c2c1ff4a-7d5a-461f-8e63-67288e6a68e5-logs\") on node \"crc\" DevicePath \"\"" Jan 26 13:00:28 crc kubenswrapper[4881]: I0126 13:00:28.276269 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b7fqt\" (UniqueName: \"kubernetes.io/projected/c2c1ff4a-7d5a-461f-8e63-67288e6a68e5-kube-api-access-b7fqt\") on node \"crc\" DevicePath \"\"" Jan 26 13:00:28 crc kubenswrapper[4881]: I0126 13:00:28.276716 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2c1ff4a-7d5a-461f-8e63-67288e6a68e5-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "c2c1ff4a-7d5a-461f-8e63-67288e6a68e5" (UID: "c2c1ff4a-7d5a-461f-8e63-67288e6a68e5"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:00:28 crc kubenswrapper[4881]: I0126 13:00:28.329498 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 26 13:00:28 crc kubenswrapper[4881]: I0126 13:00:28.370915 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 26 13:00:28 crc kubenswrapper[4881]: E0126 13:00:28.371376 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75a85372-d728-4770-8639-fb6f93e44dab" containerName="ceilometer-notification-agent" Jan 26 13:00:28 crc kubenswrapper[4881]: I0126 13:00:28.371391 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="75a85372-d728-4770-8639-fb6f93e44dab" containerName="ceilometer-notification-agent" Jan 26 13:00:28 crc kubenswrapper[4881]: E0126 13:00:28.371411 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="256bb8f2-ab6b-4904-bfda-793cd640e966" containerName="barbican-worker-log" Jan 26 13:00:28 crc kubenswrapper[4881]: I0126 13:00:28.371417 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="256bb8f2-ab6b-4904-bfda-793cd640e966" containerName="barbican-worker-log" Jan 26 13:00:28 crc kubenswrapper[4881]: E0126 13:00:28.371429 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="256bb8f2-ab6b-4904-bfda-793cd640e966" containerName="barbican-worker" Jan 26 13:00:28 crc kubenswrapper[4881]: I0126 13:00:28.371437 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="256bb8f2-ab6b-4904-bfda-793cd640e966" containerName="barbican-worker" Jan 26 13:00:28 crc kubenswrapper[4881]: E0126 13:00:28.371449 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75a85372-d728-4770-8639-fb6f93e44dab" containerName="sg-core" Jan 26 13:00:28 crc kubenswrapper[4881]: I0126 13:00:28.371455 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="75a85372-d728-4770-8639-fb6f93e44dab" containerName="sg-core" Jan 26 13:00:28 crc kubenswrapper[4881]: E0126 13:00:28.371468 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75a85372-d728-4770-8639-fb6f93e44dab" containerName="proxy-httpd" Jan 26 13:00:28 crc kubenswrapper[4881]: I0126 13:00:28.371474 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="75a85372-d728-4770-8639-fb6f93e44dab" containerName="proxy-httpd" Jan 26 13:00:28 crc kubenswrapper[4881]: E0126 13:00:28.371487 4881 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2c1ff4a-7d5a-461f-8e63-67288e6a68e5" containerName="barbican-keystone-listener" Jan 26 13:00:28 crc kubenswrapper[4881]: I0126 13:00:28.371495 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2c1ff4a-7d5a-461f-8e63-67288e6a68e5" containerName="barbican-keystone-listener" Jan 26 13:00:28 crc kubenswrapper[4881]: E0126 13:00:28.371509 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2c1ff4a-7d5a-461f-8e63-67288e6a68e5" containerName="barbican-keystone-listener-log" Jan 26 13:00:28 crc kubenswrapper[4881]: I0126 13:00:28.371530 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2c1ff4a-7d5a-461f-8e63-67288e6a68e5" containerName="barbican-keystone-listener-log" Jan 26 13:00:28 crc kubenswrapper[4881]: I0126 13:00:28.371738 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="75a85372-d728-4770-8639-fb6f93e44dab" containerName="sg-core" Jan 26 13:00:28 crc kubenswrapper[4881]: I0126 13:00:28.372191 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="75a85372-d728-4770-8639-fb6f93e44dab" containerName="proxy-httpd" Jan 26 13:00:28 crc kubenswrapper[4881]: I0126 13:00:28.372204 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="75a85372-d728-4770-8639-fb6f93e44dab" containerName="ceilometer-notification-agent" Jan 26 13:00:28 crc kubenswrapper[4881]: I0126 13:00:28.372213 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="256bb8f2-ab6b-4904-bfda-793cd640e966" containerName="barbican-worker-log" Jan 26 13:00:28 crc kubenswrapper[4881]: I0126 13:00:28.372228 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2c1ff4a-7d5a-461f-8e63-67288e6a68e5" containerName="barbican-keystone-listener" Jan 26 13:00:28 crc kubenswrapper[4881]: I0126 13:00:28.372236 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2c1ff4a-7d5a-461f-8e63-67288e6a68e5" containerName="barbican-keystone-listener-log" Jan 26 13:00:28 crc kubenswrapper[4881]: I0126 13:00:28.372270 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="256bb8f2-ab6b-4904-bfda-793cd640e966" containerName="barbican-worker" Jan 26 13:00:28 crc kubenswrapper[4881]: I0126 13:00:28.375296 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 13:00:28 crc kubenswrapper[4881]: I0126 13:00:28.378036 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 13:00:28 crc kubenswrapper[4881]: I0126 13:00:28.384911 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 26 13:00:28 crc kubenswrapper[4881]: I0126 13:00:28.385060 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 26 13:00:28 crc kubenswrapper[4881]: I0126 13:00:28.387166 4881 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c2c1ff4a-7d5a-461f-8e63-67288e6a68e5-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 26 13:00:28 crc kubenswrapper[4881]: I0126 13:00:28.430737 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2c1ff4a-7d5a-461f-8e63-67288e6a68e5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c2c1ff4a-7d5a-461f-8e63-67288e6a68e5" (UID: "c2c1ff4a-7d5a-461f-8e63-67288e6a68e5"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:00:28 crc kubenswrapper[4881]: I0126 13:00:28.477445 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2c1ff4a-7d5a-461f-8e63-67288e6a68e5-config-data" (OuterVolumeSpecName: "config-data") pod "c2c1ff4a-7d5a-461f-8e63-67288e6a68e5" (UID: "c2c1ff4a-7d5a-461f-8e63-67288e6a68e5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:00:28 crc kubenswrapper[4881]: I0126 13:00:28.488394 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9df4625b-a11a-4524-890a-48e2307edddb-log-httpd\") pod \"ceilometer-0\" (UID: \"9df4625b-a11a-4524-890a-48e2307edddb\") " pod="openstack/ceilometer-0" Jan 26 13:00:28 crc kubenswrapper[4881]: I0126 13:00:28.489307 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9df4625b-a11a-4524-890a-48e2307edddb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9df4625b-a11a-4524-890a-48e2307edddb\") " pod="openstack/ceilometer-0" Jan 26 13:00:28 crc kubenswrapper[4881]: I0126 13:00:28.491973 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9df4625b-a11a-4524-890a-48e2307edddb-run-httpd\") pod \"ceilometer-0\" (UID: \"9df4625b-a11a-4524-890a-48e2307edddb\") " pod="openstack/ceilometer-0" Jan 26 13:00:28 crc kubenswrapper[4881]: I0126 13:00:28.492205 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9df4625b-a11a-4524-890a-48e2307edddb-scripts\") pod \"ceilometer-0\" (UID: \"9df4625b-a11a-4524-890a-48e2307edddb\") " pod="openstack/ceilometer-0" Jan 26 13:00:28 crc kubenswrapper[4881]: I0126 13:00:28.492356 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9df4625b-a11a-4524-890a-48e2307edddb-config-data\") pod \"ceilometer-0\" (UID: \"9df4625b-a11a-4524-890a-48e2307edddb\") " pod="openstack/ceilometer-0" Jan 26 13:00:28 crc kubenswrapper[4881]: I0126 13:00:28.492463 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qsf4\" (UniqueName: \"kubernetes.io/projected/9df4625b-a11a-4524-890a-48e2307edddb-kube-api-access-9qsf4\") pod \"ceilometer-0\" (UID: \"9df4625b-a11a-4524-890a-48e2307edddb\") " pod="openstack/ceilometer-0" Jan 26 13:00:28 crc kubenswrapper[4881]: I0126 13:00:28.492562 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9df4625b-a11a-4524-890a-48e2307edddb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9df4625b-a11a-4524-890a-48e2307edddb\") " pod="openstack/ceilometer-0" Jan 26 13:00:28 crc kubenswrapper[4881]: I0126 13:00:28.493610 4881 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2c1ff4a-7d5a-461f-8e63-67288e6a68e5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 13:00:28 crc kubenswrapper[4881]: I0126 13:00:28.493702 4881 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/c2c1ff4a-7d5a-461f-8e63-67288e6a68e5-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 13:00:28 crc kubenswrapper[4881]: I0126 13:00:28.595092 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9df4625b-a11a-4524-890a-48e2307edddb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9df4625b-a11a-4524-890a-48e2307edddb\") " pod="openstack/ceilometer-0" Jan 26 13:00:28 crc kubenswrapper[4881]: I0126 13:00:28.595157 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9df4625b-a11a-4524-890a-48e2307edddb-run-httpd\") pod \"ceilometer-0\" (UID: \"9df4625b-a11a-4524-890a-48e2307edddb\") " pod="openstack/ceilometer-0" Jan 26 13:00:28 crc kubenswrapper[4881]: I0126 13:00:28.595221 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9df4625b-a11a-4524-890a-48e2307edddb-scripts\") pod \"ceilometer-0\" (UID: \"9df4625b-a11a-4524-890a-48e2307edddb\") " pod="openstack/ceilometer-0" Jan 26 13:00:28 crc kubenswrapper[4881]: I0126 13:00:28.595261 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9df4625b-a11a-4524-890a-48e2307edddb-config-data\") pod \"ceilometer-0\" (UID: \"9df4625b-a11a-4524-890a-48e2307edddb\") " pod="openstack/ceilometer-0" Jan 26 13:00:28 crc kubenswrapper[4881]: I0126 13:00:28.595287 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qsf4\" (UniqueName: \"kubernetes.io/projected/9df4625b-a11a-4524-890a-48e2307edddb-kube-api-access-9qsf4\") pod \"ceilometer-0\" (UID: \"9df4625b-a11a-4524-890a-48e2307edddb\") " pod="openstack/ceilometer-0" Jan 26 13:00:28 crc kubenswrapper[4881]: I0126 13:00:28.595311 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9df4625b-a11a-4524-890a-48e2307edddb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9df4625b-a11a-4524-890a-48e2307edddb\") " pod="openstack/ceilometer-0" Jan 26 13:00:28 crc kubenswrapper[4881]: I0126 13:00:28.595330 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9df4625b-a11a-4524-890a-48e2307edddb-log-httpd\") pod \"ceilometer-0\" (UID: \"9df4625b-a11a-4524-890a-48e2307edddb\") " pod="openstack/ceilometer-0" Jan 26 13:00:28 crc kubenswrapper[4881]: I0126 13:00:28.595787 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9df4625b-a11a-4524-890a-48e2307edddb-log-httpd\") pod \"ceilometer-0\" (UID: \"9df4625b-a11a-4524-890a-48e2307edddb\") " pod="openstack/ceilometer-0" Jan 26 13:00:28 crc kubenswrapper[4881]: I0126 13:00:28.595865 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9df4625b-a11a-4524-890a-48e2307edddb-run-httpd\") pod \"ceilometer-0\" (UID: \"9df4625b-a11a-4524-890a-48e2307edddb\") " pod="openstack/ceilometer-0" Jan 26 13:00:28 crc kubenswrapper[4881]: I0126 13:00:28.599791 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9df4625b-a11a-4524-890a-48e2307edddb-scripts\") pod \"ceilometer-0\" (UID: 
\"9df4625b-a11a-4524-890a-48e2307edddb\") " pod="openstack/ceilometer-0" Jan 26 13:00:28 crc kubenswrapper[4881]: I0126 13:00:28.599911 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9df4625b-a11a-4524-890a-48e2307edddb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9df4625b-a11a-4524-890a-48e2307edddb\") " pod="openstack/ceilometer-0" Jan 26 13:00:28 crc kubenswrapper[4881]: I0126 13:00:28.603945 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9df4625b-a11a-4524-890a-48e2307edddb-config-data\") pod \"ceilometer-0\" (UID: \"9df4625b-a11a-4524-890a-48e2307edddb\") " pod="openstack/ceilometer-0" Jan 26 13:00:28 crc kubenswrapper[4881]: I0126 13:00:28.608916 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9df4625b-a11a-4524-890a-48e2307edddb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9df4625b-a11a-4524-890a-48e2307edddb\") " pod="openstack/ceilometer-0" Jan 26 13:00:28 crc kubenswrapper[4881]: I0126 13:00:28.610998 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qsf4\" (UniqueName: \"kubernetes.io/projected/9df4625b-a11a-4524-890a-48e2307edddb-kube-api-access-9qsf4\") pod \"ceilometer-0\" (UID: \"9df4625b-a11a-4524-890a-48e2307edddb\") " pod="openstack/ceilometer-0" Jan 26 13:00:28 crc kubenswrapper[4881]: I0126 13:00:28.786367 4881 generic.go:334] "Generic (PLEG): container finished" podID="0172226c-65c1-4419-a039-aa7a84642c0e" containerID="4d8d10a4f5c17d2d83795b434692751057de75a9911ddf19e2aafebd038d1929" exitCode=0 Jan 26 13:00:28 crc kubenswrapper[4881]: I0126 13:00:28.786423 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76ddf7d98c-5pqnx" event={"ID":"0172226c-65c1-4419-a039-aa7a84642c0e","Type":"ContainerDied","Data":"4d8d10a4f5c17d2d83795b434692751057de75a9911ddf19e2aafebd038d1929"} Jan 26 13:00:28 crc kubenswrapper[4881]: I0126 13:00:28.794602 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"1b363415-cc56-4505-91f6-f9700b378625","Type":"ContainerStarted","Data":"38acd00efc5f440b71f2750ad4a65885de9a70d06a2f2d97db90791f08312e97"} Jan 26 13:00:28 crc kubenswrapper[4881]: I0126 13:00:28.796963 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"7c9354d9-107d-401f-acb1-a80971c2adcc","Type":"ContainerStarted","Data":"572d147f3fb2888d7ed0c41f790c3fdbe5fdd45f7662460a4c91c7b996b3c844"} Jan 26 13:00:28 crc kubenswrapper[4881]: I0126 13:00:28.804301 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"94aabe79-a699-4980-a344-43c629e34627","Type":"ContainerStarted","Data":"fab7c291153a954de871a493b4c43864b62ad49bb5cf5a924a01b96e9f16c4a3"} Jan 26 13:00:28 crc kubenswrapper[4881]: I0126 13:00:28.847326 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-57cc97484d-kbk4q" Jan 26 13:00:28 crc kubenswrapper[4881]: I0126 13:00:28.847926 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Jan 26 13:00:28 crc kubenswrapper[4881]: I0126 13:00:28.848083 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-57cc97484d-kbk4q" event={"ID":"c2c1ff4a-7d5a-461f-8e63-67288e6a68e5","Type":"ContainerDied","Data":"0e7b7a1fe99ebac0ec03e1907fcee25d845c0512ef39966b85cddab6119832ac"} Jan 26 13:00:28 crc kubenswrapper[4881]: I0126 13:00:28.848123 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5b8956874b-lrmsj" event={"ID":"a3ad04d4-bcff-4ed6-8648-be146e3ce20a","Type":"ContainerStarted","Data":"a0caa53643cf97d39e3de8027a4c28ad8011b35cb69a892248af7ef56ee60f0d"} Jan 26 13:00:28 crc kubenswrapper[4881]: I0126 13:00:28.848135 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"25d5f80d-b0a4-4b8f-a8b7-12f0b7296801","Type":"ContainerStarted","Data":"501e0408c743324c3680db8a35a6855c76a8ac91e1f944852f20676a31152259"} Jan 26 13:00:28 crc kubenswrapper[4881]: I0126 13:00:28.848145 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"dbdf91bd-1981-4d81-bc94-26b6a156aa9e","Type":"ContainerStarted","Data":"cb4c420c32379feb74b6e2f4313ee43db6f3f43a1e2f8064941e987fe67e868f"} Jan 26 13:00:28 crc kubenswrapper[4881]: I0126 13:00:28.848164 4881 scope.go:117] "RemoveContainer" containerID="9ba8c0c45d43343a09aa32e2ee05907001d1f5c9fc7daaca5f1fc0d063fc6987" Jan 26 13:00:28 crc kubenswrapper[4881]: I0126 13:00:28.851278 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-7b9f5df6bf-2dqqf" Jan 26 13:00:28 crc kubenswrapper[4881]: I0126 13:00:28.851568 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7b9f5df6bf-2dqqf" event={"ID":"256bb8f2-ab6b-4904-bfda-793cd640e966","Type":"ContainerDied","Data":"80ef46edf778391f784bce86ee073f10a5e661f08ea3dc2209b973b1e7782427"} Jan 26 13:00:28 crc kubenswrapper[4881]: I0126 13:00:28.851861 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 13:00:28 crc kubenswrapper[4881]: I0126 13:00:28.856005 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-applier-0" podStartSLOduration=6.876962657 podStartE2EDuration="9.855994436s" podCreationTimestamp="2026-01-26 13:00:19 +0000 UTC" firstStartedPulling="2026-01-26 13:00:24.101689776 +0000 UTC m=+1496.580999792" lastFinishedPulling="2026-01-26 13:00:27.080721545 +0000 UTC m=+1499.560031571" observedRunningTime="2026-01-26 13:00:28.847440519 +0000 UTC m=+1501.326750545" watchObservedRunningTime="2026-01-26 13:00:28.855994436 +0000 UTC m=+1501.335304462" Jan 26 13:00:28 crc kubenswrapper[4881]: I0126 13:00:28.904817 4881 scope.go:117] "RemoveContainer" containerID="815f9099b39b94d03c53db1eb1b112bbee889143ea1b04b45e1bb8f3ef1de312" Jan 26 13:00:28 crc kubenswrapper[4881]: I0126 13:00:28.909461 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-api-0" podStartSLOduration=9.909445817 podStartE2EDuration="9.909445817s" podCreationTimestamp="2026-01-26 13:00:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 13:00:28.864084111 +0000 UTC m=+1501.343394137" watchObservedRunningTime="2026-01-26 13:00:28.909445817 +0000 UTC m=+1501.388755843" Jan 26 13:00:28 crc kubenswrapper[4881]: I0126 13:00:28.940425 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-decision-engine-0" podStartSLOduration=7.030729792 podStartE2EDuration="9.940405335s" podCreationTimestamp="2026-01-26 13:00:19 +0000 UTC" firstStartedPulling="2026-01-26 13:00:24.122258503 +0000 UTC m=+1496.601568539" lastFinishedPulling="2026-01-26 13:00:27.031934056 +0000 UTC m=+1499.511244082" observedRunningTime="2026-01-26 13:00:28.879566935 +0000 UTC m=+1501.358876971" watchObservedRunningTime="2026-01-26 13:00:28.940405335 +0000 UTC m=+1501.419715361" Jan 26 13:00:28 crc kubenswrapper[4881]: I0126 13:00:28.954638 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-7b9f5df6bf-2dqqf"] Jan 26 13:00:28 crc kubenswrapper[4881]: I0126 13:00:28.968070 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-5b8956874b-lrmsj" podStartSLOduration=16.968052354 podStartE2EDuration="16.968052354s" podCreationTimestamp="2026-01-26 13:00:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 13:00:28.917164794 +0000 UTC m=+1501.396474830" watchObservedRunningTime="2026-01-26 13:00:28.968052354 +0000 UTC m=+1501.447362370" Jan 26 13:00:28 crc kubenswrapper[4881]: I0126 13:00:28.968927 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-worker-7b9f5df6bf-2dqqf"] Jan 26 13:00:28 crc kubenswrapper[4881]: I0126 13:00:28.973747 4881 scope.go:117] "RemoveContainer" containerID="c3ca1ec76a11b35765ff128d491bc9e5511933564fd375c31d76e332f7aed4ab" Jan 26 13:00:28 crc kubenswrapper[4881]: I0126 13:00:28.987583 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-57cc97484d-kbk4q"] Jan 26 13:00:28 crc kubenswrapper[4881]: I0126 13:00:28.997004 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-keystone-listener-57cc97484d-kbk4q"] Jan 26 13:00:29 crc kubenswrapper[4881]: I0126 13:00:29.025685 4881 scope.go:117] 
"RemoveContainer" containerID="f3b942970b170d5c4847e247b14c8b132cdf7adfa28a96ffd9a0478dc60a3194" Jan 26 13:00:29 crc kubenswrapper[4881]: I0126 13:00:29.503982 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 13:00:29 crc kubenswrapper[4881]: I0126 13:00:29.870204 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"7c9354d9-107d-401f-acb1-a80971c2adcc","Type":"ContainerStarted","Data":"40ff402a56985b1666dcfc8811d8f475690409f2673e05e00beee846c2fcea06"} Jan 26 13:00:29 crc kubenswrapper[4881]: I0126 13:00:29.873267 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76ddf7d98c-5pqnx" event={"ID":"0172226c-65c1-4419-a039-aa7a84642c0e","Type":"ContainerStarted","Data":"684049ef9e9657561d9cc87f14ca15f1202193eec126e530c95736d286039007"} Jan 26 13:00:29 crc kubenswrapper[4881]: I0126 13:00:29.875182 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-76ddf7d98c-5pqnx" Jan 26 13:00:29 crc kubenswrapper[4881]: I0126 13:00:29.883176 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"dbdf91bd-1981-4d81-bc94-26b6a156aa9e","Type":"ContainerStarted","Data":"3e812eee12ffe504c5b25c875dd765f92f74a124f44167e3baf0f9cf97041844"} Jan 26 13:00:29 crc kubenswrapper[4881]: I0126 13:00:29.896103 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9df4625b-a11a-4524-890a-48e2307edddb","Type":"ContainerStarted","Data":"889dd9c30f4e4df926bff6bed7dbf1423d28ea1cac1506bfb53404e078ed03e9"} Jan 26 13:00:29 crc kubenswrapper[4881]: I0126 13:00:29.896159 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9df4625b-a11a-4524-890a-48e2307edddb","Type":"ContainerStarted","Data":"5599e49064e1b6c4300508468a6d625815bd372ca51aa72e1ec1ee3df0586c44"} Jan 26 13:00:29 crc kubenswrapper[4881]: I0126 13:00:29.897310 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5b8956874b-lrmsj" Jan 26 13:00:29 crc kubenswrapper[4881]: I0126 13:00:29.897374 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5b8956874b-lrmsj" Jan 26 13:00:29 crc kubenswrapper[4881]: I0126 13:00:29.902236 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-76ddf7d98c-5pqnx" podStartSLOduration=4.902217188 podStartE2EDuration="4.902217188s" podCreationTimestamp="2026-01-26 13:00:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 13:00:29.890131786 +0000 UTC m=+1502.369441812" watchObservedRunningTime="2026-01-26 13:00:29.902217188 +0000 UTC m=+1502.381527214" Jan 26 13:00:29 crc kubenswrapper[4881]: I0126 13:00:29.952112 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-5c67646cfd-kppgm" podUID="6b0cbe35-c0c9-4483-866a-eddf1fdced26" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.157:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.157:8443: connect: connection refused" Jan 26 13:00:30 crc kubenswrapper[4881]: I0126 13:00:30.104674 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="256bb8f2-ab6b-4904-bfda-793cd640e966" path="/var/lib/kubelet/pods/256bb8f2-ab6b-4904-bfda-793cd640e966/volumes" Jan 26 13:00:30 crc kubenswrapper[4881]: I0126 
13:00:30.105596 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75a85372-d728-4770-8639-fb6f93e44dab" path="/var/lib/kubelet/pods/75a85372-d728-4770-8639-fb6f93e44dab/volumes" Jan 26 13:00:30 crc kubenswrapper[4881]: I0126 13:00:30.106256 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c2c1ff4a-7d5a-461f-8e63-67288e6a68e5" path="/var/lib/kubelet/pods/c2c1ff4a-7d5a-461f-8e63-67288e6a68e5/volumes" Jan 26 13:00:30 crc kubenswrapper[4881]: I0126 13:00:30.204973 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0" Jan 26 13:00:30 crc kubenswrapper[4881]: I0126 13:00:30.205037 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-applier-0" Jan 26 13:00:30 crc kubenswrapper[4881]: I0126 13:00:30.217962 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-api-0" Jan 26 13:00:30 crc kubenswrapper[4881]: I0126 13:00:30.218025 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Jan 26 13:00:30 crc kubenswrapper[4881]: I0126 13:00:30.233988 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Jan 26 13:00:30 crc kubenswrapper[4881]: I0126 13:00:30.240096 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-applier-0" Jan 26 13:00:30 crc kubenswrapper[4881]: I0126 13:00:30.265530 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-decision-engine-0" Jan 26 13:00:30 crc kubenswrapper[4881]: I0126 13:00:30.908254 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"7c9354d9-107d-401f-acb1-a80971c2adcc","Type":"ContainerStarted","Data":"eaab8fbfe5cce69e6e4b69beaac271f2e2ad2bf6920445f85880548fd8ffc611"} Jan 26 13:00:30 crc kubenswrapper[4881]: I0126 13:00:30.908610 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="7c9354d9-107d-401f-acb1-a80971c2adcc" containerName="cinder-api-log" containerID="cri-o://40ff402a56985b1666dcfc8811d8f475690409f2673e05e00beee846c2fcea06" gracePeriod=30 Jan 26 13:00:30 crc kubenswrapper[4881]: I0126 13:00:30.908772 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="7c9354d9-107d-401f-acb1-a80971c2adcc" containerName="cinder-api" containerID="cri-o://eaab8fbfe5cce69e6e4b69beaac271f2e2ad2bf6920445f85880548fd8ffc611" gracePeriod=30 Jan 26 13:00:30 crc kubenswrapper[4881]: I0126 13:00:30.908867 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 26 13:00:30 crc kubenswrapper[4881]: I0126 13:00:30.918349 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"dbdf91bd-1981-4d81-bc94-26b6a156aa9e","Type":"ContainerStarted","Data":"1dd952ee3f6093373f8b30c66a97606235cb558489c5869996dcf8f972a918c3"} Jan 26 13:00:30 crc kubenswrapper[4881]: I0126 13:00:30.928092 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9df4625b-a11a-4524-890a-48e2307edddb","Type":"ContainerStarted","Data":"5f8c2e81baaa9f02854a6302bb6e297e08854046e0345c962084c5f72bd64761"} Jan 26 13:00:30 crc kubenswrapper[4881]: I0126 13:00:30.928181 4881 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 26 
13:00:30 crc kubenswrapper[4881]: I0126 13:00:30.928967 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Jan 26 13:00:30 crc kubenswrapper[4881]: I0126 13:00:30.947072 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=5.947055866 podStartE2EDuration="5.947055866s" podCreationTimestamp="2026-01-26 13:00:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 13:00:30.942675171 +0000 UTC m=+1503.421985197" watchObservedRunningTime="2026-01-26 13:00:30.947055866 +0000 UTC m=+1503.426365892" Jan 26 13:00:30 crc kubenswrapper[4881]: I0126 13:00:30.970263 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-applier-0" Jan 26 13:00:31 crc kubenswrapper[4881]: I0126 13:00:31.020750 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-decision-engine-0" Jan 26 13:00:31 crc kubenswrapper[4881]: I0126 13:00:31.025818 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=6.660261806 podStartE2EDuration="7.02579712s" podCreationTimestamp="2026-01-26 13:00:24 +0000 UTC" firstStartedPulling="2026-01-26 13:00:27.781662114 +0000 UTC m=+1500.260972140" lastFinishedPulling="2026-01-26 13:00:28.147197438 +0000 UTC m=+1500.626507454" observedRunningTime="2026-01-26 13:00:31.001893562 +0000 UTC m=+1503.481203588" watchObservedRunningTime="2026-01-26 13:00:31.02579712 +0000 UTC m=+1503.505107146" Jan 26 13:00:31 crc kubenswrapper[4881]: I0126 13:00:31.258705 4881 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/watcher-api-0" podUID="94aabe79-a699-4980-a344-43c629e34627" containerName="watcher-api-log" probeResult="failure" output="Get \"http://10.217.0.171:9322/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 13:00:31 crc kubenswrapper[4881]: I0126 13:00:31.554290 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Jan 26 13:00:31 crc kubenswrapper[4881]: I0126 13:00:31.939899 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9df4625b-a11a-4524-890a-48e2307edddb","Type":"ContainerStarted","Data":"7509c95cfa9b8ca9bd4786703a0355719d70ece332775a0618983991eeed60e1"} Jan 26 13:00:31 crc kubenswrapper[4881]: I0126 13:00:31.942588 4881 generic.go:334] "Generic (PLEG): container finished" podID="7c9354d9-107d-401f-acb1-a80971c2adcc" containerID="eaab8fbfe5cce69e6e4b69beaac271f2e2ad2bf6920445f85880548fd8ffc611" exitCode=0 Jan 26 13:00:31 crc kubenswrapper[4881]: I0126 13:00:31.942628 4881 generic.go:334] "Generic (PLEG): container finished" podID="7c9354d9-107d-401f-acb1-a80971c2adcc" containerID="40ff402a56985b1666dcfc8811d8f475690409f2673e05e00beee846c2fcea06" exitCode=143 Jan 26 13:00:31 crc kubenswrapper[4881]: I0126 13:00:31.943443 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"7c9354d9-107d-401f-acb1-a80971c2adcc","Type":"ContainerDied","Data":"eaab8fbfe5cce69e6e4b69beaac271f2e2ad2bf6920445f85880548fd8ffc611"} Jan 26 13:00:31 crc kubenswrapper[4881]: I0126 13:00:31.943466 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" 
event={"ID":"7c9354d9-107d-401f-acb1-a80971c2adcc","Type":"ContainerDied","Data":"40ff402a56985b1666dcfc8811d8f475690409f2673e05e00beee846c2fcea06"} Jan 26 13:00:31 crc kubenswrapper[4881]: I0126 13:00:31.943475 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"7c9354d9-107d-401f-acb1-a80971c2adcc","Type":"ContainerDied","Data":"572d147f3fb2888d7ed0c41f790c3fdbe5fdd45f7662460a4c91c7b996b3c844"} Jan 26 13:00:31 crc kubenswrapper[4881]: I0126 13:00:31.943484 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="572d147f3fb2888d7ed0c41f790c3fdbe5fdd45f7662460a4c91c7b996b3c844" Jan 26 13:00:31 crc kubenswrapper[4881]: I0126 13:00:31.985484 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 26 13:00:32 crc kubenswrapper[4881]: I0126 13:00:32.066598 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2vk5v\" (UniqueName: \"kubernetes.io/projected/7c9354d9-107d-401f-acb1-a80971c2adcc-kube-api-access-2vk5v\") pod \"7c9354d9-107d-401f-acb1-a80971c2adcc\" (UID: \"7c9354d9-107d-401f-acb1-a80971c2adcc\") " Jan 26 13:00:32 crc kubenswrapper[4881]: I0126 13:00:32.066988 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7c9354d9-107d-401f-acb1-a80971c2adcc-scripts\") pod \"7c9354d9-107d-401f-acb1-a80971c2adcc\" (UID: \"7c9354d9-107d-401f-acb1-a80971c2adcc\") " Jan 26 13:00:32 crc kubenswrapper[4881]: I0126 13:00:32.067138 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7c9354d9-107d-401f-acb1-a80971c2adcc-config-data-custom\") pod \"7c9354d9-107d-401f-acb1-a80971c2adcc\" (UID: \"7c9354d9-107d-401f-acb1-a80971c2adcc\") " Jan 26 13:00:32 crc kubenswrapper[4881]: I0126 13:00:32.067273 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7c9354d9-107d-401f-acb1-a80971c2adcc-etc-machine-id\") pod \"7c9354d9-107d-401f-acb1-a80971c2adcc\" (UID: \"7c9354d9-107d-401f-acb1-a80971c2adcc\") " Jan 26 13:00:32 crc kubenswrapper[4881]: I0126 13:00:32.067388 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c9354d9-107d-401f-acb1-a80971c2adcc-config-data\") pod \"7c9354d9-107d-401f-acb1-a80971c2adcc\" (UID: \"7c9354d9-107d-401f-acb1-a80971c2adcc\") " Jan 26 13:00:32 crc kubenswrapper[4881]: I0126 13:00:32.067423 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7c9354d9-107d-401f-acb1-a80971c2adcc-logs\") pod \"7c9354d9-107d-401f-acb1-a80971c2adcc\" (UID: \"7c9354d9-107d-401f-acb1-a80971c2adcc\") " Jan 26 13:00:32 crc kubenswrapper[4881]: I0126 13:00:32.067480 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c9354d9-107d-401f-acb1-a80971c2adcc-combined-ca-bundle\") pod \"7c9354d9-107d-401f-acb1-a80971c2adcc\" (UID: \"7c9354d9-107d-401f-acb1-a80971c2adcc\") " Jan 26 13:00:32 crc kubenswrapper[4881]: I0126 13:00:32.067913 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7c9354d9-107d-401f-acb1-a80971c2adcc-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod 
"7c9354d9-107d-401f-acb1-a80971c2adcc" (UID: "7c9354d9-107d-401f-acb1-a80971c2adcc"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 13:00:32 crc kubenswrapper[4881]: I0126 13:00:32.068403 4881 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7c9354d9-107d-401f-acb1-a80971c2adcc-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 26 13:00:32 crc kubenswrapper[4881]: I0126 13:00:32.068802 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c9354d9-107d-401f-acb1-a80971c2adcc-logs" (OuterVolumeSpecName: "logs") pod "7c9354d9-107d-401f-acb1-a80971c2adcc" (UID: "7c9354d9-107d-401f-acb1-a80971c2adcc"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:00:32 crc kubenswrapper[4881]: I0126 13:00:32.073974 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c9354d9-107d-401f-acb1-a80971c2adcc-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "7c9354d9-107d-401f-acb1-a80971c2adcc" (UID: "7c9354d9-107d-401f-acb1-a80971c2adcc"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:00:32 crc kubenswrapper[4881]: I0126 13:00:32.074834 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c9354d9-107d-401f-acb1-a80971c2adcc-kube-api-access-2vk5v" (OuterVolumeSpecName: "kube-api-access-2vk5v") pod "7c9354d9-107d-401f-acb1-a80971c2adcc" (UID: "7c9354d9-107d-401f-acb1-a80971c2adcc"). InnerVolumeSpecName "kube-api-access-2vk5v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:00:32 crc kubenswrapper[4881]: I0126 13:00:32.076874 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c9354d9-107d-401f-acb1-a80971c2adcc-scripts" (OuterVolumeSpecName: "scripts") pod "7c9354d9-107d-401f-acb1-a80971c2adcc" (UID: "7c9354d9-107d-401f-acb1-a80971c2adcc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:00:32 crc kubenswrapper[4881]: I0126 13:00:32.112230 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c9354d9-107d-401f-acb1-a80971c2adcc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7c9354d9-107d-401f-acb1-a80971c2adcc" (UID: "7c9354d9-107d-401f-acb1-a80971c2adcc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:00:32 crc kubenswrapper[4881]: I0126 13:00:32.127617 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c9354d9-107d-401f-acb1-a80971c2adcc-config-data" (OuterVolumeSpecName: "config-data") pod "7c9354d9-107d-401f-acb1-a80971c2adcc" (UID: "7c9354d9-107d-401f-acb1-a80971c2adcc"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:00:32 crc kubenswrapper[4881]: I0126 13:00:32.170180 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2vk5v\" (UniqueName: \"kubernetes.io/projected/7c9354d9-107d-401f-acb1-a80971c2adcc-kube-api-access-2vk5v\") on node \"crc\" DevicePath \"\"" Jan 26 13:00:32 crc kubenswrapper[4881]: I0126 13:00:32.170215 4881 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7c9354d9-107d-401f-acb1-a80971c2adcc-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 13:00:32 crc kubenswrapper[4881]: I0126 13:00:32.170225 4881 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7c9354d9-107d-401f-acb1-a80971c2adcc-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 26 13:00:32 crc kubenswrapper[4881]: I0126 13:00:32.170234 4881 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c9354d9-107d-401f-acb1-a80971c2adcc-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 13:00:32 crc kubenswrapper[4881]: I0126 13:00:32.170245 4881 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7c9354d9-107d-401f-acb1-a80971c2adcc-logs\") on node \"crc\" DevicePath \"\"" Jan 26 13:00:32 crc kubenswrapper[4881]: I0126 13:00:32.170253 4881 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c9354d9-107d-401f-acb1-a80971c2adcc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 13:00:32 crc kubenswrapper[4881]: I0126 13:00:32.359729 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5b8956874b-lrmsj" Jan 26 13:00:32 crc kubenswrapper[4881]: I0126 13:00:32.422640 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-c77fn"] Jan 26 13:00:32 crc kubenswrapper[4881]: E0126 13:00:32.423217 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c9354d9-107d-401f-acb1-a80971c2adcc" containerName="cinder-api-log" Jan 26 13:00:32 crc kubenswrapper[4881]: I0126 13:00:32.423707 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c9354d9-107d-401f-acb1-a80971c2adcc" containerName="cinder-api-log" Jan 26 13:00:32 crc kubenswrapper[4881]: E0126 13:00:32.423794 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c9354d9-107d-401f-acb1-a80971c2adcc" containerName="cinder-api" Jan 26 13:00:32 crc kubenswrapper[4881]: I0126 13:00:32.423845 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c9354d9-107d-401f-acb1-a80971c2adcc" containerName="cinder-api" Jan 26 13:00:32 crc kubenswrapper[4881]: I0126 13:00:32.424088 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c9354d9-107d-401f-acb1-a80971c2adcc" containerName="cinder-api-log" Jan 26 13:00:32 crc kubenswrapper[4881]: I0126 13:00:32.424156 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c9354d9-107d-401f-acb1-a80971c2adcc" containerName="cinder-api" Jan 26 13:00:32 crc kubenswrapper[4881]: I0126 13:00:32.426079 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-c77fn" Jan 26 13:00:32 crc kubenswrapper[4881]: I0126 13:00:32.438182 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-c77fn"] Jan 26 13:00:32 crc kubenswrapper[4881]: I0126 13:00:32.476762 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1de9071-e022-48f1-99cf-7344c70cadad-catalog-content\") pod \"certified-operators-c77fn\" (UID: \"e1de9071-e022-48f1-99cf-7344c70cadad\") " pod="openshift-marketplace/certified-operators-c77fn" Jan 26 13:00:32 crc kubenswrapper[4881]: I0126 13:00:32.476810 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1de9071-e022-48f1-99cf-7344c70cadad-utilities\") pod \"certified-operators-c77fn\" (UID: \"e1de9071-e022-48f1-99cf-7344c70cadad\") " pod="openshift-marketplace/certified-operators-c77fn" Jan 26 13:00:32 crc kubenswrapper[4881]: I0126 13:00:32.476960 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-csc5b\" (UniqueName: \"kubernetes.io/projected/e1de9071-e022-48f1-99cf-7344c70cadad-kube-api-access-csc5b\") pod \"certified-operators-c77fn\" (UID: \"e1de9071-e022-48f1-99cf-7344c70cadad\") " pod="openshift-marketplace/certified-operators-c77fn" Jan 26 13:00:32 crc kubenswrapper[4881]: I0126 13:00:32.578480 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1de9071-e022-48f1-99cf-7344c70cadad-catalog-content\") pod \"certified-operators-c77fn\" (UID: \"e1de9071-e022-48f1-99cf-7344c70cadad\") " pod="openshift-marketplace/certified-operators-c77fn" Jan 26 13:00:32 crc kubenswrapper[4881]: I0126 13:00:32.578549 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1de9071-e022-48f1-99cf-7344c70cadad-utilities\") pod \"certified-operators-c77fn\" (UID: \"e1de9071-e022-48f1-99cf-7344c70cadad\") " pod="openshift-marketplace/certified-operators-c77fn" Jan 26 13:00:32 crc kubenswrapper[4881]: I0126 13:00:32.578695 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-csc5b\" (UniqueName: \"kubernetes.io/projected/e1de9071-e022-48f1-99cf-7344c70cadad-kube-api-access-csc5b\") pod \"certified-operators-c77fn\" (UID: \"e1de9071-e022-48f1-99cf-7344c70cadad\") " pod="openshift-marketplace/certified-operators-c77fn" Jan 26 13:00:32 crc kubenswrapper[4881]: I0126 13:00:32.579571 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1de9071-e022-48f1-99cf-7344c70cadad-catalog-content\") pod \"certified-operators-c77fn\" (UID: \"e1de9071-e022-48f1-99cf-7344c70cadad\") " pod="openshift-marketplace/certified-operators-c77fn" Jan 26 13:00:32 crc kubenswrapper[4881]: I0126 13:00:32.579851 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1de9071-e022-48f1-99cf-7344c70cadad-utilities\") pod \"certified-operators-c77fn\" (UID: \"e1de9071-e022-48f1-99cf-7344c70cadad\") " pod="openshift-marketplace/certified-operators-c77fn" Jan 26 13:00:32 crc kubenswrapper[4881]: I0126 13:00:32.601279 4881 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-csc5b\" (UniqueName: \"kubernetes.io/projected/e1de9071-e022-48f1-99cf-7344c70cadad-kube-api-access-csc5b\") pod \"certified-operators-c77fn\" (UID: \"e1de9071-e022-48f1-99cf-7344c70cadad\") " pod="openshift-marketplace/certified-operators-c77fn" Jan 26 13:00:32 crc kubenswrapper[4881]: I0126 13:00:32.760502 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-c77fn" Jan 26 13:00:32 crc kubenswrapper[4881]: I0126 13:00:32.962302 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9df4625b-a11a-4524-890a-48e2307edddb","Type":"ContainerStarted","Data":"d767779ab1ddd53cb13094f375fd7ec10e5420bbb2964d6e583e6f43fe75376c"} Jan 26 13:00:32 crc kubenswrapper[4881]: I0126 13:00:32.963345 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 26 13:00:32 crc kubenswrapper[4881]: I0126 13:00:32.964680 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 26 13:00:32 crc kubenswrapper[4881]: I0126 13:00:32.989050 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.137083113 podStartE2EDuration="4.989035602s" podCreationTimestamp="2026-01-26 13:00:28 +0000 UTC" firstStartedPulling="2026-01-26 13:00:29.50792698 +0000 UTC m=+1501.987237006" lastFinishedPulling="2026-01-26 13:00:32.359879469 +0000 UTC m=+1504.839189495" observedRunningTime="2026-01-26 13:00:32.980120777 +0000 UTC m=+1505.459430803" watchObservedRunningTime="2026-01-26 13:00:32.989035602 +0000 UTC m=+1505.468345628" Jan 26 13:00:33 crc kubenswrapper[4881]: I0126 13:00:33.021557 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 26 13:00:33 crc kubenswrapper[4881]: I0126 13:00:33.035027 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 26 13:00:33 crc kubenswrapper[4881]: I0126 13:00:33.075084 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 26 13:00:33 crc kubenswrapper[4881]: I0126 13:00:33.082341 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 26 13:00:33 crc kubenswrapper[4881]: I0126 13:00:33.086393 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Jan 26 13:00:33 crc kubenswrapper[4881]: I0126 13:00:33.086768 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 26 13:00:33 crc kubenswrapper[4881]: I0126 13:00:33.087029 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Jan 26 13:00:33 crc kubenswrapper[4881]: I0126 13:00:33.140462 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 26 13:00:33 crc kubenswrapper[4881]: I0126 13:00:33.321236 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c0c9df05-a8bd-4e6a-8d42-3aa4b5a38bb1-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"c0c9df05-a8bd-4e6a-8d42-3aa4b5a38bb1\") " pod="openstack/cinder-api-0" Jan 26 13:00:33 crc kubenswrapper[4881]: I0126 13:00:33.321493 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c0c9df05-a8bd-4e6a-8d42-3aa4b5a38bb1-public-tls-certs\") pod \"cinder-api-0\" (UID: \"c0c9df05-a8bd-4e6a-8d42-3aa4b5a38bb1\") " pod="openstack/cinder-api-0" Jan 26 13:00:33 crc kubenswrapper[4881]: I0126 13:00:33.321560 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c0c9df05-a8bd-4e6a-8d42-3aa4b5a38bb1-etc-machine-id\") pod \"cinder-api-0\" (UID: \"c0c9df05-a8bd-4e6a-8d42-3aa4b5a38bb1\") " pod="openstack/cinder-api-0" Jan 26 13:00:33 crc kubenswrapper[4881]: I0126 13:00:33.321581 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c0c9df05-a8bd-4e6a-8d42-3aa4b5a38bb1-logs\") pod \"cinder-api-0\" (UID: \"c0c9df05-a8bd-4e6a-8d42-3aa4b5a38bb1\") " pod="openstack/cinder-api-0" Jan 26 13:00:33 crc kubenswrapper[4881]: I0126 13:00:33.321630 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0c9df05-a8bd-4e6a-8d42-3aa4b5a38bb1-config-data\") pod \"cinder-api-0\" (UID: \"c0c9df05-a8bd-4e6a-8d42-3aa4b5a38bb1\") " pod="openstack/cinder-api-0" Jan 26 13:00:33 crc kubenswrapper[4881]: I0126 13:00:33.321656 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0c9df05-a8bd-4e6a-8d42-3aa4b5a38bb1-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"c0c9df05-a8bd-4e6a-8d42-3aa4b5a38bb1\") " pod="openstack/cinder-api-0" Jan 26 13:00:33 crc kubenswrapper[4881]: I0126 13:00:33.321685 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kf62n\" (UniqueName: \"kubernetes.io/projected/c0c9df05-a8bd-4e6a-8d42-3aa4b5a38bb1-kube-api-access-kf62n\") pod \"cinder-api-0\" (UID: \"c0c9df05-a8bd-4e6a-8d42-3aa4b5a38bb1\") " pod="openstack/cinder-api-0" Jan 26 13:00:33 crc kubenswrapper[4881]: I0126 13:00:33.321709 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/c0c9df05-a8bd-4e6a-8d42-3aa4b5a38bb1-scripts\") pod \"cinder-api-0\" (UID: \"c0c9df05-a8bd-4e6a-8d42-3aa4b5a38bb1\") " pod="openstack/cinder-api-0" Jan 26 13:00:33 crc kubenswrapper[4881]: I0126 13:00:33.321744 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c0c9df05-a8bd-4e6a-8d42-3aa4b5a38bb1-config-data-custom\") pod \"cinder-api-0\" (UID: \"c0c9df05-a8bd-4e6a-8d42-3aa4b5a38bb1\") " pod="openstack/cinder-api-0" Jan 26 13:00:33 crc kubenswrapper[4881]: I0126 13:00:33.423377 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c0c9df05-a8bd-4e6a-8d42-3aa4b5a38bb1-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"c0c9df05-a8bd-4e6a-8d42-3aa4b5a38bb1\") " pod="openstack/cinder-api-0" Jan 26 13:00:33 crc kubenswrapper[4881]: I0126 13:00:33.423422 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c0c9df05-a8bd-4e6a-8d42-3aa4b5a38bb1-public-tls-certs\") pod \"cinder-api-0\" (UID: \"c0c9df05-a8bd-4e6a-8d42-3aa4b5a38bb1\") " pod="openstack/cinder-api-0" Jan 26 13:00:33 crc kubenswrapper[4881]: I0126 13:00:33.423478 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c0c9df05-a8bd-4e6a-8d42-3aa4b5a38bb1-etc-machine-id\") pod \"cinder-api-0\" (UID: \"c0c9df05-a8bd-4e6a-8d42-3aa4b5a38bb1\") " pod="openstack/cinder-api-0" Jan 26 13:00:33 crc kubenswrapper[4881]: I0126 13:00:33.423497 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c0c9df05-a8bd-4e6a-8d42-3aa4b5a38bb1-logs\") pod \"cinder-api-0\" (UID: \"c0c9df05-a8bd-4e6a-8d42-3aa4b5a38bb1\") " pod="openstack/cinder-api-0" Jan 26 13:00:33 crc kubenswrapper[4881]: I0126 13:00:33.423555 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0c9df05-a8bd-4e6a-8d42-3aa4b5a38bb1-config-data\") pod \"cinder-api-0\" (UID: \"c0c9df05-a8bd-4e6a-8d42-3aa4b5a38bb1\") " pod="openstack/cinder-api-0" Jan 26 13:00:33 crc kubenswrapper[4881]: I0126 13:00:33.423583 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0c9df05-a8bd-4e6a-8d42-3aa4b5a38bb1-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"c0c9df05-a8bd-4e6a-8d42-3aa4b5a38bb1\") " pod="openstack/cinder-api-0" Jan 26 13:00:33 crc kubenswrapper[4881]: I0126 13:00:33.423610 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kf62n\" (UniqueName: \"kubernetes.io/projected/c0c9df05-a8bd-4e6a-8d42-3aa4b5a38bb1-kube-api-access-kf62n\") pod \"cinder-api-0\" (UID: \"c0c9df05-a8bd-4e6a-8d42-3aa4b5a38bb1\") " pod="openstack/cinder-api-0" Jan 26 13:00:33 crc kubenswrapper[4881]: I0126 13:00:33.423634 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c0c9df05-a8bd-4e6a-8d42-3aa4b5a38bb1-scripts\") pod \"cinder-api-0\" (UID: \"c0c9df05-a8bd-4e6a-8d42-3aa4b5a38bb1\") " pod="openstack/cinder-api-0" Jan 26 13:00:33 crc kubenswrapper[4881]: I0126 13:00:33.423649 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" 
(UniqueName: \"kubernetes.io/secret/c0c9df05-a8bd-4e6a-8d42-3aa4b5a38bb1-config-data-custom\") pod \"cinder-api-0\" (UID: \"c0c9df05-a8bd-4e6a-8d42-3aa4b5a38bb1\") " pod="openstack/cinder-api-0" Jan 26 13:00:33 crc kubenswrapper[4881]: I0126 13:00:33.424082 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c0c9df05-a8bd-4e6a-8d42-3aa4b5a38bb1-etc-machine-id\") pod \"cinder-api-0\" (UID: \"c0c9df05-a8bd-4e6a-8d42-3aa4b5a38bb1\") " pod="openstack/cinder-api-0" Jan 26 13:00:33 crc kubenswrapper[4881]: I0126 13:00:33.424632 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c0c9df05-a8bd-4e6a-8d42-3aa4b5a38bb1-logs\") pod \"cinder-api-0\" (UID: \"c0c9df05-a8bd-4e6a-8d42-3aa4b5a38bb1\") " pod="openstack/cinder-api-0" Jan 26 13:00:33 crc kubenswrapper[4881]: I0126 13:00:33.430157 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0c9df05-a8bd-4e6a-8d42-3aa4b5a38bb1-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"c0c9df05-a8bd-4e6a-8d42-3aa4b5a38bb1\") " pod="openstack/cinder-api-0" Jan 26 13:00:33 crc kubenswrapper[4881]: I0126 13:00:33.430549 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c0c9df05-a8bd-4e6a-8d42-3aa4b5a38bb1-public-tls-certs\") pod \"cinder-api-0\" (UID: \"c0c9df05-a8bd-4e6a-8d42-3aa4b5a38bb1\") " pod="openstack/cinder-api-0" Jan 26 13:00:33 crc kubenswrapper[4881]: I0126 13:00:33.430935 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c0c9df05-a8bd-4e6a-8d42-3aa4b5a38bb1-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"c0c9df05-a8bd-4e6a-8d42-3aa4b5a38bb1\") " pod="openstack/cinder-api-0" Jan 26 13:00:33 crc kubenswrapper[4881]: I0126 13:00:33.433409 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c0c9df05-a8bd-4e6a-8d42-3aa4b5a38bb1-scripts\") pod \"cinder-api-0\" (UID: \"c0c9df05-a8bd-4e6a-8d42-3aa4b5a38bb1\") " pod="openstack/cinder-api-0" Jan 26 13:00:33 crc kubenswrapper[4881]: I0126 13:00:33.434454 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c0c9df05-a8bd-4e6a-8d42-3aa4b5a38bb1-config-data-custom\") pod \"cinder-api-0\" (UID: \"c0c9df05-a8bd-4e6a-8d42-3aa4b5a38bb1\") " pod="openstack/cinder-api-0" Jan 26 13:00:33 crc kubenswrapper[4881]: I0126 13:00:33.435795 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0c9df05-a8bd-4e6a-8d42-3aa4b5a38bb1-config-data\") pod \"cinder-api-0\" (UID: \"c0c9df05-a8bd-4e6a-8d42-3aa4b5a38bb1\") " pod="openstack/cinder-api-0" Jan 26 13:00:33 crc kubenswrapper[4881]: I0126 13:00:33.451912 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kf62n\" (UniqueName: \"kubernetes.io/projected/c0c9df05-a8bd-4e6a-8d42-3aa4b5a38bb1-kube-api-access-kf62n\") pod \"cinder-api-0\" (UID: \"c0c9df05-a8bd-4e6a-8d42-3aa4b5a38bb1\") " pod="openstack/cinder-api-0" Jan 26 13:00:33 crc kubenswrapper[4881]: I0126 13:00:33.489269 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-c77fn"] Jan 26 13:00:33 crc kubenswrapper[4881]: I0126 
Jan 26 13:00:33 crc kubenswrapper[4881]: I0126 13:00:33.953223 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5b8956874b-lrmsj"
Jan 26 13:00:33 crc kubenswrapper[4881]: I0126 13:00:33.987414 4881 generic.go:334] "Generic (PLEG): container finished" podID="e1de9071-e022-48f1-99cf-7344c70cadad" containerID="02a9d3e1d254e7c226bf19c61ccc61c33113ededba5f6f26179efc1242c9964b" exitCode=0
Jan 26 13:00:33 crc kubenswrapper[4881]: I0126 13:00:33.987555 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c77fn" event={"ID":"e1de9071-e022-48f1-99cf-7344c70cadad","Type":"ContainerDied","Data":"02a9d3e1d254e7c226bf19c61ccc61c33113ededba5f6f26179efc1242c9964b"}
Jan 26 13:00:33 crc kubenswrapper[4881]: I0126 13:00:33.987629 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c77fn" event={"ID":"e1de9071-e022-48f1-99cf-7344c70cadad","Type":"ContainerStarted","Data":"5476cd11744e65c56029efbb41fa4fd8056f08337cd07db4b8fb661549a65f55"}
Jan 26 13:00:33 crc kubenswrapper[4881]: I0126 13:00:33.992794 4881 generic.go:334] "Generic (PLEG): container finished" podID="25d5f80d-b0a4-4b8f-a8b7-12f0b7296801" containerID="501e0408c743324c3680db8a35a6855c76a8ac91e1f944852f20676a31152259" exitCode=1
Jan 26 13:00:33 crc kubenswrapper[4881]: I0126 13:00:33.993422 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"25d5f80d-b0a4-4b8f-a8b7-12f0b7296801","Type":"ContainerDied","Data":"501e0408c743324c3680db8a35a6855c76a8ac91e1f944852f20676a31152259"}
Jan 26 13:00:33 crc kubenswrapper[4881]: I0126 13:00:33.994272 4881 scope.go:117] "RemoveContainer" containerID="501e0408c743324c3680db8a35a6855c76a8ac91e1f944852f20676a31152259"
Jan 26 13:00:34 crc kubenswrapper[4881]: I0126 13:00:34.021380 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-75d7497f7d-ksr89"]
Jan 26 13:00:34 crc kubenswrapper[4881]: I0126 13:00:34.021863 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-75d7497f7d-ksr89" podUID="748ff944-3c37-434b-8ce3-6cff7e627ea0" containerName="barbican-api-log" containerID="cri-o://24f9334e9a81034c438de52da23d1d45b8e1eca242201060c3f3699815ecd6d2" gracePeriod=30
Jan 26 13:00:34 crc kubenswrapper[4881]: I0126 13:00:34.022191 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-75d7497f7d-ksr89" podUID="748ff944-3c37-434b-8ce3-6cff7e627ea0" containerName="barbican-api" containerID="cri-o://17ea56bfaf9eee4f0c46a3aac998e2f0028dfd076af4019d7427646f1e00eb45" gracePeriod=30
Jan 26 13:00:34 crc kubenswrapper[4881]: I0126 13:00:34.129862 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c9354d9-107d-401f-acb1-a80971c2adcc" path="/var/lib/kubelet/pods/7c9354d9-107d-401f-acb1-a80971c2adcc/volumes"
Jan 26 13:00:34 crc kubenswrapper[4881]: I0126 13:00:34.148459 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Jan 26 13:00:34 crc kubenswrapper[4881]: W0126 13:00:34.184088 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc0c9df05_a8bd_4e6a_8d42_3aa4b5a38bb1.slice/crio-b56541821efda89d44c3a4816f1d37f0145ead6b72f381ddab8d1efe66b742cb WatchSource:0}: Error finding container b56541821efda89d44c3a4816f1d37f0145ead6b72f381ddab8d1efe66b742cb: Status 404 returned error can't find the container with id b56541821efda89d44c3a4816f1d37f0145ead6b72f381ddab8d1efe66b742cb
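The barbican-api-log container above is killed with a 30s grace period and later reports exitCode=143, which is 128 + 15: it exited on the runtime's SIGTERM before the grace-period deadline, rather than being SIGKILLed at the deadline. A small demonstration of that convention (a sketch assuming a Unix-like system with a `sleep` binary on PATH):

```go
package main

import (
	"fmt"
	"os/exec"
	"syscall"
	"time"
)

// Shows why a container stopped within its grace period reports exit code
// 143: runtimes report 128 + signal number, and SIGTERM is 15.
func main() {
	cmd := exec.Command("sleep", "60")
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	time.Sleep(100 * time.Millisecond)
	cmd.Process.Signal(syscall.SIGTERM) // what the runtime sends first

	err := cmd.Wait()
	fmt.Println("wait:", err)                                     // "signal: terminated"
	fmt.Println("reported exit code:", 128+int(syscall.SIGTERM)) // 143
}
```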
Jan 26 13:00:35 crc kubenswrapper[4881]: I0126 13:00:35.004689 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"25d5f80d-b0a4-4b8f-a8b7-12f0b7296801","Type":"ContainerStarted","Data":"1155fe2ececa788009d295d6b3f0350b265a62f45095f8a599dcda64dc323a0e"}
Jan 26 13:00:35 crc kubenswrapper[4881]: I0126 13:00:35.018844 4881 generic.go:334] "Generic (PLEG): container finished" podID="748ff944-3c37-434b-8ce3-6cff7e627ea0" containerID="24f9334e9a81034c438de52da23d1d45b8e1eca242201060c3f3699815ecd6d2" exitCode=143
Jan 26 13:00:35 crc kubenswrapper[4881]: I0126 13:00:35.018911 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-75d7497f7d-ksr89" event={"ID":"748ff944-3c37-434b-8ce3-6cff7e627ea0","Type":"ContainerDied","Data":"24f9334e9a81034c438de52da23d1d45b8e1eca242201060c3f3699815ecd6d2"}
Jan 26 13:00:35 crc kubenswrapper[4881]: I0126 13:00:35.024212 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c0c9df05-a8bd-4e6a-8d42-3aa4b5a38bb1","Type":"ContainerStarted","Data":"b56541821efda89d44c3a4816f1d37f0145ead6b72f381ddab8d1efe66b742cb"}
Jan 26 13:00:35 crc kubenswrapper[4881]: I0126 13:00:35.371189 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0"
Jan 26 13:00:35 crc kubenswrapper[4881]: I0126 13:00:35.577353 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-76ddf7d98c-5pqnx"
Jan 26 13:00:35 crc kubenswrapper[4881]: I0126 13:00:35.638278 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6dc855bcb7-bhvs2"]
Jan 26 13:00:35 crc kubenswrapper[4881]: I0126 13:00:35.638664 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6dc855bcb7-bhvs2" podUID="580e9345-d40f-4136-a7b3-9d5dc370cea7" containerName="dnsmasq-dns" containerID="cri-o://9dfecc08f42ddbe09db9e1e13bafa58158ead8389ee932b570c14bf036ffa0b0" gracePeriod=10
Jan 26 13:00:35 crc kubenswrapper[4881]: I0126 13:00:35.761995 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0"
Jan 26 13:00:35 crc kubenswrapper[4881]: I0126 13:00:35.921597 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-75d7497f7d-ksr89" podUID="748ff944-3c37-434b-8ce3-6cff7e627ea0" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.168:9311/healthcheck\": read tcp 10.217.0.2:43852->10.217.0.168:9311: read: connection reset by peer"
Jan 26 13:00:35 crc kubenswrapper[4881]: I0126 13:00:35.921621 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-75d7497f7d-ksr89" podUID="748ff944-3c37-434b-8ce3-6cff7e627ea0" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.168:9311/healthcheck\": read tcp 10.217.0.2:43860->10.217.0.168:9311: read: connection reset by peer"
Jan 26 13:00:36 crc kubenswrapper[4881]: I0126 13:00:36.076632 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c0c9df05-a8bd-4e6a-8d42-3aa4b5a38bb1","Type":"ContainerStarted","Data":"7489a46648010b66916baa1671fee7a2d80f74fb4baea1211cc8ee47db598be7"}
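The two failed readiness probes above show the prober reading "connection reset by peer" from 10.217.0.168:9311 while barbican-api is shutting down under its grace period. A minimal HTTP readiness-style probe in the same spirit (a sketch; the URL is the one from the log and only resolves inside that cluster):

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// probe performs a single HTTP GET with a short timeout and maps the
// outcome to success/failure, the way a readiness check is evaluated.
func probe(url string) error {
	client := &http.Client{Timeout: 1 * time.Second}
	resp, err := client.Get(url)
	if err != nil {
		// e.g. "read: connection reset by peer" while the server shuts down
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode < 200 || resp.StatusCode >= 400 {
		return fmt.Errorf("unhealthy status: %s", resp.Status)
	}
	return nil
}

func main() {
	fmt.Println(probe("http://10.217.0.168:9311/healthcheck"))
}
```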
event={"ID":"c0c9df05-a8bd-4e6a-8d42-3aa4b5a38bb1","Type":"ContainerStarted","Data":"7489a46648010b66916baa1671fee7a2d80f74fb4baea1211cc8ee47db598be7"} Jan 26 13:00:36 crc kubenswrapper[4881]: I0126 13:00:36.094562 4881 generic.go:334] "Generic (PLEG): container finished" podID="e1de9071-e022-48f1-99cf-7344c70cadad" containerID="692e378aaf7bb1b096dc5071a6350ff4bf7a8dcf772a13026319ac38bac8ef64" exitCode=0 Jan 26 13:00:36 crc kubenswrapper[4881]: I0126 13:00:36.102360 4881 generic.go:334] "Generic (PLEG): container finished" podID="580e9345-d40f-4136-a7b3-9d5dc370cea7" containerID="9dfecc08f42ddbe09db9e1e13bafa58158ead8389ee932b570c14bf036ffa0b0" exitCode=0 Jan 26 13:00:36 crc kubenswrapper[4881]: I0126 13:00:36.139703 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c77fn" event={"ID":"e1de9071-e022-48f1-99cf-7344c70cadad","Type":"ContainerDied","Data":"692e378aaf7bb1b096dc5071a6350ff4bf7a8dcf772a13026319ac38bac8ef64"} Jan 26 13:00:36 crc kubenswrapper[4881]: I0126 13:00:36.139745 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6dc855bcb7-bhvs2" event={"ID":"580e9345-d40f-4136-a7b3-9d5dc370cea7","Type":"ContainerDied","Data":"9dfecc08f42ddbe09db9e1e13bafa58158ead8389ee932b570c14bf036ffa0b0"} Jan 26 13:00:36 crc kubenswrapper[4881]: I0126 13:00:36.155115 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 26 13:00:36 crc kubenswrapper[4881]: I0126 13:00:36.336494 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6dc855bcb7-bhvs2" Jan 26 13:00:36 crc kubenswrapper[4881]: I0126 13:00:36.457240 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9cwq6\" (UniqueName: \"kubernetes.io/projected/580e9345-d40f-4136-a7b3-9d5dc370cea7-kube-api-access-9cwq6\") pod \"580e9345-d40f-4136-a7b3-9d5dc370cea7\" (UID: \"580e9345-d40f-4136-a7b3-9d5dc370cea7\") " Jan 26 13:00:36 crc kubenswrapper[4881]: I0126 13:00:36.457375 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/580e9345-d40f-4136-a7b3-9d5dc370cea7-config\") pod \"580e9345-d40f-4136-a7b3-9d5dc370cea7\" (UID: \"580e9345-d40f-4136-a7b3-9d5dc370cea7\") " Jan 26 13:00:36 crc kubenswrapper[4881]: I0126 13:00:36.457401 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/580e9345-d40f-4136-a7b3-9d5dc370cea7-ovsdbserver-sb\") pod \"580e9345-d40f-4136-a7b3-9d5dc370cea7\" (UID: \"580e9345-d40f-4136-a7b3-9d5dc370cea7\") " Jan 26 13:00:36 crc kubenswrapper[4881]: I0126 13:00:36.457424 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/580e9345-d40f-4136-a7b3-9d5dc370cea7-ovsdbserver-nb\") pod \"580e9345-d40f-4136-a7b3-9d5dc370cea7\" (UID: \"580e9345-d40f-4136-a7b3-9d5dc370cea7\") " Jan 26 13:00:36 crc kubenswrapper[4881]: I0126 13:00:36.457500 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/580e9345-d40f-4136-a7b3-9d5dc370cea7-dns-swift-storage-0\") pod \"580e9345-d40f-4136-a7b3-9d5dc370cea7\" (UID: \"580e9345-d40f-4136-a7b3-9d5dc370cea7\") " Jan 26 13:00:36 crc kubenswrapper[4881]: I0126 13:00:36.457570 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/580e9345-d40f-4136-a7b3-9d5dc370cea7-dns-svc\") pod \"580e9345-d40f-4136-a7b3-9d5dc370cea7\" (UID: \"580e9345-d40f-4136-a7b3-9d5dc370cea7\") " Jan 26 13:00:36 crc kubenswrapper[4881]: I0126 13:00:36.475268 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/580e9345-d40f-4136-a7b3-9d5dc370cea7-kube-api-access-9cwq6" (OuterVolumeSpecName: "kube-api-access-9cwq6") pod "580e9345-d40f-4136-a7b3-9d5dc370cea7" (UID: "580e9345-d40f-4136-a7b3-9d5dc370cea7"). InnerVolumeSpecName "kube-api-access-9cwq6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:00:36 crc kubenswrapper[4881]: I0126 13:00:36.560833 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9cwq6\" (UniqueName: \"kubernetes.io/projected/580e9345-d40f-4136-a7b3-9d5dc370cea7-kube-api-access-9cwq6\") on node \"crc\" DevicePath \"\"" Jan 26 13:00:36 crc kubenswrapper[4881]: I0126 13:00:36.570384 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/580e9345-d40f-4136-a7b3-9d5dc370cea7-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "580e9345-d40f-4136-a7b3-9d5dc370cea7" (UID: "580e9345-d40f-4136-a7b3-9d5dc370cea7"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:00:36 crc kubenswrapper[4881]: I0126 13:00:36.576956 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/580e9345-d40f-4136-a7b3-9d5dc370cea7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "580e9345-d40f-4136-a7b3-9d5dc370cea7" (UID: "580e9345-d40f-4136-a7b3-9d5dc370cea7"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:00:36 crc kubenswrapper[4881]: I0126 13:00:36.586899 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/580e9345-d40f-4136-a7b3-9d5dc370cea7-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "580e9345-d40f-4136-a7b3-9d5dc370cea7" (UID: "580e9345-d40f-4136-a7b3-9d5dc370cea7"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:00:36 crc kubenswrapper[4881]: I0126 13:00:36.604147 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/580e9345-d40f-4136-a7b3-9d5dc370cea7-config" (OuterVolumeSpecName: "config") pod "580e9345-d40f-4136-a7b3-9d5dc370cea7" (UID: "580e9345-d40f-4136-a7b3-9d5dc370cea7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:00:36 crc kubenswrapper[4881]: I0126 13:00:36.609883 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/580e9345-d40f-4136-a7b3-9d5dc370cea7-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "580e9345-d40f-4136-a7b3-9d5dc370cea7" (UID: "580e9345-d40f-4136-a7b3-9d5dc370cea7"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:00:36 crc kubenswrapper[4881]: I0126 13:00:36.662434 4881 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/580e9345-d40f-4136-a7b3-9d5dc370cea7-config\") on node \"crc\" DevicePath \"\"" Jan 26 13:00:36 crc kubenswrapper[4881]: I0126 13:00:36.662468 4881 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/580e9345-d40f-4136-a7b3-9d5dc370cea7-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 13:00:36 crc kubenswrapper[4881]: I0126 13:00:36.662479 4881 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/580e9345-d40f-4136-a7b3-9d5dc370cea7-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 13:00:36 crc kubenswrapper[4881]: I0126 13:00:36.662488 4881 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/580e9345-d40f-4136-a7b3-9d5dc370cea7-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 26 13:00:36 crc kubenswrapper[4881]: I0126 13:00:36.662497 4881 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/580e9345-d40f-4136-a7b3-9d5dc370cea7-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 13:00:36 crc kubenswrapper[4881]: I0126 13:00:36.673889 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-75d7497f7d-ksr89" Jan 26 13:00:36 crc kubenswrapper[4881]: I0126 13:00:36.865197 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/748ff944-3c37-434b-8ce3-6cff7e627ea0-combined-ca-bundle\") pod \"748ff944-3c37-434b-8ce3-6cff7e627ea0\" (UID: \"748ff944-3c37-434b-8ce3-6cff7e627ea0\") " Jan 26 13:00:36 crc kubenswrapper[4881]: I0126 13:00:36.865568 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/748ff944-3c37-434b-8ce3-6cff7e627ea0-config-data-custom\") pod \"748ff944-3c37-434b-8ce3-6cff7e627ea0\" (UID: \"748ff944-3c37-434b-8ce3-6cff7e627ea0\") " Jan 26 13:00:36 crc kubenswrapper[4881]: I0126 13:00:36.865600 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcgfv\" (UniqueName: \"kubernetes.io/projected/748ff944-3c37-434b-8ce3-6cff7e627ea0-kube-api-access-fcgfv\") pod \"748ff944-3c37-434b-8ce3-6cff7e627ea0\" (UID: \"748ff944-3c37-434b-8ce3-6cff7e627ea0\") " Jan 26 13:00:36 crc kubenswrapper[4881]: I0126 13:00:36.865698 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/748ff944-3c37-434b-8ce3-6cff7e627ea0-config-data\") pod \"748ff944-3c37-434b-8ce3-6cff7e627ea0\" (UID: \"748ff944-3c37-434b-8ce3-6cff7e627ea0\") " Jan 26 13:00:36 crc kubenswrapper[4881]: I0126 13:00:36.865721 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/748ff944-3c37-434b-8ce3-6cff7e627ea0-logs\") pod \"748ff944-3c37-434b-8ce3-6cff7e627ea0\" (UID: \"748ff944-3c37-434b-8ce3-6cff7e627ea0\") " Jan 26 13:00:36 crc kubenswrapper[4881]: I0126 13:00:36.866683 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/748ff944-3c37-434b-8ce3-6cff7e627ea0-logs" 
(OuterVolumeSpecName: "logs") pod "748ff944-3c37-434b-8ce3-6cff7e627ea0" (UID: "748ff944-3c37-434b-8ce3-6cff7e627ea0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:00:36 crc kubenswrapper[4881]: I0126 13:00:36.871913 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/748ff944-3c37-434b-8ce3-6cff7e627ea0-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "748ff944-3c37-434b-8ce3-6cff7e627ea0" (UID: "748ff944-3c37-434b-8ce3-6cff7e627ea0"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:00:36 crc kubenswrapper[4881]: I0126 13:00:36.872209 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/748ff944-3c37-434b-8ce3-6cff7e627ea0-kube-api-access-fcgfv" (OuterVolumeSpecName: "kube-api-access-fcgfv") pod "748ff944-3c37-434b-8ce3-6cff7e627ea0" (UID: "748ff944-3c37-434b-8ce3-6cff7e627ea0"). InnerVolumeSpecName "kube-api-access-fcgfv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:00:36 crc kubenswrapper[4881]: I0126 13:00:36.894025 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/748ff944-3c37-434b-8ce3-6cff7e627ea0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "748ff944-3c37-434b-8ce3-6cff7e627ea0" (UID: "748ff944-3c37-434b-8ce3-6cff7e627ea0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:00:36 crc kubenswrapper[4881]: I0126 13:00:36.912990 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/748ff944-3c37-434b-8ce3-6cff7e627ea0-config-data" (OuterVolumeSpecName: "config-data") pod "748ff944-3c37-434b-8ce3-6cff7e627ea0" (UID: "748ff944-3c37-434b-8ce3-6cff7e627ea0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:00:36 crc kubenswrapper[4881]: I0126 13:00:36.968538 4881 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/748ff944-3c37-434b-8ce3-6cff7e627ea0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 13:00:36 crc kubenswrapper[4881]: I0126 13:00:36.968568 4881 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/748ff944-3c37-434b-8ce3-6cff7e627ea0-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 26 13:00:36 crc kubenswrapper[4881]: I0126 13:00:36.968583 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcgfv\" (UniqueName: \"kubernetes.io/projected/748ff944-3c37-434b-8ce3-6cff7e627ea0-kube-api-access-fcgfv\") on node \"crc\" DevicePath \"\"" Jan 26 13:00:36 crc kubenswrapper[4881]: I0126 13:00:36.968595 4881 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/748ff944-3c37-434b-8ce3-6cff7e627ea0-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 13:00:36 crc kubenswrapper[4881]: I0126 13:00:36.968607 4881 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/748ff944-3c37-434b-8ce3-6cff7e627ea0-logs\") on node \"crc\" DevicePath \"\"" Jan 26 13:00:37 crc kubenswrapper[4881]: I0126 13:00:37.113291 4881 generic.go:334] "Generic (PLEG): container finished" podID="748ff944-3c37-434b-8ce3-6cff7e627ea0" containerID="17ea56bfaf9eee4f0c46a3aac998e2f0028dfd076af4019d7427646f1e00eb45" exitCode=0 Jan 26 13:00:37 crc kubenswrapper[4881]: I0126 13:00:37.113360 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-75d7497f7d-ksr89" event={"ID":"748ff944-3c37-434b-8ce3-6cff7e627ea0","Type":"ContainerDied","Data":"17ea56bfaf9eee4f0c46a3aac998e2f0028dfd076af4019d7427646f1e00eb45"} Jan 26 13:00:37 crc kubenswrapper[4881]: I0126 13:00:37.113367 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-75d7497f7d-ksr89" Jan 26 13:00:37 crc kubenswrapper[4881]: I0126 13:00:37.113391 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-75d7497f7d-ksr89" event={"ID":"748ff944-3c37-434b-8ce3-6cff7e627ea0","Type":"ContainerDied","Data":"a565aa7425ac9eabb2ed694239f00364f4b62097a58afb17231d592be47671e4"} Jan 26 13:00:37 crc kubenswrapper[4881]: I0126 13:00:37.113410 4881 scope.go:117] "RemoveContainer" containerID="17ea56bfaf9eee4f0c46a3aac998e2f0028dfd076af4019d7427646f1e00eb45" Jan 26 13:00:37 crc kubenswrapper[4881]: I0126 13:00:37.115256 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c0c9df05-a8bd-4e6a-8d42-3aa4b5a38bb1","Type":"ContainerStarted","Data":"52f3f1475d6160ea33a842a1cafa6bd2a7fda01334b53038ba573d6b08d439c1"} Jan 26 13:00:37 crc kubenswrapper[4881]: I0126 13:00:37.116040 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 26 13:00:37 crc kubenswrapper[4881]: I0126 13:00:37.122887 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6dc855bcb7-bhvs2" event={"ID":"580e9345-d40f-4136-a7b3-9d5dc370cea7","Type":"ContainerDied","Data":"89aaad3d5fe744ce77573e07232705391948150ad5dc14b08d432f9599af7fff"} Jan 26 13:00:37 crc kubenswrapper[4881]: I0126 13:00:37.122963 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6dc855bcb7-bhvs2" Jan 26 13:00:37 crc kubenswrapper[4881]: I0126 13:00:37.136501 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c77fn" event={"ID":"e1de9071-e022-48f1-99cf-7344c70cadad","Type":"ContainerStarted","Data":"9c17ca9449b1f783bab64e740a64e0e3f1c1b302187d724ef73e6114208b8482"} Jan 26 13:00:37 crc kubenswrapper[4881]: I0126 13:00:37.139091 4881 generic.go:334] "Generic (PLEG): container finished" podID="a410393d-b0c5-45bf-b9f7-897ad16759d4" containerID="575479e07039a478f0162256f3814334444e55407e26307b5894052039080d79" exitCode=0 Jan 26 13:00:37 crc kubenswrapper[4881]: I0126 13:00:37.139299 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="dbdf91bd-1981-4d81-bc94-26b6a156aa9e" containerName="cinder-scheduler" containerID="cri-o://3e812eee12ffe504c5b25c875dd765f92f74a124f44167e3baf0f9cf97041844" gracePeriod=30 Jan 26 13:00:37 crc kubenswrapper[4881]: I0126 13:00:37.139500 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-hmsfj" event={"ID":"a410393d-b0c5-45bf-b9f7-897ad16759d4","Type":"ContainerDied","Data":"575479e07039a478f0162256f3814334444e55407e26307b5894052039080d79"} Jan 26 13:00:37 crc kubenswrapper[4881]: I0126 13:00:37.139647 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="dbdf91bd-1981-4d81-bc94-26b6a156aa9e" containerName="probe" containerID="cri-o://1dd952ee3f6093373f8b30c66a97606235cb558489c5869996dcf8f972a918c3" gracePeriod=30 Jan 26 13:00:37 crc kubenswrapper[4881]: I0126 13:00:37.140024 4881 scope.go:117] "RemoveContainer" containerID="24f9334e9a81034c438de52da23d1d45b8e1eca242201060c3f3699815ecd6d2" Jan 26 13:00:37 crc kubenswrapper[4881]: I0126 13:00:37.153741 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.153725783 podStartE2EDuration="4.153725783s" podCreationTimestamp="2026-01-26 13:00:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 13:00:37.150736961 +0000 UTC m=+1509.630046997" watchObservedRunningTime="2026-01-26 13:00:37.153725783 +0000 UTC m=+1509.633035809" Jan 26 13:00:37 crc kubenswrapper[4881]: I0126 13:00:37.178936 4881 scope.go:117] "RemoveContainer" containerID="17ea56bfaf9eee4f0c46a3aac998e2f0028dfd076af4019d7427646f1e00eb45" Jan 26 13:00:37 crc kubenswrapper[4881]: E0126 13:00:37.179982 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"17ea56bfaf9eee4f0c46a3aac998e2f0028dfd076af4019d7427646f1e00eb45\": container with ID starting with 17ea56bfaf9eee4f0c46a3aac998e2f0028dfd076af4019d7427646f1e00eb45 not found: ID does not exist" containerID="17ea56bfaf9eee4f0c46a3aac998e2f0028dfd076af4019d7427646f1e00eb45" Jan 26 13:00:37 crc kubenswrapper[4881]: I0126 13:00:37.180025 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17ea56bfaf9eee4f0c46a3aac998e2f0028dfd076af4019d7427646f1e00eb45"} err="failed to get container status \"17ea56bfaf9eee4f0c46a3aac998e2f0028dfd076af4019d7427646f1e00eb45\": rpc error: code = NotFound desc = could not find container \"17ea56bfaf9eee4f0c46a3aac998e2f0028dfd076af4019d7427646f1e00eb45\": container with ID starting with 
17ea56bfaf9eee4f0c46a3aac998e2f0028dfd076af4019d7427646f1e00eb45 not found: ID does not exist" Jan 26 13:00:37 crc kubenswrapper[4881]: I0126 13:00:37.180050 4881 scope.go:117] "RemoveContainer" containerID="24f9334e9a81034c438de52da23d1d45b8e1eca242201060c3f3699815ecd6d2" Jan 26 13:00:37 crc kubenswrapper[4881]: E0126 13:00:37.180448 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"24f9334e9a81034c438de52da23d1d45b8e1eca242201060c3f3699815ecd6d2\": container with ID starting with 24f9334e9a81034c438de52da23d1d45b8e1eca242201060c3f3699815ecd6d2 not found: ID does not exist" containerID="24f9334e9a81034c438de52da23d1d45b8e1eca242201060c3f3699815ecd6d2" Jan 26 13:00:37 crc kubenswrapper[4881]: I0126 13:00:37.180478 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24f9334e9a81034c438de52da23d1d45b8e1eca242201060c3f3699815ecd6d2"} err="failed to get container status \"24f9334e9a81034c438de52da23d1d45b8e1eca242201060c3f3699815ecd6d2\": rpc error: code = NotFound desc = could not find container \"24f9334e9a81034c438de52da23d1d45b8e1eca242201060c3f3699815ecd6d2\": container with ID starting with 24f9334e9a81034c438de52da23d1d45b8e1eca242201060c3f3699815ecd6d2 not found: ID does not exist" Jan 26 13:00:37 crc kubenswrapper[4881]: I0126 13:00:37.180496 4881 scope.go:117] "RemoveContainer" containerID="9dfecc08f42ddbe09db9e1e13bafa58158ead8389ee932b570c14bf036ffa0b0" Jan 26 13:00:37 crc kubenswrapper[4881]: I0126 13:00:37.180772 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-75d7497f7d-ksr89"] Jan 26 13:00:37 crc kubenswrapper[4881]: I0126 13:00:37.195347 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-75d7497f7d-ksr89"] Jan 26 13:00:37 crc kubenswrapper[4881]: I0126 13:00:37.201306 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-c77fn" podStartSLOduration=2.446146224 podStartE2EDuration="5.201282592s" podCreationTimestamp="2026-01-26 13:00:32 +0000 UTC" firstStartedPulling="2026-01-26 13:00:33.989897408 +0000 UTC m=+1506.469207454" lastFinishedPulling="2026-01-26 13:00:36.745033806 +0000 UTC m=+1509.224343822" observedRunningTime="2026-01-26 13:00:37.188173816 +0000 UTC m=+1509.667483852" watchObservedRunningTime="2026-01-26 13:00:37.201282592 +0000 UTC m=+1509.680592618" Jan 26 13:00:37 crc kubenswrapper[4881]: I0126 13:00:37.219906 4881 scope.go:117] "RemoveContainer" containerID="50ca60c947435e94c14f8b5967447038192cbc61fb3fe4b420c7c4a1b8f16964" Jan 26 13:00:37 crc kubenswrapper[4881]: I0126 13:00:37.236298 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6dc855bcb7-bhvs2"] Jan 26 13:00:37 crc kubenswrapper[4881]: I0126 13:00:37.248368 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6dc855bcb7-bhvs2"] Jan 26 13:00:38 crc kubenswrapper[4881]: I0126 13:00:38.117360 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="580e9345-d40f-4136-a7b3-9d5dc370cea7" path="/var/lib/kubelet/pods/580e9345-d40f-4136-a7b3-9d5dc370cea7/volumes" Jan 26 13:00:38 crc kubenswrapper[4881]: I0126 13:00:38.120061 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="748ff944-3c37-434b-8ce3-6cff7e627ea0" path="/var/lib/kubelet/pods/748ff944-3c37-434b-8ce3-6cff7e627ea0/volumes" Jan 26 13:00:38 crc kubenswrapper[4881]: I0126 13:00:38.158676 4881 
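The RemoveContainer / NotFound pairs above are a benign race: by the time the status lookup runs, the container has already been removed, so the deletion is effectively a no-op. A toy sketch of a deletion helper that treats NotFound as success (errNotFound stands in for the CRI NotFound status code; illustrative only):

```go
package main

import (
	"errors"
	"fmt"
)

// errNotFound stands in for the "rpc error: code = NotFound" seen above.
var errNotFound = errors.New("NotFound: ID does not exist")

// removeContainer fails with errNotFound if the container is already gone.
func removeContainer(id string, present map[string]bool) error {
	if !present[id] {
		return errNotFound
	}
	delete(present, id)
	return nil
}

// removeIdempotent treats "already deleted" as success, so retries are safe.
func removeIdempotent(id string, present map[string]bool) error {
	if err := removeContainer(id, present); err != nil && !errors.Is(err, errNotFound) {
		return err
	}
	return nil
}

func main() {
	present := map[string]bool{"17ea56bf": true}
	fmt.Println(removeIdempotent("17ea56bf", present)) // <nil>: deleted
	fmt.Println(removeIdempotent("17ea56bf", present)) // <nil>: NotFound swallowed
}
```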
Jan 26 13:00:38 crc kubenswrapper[4881]: I0126 13:00:38.158741 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"25d5f80d-b0a4-4b8f-a8b7-12f0b7296801","Type":"ContainerDied","Data":"1155fe2ececa788009d295d6b3f0350b265a62f45095f8a599dcda64dc323a0e"}
Jan 26 13:00:38 crc kubenswrapper[4881]: I0126 13:00:38.158869 4881 scope.go:117] "RemoveContainer" containerID="501e0408c743324c3680db8a35a6855c76a8ac91e1f944852f20676a31152259"
Jan 26 13:00:38 crc kubenswrapper[4881]: I0126 13:00:38.164644 4881 generic.go:334] "Generic (PLEG): container finished" podID="dbdf91bd-1981-4d81-bc94-26b6a156aa9e" containerID="1dd952ee3f6093373f8b30c66a97606235cb558489c5869996dcf8f972a918c3" exitCode=0
Jan 26 13:00:38 crc kubenswrapper[4881]: I0126 13:00:38.164744 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"dbdf91bd-1981-4d81-bc94-26b6a156aa9e","Type":"ContainerDied","Data":"1dd952ee3f6093373f8b30c66a97606235cb558489c5869996dcf8f972a918c3"}
Jan 26 13:00:38 crc kubenswrapper[4881]: I0126 13:00:38.170981 4881 scope.go:117] "RemoveContainer" containerID="1155fe2ececa788009d295d6b3f0350b265a62f45095f8a599dcda64dc323a0e"
Jan 26 13:00:38 crc kubenswrapper[4881]: I0126 13:00:38.171775 4881 generic.go:334] "Generic (PLEG): container finished" podID="cf634913-5017-4a94-a3e7-0c337bb9fb4d" containerID="19567a9db0844a952d781ac403219913683d6c92d32d850768773e9cc2919707" exitCode=0
Jan 26 13:00:38 crc kubenswrapper[4881]: I0126 13:00:38.171901 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-nmggw" event={"ID":"cf634913-5017-4a94-a3e7-0c337bb9fb4d","Type":"ContainerDied","Data":"19567a9db0844a952d781ac403219913683d6c92d32d850768773e9cc2919707"}
Jan 26 13:00:38 crc kubenswrapper[4881]: E0126 13:00:38.172646 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 10s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(25d5f80d-b0a4-4b8f-a8b7-12f0b7296801)\"" pod="openstack/watcher-decision-engine-0" podUID="25d5f80d-b0a4-4b8f-a8b7-12f0b7296801"
Jan 26 13:00:38 crc kubenswrapper[4881]: I0126 13:00:38.587644 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-hmsfj"
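The pod_workers.go error above puts watcher-decision-engine-0 into CrashLoopBackOff with a 10s delay; Kubernetes then doubles the delay on each subsequent crash, capping at five minutes (the standard documented kubelet behavior). A sketch of the resulting delay series:

```go
package main

import (
	"fmt"
	"time"
)

// Prints the CrashLoopBackOff delay series: start at 10s, double per
// restart, cap at 5 minutes.
func main() {
	const maxDelay = 5 * time.Minute // documented kubelet cap
	delay := 10 * time.Second        // initial back-off seen in the log
	for restart := 1; restart <= 7; restart++ {
		fmt.Printf("restart %d: back-off %s\n", restart, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}
```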
Need to start a new one" pod="openstack/neutron-db-sync-hmsfj" Jan 26 13:00:38 crc kubenswrapper[4881]: I0126 13:00:38.705604 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n5s7p\" (UniqueName: \"kubernetes.io/projected/a410393d-b0c5-45bf-b9f7-897ad16759d4-kube-api-access-n5s7p\") pod \"a410393d-b0c5-45bf-b9f7-897ad16759d4\" (UID: \"a410393d-b0c5-45bf-b9f7-897ad16759d4\") " Jan 26 13:00:38 crc kubenswrapper[4881]: I0126 13:00:38.705754 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a410393d-b0c5-45bf-b9f7-897ad16759d4-config\") pod \"a410393d-b0c5-45bf-b9f7-897ad16759d4\" (UID: \"a410393d-b0c5-45bf-b9f7-897ad16759d4\") " Jan 26 13:00:38 crc kubenswrapper[4881]: I0126 13:00:38.705807 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a410393d-b0c5-45bf-b9f7-897ad16759d4-combined-ca-bundle\") pod \"a410393d-b0c5-45bf-b9f7-897ad16759d4\" (UID: \"a410393d-b0c5-45bf-b9f7-897ad16759d4\") " Jan 26 13:00:38 crc kubenswrapper[4881]: I0126 13:00:38.713860 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a410393d-b0c5-45bf-b9f7-897ad16759d4-kube-api-access-n5s7p" (OuterVolumeSpecName: "kube-api-access-n5s7p") pod "a410393d-b0c5-45bf-b9f7-897ad16759d4" (UID: "a410393d-b0c5-45bf-b9f7-897ad16759d4"). InnerVolumeSpecName "kube-api-access-n5s7p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:00:38 crc kubenswrapper[4881]: I0126 13:00:38.747440 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a410393d-b0c5-45bf-b9f7-897ad16759d4-config" (OuterVolumeSpecName: "config") pod "a410393d-b0c5-45bf-b9f7-897ad16759d4" (UID: "a410393d-b0c5-45bf-b9f7-897ad16759d4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:00:38 crc kubenswrapper[4881]: I0126 13:00:38.759799 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a410393d-b0c5-45bf-b9f7-897ad16759d4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a410393d-b0c5-45bf-b9f7-897ad16759d4" (UID: "a410393d-b0c5-45bf-b9f7-897ad16759d4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:00:38 crc kubenswrapper[4881]: I0126 13:00:38.811818 4881 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a410393d-b0c5-45bf-b9f7-897ad16759d4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 13:00:38 crc kubenswrapper[4881]: I0126 13:00:38.811854 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n5s7p\" (UniqueName: \"kubernetes.io/projected/a410393d-b0c5-45bf-b9f7-897ad16759d4-kube-api-access-n5s7p\") on node \"crc\" DevicePath \"\"" Jan 26 13:00:38 crc kubenswrapper[4881]: I0126 13:00:38.811869 4881 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/a410393d-b0c5-45bf-b9f7-897ad16759d4-config\") on node \"crc\" DevicePath \"\"" Jan 26 13:00:39 crc kubenswrapper[4881]: I0126 13:00:39.187417 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-hmsfj" Jan 26 13:00:39 crc kubenswrapper[4881]: I0126 13:00:39.187412 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-hmsfj" event={"ID":"a410393d-b0c5-45bf-b9f7-897ad16759d4","Type":"ContainerDied","Data":"50bd311ce112bd808a0845fe5808398be542a8c52d271a140c20a298477d5526"} Jan 26 13:00:39 crc kubenswrapper[4881]: I0126 13:00:39.187851 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="50bd311ce112bd808a0845fe5808398be542a8c52d271a140c20a298477d5526" Jan 26 13:00:39 crc kubenswrapper[4881]: I0126 13:00:39.490891 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-656959885f-b4rm6"] Jan 26 13:00:39 crc kubenswrapper[4881]: E0126 13:00:39.491730 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a410393d-b0c5-45bf-b9f7-897ad16759d4" containerName="neutron-db-sync" Jan 26 13:00:39 crc kubenswrapper[4881]: I0126 13:00:39.491752 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="a410393d-b0c5-45bf-b9f7-897ad16759d4" containerName="neutron-db-sync" Jan 26 13:00:39 crc kubenswrapper[4881]: E0126 13:00:39.491768 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="580e9345-d40f-4136-a7b3-9d5dc370cea7" containerName="dnsmasq-dns" Jan 26 13:00:39 crc kubenswrapper[4881]: I0126 13:00:39.491774 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="580e9345-d40f-4136-a7b3-9d5dc370cea7" containerName="dnsmasq-dns" Jan 26 13:00:39 crc kubenswrapper[4881]: E0126 13:00:39.491783 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="748ff944-3c37-434b-8ce3-6cff7e627ea0" containerName="barbican-api" Jan 26 13:00:39 crc kubenswrapper[4881]: I0126 13:00:39.491788 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="748ff944-3c37-434b-8ce3-6cff7e627ea0" containerName="barbican-api" Jan 26 13:00:39 crc kubenswrapper[4881]: E0126 13:00:39.491796 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="580e9345-d40f-4136-a7b3-9d5dc370cea7" containerName="init" Jan 26 13:00:39 crc kubenswrapper[4881]: I0126 13:00:39.491802 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="580e9345-d40f-4136-a7b3-9d5dc370cea7" containerName="init" Jan 26 13:00:39 crc kubenswrapper[4881]: E0126 13:00:39.491823 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="748ff944-3c37-434b-8ce3-6cff7e627ea0" containerName="barbican-api-log" Jan 26 13:00:39 crc kubenswrapper[4881]: I0126 13:00:39.491828 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="748ff944-3c37-434b-8ce3-6cff7e627ea0" containerName="barbican-api-log" Jan 26 13:00:39 crc kubenswrapper[4881]: I0126 13:00:39.491990 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="a410393d-b0c5-45bf-b9f7-897ad16759d4" containerName="neutron-db-sync" Jan 26 13:00:39 crc kubenswrapper[4881]: I0126 13:00:39.492011 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="748ff944-3c37-434b-8ce3-6cff7e627ea0" containerName="barbican-api-log" Jan 26 13:00:39 crc kubenswrapper[4881]: I0126 13:00:39.492023 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="580e9345-d40f-4136-a7b3-9d5dc370cea7" containerName="dnsmasq-dns" Jan 26 13:00:39 crc kubenswrapper[4881]: I0126 13:00:39.492029 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="748ff944-3c37-434b-8ce3-6cff7e627ea0" containerName="barbican-api" Jan 26 13:00:39 crc kubenswrapper[4881]: I0126 
13:00:39.492917 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-656959885f-b4rm6" Jan 26 13:00:39 crc kubenswrapper[4881]: I0126 13:00:39.517021 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-656959885f-b4rm6"] Jan 26 13:00:39 crc kubenswrapper[4881]: I0126 13:00:39.578640 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5795fd4b4d-xdxj4"] Jan 26 13:00:39 crc kubenswrapper[4881]: I0126 13:00:39.580237 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5795fd4b4d-xdxj4" Jan 26 13:00:39 crc kubenswrapper[4881]: I0126 13:00:39.582620 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-c9mh4" Jan 26 13:00:39 crc kubenswrapper[4881]: I0126 13:00:39.582718 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 26 13:00:39 crc kubenswrapper[4881]: I0126 13:00:39.582729 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 26 13:00:39 crc kubenswrapper[4881]: I0126 13:00:39.582791 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Jan 26 13:00:39 crc kubenswrapper[4881]: I0126 13:00:39.590342 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5795fd4b4d-xdxj4"] Jan 26 13:00:39 crc kubenswrapper[4881]: I0126 13:00:39.629370 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a37d86c8-ba9c-4890-99c9-946e7eb64f6e-ovsdbserver-sb\") pod \"dnsmasq-dns-656959885f-b4rm6\" (UID: \"a37d86c8-ba9c-4890-99c9-946e7eb64f6e\") " pod="openstack/dnsmasq-dns-656959885f-b4rm6" Jan 26 13:00:39 crc kubenswrapper[4881]: I0126 13:00:39.629436 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a37d86c8-ba9c-4890-99c9-946e7eb64f6e-dns-swift-storage-0\") pod \"dnsmasq-dns-656959885f-b4rm6\" (UID: \"a37d86c8-ba9c-4890-99c9-946e7eb64f6e\") " pod="openstack/dnsmasq-dns-656959885f-b4rm6" Jan 26 13:00:39 crc kubenswrapper[4881]: I0126 13:00:39.629495 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pj4xw\" (UniqueName: \"kubernetes.io/projected/a37d86c8-ba9c-4890-99c9-946e7eb64f6e-kube-api-access-pj4xw\") pod \"dnsmasq-dns-656959885f-b4rm6\" (UID: \"a37d86c8-ba9c-4890-99c9-946e7eb64f6e\") " pod="openstack/dnsmasq-dns-656959885f-b4rm6" Jan 26 13:00:39 crc kubenswrapper[4881]: I0126 13:00:39.629544 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a37d86c8-ba9c-4890-99c9-946e7eb64f6e-ovsdbserver-nb\") pod \"dnsmasq-dns-656959885f-b4rm6\" (UID: \"a37d86c8-ba9c-4890-99c9-946e7eb64f6e\") " pod="openstack/dnsmasq-dns-656959885f-b4rm6" Jan 26 13:00:39 crc kubenswrapper[4881]: I0126 13:00:39.629573 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a37d86c8-ba9c-4890-99c9-946e7eb64f6e-dns-svc\") pod \"dnsmasq-dns-656959885f-b4rm6\" (UID: \"a37d86c8-ba9c-4890-99c9-946e7eb64f6e\") " pod="openstack/dnsmasq-dns-656959885f-b4rm6" Jan 26 13:00:39 crc kubenswrapper[4881]: 
I0126 13:00:39.629625 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a37d86c8-ba9c-4890-99c9-946e7eb64f6e-config\") pod \"dnsmasq-dns-656959885f-b4rm6\" (UID: \"a37d86c8-ba9c-4890-99c9-946e7eb64f6e\") " pod="openstack/dnsmasq-dns-656959885f-b4rm6" Jan 26 13:00:39 crc kubenswrapper[4881]: I0126 13:00:39.731790 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a37d86c8-ba9c-4890-99c9-946e7eb64f6e-ovsdbserver-sb\") pod \"dnsmasq-dns-656959885f-b4rm6\" (UID: \"a37d86c8-ba9c-4890-99c9-946e7eb64f6e\") " pod="openstack/dnsmasq-dns-656959885f-b4rm6" Jan 26 13:00:39 crc kubenswrapper[4881]: I0126 13:00:39.731860 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/4065cb1b-b1ab-4fef-b77f-64ec87d80d99-httpd-config\") pod \"neutron-5795fd4b4d-xdxj4\" (UID: \"4065cb1b-b1ab-4fef-b77f-64ec87d80d99\") " pod="openstack/neutron-5795fd4b4d-xdxj4" Jan 26 13:00:39 crc kubenswrapper[4881]: I0126 13:00:39.731879 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4065cb1b-b1ab-4fef-b77f-64ec87d80d99-ovndb-tls-certs\") pod \"neutron-5795fd4b4d-xdxj4\" (UID: \"4065cb1b-b1ab-4fef-b77f-64ec87d80d99\") " pod="openstack/neutron-5795fd4b4d-xdxj4" Jan 26 13:00:39 crc kubenswrapper[4881]: I0126 13:00:39.731900 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a37d86c8-ba9c-4890-99c9-946e7eb64f6e-dns-swift-storage-0\") pod \"dnsmasq-dns-656959885f-b4rm6\" (UID: \"a37d86c8-ba9c-4890-99c9-946e7eb64f6e\") " pod="openstack/dnsmasq-dns-656959885f-b4rm6" Jan 26 13:00:39 crc kubenswrapper[4881]: I0126 13:00:39.731917 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4065cb1b-b1ab-4fef-b77f-64ec87d80d99-config\") pod \"neutron-5795fd4b4d-xdxj4\" (UID: \"4065cb1b-b1ab-4fef-b77f-64ec87d80d99\") " pod="openstack/neutron-5795fd4b4d-xdxj4" Jan 26 13:00:39 crc kubenswrapper[4881]: I0126 13:00:39.731940 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pj4xw\" (UniqueName: \"kubernetes.io/projected/a37d86c8-ba9c-4890-99c9-946e7eb64f6e-kube-api-access-pj4xw\") pod \"dnsmasq-dns-656959885f-b4rm6\" (UID: \"a37d86c8-ba9c-4890-99c9-946e7eb64f6e\") " pod="openstack/dnsmasq-dns-656959885f-b4rm6" Jan 26 13:00:39 crc kubenswrapper[4881]: I0126 13:00:39.731962 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4065cb1b-b1ab-4fef-b77f-64ec87d80d99-combined-ca-bundle\") pod \"neutron-5795fd4b4d-xdxj4\" (UID: \"4065cb1b-b1ab-4fef-b77f-64ec87d80d99\") " pod="openstack/neutron-5795fd4b4d-xdxj4" Jan 26 13:00:39 crc kubenswrapper[4881]: I0126 13:00:39.731979 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a37d86c8-ba9c-4890-99c9-946e7eb64f6e-ovsdbserver-nb\") pod \"dnsmasq-dns-656959885f-b4rm6\" (UID: \"a37d86c8-ba9c-4890-99c9-946e7eb64f6e\") " pod="openstack/dnsmasq-dns-656959885f-b4rm6" Jan 26 13:00:39 crc kubenswrapper[4881]: 
I0126 13:00:39.732002 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjdbz\" (UniqueName: \"kubernetes.io/projected/4065cb1b-b1ab-4fef-b77f-64ec87d80d99-kube-api-access-sjdbz\") pod \"neutron-5795fd4b4d-xdxj4\" (UID: \"4065cb1b-b1ab-4fef-b77f-64ec87d80d99\") " pod="openstack/neutron-5795fd4b4d-xdxj4" Jan 26 13:00:39 crc kubenswrapper[4881]: I0126 13:00:39.732022 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a37d86c8-ba9c-4890-99c9-946e7eb64f6e-dns-svc\") pod \"dnsmasq-dns-656959885f-b4rm6\" (UID: \"a37d86c8-ba9c-4890-99c9-946e7eb64f6e\") " pod="openstack/dnsmasq-dns-656959885f-b4rm6" Jan 26 13:00:39 crc kubenswrapper[4881]: I0126 13:00:39.732071 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a37d86c8-ba9c-4890-99c9-946e7eb64f6e-config\") pod \"dnsmasq-dns-656959885f-b4rm6\" (UID: \"a37d86c8-ba9c-4890-99c9-946e7eb64f6e\") " pod="openstack/dnsmasq-dns-656959885f-b4rm6" Jan 26 13:00:39 crc kubenswrapper[4881]: I0126 13:00:39.732787 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a37d86c8-ba9c-4890-99c9-946e7eb64f6e-ovsdbserver-sb\") pod \"dnsmasq-dns-656959885f-b4rm6\" (UID: \"a37d86c8-ba9c-4890-99c9-946e7eb64f6e\") " pod="openstack/dnsmasq-dns-656959885f-b4rm6" Jan 26 13:00:39 crc kubenswrapper[4881]: I0126 13:00:39.732871 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a37d86c8-ba9c-4890-99c9-946e7eb64f6e-config\") pod \"dnsmasq-dns-656959885f-b4rm6\" (UID: \"a37d86c8-ba9c-4890-99c9-946e7eb64f6e\") " pod="openstack/dnsmasq-dns-656959885f-b4rm6" Jan 26 13:00:39 crc kubenswrapper[4881]: I0126 13:00:39.733446 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a37d86c8-ba9c-4890-99c9-946e7eb64f6e-ovsdbserver-nb\") pod \"dnsmasq-dns-656959885f-b4rm6\" (UID: \"a37d86c8-ba9c-4890-99c9-946e7eb64f6e\") " pod="openstack/dnsmasq-dns-656959885f-b4rm6" Jan 26 13:00:39 crc kubenswrapper[4881]: I0126 13:00:39.733565 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a37d86c8-ba9c-4890-99c9-946e7eb64f6e-dns-swift-storage-0\") pod \"dnsmasq-dns-656959885f-b4rm6\" (UID: \"a37d86c8-ba9c-4890-99c9-946e7eb64f6e\") " pod="openstack/dnsmasq-dns-656959885f-b4rm6" Jan 26 13:00:39 crc kubenswrapper[4881]: I0126 13:00:39.733984 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a37d86c8-ba9c-4890-99c9-946e7eb64f6e-dns-svc\") pod \"dnsmasq-dns-656959885f-b4rm6\" (UID: \"a37d86c8-ba9c-4890-99c9-946e7eb64f6e\") " pod="openstack/dnsmasq-dns-656959885f-b4rm6" Jan 26 13:00:39 crc kubenswrapper[4881]: I0126 13:00:39.755692 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pj4xw\" (UniqueName: \"kubernetes.io/projected/a37d86c8-ba9c-4890-99c9-946e7eb64f6e-kube-api-access-pj4xw\") pod \"dnsmasq-dns-656959885f-b4rm6\" (UID: \"a37d86c8-ba9c-4890-99c9-946e7eb64f6e\") " pod="openstack/dnsmasq-dns-656959885f-b4rm6" Jan 26 13:00:39 crc kubenswrapper[4881]: I0126 13:00:39.829057 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-656959885f-b4rm6" Jan 26 13:00:39 crc kubenswrapper[4881]: I0126 13:00:39.833533 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/4065cb1b-b1ab-4fef-b77f-64ec87d80d99-httpd-config\") pod \"neutron-5795fd4b4d-xdxj4\" (UID: \"4065cb1b-b1ab-4fef-b77f-64ec87d80d99\") " pod="openstack/neutron-5795fd4b4d-xdxj4" Jan 26 13:00:39 crc kubenswrapper[4881]: I0126 13:00:39.833605 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4065cb1b-b1ab-4fef-b77f-64ec87d80d99-ovndb-tls-certs\") pod \"neutron-5795fd4b4d-xdxj4\" (UID: \"4065cb1b-b1ab-4fef-b77f-64ec87d80d99\") " pod="openstack/neutron-5795fd4b4d-xdxj4" Jan 26 13:00:39 crc kubenswrapper[4881]: I0126 13:00:39.833638 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4065cb1b-b1ab-4fef-b77f-64ec87d80d99-config\") pod \"neutron-5795fd4b4d-xdxj4\" (UID: \"4065cb1b-b1ab-4fef-b77f-64ec87d80d99\") " pod="openstack/neutron-5795fd4b4d-xdxj4" Jan 26 13:00:39 crc kubenswrapper[4881]: I0126 13:00:39.833681 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4065cb1b-b1ab-4fef-b77f-64ec87d80d99-combined-ca-bundle\") pod \"neutron-5795fd4b4d-xdxj4\" (UID: \"4065cb1b-b1ab-4fef-b77f-64ec87d80d99\") " pod="openstack/neutron-5795fd4b4d-xdxj4" Jan 26 13:00:39 crc kubenswrapper[4881]: I0126 13:00:39.833720 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sjdbz\" (UniqueName: \"kubernetes.io/projected/4065cb1b-b1ab-4fef-b77f-64ec87d80d99-kube-api-access-sjdbz\") pod \"neutron-5795fd4b4d-xdxj4\" (UID: \"4065cb1b-b1ab-4fef-b77f-64ec87d80d99\") " pod="openstack/neutron-5795fd4b4d-xdxj4" Jan 26 13:00:39 crc kubenswrapper[4881]: I0126 13:00:39.838180 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4065cb1b-b1ab-4fef-b77f-64ec87d80d99-combined-ca-bundle\") pod \"neutron-5795fd4b4d-xdxj4\" (UID: \"4065cb1b-b1ab-4fef-b77f-64ec87d80d99\") " pod="openstack/neutron-5795fd4b4d-xdxj4" Jan 26 13:00:39 crc kubenswrapper[4881]: I0126 13:00:39.838346 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4065cb1b-b1ab-4fef-b77f-64ec87d80d99-ovndb-tls-certs\") pod \"neutron-5795fd4b4d-xdxj4\" (UID: \"4065cb1b-b1ab-4fef-b77f-64ec87d80d99\") " pod="openstack/neutron-5795fd4b4d-xdxj4" Jan 26 13:00:39 crc kubenswrapper[4881]: I0126 13:00:39.838496 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/4065cb1b-b1ab-4fef-b77f-64ec87d80d99-config\") pod \"neutron-5795fd4b4d-xdxj4\" (UID: \"4065cb1b-b1ab-4fef-b77f-64ec87d80d99\") " pod="openstack/neutron-5795fd4b4d-xdxj4" Jan 26 13:00:39 crc kubenswrapper[4881]: I0126 13:00:39.842223 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/4065cb1b-b1ab-4fef-b77f-64ec87d80d99-httpd-config\") pod \"neutron-5795fd4b4d-xdxj4\" (UID: \"4065cb1b-b1ab-4fef-b77f-64ec87d80d99\") " pod="openstack/neutron-5795fd4b4d-xdxj4" Jan 26 13:00:39 crc kubenswrapper[4881]: I0126 13:00:39.850856 4881 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-sjdbz\" (UniqueName: \"kubernetes.io/projected/4065cb1b-b1ab-4fef-b77f-64ec87d80d99-kube-api-access-sjdbz\") pod \"neutron-5795fd4b4d-xdxj4\" (UID: \"4065cb1b-b1ab-4fef-b77f-64ec87d80d99\") " pod="openstack/neutron-5795fd4b4d-xdxj4" Jan 26 13:00:39 crc kubenswrapper[4881]: I0126 13:00:39.904052 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5795fd4b4d-xdxj4" Jan 26 13:00:39 crc kubenswrapper[4881]: I0126 13:00:39.951830 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-5c67646cfd-kppgm" podUID="6b0cbe35-c0c9-4483-866a-eddf1fdced26" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.157:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.157:8443: connect: connection refused" Jan 26 13:00:40 crc kubenswrapper[4881]: I0126 13:00:40.003210 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-nmggw" Jan 26 13:00:40 crc kubenswrapper[4881]: I0126 13:00:40.139183 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf634913-5017-4a94-a3e7-0c337bb9fb4d-combined-ca-bundle\") pod \"cf634913-5017-4a94-a3e7-0c337bb9fb4d\" (UID: \"cf634913-5017-4a94-a3e7-0c337bb9fb4d\") " Jan 26 13:00:40 crc kubenswrapper[4881]: I0126 13:00:40.139259 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/cf634913-5017-4a94-a3e7-0c337bb9fb4d-db-sync-config-data\") pod \"cf634913-5017-4a94-a3e7-0c337bb9fb4d\" (UID: \"cf634913-5017-4a94-a3e7-0c337bb9fb4d\") " Jan 26 13:00:40 crc kubenswrapper[4881]: I0126 13:00:40.139339 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-52rx4\" (UniqueName: \"kubernetes.io/projected/cf634913-5017-4a94-a3e7-0c337bb9fb4d-kube-api-access-52rx4\") pod \"cf634913-5017-4a94-a3e7-0c337bb9fb4d\" (UID: \"cf634913-5017-4a94-a3e7-0c337bb9fb4d\") " Jan 26 13:00:40 crc kubenswrapper[4881]: I0126 13:00:40.139366 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf634913-5017-4a94-a3e7-0c337bb9fb4d-config-data\") pod \"cf634913-5017-4a94-a3e7-0c337bb9fb4d\" (UID: \"cf634913-5017-4a94-a3e7-0c337bb9fb4d\") " Jan 26 13:00:40 crc kubenswrapper[4881]: I0126 13:00:40.148660 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf634913-5017-4a94-a3e7-0c337bb9fb4d-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "cf634913-5017-4a94-a3e7-0c337bb9fb4d" (UID: "cf634913-5017-4a94-a3e7-0c337bb9fb4d"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:00:40 crc kubenswrapper[4881]: I0126 13:00:40.171998 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf634913-5017-4a94-a3e7-0c337bb9fb4d-kube-api-access-52rx4" (OuterVolumeSpecName: "kube-api-access-52rx4") pod "cf634913-5017-4a94-a3e7-0c337bb9fb4d" (UID: "cf634913-5017-4a94-a3e7-0c337bb9fb4d"). InnerVolumeSpecName "kube-api-access-52rx4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:00:40 crc kubenswrapper[4881]: I0126 13:00:40.207702 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf634913-5017-4a94-a3e7-0c337bb9fb4d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cf634913-5017-4a94-a3e7-0c337bb9fb4d" (UID: "cf634913-5017-4a94-a3e7-0c337bb9fb4d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:00:40 crc kubenswrapper[4881]: I0126 13:00:40.236662 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf634913-5017-4a94-a3e7-0c337bb9fb4d-config-data" (OuterVolumeSpecName: "config-data") pod "cf634913-5017-4a94-a3e7-0c337bb9fb4d" (UID: "cf634913-5017-4a94-a3e7-0c337bb9fb4d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:00:40 crc kubenswrapper[4881]: I0126 13:00:40.237560 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Jan 26 13:00:40 crc kubenswrapper[4881]: I0126 13:00:40.237597 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Jan 26 13:00:40 crc kubenswrapper[4881]: I0126 13:00:40.238146 4881 scope.go:117] "RemoveContainer" containerID="1155fe2ececa788009d295d6b3f0350b265a62f45095f8a599dcda64dc323a0e" Jan 26 13:00:40 crc kubenswrapper[4881]: E0126 13:00:40.238478 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 10s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(25d5f80d-b0a4-4b8f-a8b7-12f0b7296801)\"" pod="openstack/watcher-decision-engine-0" podUID="25d5f80d-b0a4-4b8f-a8b7-12f0b7296801" Jan 26 13:00:40 crc kubenswrapper[4881]: I0126 13:00:40.241545 4881 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf634913-5017-4a94-a3e7-0c337bb9fb4d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 13:00:40 crc kubenswrapper[4881]: I0126 13:00:40.241571 4881 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/cf634913-5017-4a94-a3e7-0c337bb9fb4d-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 13:00:40 crc kubenswrapper[4881]: I0126 13:00:40.241582 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-52rx4\" (UniqueName: \"kubernetes.io/projected/cf634913-5017-4a94-a3e7-0c337bb9fb4d-kube-api-access-52rx4\") on node \"crc\" DevicePath \"\"" Jan 26 13:00:40 crc kubenswrapper[4881]: I0126 13:00:40.241592 4881 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf634913-5017-4a94-a3e7-0c337bb9fb4d-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 13:00:40 crc kubenswrapper[4881]: I0126 13:00:40.241721 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-nmggw" event={"ID":"cf634913-5017-4a94-a3e7-0c337bb9fb4d","Type":"ContainerDied","Data":"d60e5c74057cff4ee905dda55d676a653c80700c5387ae3805159402bcc5cd54"} Jan 26 13:00:40 crc kubenswrapper[4881]: I0126 13:00:40.241744 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d60e5c74057cff4ee905dda55d676a653c80700c5387ae3805159402bcc5cd54" Jan 26 13:00:40 crc 
kubenswrapper[4881]: I0126 13:00:40.241802 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-nmggw" Jan 26 13:00:40 crc kubenswrapper[4881]: I0126 13:00:40.257876 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-api-0" Jan 26 13:00:40 crc kubenswrapper[4881]: I0126 13:00:40.274896 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Jan 26 13:00:40 crc kubenswrapper[4881]: I0126 13:00:40.386378 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-656959885f-b4rm6"] Jan 26 13:00:40 crc kubenswrapper[4881]: I0126 13:00:40.736442 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5795fd4b4d-xdxj4"] Jan 26 13:00:40 crc kubenswrapper[4881]: W0126 13:00:40.796651 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4065cb1b_b1ab_4fef_b77f_64ec87d80d99.slice/crio-06e230abe3a16291d2f4ba2e9d202472185a52c2482a9e046df62bcfff1e27f1 WatchSource:0}: Error finding container 06e230abe3a16291d2f4ba2e9d202472185a52c2482a9e046df62bcfff1e27f1: Status 404 returned error can't find the container with id 06e230abe3a16291d2f4ba2e9d202472185a52c2482a9e046df62bcfff1e27f1 Jan 26 13:00:40 crc kubenswrapper[4881]: I0126 13:00:40.850376 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-656959885f-b4rm6"] Jan 26 13:00:40 crc kubenswrapper[4881]: I0126 13:00:40.897580 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5d675956bc-pd9zd"] Jan 26 13:00:40 crc kubenswrapper[4881]: E0126 13:00:40.897992 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf634913-5017-4a94-a3e7-0c337bb9fb4d" containerName="glance-db-sync" Jan 26 13:00:40 crc kubenswrapper[4881]: I0126 13:00:40.898008 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf634913-5017-4a94-a3e7-0c337bb9fb4d" containerName="glance-db-sync" Jan 26 13:00:40 crc kubenswrapper[4881]: I0126 13:00:40.898240 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf634913-5017-4a94-a3e7-0c337bb9fb4d" containerName="glance-db-sync" Jan 26 13:00:40 crc kubenswrapper[4881]: I0126 13:00:40.899240 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5d675956bc-pd9zd" Jan 26 13:00:40 crc kubenswrapper[4881]: I0126 13:00:40.924662 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d675956bc-pd9zd"] Jan 26 13:00:41 crc kubenswrapper[4881]: I0126 13:00:41.068148 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhzkh\" (UniqueName: \"kubernetes.io/projected/5790d0bd-1660-4c11-af86-e99d9a2aabf8-kube-api-access-dhzkh\") pod \"dnsmasq-dns-5d675956bc-pd9zd\" (UID: \"5790d0bd-1660-4c11-af86-e99d9a2aabf8\") " pod="openstack/dnsmasq-dns-5d675956bc-pd9zd" Jan 26 13:00:41 crc kubenswrapper[4881]: I0126 13:00:41.068274 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5790d0bd-1660-4c11-af86-e99d9a2aabf8-config\") pod \"dnsmasq-dns-5d675956bc-pd9zd\" (UID: \"5790d0bd-1660-4c11-af86-e99d9a2aabf8\") " pod="openstack/dnsmasq-dns-5d675956bc-pd9zd" Jan 26 13:00:41 crc kubenswrapper[4881]: I0126 13:00:41.068299 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5790d0bd-1660-4c11-af86-e99d9a2aabf8-dns-svc\") pod \"dnsmasq-dns-5d675956bc-pd9zd\" (UID: \"5790d0bd-1660-4c11-af86-e99d9a2aabf8\") " pod="openstack/dnsmasq-dns-5d675956bc-pd9zd" Jan 26 13:00:41 crc kubenswrapper[4881]: I0126 13:00:41.068323 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5790d0bd-1660-4c11-af86-e99d9a2aabf8-ovsdbserver-sb\") pod \"dnsmasq-dns-5d675956bc-pd9zd\" (UID: \"5790d0bd-1660-4c11-af86-e99d9a2aabf8\") " pod="openstack/dnsmasq-dns-5d675956bc-pd9zd" Jan 26 13:00:41 crc kubenswrapper[4881]: I0126 13:00:41.068439 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5790d0bd-1660-4c11-af86-e99d9a2aabf8-dns-swift-storage-0\") pod \"dnsmasq-dns-5d675956bc-pd9zd\" (UID: \"5790d0bd-1660-4c11-af86-e99d9a2aabf8\") " pod="openstack/dnsmasq-dns-5d675956bc-pd9zd" Jan 26 13:00:41 crc kubenswrapper[4881]: I0126 13:00:41.068651 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5790d0bd-1660-4c11-af86-e99d9a2aabf8-ovsdbserver-nb\") pod \"dnsmasq-dns-5d675956bc-pd9zd\" (UID: \"5790d0bd-1660-4c11-af86-e99d9a2aabf8\") " pod="openstack/dnsmasq-dns-5d675956bc-pd9zd" Jan 26 13:00:41 crc kubenswrapper[4881]: I0126 13:00:41.170754 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5790d0bd-1660-4c11-af86-e99d9a2aabf8-ovsdbserver-nb\") pod \"dnsmasq-dns-5d675956bc-pd9zd\" (UID: \"5790d0bd-1660-4c11-af86-e99d9a2aabf8\") " pod="openstack/dnsmasq-dns-5d675956bc-pd9zd" Jan 26 13:00:41 crc kubenswrapper[4881]: I0126 13:00:41.171091 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dhzkh\" (UniqueName: \"kubernetes.io/projected/5790d0bd-1660-4c11-af86-e99d9a2aabf8-kube-api-access-dhzkh\") pod \"dnsmasq-dns-5d675956bc-pd9zd\" (UID: \"5790d0bd-1660-4c11-af86-e99d9a2aabf8\") " pod="openstack/dnsmasq-dns-5d675956bc-pd9zd" Jan 26 13:00:41 crc kubenswrapper[4881]: I0126 13:00:41.171169 4881 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5790d0bd-1660-4c11-af86-e99d9a2aabf8-config\") pod \"dnsmasq-dns-5d675956bc-pd9zd\" (UID: \"5790d0bd-1660-4c11-af86-e99d9a2aabf8\") " pod="openstack/dnsmasq-dns-5d675956bc-pd9zd" Jan 26 13:00:41 crc kubenswrapper[4881]: I0126 13:00:41.171196 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5790d0bd-1660-4c11-af86-e99d9a2aabf8-dns-svc\") pod \"dnsmasq-dns-5d675956bc-pd9zd\" (UID: \"5790d0bd-1660-4c11-af86-e99d9a2aabf8\") " pod="openstack/dnsmasq-dns-5d675956bc-pd9zd" Jan 26 13:00:41 crc kubenswrapper[4881]: I0126 13:00:41.171236 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5790d0bd-1660-4c11-af86-e99d9a2aabf8-ovsdbserver-sb\") pod \"dnsmasq-dns-5d675956bc-pd9zd\" (UID: \"5790d0bd-1660-4c11-af86-e99d9a2aabf8\") " pod="openstack/dnsmasq-dns-5d675956bc-pd9zd" Jan 26 13:00:41 crc kubenswrapper[4881]: I0126 13:00:41.171262 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5790d0bd-1660-4c11-af86-e99d9a2aabf8-dns-swift-storage-0\") pod \"dnsmasq-dns-5d675956bc-pd9zd\" (UID: \"5790d0bd-1660-4c11-af86-e99d9a2aabf8\") " pod="openstack/dnsmasq-dns-5d675956bc-pd9zd" Jan 26 13:00:41 crc kubenswrapper[4881]: I0126 13:00:41.171546 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5790d0bd-1660-4c11-af86-e99d9a2aabf8-ovsdbserver-nb\") pod \"dnsmasq-dns-5d675956bc-pd9zd\" (UID: \"5790d0bd-1660-4c11-af86-e99d9a2aabf8\") " pod="openstack/dnsmasq-dns-5d675956bc-pd9zd" Jan 26 13:00:41 crc kubenswrapper[4881]: I0126 13:00:41.171935 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5790d0bd-1660-4c11-af86-e99d9a2aabf8-dns-swift-storage-0\") pod \"dnsmasq-dns-5d675956bc-pd9zd\" (UID: \"5790d0bd-1660-4c11-af86-e99d9a2aabf8\") " pod="openstack/dnsmasq-dns-5d675956bc-pd9zd" Jan 26 13:00:41 crc kubenswrapper[4881]: I0126 13:00:41.172683 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5790d0bd-1660-4c11-af86-e99d9a2aabf8-config\") pod \"dnsmasq-dns-5d675956bc-pd9zd\" (UID: \"5790d0bd-1660-4c11-af86-e99d9a2aabf8\") " pod="openstack/dnsmasq-dns-5d675956bc-pd9zd" Jan 26 13:00:41 crc kubenswrapper[4881]: I0126 13:00:41.172705 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5790d0bd-1660-4c11-af86-e99d9a2aabf8-ovsdbserver-sb\") pod \"dnsmasq-dns-5d675956bc-pd9zd\" (UID: \"5790d0bd-1660-4c11-af86-e99d9a2aabf8\") " pod="openstack/dnsmasq-dns-5d675956bc-pd9zd" Jan 26 13:00:41 crc kubenswrapper[4881]: I0126 13:00:41.174130 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5790d0bd-1660-4c11-af86-e99d9a2aabf8-dns-svc\") pod \"dnsmasq-dns-5d675956bc-pd9zd\" (UID: \"5790d0bd-1660-4c11-af86-e99d9a2aabf8\") " pod="openstack/dnsmasq-dns-5d675956bc-pd9zd" Jan 26 13:00:41 crc kubenswrapper[4881]: I0126 13:00:41.193822 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dhzkh\" (UniqueName: 
\"kubernetes.io/projected/5790d0bd-1660-4c11-af86-e99d9a2aabf8-kube-api-access-dhzkh\") pod \"dnsmasq-dns-5d675956bc-pd9zd\" (UID: \"5790d0bd-1660-4c11-af86-e99d9a2aabf8\") " pod="openstack/dnsmasq-dns-5d675956bc-pd9zd" Jan 26 13:00:41 crc kubenswrapper[4881]: I0126 13:00:41.258008 4881 generic.go:334] "Generic (PLEG): container finished" podID="dbdf91bd-1981-4d81-bc94-26b6a156aa9e" containerID="3e812eee12ffe504c5b25c875dd765f92f74a124f44167e3baf0f9cf97041844" exitCode=0 Jan 26 13:00:41 crc kubenswrapper[4881]: I0126 13:00:41.258146 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"dbdf91bd-1981-4d81-bc94-26b6a156aa9e","Type":"ContainerDied","Data":"3e812eee12ffe504c5b25c875dd765f92f74a124f44167e3baf0f9cf97041844"} Jan 26 13:00:41 crc kubenswrapper[4881]: I0126 13:00:41.259534 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5795fd4b4d-xdxj4" event={"ID":"4065cb1b-b1ab-4fef-b77f-64ec87d80d99","Type":"ContainerStarted","Data":"de44f1ef54490250ab4357497e56ac4e7101c2cb24e3df8c818bc7d17fe58aed"} Jan 26 13:00:41 crc kubenswrapper[4881]: I0126 13:00:41.259558 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5795fd4b4d-xdxj4" event={"ID":"4065cb1b-b1ab-4fef-b77f-64ec87d80d99","Type":"ContainerStarted","Data":"06e230abe3a16291d2f4ba2e9d202472185a52c2482a9e046df62bcfff1e27f1"} Jan 26 13:00:41 crc kubenswrapper[4881]: I0126 13:00:41.262184 4881 generic.go:334] "Generic (PLEG): container finished" podID="a37d86c8-ba9c-4890-99c9-946e7eb64f6e" containerID="fcbe83a0f2795889f5e33752cbdd1d2d1b5cc282e23f596ecf76aeb7c5fde86a" exitCode=0 Jan 26 13:00:41 crc kubenswrapper[4881]: I0126 13:00:41.263106 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-656959885f-b4rm6" event={"ID":"a37d86c8-ba9c-4890-99c9-946e7eb64f6e","Type":"ContainerDied","Data":"fcbe83a0f2795889f5e33752cbdd1d2d1b5cc282e23f596ecf76aeb7c5fde86a"} Jan 26 13:00:41 crc kubenswrapper[4881]: I0126 13:00:41.263134 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-656959885f-b4rm6" event={"ID":"a37d86c8-ba9c-4890-99c9-946e7eb64f6e","Type":"ContainerStarted","Data":"9dcf0518b6093427861eeac525b9bea04d217b7ce3795359283f4fe67bfade14"} Jan 26 13:00:41 crc kubenswrapper[4881]: I0126 13:00:41.328281 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d675956bc-pd9zd" Jan 26 13:00:41 crc kubenswrapper[4881]: I0126 13:00:41.586291 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-54766b76bb-mkjc2" Jan 26 13:00:41 crc kubenswrapper[4881]: I0126 13:00:41.590148 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-54766b76bb-mkjc2" Jan 26 13:00:41 crc kubenswrapper[4881]: I0126 13:00:41.665571 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 13:00:41 crc kubenswrapper[4881]: I0126 13:00:41.678638 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 13:00:41 crc kubenswrapper[4881]: I0126 13:00:41.678744 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 26 13:00:41 crc kubenswrapper[4881]: I0126 13:00:41.681912 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 26 13:00:41 crc kubenswrapper[4881]: I0126 13:00:41.682283 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-jjhvj" Jan 26 13:00:41 crc kubenswrapper[4881]: I0126 13:00:41.682568 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 26 13:00:41 crc kubenswrapper[4881]: I0126 13:00:41.753173 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-656959885f-b4rm6" Jan 26 13:00:41 crc kubenswrapper[4881]: I0126 13:00:41.795527 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26a462c7-dad6-4b3a-8cb2-30db4f6e6c7c-config-data\") pod \"glance-default-external-api-0\" (UID: \"26a462c7-dad6-4b3a-8cb2-30db4f6e6c7c\") " pod="openstack/glance-default-external-api-0" Jan 26 13:00:41 crc kubenswrapper[4881]: I0126 13:00:41.795596 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/26a462c7-dad6-4b3a-8cb2-30db4f6e6c7c-logs\") pod \"glance-default-external-api-0\" (UID: \"26a462c7-dad6-4b3a-8cb2-30db4f6e6c7c\") " pod="openstack/glance-default-external-api-0" Jan 26 13:00:41 crc kubenswrapper[4881]: I0126 13:00:41.795622 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/26a462c7-dad6-4b3a-8cb2-30db4f6e6c7c-scripts\") pod \"glance-default-external-api-0\" (UID: \"26a462c7-dad6-4b3a-8cb2-30db4f6e6c7c\") " pod="openstack/glance-default-external-api-0" Jan 26 13:00:41 crc kubenswrapper[4881]: I0126 13:00:41.795659 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26a462c7-dad6-4b3a-8cb2-30db4f6e6c7c-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"26a462c7-dad6-4b3a-8cb2-30db4f6e6c7c\") " pod="openstack/glance-default-external-api-0" Jan 26 13:00:41 crc kubenswrapper[4881]: I0126 13:00:41.795696 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"26a462c7-dad6-4b3a-8cb2-30db4f6e6c7c\") " pod="openstack/glance-default-external-api-0" Jan 26 13:00:41 crc kubenswrapper[4881]: I0126 13:00:41.795713 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/26a462c7-dad6-4b3a-8cb2-30db4f6e6c7c-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"26a462c7-dad6-4b3a-8cb2-30db4f6e6c7c\") " pod="openstack/glance-default-external-api-0" Jan 26 13:00:41 crc kubenswrapper[4881]: I0126 13:00:41.795763 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dzf6\" (UniqueName: \"kubernetes.io/projected/26a462c7-dad6-4b3a-8cb2-30db4f6e6c7c-kube-api-access-2dzf6\") pod \"glance-default-external-api-0\" (UID: \"26a462c7-dad6-4b3a-8cb2-30db4f6e6c7c\") " 
pod="openstack/glance-default-external-api-0" Jan 26 13:00:41 crc kubenswrapper[4881]: I0126 13:00:41.897256 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a37d86c8-ba9c-4890-99c9-946e7eb64f6e-dns-swift-storage-0\") pod \"a37d86c8-ba9c-4890-99c9-946e7eb64f6e\" (UID: \"a37d86c8-ba9c-4890-99c9-946e7eb64f6e\") " Jan 26 13:00:41 crc kubenswrapper[4881]: I0126 13:00:41.897325 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a37d86c8-ba9c-4890-99c9-946e7eb64f6e-config\") pod \"a37d86c8-ba9c-4890-99c9-946e7eb64f6e\" (UID: \"a37d86c8-ba9c-4890-99c9-946e7eb64f6e\") " Jan 26 13:00:41 crc kubenswrapper[4881]: I0126 13:00:41.897418 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a37d86c8-ba9c-4890-99c9-946e7eb64f6e-ovsdbserver-sb\") pod \"a37d86c8-ba9c-4890-99c9-946e7eb64f6e\" (UID: \"a37d86c8-ba9c-4890-99c9-946e7eb64f6e\") " Jan 26 13:00:41 crc kubenswrapper[4881]: I0126 13:00:41.897466 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj4xw\" (UniqueName: \"kubernetes.io/projected/a37d86c8-ba9c-4890-99c9-946e7eb64f6e-kube-api-access-pj4xw\") pod \"a37d86c8-ba9c-4890-99c9-946e7eb64f6e\" (UID: \"a37d86c8-ba9c-4890-99c9-946e7eb64f6e\") " Jan 26 13:00:41 crc kubenswrapper[4881]: I0126 13:00:41.897554 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a37d86c8-ba9c-4890-99c9-946e7eb64f6e-dns-svc\") pod \"a37d86c8-ba9c-4890-99c9-946e7eb64f6e\" (UID: \"a37d86c8-ba9c-4890-99c9-946e7eb64f6e\") " Jan 26 13:00:41 crc kubenswrapper[4881]: I0126 13:00:41.897574 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a37d86c8-ba9c-4890-99c9-946e7eb64f6e-ovsdbserver-nb\") pod \"a37d86c8-ba9c-4890-99c9-946e7eb64f6e\" (UID: \"a37d86c8-ba9c-4890-99c9-946e7eb64f6e\") " Jan 26 13:00:41 crc kubenswrapper[4881]: I0126 13:00:41.897896 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26a462c7-dad6-4b3a-8cb2-30db4f6e6c7c-config-data\") pod \"glance-default-external-api-0\" (UID: \"26a462c7-dad6-4b3a-8cb2-30db4f6e6c7c\") " pod="openstack/glance-default-external-api-0" Jan 26 13:00:41 crc kubenswrapper[4881]: I0126 13:00:41.897930 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/26a462c7-dad6-4b3a-8cb2-30db4f6e6c7c-logs\") pod \"glance-default-external-api-0\" (UID: \"26a462c7-dad6-4b3a-8cb2-30db4f6e6c7c\") " pod="openstack/glance-default-external-api-0" Jan 26 13:00:41 crc kubenswrapper[4881]: I0126 13:00:41.897952 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/26a462c7-dad6-4b3a-8cb2-30db4f6e6c7c-scripts\") pod \"glance-default-external-api-0\" (UID: \"26a462c7-dad6-4b3a-8cb2-30db4f6e6c7c\") " pod="openstack/glance-default-external-api-0" Jan 26 13:00:41 crc kubenswrapper[4881]: I0126 13:00:41.897990 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26a462c7-dad6-4b3a-8cb2-30db4f6e6c7c-combined-ca-bundle\") 
pod \"glance-default-external-api-0\" (UID: \"26a462c7-dad6-4b3a-8cb2-30db4f6e6c7c\") " pod="openstack/glance-default-external-api-0" Jan 26 13:00:41 crc kubenswrapper[4881]: I0126 13:00:41.898023 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"26a462c7-dad6-4b3a-8cb2-30db4f6e6c7c\") " pod="openstack/glance-default-external-api-0" Jan 26 13:00:41 crc kubenswrapper[4881]: I0126 13:00:41.898038 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/26a462c7-dad6-4b3a-8cb2-30db4f6e6c7c-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"26a462c7-dad6-4b3a-8cb2-30db4f6e6c7c\") " pod="openstack/glance-default-external-api-0" Jan 26 13:00:41 crc kubenswrapper[4881]: I0126 13:00:41.898086 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dzf6\" (UniqueName: \"kubernetes.io/projected/26a462c7-dad6-4b3a-8cb2-30db4f6e6c7c-kube-api-access-2dzf6\") pod \"glance-default-external-api-0\" (UID: \"26a462c7-dad6-4b3a-8cb2-30db4f6e6c7c\") " pod="openstack/glance-default-external-api-0" Jan 26 13:00:41 crc kubenswrapper[4881]: I0126 13:00:41.901754 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/26a462c7-dad6-4b3a-8cb2-30db4f6e6c7c-logs\") pod \"glance-default-external-api-0\" (UID: \"26a462c7-dad6-4b3a-8cb2-30db4f6e6c7c\") " pod="openstack/glance-default-external-api-0" Jan 26 13:00:41 crc kubenswrapper[4881]: I0126 13:00:41.902887 4881 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"26a462c7-dad6-4b3a-8cb2-30db4f6e6c7c\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-external-api-0" Jan 26 13:00:41 crc kubenswrapper[4881]: I0126 13:00:41.902972 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/26a462c7-dad6-4b3a-8cb2-30db4f6e6c7c-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"26a462c7-dad6-4b3a-8cb2-30db4f6e6c7c\") " pod="openstack/glance-default-external-api-0" Jan 26 13:00:41 crc kubenswrapper[4881]: I0126 13:00:41.906541 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a37d86c8-ba9c-4890-99c9-946e7eb64f6e-kube-api-access-pj4xw" (OuterVolumeSpecName: "kube-api-access-pj4xw") pod "a37d86c8-ba9c-4890-99c9-946e7eb64f6e" (UID: "a37d86c8-ba9c-4890-99c9-946e7eb64f6e"). InnerVolumeSpecName "kube-api-access-pj4xw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:00:41 crc kubenswrapper[4881]: I0126 13:00:41.909170 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26a462c7-dad6-4b3a-8cb2-30db4f6e6c7c-config-data\") pod \"glance-default-external-api-0\" (UID: \"26a462c7-dad6-4b3a-8cb2-30db4f6e6c7c\") " pod="openstack/glance-default-external-api-0" Jan 26 13:00:41 crc kubenswrapper[4881]: I0126 13:00:41.909706 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26a462c7-dad6-4b3a-8cb2-30db4f6e6c7c-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"26a462c7-dad6-4b3a-8cb2-30db4f6e6c7c\") " pod="openstack/glance-default-external-api-0" Jan 26 13:00:41 crc kubenswrapper[4881]: I0126 13:00:41.933257 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/26a462c7-dad6-4b3a-8cb2-30db4f6e6c7c-scripts\") pod \"glance-default-external-api-0\" (UID: \"26a462c7-dad6-4b3a-8cb2-30db4f6e6c7c\") " pod="openstack/glance-default-external-api-0" Jan 26 13:00:41 crc kubenswrapper[4881]: I0126 13:00:41.945461 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a37d86c8-ba9c-4890-99c9-946e7eb64f6e-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "a37d86c8-ba9c-4890-99c9-946e7eb64f6e" (UID: "a37d86c8-ba9c-4890-99c9-946e7eb64f6e"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:00:41 crc kubenswrapper[4881]: I0126 13:00:41.945547 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dzf6\" (UniqueName: \"kubernetes.io/projected/26a462c7-dad6-4b3a-8cb2-30db4f6e6c7c-kube-api-access-2dzf6\") pod \"glance-default-external-api-0\" (UID: \"26a462c7-dad6-4b3a-8cb2-30db4f6e6c7c\") " pod="openstack/glance-default-external-api-0" Jan 26 13:00:41 crc kubenswrapper[4881]: I0126 13:00:41.971221 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a37d86c8-ba9c-4890-99c9-946e7eb64f6e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a37d86c8-ba9c-4890-99c9-946e7eb64f6e" (UID: "a37d86c8-ba9c-4890-99c9-946e7eb64f6e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:00:41 crc kubenswrapper[4881]: I0126 13:00:41.971338 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a37d86c8-ba9c-4890-99c9-946e7eb64f6e-config" (OuterVolumeSpecName: "config") pod "a37d86c8-ba9c-4890-99c9-946e7eb64f6e" (UID: "a37d86c8-ba9c-4890-99c9-946e7eb64f6e"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:00:41 crc kubenswrapper[4881]: I0126 13:00:41.997308 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"26a462c7-dad6-4b3a-8cb2-30db4f6e6c7c\") " pod="openstack/glance-default-external-api-0" Jan 26 13:00:42 crc kubenswrapper[4881]: I0126 13:00:42.001610 4881 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a37d86c8-ba9c-4890-99c9-946e7eb64f6e-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 13:00:42 crc kubenswrapper[4881]: I0126 13:00:42.001644 4881 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a37d86c8-ba9c-4890-99c9-946e7eb64f6e-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 26 13:00:42 crc kubenswrapper[4881]: I0126 13:00:42.001657 4881 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a37d86c8-ba9c-4890-99c9-946e7eb64f6e-config\") on node \"crc\" DevicePath \"\"" Jan 26 13:00:42 crc kubenswrapper[4881]: I0126 13:00:42.001778 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj4xw\" (UniqueName: \"kubernetes.io/projected/a37d86c8-ba9c-4890-99c9-946e7eb64f6e-kube-api-access-pj4xw\") on node \"crc\" DevicePath \"\"" Jan 26 13:00:42 crc kubenswrapper[4881]: I0126 13:00:42.019123 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a37d86c8-ba9c-4890-99c9-946e7eb64f6e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a37d86c8-ba9c-4890-99c9-946e7eb64f6e" (UID: "a37d86c8-ba9c-4890-99c9-946e7eb64f6e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:00:42 crc kubenswrapper[4881]: I0126 13:00:42.029135 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a37d86c8-ba9c-4890-99c9-946e7eb64f6e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a37d86c8-ba9c-4890-99c9-946e7eb64f6e" (UID: "a37d86c8-ba9c-4890-99c9-946e7eb64f6e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:00:42 crc kubenswrapper[4881]: I0126 13:00:42.105458 4881 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a37d86c8-ba9c-4890-99c9-946e7eb64f6e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 13:00:42 crc kubenswrapper[4881]: I0126 13:00:42.105835 4881 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a37d86c8-ba9c-4890-99c9-946e7eb64f6e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 13:00:42 crc kubenswrapper[4881]: I0126 13:00:42.111867 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 26 13:00:42 crc kubenswrapper[4881]: I0126 13:00:42.144097 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 13:00:42 crc kubenswrapper[4881]: E0126 13:00:42.145458 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbdf91bd-1981-4d81-bc94-26b6a156aa9e" containerName="cinder-scheduler" Jan 26 13:00:42 crc kubenswrapper[4881]: I0126 13:00:42.145488 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbdf91bd-1981-4d81-bc94-26b6a156aa9e" containerName="cinder-scheduler" Jan 26 13:00:42 crc kubenswrapper[4881]: E0126 13:00:42.145505 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a37d86c8-ba9c-4890-99c9-946e7eb64f6e" containerName="init" Jan 26 13:00:42 crc kubenswrapper[4881]: I0126 13:00:42.145522 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="a37d86c8-ba9c-4890-99c9-946e7eb64f6e" containerName="init" Jan 26 13:00:42 crc kubenswrapper[4881]: E0126 13:00:42.145581 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbdf91bd-1981-4d81-bc94-26b6a156aa9e" containerName="probe" Jan 26 13:00:42 crc kubenswrapper[4881]: I0126 13:00:42.145587 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbdf91bd-1981-4d81-bc94-26b6a156aa9e" containerName="probe" Jan 26 13:00:42 crc kubenswrapper[4881]: I0126 13:00:42.145806 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="a37d86c8-ba9c-4890-99c9-946e7eb64f6e" containerName="init" Jan 26 13:00:42 crc kubenswrapper[4881]: I0126 13:00:42.145815 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="dbdf91bd-1981-4d81-bc94-26b6a156aa9e" containerName="probe" Jan 26 13:00:42 crc kubenswrapper[4881]: I0126 13:00:42.145843 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="dbdf91bd-1981-4d81-bc94-26b6a156aa9e" containerName="cinder-scheduler" Jan 26 13:00:42 crc kubenswrapper[4881]: I0126 13:00:42.147265 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 26 13:00:42 crc kubenswrapper[4881]: I0126 13:00:42.154979 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 26 13:00:42 crc kubenswrapper[4881]: I0126 13:00:42.155279 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d675956bc-pd9zd"] Jan 26 13:00:42 crc kubenswrapper[4881]: I0126 13:00:42.167748 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 13:00:42 crc kubenswrapper[4881]: I0126 13:00:42.212310 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dbdf91bd-1981-4d81-bc94-26b6a156aa9e-config-data-custom\") pod \"dbdf91bd-1981-4d81-bc94-26b6a156aa9e\" (UID: \"dbdf91bd-1981-4d81-bc94-26b6a156aa9e\") " Jan 26 13:00:42 crc kubenswrapper[4881]: I0126 13:00:42.212702 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dbdf91bd-1981-4d81-bc94-26b6a156aa9e-scripts\") pod \"dbdf91bd-1981-4d81-bc94-26b6a156aa9e\" (UID: \"dbdf91bd-1981-4d81-bc94-26b6a156aa9e\") " Jan 26 13:00:42 crc kubenswrapper[4881]: I0126 13:00:42.212943 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-krbjp\" (UniqueName: \"kubernetes.io/projected/dbdf91bd-1981-4d81-bc94-26b6a156aa9e-kube-api-access-krbjp\") pod \"dbdf91bd-1981-4d81-bc94-26b6a156aa9e\" (UID: \"dbdf91bd-1981-4d81-bc94-26b6a156aa9e\") " Jan 26 13:00:42 crc kubenswrapper[4881]: I0126 13:00:42.213038 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbdf91bd-1981-4d81-bc94-26b6a156aa9e-config-data\") pod \"dbdf91bd-1981-4d81-bc94-26b6a156aa9e\" (UID: \"dbdf91bd-1981-4d81-bc94-26b6a156aa9e\") " Jan 26 13:00:42 crc kubenswrapper[4881]: I0126 13:00:42.213207 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbdf91bd-1981-4d81-bc94-26b6a156aa9e-combined-ca-bundle\") pod \"dbdf91bd-1981-4d81-bc94-26b6a156aa9e\" (UID: \"dbdf91bd-1981-4d81-bc94-26b6a156aa9e\") " Jan 26 13:00:42 crc kubenswrapper[4881]: I0126 13:00:42.213352 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/dbdf91bd-1981-4d81-bc94-26b6a156aa9e-etc-machine-id\") pod \"dbdf91bd-1981-4d81-bc94-26b6a156aa9e\" (UID: \"dbdf91bd-1981-4d81-bc94-26b6a156aa9e\") " Jan 26 13:00:42 crc kubenswrapper[4881]: I0126 13:00:42.213877 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dbdf91bd-1981-4d81-bc94-26b6a156aa9e-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "dbdf91bd-1981-4d81-bc94-26b6a156aa9e" (UID: "dbdf91bd-1981-4d81-bc94-26b6a156aa9e"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 13:00:42 crc kubenswrapper[4881]: I0126 13:00:42.223107 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbdf91bd-1981-4d81-bc94-26b6a156aa9e-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "dbdf91bd-1981-4d81-bc94-26b6a156aa9e" (UID: "dbdf91bd-1981-4d81-bc94-26b6a156aa9e"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:00:42 crc kubenswrapper[4881]: I0126 13:00:42.224757 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbdf91bd-1981-4d81-bc94-26b6a156aa9e-scripts" (OuterVolumeSpecName: "scripts") pod "dbdf91bd-1981-4d81-bc94-26b6a156aa9e" (UID: "dbdf91bd-1981-4d81-bc94-26b6a156aa9e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:00:42 crc kubenswrapper[4881]: I0126 13:00:42.225470 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbdf91bd-1981-4d81-bc94-26b6a156aa9e-kube-api-access-krbjp" (OuterVolumeSpecName: "kube-api-access-krbjp") pod "dbdf91bd-1981-4d81-bc94-26b6a156aa9e" (UID: "dbdf91bd-1981-4d81-bc94-26b6a156aa9e"). InnerVolumeSpecName "kube-api-access-krbjp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:00:42 crc kubenswrapper[4881]: I0126 13:00:42.277944 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5795fd4b4d-xdxj4" event={"ID":"4065cb1b-b1ab-4fef-b77f-64ec87d80d99","Type":"ContainerStarted","Data":"9506b68cbaa936f8a0db5d31c37f0f0fa2350fb1d37fdceafba34836e736b40f"} Jan 26 13:00:42 crc kubenswrapper[4881]: I0126 13:00:42.278019 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-5795fd4b4d-xdxj4" Jan 26 13:00:42 crc kubenswrapper[4881]: I0126 13:00:42.293037 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-656959885f-b4rm6" event={"ID":"a37d86c8-ba9c-4890-99c9-946e7eb64f6e","Type":"ContainerDied","Data":"9dcf0518b6093427861eeac525b9bea04d217b7ce3795359283f4fe67bfade14"} Jan 26 13:00:42 crc kubenswrapper[4881]: I0126 13:00:42.293447 4881 scope.go:117] "RemoveContainer" containerID="fcbe83a0f2795889f5e33752cbdd1d2d1b5cc282e23f596ecf76aeb7c5fde86a" Jan 26 13:00:42 crc kubenswrapper[4881]: I0126 13:00:42.293407 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-656959885f-b4rm6" Jan 26 13:00:42 crc kubenswrapper[4881]: I0126 13:00:42.303899 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d675956bc-pd9zd" event={"ID":"5790d0bd-1660-4c11-af86-e99d9a2aabf8","Type":"ContainerStarted","Data":"a665acb91d5333c64b9645919b1cb800cd8f838a59ea8cd7c1250fb3b9b8a951"} Jan 26 13:00:42 crc kubenswrapper[4881]: I0126 13:00:42.307794 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-5795fd4b4d-xdxj4" podStartSLOduration=3.307773582 podStartE2EDuration="3.307773582s" podCreationTimestamp="2026-01-26 13:00:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 13:00:42.297063774 +0000 UTC m=+1514.776373820" watchObservedRunningTime="2026-01-26 13:00:42.307773582 +0000 UTC m=+1514.787083608" Jan 26 13:00:42 crc kubenswrapper[4881]: I0126 13:00:42.308601 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 26 13:00:42 crc kubenswrapper[4881]: I0126 13:00:42.316534 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"e89eb411-736e-4df9-894f-f48d783a4b01\") " pod="openstack/glance-default-internal-api-0"
Jan 26 13:00:42 crc kubenswrapper[4881]: I0126 13:00:42.316569 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e89eb411-736e-4df9-894f-f48d783a4b01-logs\") pod \"glance-default-internal-api-0\" (UID: \"e89eb411-736e-4df9-894f-f48d783a4b01\") " pod="openstack/glance-default-internal-api-0"
Jan 26 13:00:42 crc kubenswrapper[4881]: I0126 13:00:42.316589 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e89eb411-736e-4df9-894f-f48d783a4b01-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"e89eb411-736e-4df9-894f-f48d783a4b01\") " pod="openstack/glance-default-internal-api-0"
Jan 26 13:00:42 crc kubenswrapper[4881]: I0126 13:00:42.316616 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e89eb411-736e-4df9-894f-f48d783a4b01-config-data\") pod \"glance-default-internal-api-0\" (UID: \"e89eb411-736e-4df9-894f-f48d783a4b01\") " pod="openstack/glance-default-internal-api-0"
Jan 26 13:00:42 crc kubenswrapper[4881]: I0126 13:00:42.316646 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e89eb411-736e-4df9-894f-f48d783a4b01-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"e89eb411-736e-4df9-894f-f48d783a4b01\") " pod="openstack/glance-default-internal-api-0"
Jan 26 13:00:42 crc kubenswrapper[4881]: I0126 13:00:42.316663 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e89eb411-736e-4df9-894f-f48d783a4b01-scripts\") pod \"glance-default-internal-api-0\" (UID: \"e89eb411-736e-4df9-894f-f48d783a4b01\") " pod="openstack/glance-default-internal-api-0"
Jan 26 13:00:42 crc kubenswrapper[4881]: I0126 13:00:42.316722 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwx9q\" (UniqueName: \"kubernetes.io/projected/e89eb411-736e-4df9-894f-f48d783a4b01-kube-api-access-bwx9q\") pod \"glance-default-internal-api-0\" (UID: \"e89eb411-736e-4df9-894f-f48d783a4b01\") " pod="openstack/glance-default-internal-api-0"
Jan 26 13:00:42 crc kubenswrapper[4881]: I0126 13:00:42.316832 4881 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/dbdf91bd-1981-4d81-bc94-26b6a156aa9e-etc-machine-id\") on node \"crc\" DevicePath \"\""
Jan 26 13:00:42 crc kubenswrapper[4881]: I0126 13:00:42.316841 4881 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dbdf91bd-1981-4d81-bc94-26b6a156aa9e-config-data-custom\") on node \"crc\" DevicePath \"\""
Jan 26 13:00:42 crc kubenswrapper[4881]: I0126 13:00:42.316849 4881 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dbdf91bd-1981-4d81-bc94-26b6a156aa9e-scripts\") on node \"crc\" DevicePath \"\""
Jan 26 13:00:42 crc kubenswrapper[4881]: I0126 13:00:42.316860 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-krbjp\" (UniqueName: \"kubernetes.io/projected/dbdf91bd-1981-4d81-bc94-26b6a156aa9e-kube-api-access-krbjp\") on node \"crc\" DevicePath \"\""
Jan 26 13:00:42 crc kubenswrapper[4881]: I0126 13:00:42.334951 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Jan 26 13:00:42 crc kubenswrapper[4881]: I0126 13:00:42.335009 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"dbdf91bd-1981-4d81-bc94-26b6a156aa9e","Type":"ContainerDied","Data":"cb4c420c32379feb74b6e2f4313ee43db6f3f43a1e2f8064941e987fe67e868f"}
Jan 26 13:00:42 crc kubenswrapper[4881]: I0126 13:00:42.340603 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbdf91bd-1981-4d81-bc94-26b6a156aa9e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dbdf91bd-1981-4d81-bc94-26b6a156aa9e" (UID: "dbdf91bd-1981-4d81-bc94-26b6a156aa9e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 13:00:42 crc kubenswrapper[4881]: I0126 13:00:42.364828 4881 scope.go:117] "RemoveContainer" containerID="1dd952ee3f6093373f8b30c66a97606235cb558489c5869996dcf8f972a918c3"
Jan 26 13:00:42 crc kubenswrapper[4881]: I0126 13:00:42.380786 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-656959885f-b4rm6"]
Jan 26 13:00:42 crc kubenswrapper[4881]: I0126 13:00:42.388152 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-656959885f-b4rm6"]
Jan 26 13:00:42 crc kubenswrapper[4881]: I0126 13:00:42.399231 4881 scope.go:117] "RemoveContainer" containerID="3e812eee12ffe504c5b25c875dd765f92f74a124f44167e3baf0f9cf97041844"
Jan 26 13:00:42 crc kubenswrapper[4881]: I0126 13:00:42.418542 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"e89eb411-736e-4df9-894f-f48d783a4b01\") " pod="openstack/glance-default-internal-api-0"
Jan 26 13:00:42 crc kubenswrapper[4881]: I0126 13:00:42.418579 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e89eb411-736e-4df9-894f-f48d783a4b01-logs\") pod \"glance-default-internal-api-0\" (UID: \"e89eb411-736e-4df9-894f-f48d783a4b01\") " pod="openstack/glance-default-internal-api-0"
Jan 26 13:00:42 crc kubenswrapper[4881]: I0126 13:00:42.418600 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e89eb411-736e-4df9-894f-f48d783a4b01-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"e89eb411-736e-4df9-894f-f48d783a4b01\") " pod="openstack/glance-default-internal-api-0"
Jan 26 13:00:42 crc kubenswrapper[4881]: I0126 13:00:42.418649 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e89eb411-736e-4df9-894f-f48d783a4b01-config-data\") pod \"glance-default-internal-api-0\" (UID: \"e89eb411-736e-4df9-894f-f48d783a4b01\") " pod="openstack/glance-default-internal-api-0"
Jan 26 13:00:42 crc kubenswrapper[4881]: I0126 13:00:42.418693 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e89eb411-736e-4df9-894f-f48d783a4b01-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"e89eb411-736e-4df9-894f-f48d783a4b01\") " pod="openstack/glance-default-internal-api-0"
Jan 26 13:00:42 crc kubenswrapper[4881]: I0126 13:00:42.418709 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e89eb411-736e-4df9-894f-f48d783a4b01-scripts\") pod \"glance-default-internal-api-0\" (UID: \"e89eb411-736e-4df9-894f-f48d783a4b01\") " pod="openstack/glance-default-internal-api-0"
Jan 26 13:00:42 crc kubenswrapper[4881]: I0126 13:00:42.418927 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwx9q\" (UniqueName: \"kubernetes.io/projected/e89eb411-736e-4df9-894f-f48d783a4b01-kube-api-access-bwx9q\") pod \"glance-default-internal-api-0\" (UID: \"e89eb411-736e-4df9-894f-f48d783a4b01\") " pod="openstack/glance-default-internal-api-0"
Jan 26 13:00:42 crc kubenswrapper[4881]: I0126 13:00:42.419046 4881 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbdf91bd-1981-4d81-bc94-26b6a156aa9e-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 13:00:42 crc kubenswrapper[4881]: I0126 13:00:42.419373 4881 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"e89eb411-736e-4df9-894f-f48d783a4b01\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/glance-default-internal-api-0"
Jan 26 13:00:42 crc kubenswrapper[4881]: I0126 13:00:42.420084 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e89eb411-736e-4df9-894f-f48d783a4b01-logs\") pod \"glance-default-internal-api-0\" (UID: \"e89eb411-736e-4df9-894f-f48d783a4b01\") " pod="openstack/glance-default-internal-api-0"
Jan 26 13:00:42 crc kubenswrapper[4881]: I0126 13:00:42.420321 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e89eb411-736e-4df9-894f-f48d783a4b01-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"e89eb411-736e-4df9-894f-f48d783a4b01\") " pod="openstack/glance-default-internal-api-0"
Jan 26 13:00:42 crc kubenswrapper[4881]: I0126 13:00:42.424189 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e89eb411-736e-4df9-894f-f48d783a4b01-scripts\") pod \"glance-default-internal-api-0\" (UID: \"e89eb411-736e-4df9-894f-f48d783a4b01\") " pod="openstack/glance-default-internal-api-0"
Jan 26 13:00:42 crc kubenswrapper[4881]: I0126 13:00:42.424349 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbdf91bd-1981-4d81-bc94-26b6a156aa9e-config-data" (OuterVolumeSpecName: "config-data") pod "dbdf91bd-1981-4d81-bc94-26b6a156aa9e" (UID: "dbdf91bd-1981-4d81-bc94-26b6a156aa9e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 13:00:42 crc kubenswrapper[4881]: I0126 13:00:42.425785 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e89eb411-736e-4df9-894f-f48d783a4b01-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"e89eb411-736e-4df9-894f-f48d783a4b01\") " pod="openstack/glance-default-internal-api-0"
Jan 26 13:00:42 crc kubenswrapper[4881]: I0126 13:00:42.427974 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e89eb411-736e-4df9-894f-f48d783a4b01-config-data\") pod \"glance-default-internal-api-0\" (UID: \"e89eb411-736e-4df9-894f-f48d783a4b01\") " pod="openstack/glance-default-internal-api-0"
Jan 26 13:00:42 crc kubenswrapper[4881]: I0126 13:00:42.437799 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwx9q\" (UniqueName: \"kubernetes.io/projected/e89eb411-736e-4df9-894f-f48d783a4b01-kube-api-access-bwx9q\") pod \"glance-default-internal-api-0\" (UID: \"e89eb411-736e-4df9-894f-f48d783a4b01\") " pod="openstack/glance-default-internal-api-0"
Jan 26 13:00:42 crc kubenswrapper[4881]: I0126 13:00:42.456757 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"e89eb411-736e-4df9-894f-f48d783a4b01\") " pod="openstack/glance-default-internal-api-0"
Jan 26 13:00:42 crc kubenswrapper[4881]: I0126 13:00:42.478928 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 26 13:00:42 crc kubenswrapper[4881]: I0126 13:00:42.520890 4881 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbdf91bd-1981-4d81-bc94-26b6a156aa9e-config-data\") on node \"crc\" DevicePath \"\""
Jan 26 13:00:42 crc kubenswrapper[4881]: I0126 13:00:42.717575 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 26 13:00:42 crc kubenswrapper[4881]: I0126 13:00:42.745731 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 26 13:00:42 crc kubenswrapper[4881]: I0126 13:00:42.760888 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-c77fn"
Jan 26 13:00:42 crc kubenswrapper[4881]: I0126 13:00:42.762088 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-c77fn"
Jan 26 13:00:42 crc kubenswrapper[4881]: I0126 13:00:42.765562 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 26 13:00:42 crc kubenswrapper[4881]: I0126 13:00:42.769853 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Jan 26 13:00:42 crc kubenswrapper[4881]: I0126 13:00:42.773822 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data"
Jan 26 13:00:42 crc kubenswrapper[4881]: I0126 13:00:42.779360 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 26 13:00:42 crc kubenswrapper[4881]: I0126 13:00:42.832297 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f3b1d8b-9062-4c02-90a4-f4a7e6c4c17c-scripts\") pod \"cinder-scheduler-0\" (UID: \"5f3b1d8b-9062-4c02-90a4-f4a7e6c4c17c\") " pod="openstack/cinder-scheduler-0"
Jan 26 13:00:42 crc kubenswrapper[4881]: I0126 13:00:42.832351 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5f3b1d8b-9062-4c02-90a4-f4a7e6c4c17c-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"5f3b1d8b-9062-4c02-90a4-f4a7e6c4c17c\") " pod="openstack/cinder-scheduler-0"
Jan 26 13:00:42 crc kubenswrapper[4881]: I0126 13:00:42.832391 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5f3b1d8b-9062-4c02-90a4-f4a7e6c4c17c-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"5f3b1d8b-9062-4c02-90a4-f4a7e6c4c17c\") " pod="openstack/cinder-scheduler-0"
Jan 26 13:00:42 crc kubenswrapper[4881]: I0126 13:00:42.832433 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f3b1d8b-9062-4c02-90a4-f4a7e6c4c17c-config-data\") pod \"cinder-scheduler-0\" (UID: \"5f3b1d8b-9062-4c02-90a4-f4a7e6c4c17c\") " pod="openstack/cinder-scheduler-0"
Jan 26 13:00:42 crc kubenswrapper[4881]: I0126 13:00:42.832482 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjvg9\" (UniqueName: \"kubernetes.io/projected/5f3b1d8b-9062-4c02-90a4-f4a7e6c4c17c-kube-api-access-fjvg9\") pod \"cinder-scheduler-0\" (UID: \"5f3b1d8b-9062-4c02-90a4-f4a7e6c4c17c\") " pod="openstack/cinder-scheduler-0"
Jan 26 13:00:42 crc kubenswrapper[4881]: I0126 13:00:42.832674 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f3b1d8b-9062-4c02-90a4-f4a7e6c4c17c-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"5f3b1d8b-9062-4c02-90a4-f4a7e6c4c17c\") " pod="openstack/cinder-scheduler-0"
Jan 26 13:00:42 crc kubenswrapper[4881]: I0126 13:00:42.842627 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-c77fn"
Jan 26 13:00:42 crc kubenswrapper[4881]: I0126 13:00:42.934211 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f3b1d8b-9062-4c02-90a4-f4a7e6c4c17c-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"5f3b1d8b-9062-4c02-90a4-f4a7e6c4c17c\") " pod="openstack/cinder-scheduler-0"
Jan 26 13:00:42 crc kubenswrapper[4881]: I0126 13:00:42.934286 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f3b1d8b-9062-4c02-90a4-f4a7e6c4c17c-scripts\") pod \"cinder-scheduler-0\" (UID: \"5f3b1d8b-9062-4c02-90a4-f4a7e6c4c17c\") " pod="openstack/cinder-scheduler-0"
Jan 26 13:00:42 crc kubenswrapper[4881]: I0126 13:00:42.934318 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5f3b1d8b-9062-4c02-90a4-f4a7e6c4c17c-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"5f3b1d8b-9062-4c02-90a4-f4a7e6c4c17c\") " pod="openstack/cinder-scheduler-0"
Jan 26 13:00:42 crc kubenswrapper[4881]: I0126 13:00:42.934349 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5f3b1d8b-9062-4c02-90a4-f4a7e6c4c17c-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"5f3b1d8b-9062-4c02-90a4-f4a7e6c4c17c\") " pod="openstack/cinder-scheduler-0"
Jan 26 13:00:42 crc kubenswrapper[4881]: I0126 13:00:42.934384 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f3b1d8b-9062-4c02-90a4-f4a7e6c4c17c-config-data\") pod \"cinder-scheduler-0\" (UID: \"5f3b1d8b-9062-4c02-90a4-f4a7e6c4c17c\") " pod="openstack/cinder-scheduler-0"
Jan 26 13:00:42 crc kubenswrapper[4881]: I0126 13:00:42.934417 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fjvg9\" (UniqueName: \"kubernetes.io/projected/5f3b1d8b-9062-4c02-90a4-f4a7e6c4c17c-kube-api-access-fjvg9\") pod \"cinder-scheduler-0\" (UID: \"5f3b1d8b-9062-4c02-90a4-f4a7e6c4c17c\") " pod="openstack/cinder-scheduler-0"
Jan 26 13:00:42 crc kubenswrapper[4881]: I0126 13:00:42.934936 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5f3b1d8b-9062-4c02-90a4-f4a7e6c4c17c-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"5f3b1d8b-9062-4c02-90a4-f4a7e6c4c17c\") " pod="openstack/cinder-scheduler-0"
Jan 26 13:00:42 crc kubenswrapper[4881]: I0126 13:00:42.935989 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 26 13:00:42 crc kubenswrapper[4881]: I0126 13:00:42.939788 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f3b1d8b-9062-4c02-90a4-f4a7e6c4c17c-scripts\") pod \"cinder-scheduler-0\" (UID: \"5f3b1d8b-9062-4c02-90a4-f4a7e6c4c17c\") " pod="openstack/cinder-scheduler-0"
Jan 26 13:00:42 crc kubenswrapper[4881]: I0126 13:00:42.940167 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5f3b1d8b-9062-4c02-90a4-f4a7e6c4c17c-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"5f3b1d8b-9062-4c02-90a4-f4a7e6c4c17c\") " pod="openstack/cinder-scheduler-0"
Jan 26 13:00:42 crc kubenswrapper[4881]: I0126 13:00:42.940304 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f3b1d8b-9062-4c02-90a4-f4a7e6c4c17c-config-data\") pod \"cinder-scheduler-0\" (UID: \"5f3b1d8b-9062-4c02-90a4-f4a7e6c4c17c\") " pod="openstack/cinder-scheduler-0"
Jan 26 13:00:42 crc kubenswrapper[4881]: I0126 13:00:42.944593 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f3b1d8b-9062-4c02-90a4-f4a7e6c4c17c-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"5f3b1d8b-9062-4c02-90a4-f4a7e6c4c17c\") " pod="openstack/cinder-scheduler-0"
Jan 26 13:00:42 crc kubenswrapper[4881]: I0126 13:00:42.953474 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjvg9\" (UniqueName: \"kubernetes.io/projected/5f3b1d8b-9062-4c02-90a4-f4a7e6c4c17c-kube-api-access-fjvg9\") pod \"cinder-scheduler-0\" (UID: \"5f3b1d8b-9062-4c02-90a4-f4a7e6c4c17c\") " pod="openstack/cinder-scheduler-0"
Jan 26 13:00:43 crc kubenswrapper[4881]: I0126 13:00:43.179949 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Jan 26 13:00:43 crc kubenswrapper[4881]: I0126 13:00:43.307431 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 26 13:00:43 crc kubenswrapper[4881]: I0126 13:00:43.393768 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e89eb411-736e-4df9-894f-f48d783a4b01","Type":"ContainerStarted","Data":"db558dce90a97037aea750564131bead1878f4c18b1630e97eb83d44c0ddafed"}
Jan 26 13:00:43 crc kubenswrapper[4881]: I0126 13:00:43.405215 4881 generic.go:334] "Generic (PLEG): container finished" podID="5790d0bd-1660-4c11-af86-e99d9a2aabf8" containerID="bcf1597dc02b74b38075905682f066a714f7ee4660bf6f4ceef8df975de06518" exitCode=0
Jan 26 13:00:43 crc kubenswrapper[4881]: I0126 13:00:43.405399 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d675956bc-pd9zd" event={"ID":"5790d0bd-1660-4c11-af86-e99d9a2aabf8","Type":"ContainerDied","Data":"bcf1597dc02b74b38075905682f066a714f7ee4660bf6f4ceef8df975de06518"}
Jan 26 13:00:43 crc kubenswrapper[4881]: I0126 13:00:43.427361 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"26a462c7-dad6-4b3a-8cb2-30db4f6e6c7c","Type":"ContainerStarted","Data":"fe68baa8b1f79580a2dec294d03d7b10b4a311d864d787c6985c5ea5fd059355"}
Jan 26 13:00:43 crc kubenswrapper[4881]: I0126 13:00:43.521933 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-c77fn"
Jan 26 13:00:43 crc kubenswrapper[4881]: I0126 13:00:43.577123 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-c77fn"]
Jan 26 13:00:43 crc kubenswrapper[4881]: W0126 13:00:43.673976 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5f3b1d8b_9062_4c02_90a4_f4a7e6c4c17c.slice/crio-ac6acd35fad92f25f958d1b14d278e566527afddab0e9c678834a13200401c30 WatchSource:0}: Error finding container ac6acd35fad92f25f958d1b14d278e566527afddab0e9c678834a13200401c30: Status 404 returned error can't find the container with id ac6acd35fad92f25f958d1b14d278e566527afddab0e9c678834a13200401c30
Jan 26 13:00:43 crc kubenswrapper[4881]: I0126 13:00:43.677795 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 26 13:00:43 crc kubenswrapper[4881]: I0126 13:00:43.784375 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-76cf66855-bgjld"
Jan 26 13:00:44 crc kubenswrapper[4881]: I0126 13:00:44.114996 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a37d86c8-ba9c-4890-99c9-946e7eb64f6e" path="/var/lib/kubelet/pods/a37d86c8-ba9c-4890-99c9-946e7eb64f6e/volumes"
Jan 26 13:00:44 crc kubenswrapper[4881]: I0126 13:00:44.115495 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dbdf91bd-1981-4d81-bc94-26b6a156aa9e" path="/var/lib/kubelet/pods/dbdf91bd-1981-4d81-bc94-26b6a156aa9e/volumes"
Jan 26 13:00:44 crc kubenswrapper[4881]: I0126 13:00:44.453561 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"26a462c7-dad6-4b3a-8cb2-30db4f6e6c7c","Type":"ContainerStarted","Data":"dd38b7ba833843e917b804d2926515c6066fc6ea36f56f5a7c7b6527cbdeb821"}
Jan 26 13:00:44 crc kubenswrapper[4881]: I0126 13:00:44.455061 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"5f3b1d8b-9062-4c02-90a4-f4a7e6c4c17c","Type":"ContainerStarted","Data":"ac6acd35fad92f25f958d1b14d278e566527afddab0e9c678834a13200401c30"}
Jan 26 13:00:44 crc kubenswrapper[4881]: I0126 13:00:44.456257 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e89eb411-736e-4df9-894f-f48d783a4b01","Type":"ContainerStarted","Data":"dac8ad7b28749d63cbc442434795e507a38f81d9942e1a254fef01e4d29258e1"}
Jan 26 13:00:44 crc kubenswrapper[4881]: I0126 13:00:44.466673 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d675956bc-pd9zd" event={"ID":"5790d0bd-1660-4c11-af86-e99d9a2aabf8","Type":"ContainerStarted","Data":"b912e0916cd54d1360e533a0393cf94b47122f9712df5756b72a472f4e47910a"}
Jan 26 13:00:44 crc kubenswrapper[4881]: I0126 13:00:44.466735 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5d675956bc-pd9zd"
Jan 26 13:00:44 crc kubenswrapper[4881]: I0126 13:00:44.490816 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5d675956bc-pd9zd" podStartSLOduration=4.490791566 podStartE2EDuration="4.490791566s" podCreationTimestamp="2026-01-26 13:00:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 13:00:44.484034133 +0000 UTC m=+1516.963344159" watchObservedRunningTime="2026-01-26 13:00:44.490791566 +0000 UTC m=+1516.970101592"
Jan 26 13:00:44 crc kubenswrapper[4881]: I0126 13:00:44.817041 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5555bb9565-2bdtt"]
Jan 26 13:00:44 crc kubenswrapper[4881]: I0126 13:00:44.824392 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5555bb9565-2bdtt"
Jan 26 13:00:44 crc kubenswrapper[4881]: I0126 13:00:44.831843 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc"
Jan 26 13:00:44 crc kubenswrapper[4881]: I0126 13:00:44.832032 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc"
Jan 26 13:00:44 crc kubenswrapper[4881]: I0126 13:00:44.845044 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5555bb9565-2bdtt"]
Jan 26 13:00:44 crc kubenswrapper[4881]: I0126 13:00:44.889301 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/666ba5d0-7d46-4ae9-8265-9e6ac5bddbf9-internal-tls-certs\") pod \"neutron-5555bb9565-2bdtt\" (UID: \"666ba5d0-7d46-4ae9-8265-9e6ac5bddbf9\") " pod="openstack/neutron-5555bb9565-2bdtt"
Jan 26 13:00:44 crc kubenswrapper[4881]: I0126 13:00:44.889397 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/666ba5d0-7d46-4ae9-8265-9e6ac5bddbf9-combined-ca-bundle\") pod \"neutron-5555bb9565-2bdtt\" (UID: \"666ba5d0-7d46-4ae9-8265-9e6ac5bddbf9\") " pod="openstack/neutron-5555bb9565-2bdtt"
Jan 26 13:00:44 crc kubenswrapper[4881]: I0126 13:00:44.889439 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/666ba5d0-7d46-4ae9-8265-9e6ac5bddbf9-config\") pod \"neutron-5555bb9565-2bdtt\" (UID: \"666ba5d0-7d46-4ae9-8265-9e6ac5bddbf9\") " pod="openstack/neutron-5555bb9565-2bdtt"
Jan 26 13:00:44 crc kubenswrapper[4881]: I0126 13:00:44.889632 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/666ba5d0-7d46-4ae9-8265-9e6ac5bddbf9-public-tls-certs\") pod \"neutron-5555bb9565-2bdtt\" (UID: \"666ba5d0-7d46-4ae9-8265-9e6ac5bddbf9\") " pod="openstack/neutron-5555bb9565-2bdtt"
Jan 26 13:00:44 crc kubenswrapper[4881]: I0126 13:00:44.889681 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/666ba5d0-7d46-4ae9-8265-9e6ac5bddbf9-ovndb-tls-certs\") pod \"neutron-5555bb9565-2bdtt\" (UID: \"666ba5d0-7d46-4ae9-8265-9e6ac5bddbf9\") " pod="openstack/neutron-5555bb9565-2bdtt"
Jan 26 13:00:44 crc kubenswrapper[4881]: I0126 13:00:44.889720 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqv62\" (UniqueName: \"kubernetes.io/projected/666ba5d0-7d46-4ae9-8265-9e6ac5bddbf9-kube-api-access-wqv62\") pod \"neutron-5555bb9565-2bdtt\" (UID: \"666ba5d0-7d46-4ae9-8265-9e6ac5bddbf9\") " pod="openstack/neutron-5555bb9565-2bdtt"
Jan 26 13:00:44 crc kubenswrapper[4881]: I0126 13:00:44.889956 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/666ba5d0-7d46-4ae9-8265-9e6ac5bddbf9-httpd-config\") pod \"neutron-5555bb9565-2bdtt\" (UID: \"666ba5d0-7d46-4ae9-8265-9e6ac5bddbf9\") " pod="openstack/neutron-5555bb9565-2bdtt"
Jan 26 13:00:44 crc kubenswrapper[4881]: I0126 13:00:44.991444 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/666ba5d0-7d46-4ae9-8265-9e6ac5bddbf9-config\") pod \"neutron-5555bb9565-2bdtt\" (UID: \"666ba5d0-7d46-4ae9-8265-9e6ac5bddbf9\") " pod="openstack/neutron-5555bb9565-2bdtt"
Jan 26 13:00:44 crc kubenswrapper[4881]: I0126 13:00:44.991550 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/666ba5d0-7d46-4ae9-8265-9e6ac5bddbf9-public-tls-certs\") pod \"neutron-5555bb9565-2bdtt\" (UID: \"666ba5d0-7d46-4ae9-8265-9e6ac5bddbf9\") " pod="openstack/neutron-5555bb9565-2bdtt"
Jan 26 13:00:44 crc kubenswrapper[4881]: I0126 13:00:44.991567 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/666ba5d0-7d46-4ae9-8265-9e6ac5bddbf9-ovndb-tls-certs\") pod \"neutron-5555bb9565-2bdtt\" (UID: \"666ba5d0-7d46-4ae9-8265-9e6ac5bddbf9\") " pod="openstack/neutron-5555bb9565-2bdtt"
Jan 26 13:00:44 crc kubenswrapper[4881]: I0126 13:00:44.991587 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqv62\" (UniqueName: \"kubernetes.io/projected/666ba5d0-7d46-4ae9-8265-9e6ac5bddbf9-kube-api-access-wqv62\") pod \"neutron-5555bb9565-2bdtt\" (UID: \"666ba5d0-7d46-4ae9-8265-9e6ac5bddbf9\") " pod="openstack/neutron-5555bb9565-2bdtt"
Jan 26 13:00:44 crc kubenswrapper[4881]: I0126 13:00:44.991698 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/666ba5d0-7d46-4ae9-8265-9e6ac5bddbf9-httpd-config\") pod \"neutron-5555bb9565-2bdtt\" (UID: \"666ba5d0-7d46-4ae9-8265-9e6ac5bddbf9\") " pod="openstack/neutron-5555bb9565-2bdtt"
Jan 26 13:00:44 crc kubenswrapper[4881]: I0126 13:00:44.991742 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/666ba5d0-7d46-4ae9-8265-9e6ac5bddbf9-internal-tls-certs\") pod \"neutron-5555bb9565-2bdtt\" (UID: \"666ba5d0-7d46-4ae9-8265-9e6ac5bddbf9\") " pod="openstack/neutron-5555bb9565-2bdtt"
Jan 26 13:00:44 crc kubenswrapper[4881]: I0126 13:00:44.991780 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/666ba5d0-7d46-4ae9-8265-9e6ac5bddbf9-combined-ca-bundle\") pod \"neutron-5555bb9565-2bdtt\" (UID: \"666ba5d0-7d46-4ae9-8265-9e6ac5bddbf9\") " pod="openstack/neutron-5555bb9565-2bdtt"
Jan 26 13:00:45 crc kubenswrapper[4881]: I0126 13:00:44.998734 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/666ba5d0-7d46-4ae9-8265-9e6ac5bddbf9-public-tls-certs\") pod \"neutron-5555bb9565-2bdtt\" (UID: \"666ba5d0-7d46-4ae9-8265-9e6ac5bddbf9\") " pod="openstack/neutron-5555bb9565-2bdtt"
Jan 26 13:00:45 crc kubenswrapper[4881]: I0126 13:00:45.000204 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/666ba5d0-7d46-4ae9-8265-9e6ac5bddbf9-ovndb-tls-certs\") pod \"neutron-5555bb9565-2bdtt\" (UID: \"666ba5d0-7d46-4ae9-8265-9e6ac5bddbf9\") " pod="openstack/neutron-5555bb9565-2bdtt"
Jan 26 13:00:45 crc kubenswrapper[4881]: I0126 13:00:45.001230 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/666ba5d0-7d46-4ae9-8265-9e6ac5bddbf9-config\") pod \"neutron-5555bb9565-2bdtt\" (UID: \"666ba5d0-7d46-4ae9-8265-9e6ac5bddbf9\") " pod="openstack/neutron-5555bb9565-2bdtt"
Jan 26 13:00:45 crc kubenswrapper[4881]: I0126 13:00:45.003682 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/666ba5d0-7d46-4ae9-8265-9e6ac5bddbf9-combined-ca-bundle\") pod \"neutron-5555bb9565-2bdtt\" (UID: \"666ba5d0-7d46-4ae9-8265-9e6ac5bddbf9\") " pod="openstack/neutron-5555bb9565-2bdtt"
Jan 26 13:00:45 crc kubenswrapper[4881]: I0126 13:00:45.006074 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/666ba5d0-7d46-4ae9-8265-9e6ac5bddbf9-httpd-config\") pod \"neutron-5555bb9565-2bdtt\" (UID: \"666ba5d0-7d46-4ae9-8265-9e6ac5bddbf9\") " pod="openstack/neutron-5555bb9565-2bdtt"
Jan 26 13:00:45 crc kubenswrapper[4881]: I0126 13:00:45.008185 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/666ba5d0-7d46-4ae9-8265-9e6ac5bddbf9-internal-tls-certs\") pod \"neutron-5555bb9565-2bdtt\" (UID: \"666ba5d0-7d46-4ae9-8265-9e6ac5bddbf9\") " pod="openstack/neutron-5555bb9565-2bdtt"
Jan 26 13:00:45 crc kubenswrapper[4881]: I0126 13:00:45.017258 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqv62\" (UniqueName: \"kubernetes.io/projected/666ba5d0-7d46-4ae9-8265-9e6ac5bddbf9-kube-api-access-wqv62\") pod \"neutron-5555bb9565-2bdtt\" (UID: \"666ba5d0-7d46-4ae9-8265-9e6ac5bddbf9\") " pod="openstack/neutron-5555bb9565-2bdtt"
Jan 26 13:00:45 crc kubenswrapper[4881]: I0126 13:00:45.171277 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5555bb9565-2bdtt"
Jan 26 13:00:45 crc kubenswrapper[4881]: I0126 13:00:45.489213 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"26a462c7-dad6-4b3a-8cb2-30db4f6e6c7c","Type":"ContainerStarted","Data":"04c3a33ebc901c55ecfc7ad7bea6a54a659db2ff61a811fbd3680ec2aa14630b"}
Jan 26 13:00:45 crc kubenswrapper[4881]: I0126 13:00:45.505937 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"5f3b1d8b-9062-4c02-90a4-f4a7e6c4c17c","Type":"ContainerStarted","Data":"e34d435df46a72b2578e9216e8694d3af29a9a8c9726261052f2eca86a221de1"}
Jan 26 13:00:45 crc kubenswrapper[4881]: I0126 13:00:45.518816 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=5.518798528 podStartE2EDuration="5.518798528s" podCreationTimestamp="2026-01-26 13:00:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 13:00:45.514631508 +0000 UTC m=+1517.993941534" watchObservedRunningTime="2026-01-26 13:00:45.518798528 +0000 UTC m=+1517.998108554"
Jan 26 13:00:45 crc kubenswrapper[4881]: I0126 13:00:45.543856 4881 generic.go:334] "Generic (PLEG): container finished" podID="6b0cbe35-c0c9-4483-866a-eddf1fdced26" containerID="785ba06ccaceb90529fed9d4bfe616ece3f2212b557578bbd30e62ec93faabac" exitCode=137
Jan 26 13:00:45 crc kubenswrapper[4881]: I0126 13:00:45.543913 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5c67646cfd-kppgm" event={"ID":"6b0cbe35-c0c9-4483-866a-eddf1fdced26","Type":"ContainerDied","Data":"785ba06ccaceb90529fed9d4bfe616ece3f2212b557578bbd30e62ec93faabac"}
Jan 26 13:00:45 crc kubenswrapper[4881]: I0126 13:00:45.566782 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5c67646cfd-kppgm"
Jan 26 13:00:45 crc kubenswrapper[4881]: I0126 13:00:45.580774 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e89eb411-736e-4df9-894f-f48d783a4b01","Type":"ContainerStarted","Data":"1b2bbdb2756ba3441210aeb247ded3f3ae48097b5eaf9c2d51393e4020167e89"}
Jan 26 13:00:45 crc kubenswrapper[4881]: I0126 13:00:45.580921 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-c77fn" podUID="e1de9071-e022-48f1-99cf-7344c70cadad" containerName="registry-server" containerID="cri-o://9c17ca9449b1f783bab64e740a64e0e3f1c1b302187d724ef73e6114208b8482" gracePeriod=2
Jan 26 13:00:45 crc kubenswrapper[4881]: I0126 13:00:45.659536 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.659505588 podStartE2EDuration="4.659505588s" podCreationTimestamp="2026-01-26 13:00:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 13:00:45.642557699 +0000 UTC m=+1518.121867725" watchObservedRunningTime="2026-01-26 13:00:45.659505588 +0000 UTC m=+1518.138815614"
Jan 26 13:00:45 crc kubenswrapper[4881]: I0126 13:00:45.679444 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 26 13:00:45 crc kubenswrapper[4881]: I0126 13:00:45.721096 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6b0cbe35-c0c9-4483-866a-eddf1fdced26-logs\") pod \"6b0cbe35-c0c9-4483-866a-eddf1fdced26\" (UID: \"6b0cbe35-c0c9-4483-866a-eddf1fdced26\") "
Jan 26 13:00:45 crc kubenswrapper[4881]: I0126 13:00:45.721172 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6b0cbe35-c0c9-4483-866a-eddf1fdced26-scripts\") pod \"6b0cbe35-c0c9-4483-866a-eddf1fdced26\" (UID: \"6b0cbe35-c0c9-4483-866a-eddf1fdced26\") "
Jan 26 13:00:45 crc kubenswrapper[4881]: I0126 13:00:45.721190 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b0cbe35-c0c9-4483-866a-eddf1fdced26-combined-ca-bundle\") pod \"6b0cbe35-c0c9-4483-866a-eddf1fdced26\" (UID: \"6b0cbe35-c0c9-4483-866a-eddf1fdced26\") "
Jan 26 13:00:45 crc kubenswrapper[4881]: I0126 13:00:45.721267 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6b0cbe35-c0c9-4483-866a-eddf1fdced26-horizon-secret-key\") pod \"6b0cbe35-c0c9-4483-866a-eddf1fdced26\" (UID: \"6b0cbe35-c0c9-4483-866a-eddf1fdced26\") "
Jan 26 13:00:45 crc kubenswrapper[4881]: I0126 13:00:45.721290 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d87dr\" (UniqueName: \"kubernetes.io/projected/6b0cbe35-c0c9-4483-866a-eddf1fdced26-kube-api-access-d87dr\") pod \"6b0cbe35-c0c9-4483-866a-eddf1fdced26\" (UID: \"6b0cbe35-c0c9-4483-866a-eddf1fdced26\") "
Jan 26 13:00:45 crc kubenswrapper[4881]: I0126 13:00:45.721392 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b0cbe35-c0c9-4483-866a-eddf1fdced26-horizon-tls-certs\") pod \"6b0cbe35-c0c9-4483-866a-eddf1fdced26\" (UID: \"6b0cbe35-c0c9-4483-866a-eddf1fdced26\") "
Jan 26 13:00:45 crc kubenswrapper[4881]: I0126 13:00:45.721466 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6b0cbe35-c0c9-4483-866a-eddf1fdced26-config-data\") pod \"6b0cbe35-c0c9-4483-866a-eddf1fdced26\" (UID: \"6b0cbe35-c0c9-4483-866a-eddf1fdced26\") "
Jan 26 13:00:45 crc kubenswrapper[4881]: I0126 13:00:45.722838 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6b0cbe35-c0c9-4483-866a-eddf1fdced26-logs" (OuterVolumeSpecName: "logs") pod "6b0cbe35-c0c9-4483-866a-eddf1fdced26" (UID: "6b0cbe35-c0c9-4483-866a-eddf1fdced26"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 13:00:45 crc kubenswrapper[4881]: I0126 13:00:45.773831 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b0cbe35-c0c9-4483-866a-eddf1fdced26-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "6b0cbe35-c0c9-4483-866a-eddf1fdced26" (UID: "6b0cbe35-c0c9-4483-866a-eddf1fdced26"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 13:00:45 crc kubenswrapper[4881]: I0126 13:00:45.781569 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b0cbe35-c0c9-4483-866a-eddf1fdced26-kube-api-access-d87dr" (OuterVolumeSpecName: "kube-api-access-d87dr") pod "6b0cbe35-c0c9-4483-866a-eddf1fdced26" (UID: "6b0cbe35-c0c9-4483-866a-eddf1fdced26"). InnerVolumeSpecName "kube-api-access-d87dr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 13:00:45 crc kubenswrapper[4881]: I0126 13:00:45.825062 4881 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6b0cbe35-c0c9-4483-866a-eddf1fdced26-horizon-secret-key\") on node \"crc\" DevicePath \"\""
Jan 26 13:00:45 crc kubenswrapper[4881]: I0126 13:00:45.825111 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d87dr\" (UniqueName: \"kubernetes.io/projected/6b0cbe35-c0c9-4483-866a-eddf1fdced26-kube-api-access-d87dr\") on node \"crc\" DevicePath \"\""
Jan 26 13:00:45 crc kubenswrapper[4881]: I0126 13:00:45.825126 4881 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6b0cbe35-c0c9-4483-866a-eddf1fdced26-logs\") on node \"crc\" DevicePath \"\""
Jan 26 13:00:45 crc kubenswrapper[4881]: I0126 13:00:45.827837 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b0cbe35-c0c9-4483-866a-eddf1fdced26-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6b0cbe35-c0c9-4483-866a-eddf1fdced26" (UID: "6b0cbe35-c0c9-4483-866a-eddf1fdced26"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 13:00:45 crc kubenswrapper[4881]: I0126 13:00:45.829279 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6b0cbe35-c0c9-4483-866a-eddf1fdced26-scripts" (OuterVolumeSpecName: "scripts") pod "6b0cbe35-c0c9-4483-866a-eddf1fdced26" (UID: "6b0cbe35-c0c9-4483-866a-eddf1fdced26"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 13:00:45 crc kubenswrapper[4881]: I0126 13:00:45.914113 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6b0cbe35-c0c9-4483-866a-eddf1fdced26-config-data" (OuterVolumeSpecName: "config-data") pod "6b0cbe35-c0c9-4483-866a-eddf1fdced26" (UID: "6b0cbe35-c0c9-4483-866a-eddf1fdced26"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 13:00:45 crc kubenswrapper[4881]: I0126 13:00:45.928112 4881 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6b0cbe35-c0c9-4483-866a-eddf1fdced26-config-data\") on node \"crc\" DevicePath \"\""
Jan 26 13:00:45 crc kubenswrapper[4881]: I0126 13:00:45.928183 4881 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6b0cbe35-c0c9-4483-866a-eddf1fdced26-scripts\") on node \"crc\" DevicePath \"\""
Jan 26 13:00:45 crc kubenswrapper[4881]: I0126 13:00:45.928197 4881 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b0cbe35-c0c9-4483-866a-eddf1fdced26-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 13:00:45 crc kubenswrapper[4881]: I0126 13:00:45.938204 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b0cbe35-c0c9-4483-866a-eddf1fdced26-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "6b0cbe35-c0c9-4483-866a-eddf1fdced26" (UID: "6b0cbe35-c0c9-4483-866a-eddf1fdced26"). InnerVolumeSpecName "horizon-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 13:00:45 crc kubenswrapper[4881]: I0126 13:00:45.953870 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 26 13:00:46 crc kubenswrapper[4881]: I0126 13:00:46.036101 4881 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b0cbe35-c0c9-4483-866a-eddf1fdced26-horizon-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 26 13:00:46 crc kubenswrapper[4881]: I0126 13:00:46.044965 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5555bb9565-2bdtt"]
Jan 26 13:00:46 crc kubenswrapper[4881]: I0126 13:00:46.161049 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-c77fn"
Jan 26 13:00:46 crc kubenswrapper[4881]: I0126 13:00:46.238401 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1de9071-e022-48f1-99cf-7344c70cadad-utilities\") pod \"e1de9071-e022-48f1-99cf-7344c70cadad\" (UID: \"e1de9071-e022-48f1-99cf-7344c70cadad\") "
Jan 26 13:00:46 crc kubenswrapper[4881]: I0126 13:00:46.238647 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1de9071-e022-48f1-99cf-7344c70cadad-catalog-content\") pod \"e1de9071-e022-48f1-99cf-7344c70cadad\" (UID: \"e1de9071-e022-48f1-99cf-7344c70cadad\") "
Jan 26 13:00:46 crc kubenswrapper[4881]: I0126 13:00:46.238899 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-csc5b\" (UniqueName: \"kubernetes.io/projected/e1de9071-e022-48f1-99cf-7344c70cadad-kube-api-access-csc5b\") pod \"e1de9071-e022-48f1-99cf-7344c70cadad\" (UID: \"e1de9071-e022-48f1-99cf-7344c70cadad\") "
Jan 26 13:00:46 crc kubenswrapper[4881]: I0126 13:00:46.244243 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1de9071-e022-48f1-99cf-7344c70cadad-kube-api-access-csc5b" (OuterVolumeSpecName: "kube-api-access-csc5b") pod "e1de9071-e022-48f1-99cf-7344c70cadad" (UID: "e1de9071-e022-48f1-99cf-7344c70cadad"). InnerVolumeSpecName "kube-api-access-csc5b". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 13:00:46 crc kubenswrapper[4881]: I0126 13:00:46.247787 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e1de9071-e022-48f1-99cf-7344c70cadad-utilities" (OuterVolumeSpecName: "utilities") pod "e1de9071-e022-48f1-99cf-7344c70cadad" (UID: "e1de9071-e022-48f1-99cf-7344c70cadad"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 13:00:46 crc kubenswrapper[4881]: I0126 13:00:46.286687 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e1de9071-e022-48f1-99cf-7344c70cadad-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e1de9071-e022-48f1-99cf-7344c70cadad" (UID: "e1de9071-e022-48f1-99cf-7344c70cadad"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 13:00:46 crc kubenswrapper[4881]: I0126 13:00:46.340761 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-csc5b\" (UniqueName: \"kubernetes.io/projected/e1de9071-e022-48f1-99cf-7344c70cadad-kube-api-access-csc5b\") on node \"crc\" DevicePath \"\""
Jan 26 13:00:46 crc kubenswrapper[4881]: I0126 13:00:46.340788 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1de9071-e022-48f1-99cf-7344c70cadad-utilities\") on node \"crc\" DevicePath \"\""
Jan 26 13:00:46 crc kubenswrapper[4881]: I0126 13:00:46.340798 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1de9071-e022-48f1-99cf-7344c70cadad-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 26 13:00:46 crc kubenswrapper[4881]: I0126 13:00:46.590657 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5555bb9565-2bdtt" event={"ID":"666ba5d0-7d46-4ae9-8265-9e6ac5bddbf9","Type":"ContainerStarted","Data":"3e3d35efe844c6b8c8943a07931e359ccf75387df17ed28bf99250622e21cd86"}
Jan 26 13:00:46 crc kubenswrapper[4881]: I0126 13:00:46.590698 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5555bb9565-2bdtt" event={"ID":"666ba5d0-7d46-4ae9-8265-9e6ac5bddbf9","Type":"ContainerStarted","Data":"ba000ee3016af783fd7553d0857bc554c36a28bd0911a43adcdf5d540db9db55"}
Jan 26 13:00:46 crc kubenswrapper[4881]: I0126 13:00:46.590709 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5555bb9565-2bdtt" event={"ID":"666ba5d0-7d46-4ae9-8265-9e6ac5bddbf9","Type":"ContainerStarted","Data":"8d2ea498d7f9745c38ca0ca065cff5a90bf7602620a9d49696876120d8ed6c1e"}
Jan 26 13:00:46 crc kubenswrapper[4881]: I0126 13:00:46.591970 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-5555bb9565-2bdtt"
Jan 26 13:00:46 crc kubenswrapper[4881]: I0126 13:00:46.597154 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"5f3b1d8b-9062-4c02-90a4-f4a7e6c4c17c","Type":"ContainerStarted","Data":"9fd3971d7f26f78b2d5ba1f31e46a1eb1bf249c9c77e2bb9d7e12e88bc8e8c8d"}
Jan 26 13:00:46 crc kubenswrapper[4881]: I0126 13:00:46.603222 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0"
Jan 26 13:00:46 crc kubenswrapper[4881]: I0126 13:00:46.607583 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5c67646cfd-kppgm" event={"ID":"6b0cbe35-c0c9-4483-866a-eddf1fdced26","Type":"ContainerDied","Data":"963091c9cc5b00a2295a02f06b5dea81bb771527d4a2d572fc6637569b900d59"}
Jan 26 13:00:46 crc kubenswrapper[4881]: I0126 13:00:46.607628 4881 scope.go:117] "RemoveContainer" containerID="11becbe66e90b27a1d407833f793333eb538d4d8b813396d1a62681ea0806353"
Jan 26 13:00:46 crc kubenswrapper[4881]: I0126 13:00:46.607777 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5c67646cfd-kppgm"
Jan 26 13:00:46 crc kubenswrapper[4881]: I0126 13:00:46.620497 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-5555bb9565-2bdtt" podStartSLOduration=2.6204743710000002 podStartE2EDuration="2.620474371s" podCreationTimestamp="2026-01-26 13:00:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 13:00:46.617505459 +0000 UTC m=+1519.096815485" watchObservedRunningTime="2026-01-26 13:00:46.620474371 +0000 UTC m=+1519.099784397"
Jan 26 13:00:46 crc kubenswrapper[4881]: I0126 13:00:46.627754 4881 generic.go:334] "Generic (PLEG): container finished" podID="e1de9071-e022-48f1-99cf-7344c70cadad" containerID="9c17ca9449b1f783bab64e740a64e0e3f1c1b302187d724ef73e6114208b8482" exitCode=0
Jan 26 13:00:46 crc kubenswrapper[4881]: I0126 13:00:46.628202 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-c77fn"
Jan 26 13:00:46 crc kubenswrapper[4881]: I0126 13:00:46.628531 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c77fn" event={"ID":"e1de9071-e022-48f1-99cf-7344c70cadad","Type":"ContainerDied","Data":"9c17ca9449b1f783bab64e740a64e0e3f1c1b302187d724ef73e6114208b8482"}
Jan 26 13:00:46 crc kubenswrapper[4881]: I0126 13:00:46.628560 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c77fn" event={"ID":"e1de9071-e022-48f1-99cf-7344c70cadad","Type":"ContainerDied","Data":"5476cd11744e65c56029efbb41fa4fd8056f08337cd07db4b8fb661549a65f55"}
Jan 26 13:00:46 crc kubenswrapper[4881]: I0126 13:00:46.642176 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.642158964 podStartE2EDuration="4.642158964s" podCreationTimestamp="2026-01-26 13:00:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 13:00:46.63740113 +0000 UTC m=+1519.116711176" watchObservedRunningTime="2026-01-26 13:00:46.642158964 +0000 UTC m=+1519.121468990"
Jan 26 13:00:46 crc kubenswrapper[4881]: I0126 13:00:46.695567 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5c67646cfd-kppgm"]
Jan 26 13:00:46 crc kubenswrapper[4881]: I0126 13:00:46.719582 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-5c67646cfd-kppgm"]
Jan 26 13:00:46 crc kubenswrapper[4881]: I0126 13:00:46.735829 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-c77fn"]
Jan 26 13:00:46 crc kubenswrapper[4881]: I0126 13:00:46.750499 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-c77fn"]
Jan 26 13:00:46 crc kubenswrapper[4881]: I0126 13:00:46.796907 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"]
Jan 26 13:00:46 crc kubenswrapper[4881]: E0126 13:00:46.797330 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b0cbe35-c0c9-4483-866a-eddf1fdced26" containerName="horizon-log"
Jan 26 13:00:46 crc kubenswrapper[4881]: I0126 13:00:46.797342 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b0cbe35-c0c9-4483-866a-eddf1fdced26" containerName="horizon-log"
Jan 26 13:00:46 crc kubenswrapper[4881]: E0126 13:00:46.797352 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1de9071-e022-48f1-99cf-7344c70cadad" containerName="extract-content"
Jan 26 13:00:46 crc kubenswrapper[4881]: I0126 13:00:46.797359 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1de9071-e022-48f1-99cf-7344c70cadad" containerName="extract-content"
Jan 26 13:00:46 crc kubenswrapper[4881]: E0126 13:00:46.797377 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b0cbe35-c0c9-4483-866a-eddf1fdced26" containerName="horizon"
Jan 26 13:00:46 crc kubenswrapper[4881]: I0126 13:00:46.797385 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b0cbe35-c0c9-4483-866a-eddf1fdced26" containerName="horizon"
Jan 26 13:00:46 crc kubenswrapper[4881]: E0126 13:00:46.797416 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1de9071-e022-48f1-99cf-7344c70cadad" containerName="extract-utilities"
Jan 26 13:00:46 crc kubenswrapper[4881]: I0126 13:00:46.797422 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1de9071-e022-48f1-99cf-7344c70cadad" containerName="extract-utilities"
Jan 26 13:00:46 crc kubenswrapper[4881]: E0126 13:00:46.797433 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1de9071-e022-48f1-99cf-7344c70cadad" containerName="registry-server"
Jan 26 13:00:46 crc kubenswrapper[4881]: I0126 13:00:46.797439 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1de9071-e022-48f1-99cf-7344c70cadad" containerName="registry-server"
Jan 26 13:00:46 crc kubenswrapper[4881]: I0126 13:00:46.797653 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b0cbe35-c0c9-4483-866a-eddf1fdced26" containerName="horizon"
Jan 26 13:00:46 crc kubenswrapper[4881]: I0126 13:00:46.797683 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1de9071-e022-48f1-99cf-7344c70cadad" containerName="registry-server"
Jan 26 13:00:46 crc kubenswrapper[4881]: I0126 13:00:46.797697 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b0cbe35-c0c9-4483-866a-eddf1fdced26" containerName="horizon-log"
Jan 26 13:00:46 crc kubenswrapper[4881]: I0126 13:00:46.798530 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Jan 26 13:00:46 crc kubenswrapper[4881]: I0126 13:00:46.805402 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Jan 26 13:00:46 crc kubenswrapper[4881]: I0126 13:00:46.811955 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config"
Jan 26 13:00:46 crc kubenswrapper[4881]: I0126 13:00:46.812142 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-b6bzc"
Jan 26 13:00:46 crc kubenswrapper[4881]: I0126 13:00:46.812258 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret"
Jan 26 13:00:46 crc kubenswrapper[4881]: I0126 13:00:46.853955 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6bb7934d-1b01-469b-9b72-c601eebbbf98-openstack-config-secret\") pod \"openstackclient\" (UID: \"6bb7934d-1b01-469b-9b72-c601eebbbf98\") " pod="openstack/openstackclient"
Jan 26 13:00:46 crc kubenswrapper[4881]: I0126 13:00:46.854007 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bb7934d-1b01-469b-9b72-c601eebbbf98-combined-ca-bundle\") pod \"openstackclient\" (UID: \"6bb7934d-1b01-469b-9b72-c601eebbbf98\") " pod="openstack/openstackclient"
Jan 26 13:00:46 crc kubenswrapper[4881]: I0126 13:00:46.854113 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqd7q\" (UniqueName: \"kubernetes.io/projected/6bb7934d-1b01-469b-9b72-c601eebbbf98-kube-api-access-rqd7q\") pod \"openstackclient\" (UID: \"6bb7934d-1b01-469b-9b72-c601eebbbf98\") " pod="openstack/openstackclient"
Jan 26 13:00:46 crc kubenswrapper[4881]: I0126 13:00:46.854157 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6bb7934d-1b01-469b-9b72-c601eebbbf98-openstack-config\") pod \"openstackclient\" (UID: \"6bb7934d-1b01-469b-9b72-c601eebbbf98\") " pod="openstack/openstackclient"
Jan 26 13:00:46 crc kubenswrapper[4881]: I0126 13:00:46.879258 4881 scope.go:117] "RemoveContainer" containerID="785ba06ccaceb90529fed9d4bfe616ece3f2212b557578bbd30e62ec93faabac"
Jan 26 13:00:46 crc kubenswrapper[4881]: I0126 13:00:46.910650 4881 scope.go:117] "RemoveContainer" containerID="9c17ca9449b1f783bab64e740a64e0e3f1c1b302187d724ef73e6114208b8482"
Jan 26 13:00:46 crc kubenswrapper[4881]: I0126 13:00:46.950226 4881 scope.go:117] "RemoveContainer" containerID="692e378aaf7bb1b096dc5071a6350ff4bf7a8dcf772a13026319ac38bac8ef64"
Jan 26 13:00:46 crc kubenswrapper[4881]: I0126 13:00:46.955705 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6bb7934d-1b01-469b-9b72-c601eebbbf98-openstack-config\") pod \"openstackclient\" (UID: \"6bb7934d-1b01-469b-9b72-c601eebbbf98\") " pod="openstack/openstackclient"
Jan 26 13:00:46 crc kubenswrapper[4881]: I0126 13:00:46.955765 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6bb7934d-1b01-469b-9b72-c601eebbbf98-openstack-config-secret\") pod \"openstackclient\" (UID: \"6bb7934d-1b01-469b-9b72-c601eebbbf98\") " pod="openstack/openstackclient"
Jan 26 13:00:46 crc kubenswrapper[4881]: I0126 13:00:46.955792 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bb7934d-1b01-469b-9b72-c601eebbbf98-combined-ca-bundle\") pod \"openstackclient\" (UID: \"6bb7934d-1b01-469b-9b72-c601eebbbf98\") " pod="openstack/openstackclient"
Jan 26 13:00:46 crc kubenswrapper[4881]: I0126 13:00:46.955888 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqd7q\" (UniqueName: \"kubernetes.io/projected/6bb7934d-1b01-469b-9b72-c601eebbbf98-kube-api-access-rqd7q\") pod \"openstackclient\" (UID: \"6bb7934d-1b01-469b-9b72-c601eebbbf98\") " pod="openstack/openstackclient"
Jan 26 13:00:46 crc kubenswrapper[4881]: I0126 13:00:46.956610 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6bb7934d-1b01-469b-9b72-c601eebbbf98-openstack-config\") pod \"openstackclient\" (UID: \"6bb7934d-1b01-469b-9b72-c601eebbbf98\") " pod="openstack/openstackclient"
Jan 26 13:00:46 crc kubenswrapper[4881]: I0126 13:00:46.967582 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6bb7934d-1b01-469b-9b72-c601eebbbf98-openstack-config-secret\") pod \"openstackclient\" (UID: \"6bb7934d-1b01-469b-9b72-c601eebbbf98\") " pod="openstack/openstackclient"
Jan 26 13:00:46 crc kubenswrapper[4881]: I0126 13:00:46.973128 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bb7934d-1b01-469b-9b72-c601eebbbf98-combined-ca-bundle\") pod \"openstackclient\" (UID: \"6bb7934d-1b01-469b-9b72-c601eebbbf98\") " pod="openstack/openstackclient"
Jan 26 13:00:46 crc kubenswrapper[4881]: I0126 13:00:46.982466 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqd7q\" (UniqueName: \"kubernetes.io/projected/6bb7934d-1b01-469b-9b72-c601eebbbf98-kube-api-access-rqd7q\") pod \"openstackclient\" (UID: \"6bb7934d-1b01-469b-9b72-c601eebbbf98\") " pod="openstack/openstackclient"
Jan 26 13:00:47 crc kubenswrapper[4881]: I0126 13:00:47.019759 4881 scope.go:117] "RemoveContainer" containerID="02a9d3e1d254e7c226bf19c61ccc61c33113ededba5f6f26179efc1242c9964b"
Jan 26 13:00:47 crc kubenswrapper[4881]: I0126 13:00:47.072646 4881 scope.go:117] "RemoveContainer" containerID="9c17ca9449b1f783bab64e740a64e0e3f1c1b302187d724ef73e6114208b8482"
Jan 26 13:00:47 crc kubenswrapper[4881]: E0126 13:00:47.073859 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c17ca9449b1f783bab64e740a64e0e3f1c1b302187d724ef73e6114208b8482\": container with ID starting with 9c17ca9449b1f783bab64e740a64e0e3f1c1b302187d724ef73e6114208b8482 not found: ID does not exist" containerID="9c17ca9449b1f783bab64e740a64e0e3f1c1b302187d724ef73e6114208b8482"
Jan 26 13:00:47 crc kubenswrapper[4881]: I0126 13:00:47.073902 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c17ca9449b1f783bab64e740a64e0e3f1c1b302187d724ef73e6114208b8482"} err="failed to get container status \"9c17ca9449b1f783bab64e740a64e0e3f1c1b302187d724ef73e6114208b8482\": rpc error: code = NotFound desc = could not find container \"9c17ca9449b1f783bab64e740a64e0e3f1c1b302187d724ef73e6114208b8482\": container with ID starting with 9c17ca9449b1f783bab64e740a64e0e3f1c1b302187d724ef73e6114208b8482 not found: ID does not exist"
Jan 26 13:00:47 crc kubenswrapper[4881]: I0126 13:00:47.073926 4881 scope.go:117] "RemoveContainer" containerID="692e378aaf7bb1b096dc5071a6350ff4bf7a8dcf772a13026319ac38bac8ef64"
Jan 26 13:00:47 crc kubenswrapper[4881]: E0126 13:00:47.077323 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"692e378aaf7bb1b096dc5071a6350ff4bf7a8dcf772a13026319ac38bac8ef64\": container with ID starting with 692e378aaf7bb1b096dc5071a6350ff4bf7a8dcf772a13026319ac38bac8ef64 not found: ID does not exist" containerID="692e378aaf7bb1b096dc5071a6350ff4bf7a8dcf772a13026319ac38bac8ef64"
Jan 26 13:00:47 crc kubenswrapper[4881]: I0126 13:00:47.077362 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"692e378aaf7bb1b096dc5071a6350ff4bf7a8dcf772a13026319ac38bac8ef64"} err="failed to get container status \"692e378aaf7bb1b096dc5071a6350ff4bf7a8dcf772a13026319ac38bac8ef64\": rpc error: code = NotFound desc = could not find container \"692e378aaf7bb1b096dc5071a6350ff4bf7a8dcf772a13026319ac38bac8ef64\": container with ID starting with 692e378aaf7bb1b096dc5071a6350ff4bf7a8dcf772a13026319ac38bac8ef64 not found: ID does not exist"
Jan 26 13:00:47 crc kubenswrapper[4881]: I0126 13:00:47.077377 4881 scope.go:117] "RemoveContainer" containerID="02a9d3e1d254e7c226bf19c61ccc61c33113ededba5f6f26179efc1242c9964b"
Jan 26 13:00:47 crc kubenswrapper[4881]: E0126 13:00:47.080453 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"02a9d3e1d254e7c226bf19c61ccc61c33113ededba5f6f26179efc1242c9964b\": container with ID starting with 02a9d3e1d254e7c226bf19c61ccc61c33113ededba5f6f26179efc1242c9964b not found: ID does not exist" containerID="02a9d3e1d254e7c226bf19c61ccc61c33113ededba5f6f26179efc1242c9964b"
Jan 26 13:00:47 crc kubenswrapper[4881]: I0126 13:00:47.080479 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02a9d3e1d254e7c226bf19c61ccc61c33113ededba5f6f26179efc1242c9964b"} err="failed to get container status \"02a9d3e1d254e7c226bf19c61ccc61c33113ededba5f6f26179efc1242c9964b\": rpc error: code = NotFound desc = could not find container \"02a9d3e1d254e7c226bf19c61ccc61c33113ededba5f6f26179efc1242c9964b\": container with ID starting with 02a9d3e1d254e7c226bf19c61ccc61c33113ededba5f6f26179efc1242c9964b not found: ID does not exist"
Jan 26 13:00:47 crc kubenswrapper[4881]: I0126 13:00:47.175956 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Jan 26 13:00:47 crc kubenswrapper[4881]: I0126 13:00:47.639416 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="e89eb411-736e-4df9-894f-f48d783a4b01" containerName="glance-log" containerID="cri-o://dac8ad7b28749d63cbc442434795e507a38f81d9942e1a254fef01e4d29258e1" gracePeriod=30
Jan 26 13:00:47 crc kubenswrapper[4881]: I0126 13:00:47.639468 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="e89eb411-736e-4df9-894f-f48d783a4b01" containerName="glance-httpd" containerID="cri-o://1b2bbdb2756ba3441210aeb247ded3f3ae48097b5eaf9c2d51393e4020167e89" gracePeriod=30
Jan 26 13:00:47 crc kubenswrapper[4881]: I0126 13:00:47.639477 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="26a462c7-dad6-4b3a-8cb2-30db4f6e6c7c" containerName="glance-log" containerID="cri-o://dd38b7ba833843e917b804d2926515c6066fc6ea36f56f5a7c7b6527cbdeb821" gracePeriod=30
Jan 26 13:00:47 crc kubenswrapper[4881]: I0126 13:00:47.639546 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="26a462c7-dad6-4b3a-8cb2-30db4f6e6c7c" containerName="glance-httpd" containerID="cri-o://04c3a33ebc901c55ecfc7ad7bea6a54a659db2ff61a811fbd3680ec2aa14630b" gracePeriod=30
Jan 26 13:00:47 crc kubenswrapper[4881]: I0126 13:00:47.653418 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Jan 26 13:00:47 crc kubenswrapper[4881]: W0126 13:00:47.671454 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6bb7934d_1b01_469b_9b72_c601eebbbf98.slice/crio-5d38d053d1da37ad693d082b3fbbf38f18bc151b0315292f213a02796bb042d4 WatchSource:0}: Error finding container 5d38d053d1da37ad693d082b3fbbf38f18bc151b0315292f213a02796bb042d4: Status 404 returned error can't find the container with id 5d38d053d1da37ad693d082b3fbbf38f18bc151b0315292f213a02796bb042d4
Jan 26 13:00:48 crc kubenswrapper[4881]: I0126 13:00:48.166794 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b0cbe35-c0c9-4483-866a-eddf1fdced26" path="/var/lib/kubelet/pods/6b0cbe35-c0c9-4483-866a-eddf1fdced26/volumes"
Jan 26 13:00:48 crc kubenswrapper[4881]: I0126 13:00:48.167875 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1de9071-e022-48f1-99cf-7344c70cadad" path="/var/lib/kubelet/pods/e1de9071-e022-48f1-99cf-7344c70cadad/volumes"
Jan 26 13:00:48 crc kubenswrapper[4881]: I0126 13:00:48.182674 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0"
Jan 26 13:00:48 crc kubenswrapper[4881]: I0126 13:00:48.280960 4881 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 26 13:00:48 crc kubenswrapper[4881]: I0126 13:00:48.387833 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/26a462c7-dad6-4b3a-8cb2-30db4f6e6c7c-scripts\") pod \"26a462c7-dad6-4b3a-8cb2-30db4f6e6c7c\" (UID: \"26a462c7-dad6-4b3a-8cb2-30db4f6e6c7c\") " Jan 26 13:00:48 crc kubenswrapper[4881]: I0126 13:00:48.387894 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/26a462c7-dad6-4b3a-8cb2-30db4f6e6c7c-logs\") pod \"26a462c7-dad6-4b3a-8cb2-30db4f6e6c7c\" (UID: \"26a462c7-dad6-4b3a-8cb2-30db4f6e6c7c\") " Jan 26 13:00:48 crc kubenswrapper[4881]: I0126 13:00:48.388126 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"26a462c7-dad6-4b3a-8cb2-30db4f6e6c7c\" (UID: \"26a462c7-dad6-4b3a-8cb2-30db4f6e6c7c\") " Jan 26 13:00:48 crc kubenswrapper[4881]: I0126 13:00:48.388169 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2dzf6\" (UniqueName: \"kubernetes.io/projected/26a462c7-dad6-4b3a-8cb2-30db4f6e6c7c-kube-api-access-2dzf6\") pod \"26a462c7-dad6-4b3a-8cb2-30db4f6e6c7c\" (UID: \"26a462c7-dad6-4b3a-8cb2-30db4f6e6c7c\") " Jan 26 13:00:48 crc kubenswrapper[4881]: I0126 13:00:48.388227 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26a462c7-dad6-4b3a-8cb2-30db4f6e6c7c-combined-ca-bundle\") pod \"26a462c7-dad6-4b3a-8cb2-30db4f6e6c7c\" (UID: \"26a462c7-dad6-4b3a-8cb2-30db4f6e6c7c\") " Jan 26 13:00:48 crc kubenswrapper[4881]: I0126 13:00:48.388281 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26a462c7-dad6-4b3a-8cb2-30db4f6e6c7c-config-data\") pod \"26a462c7-dad6-4b3a-8cb2-30db4f6e6c7c\" (UID: \"26a462c7-dad6-4b3a-8cb2-30db4f6e6c7c\") " Jan 26 13:00:48 crc kubenswrapper[4881]: I0126 13:00:48.388304 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/26a462c7-dad6-4b3a-8cb2-30db4f6e6c7c-httpd-run\") pod \"26a462c7-dad6-4b3a-8cb2-30db4f6e6c7c\" (UID: \"26a462c7-dad6-4b3a-8cb2-30db4f6e6c7c\") " Jan 26 13:00:48 crc kubenswrapper[4881]: I0126 13:00:48.388553 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/26a462c7-dad6-4b3a-8cb2-30db4f6e6c7c-logs" (OuterVolumeSpecName: "logs") pod "26a462c7-dad6-4b3a-8cb2-30db4f6e6c7c" (UID: "26a462c7-dad6-4b3a-8cb2-30db4f6e6c7c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:00:48 crc kubenswrapper[4881]: I0126 13:00:48.388722 4881 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/26a462c7-dad6-4b3a-8cb2-30db4f6e6c7c-logs\") on node \"crc\" DevicePath \"\"" Jan 26 13:00:48 crc kubenswrapper[4881]: I0126 13:00:48.391533 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/26a462c7-dad6-4b3a-8cb2-30db4f6e6c7c-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "26a462c7-dad6-4b3a-8cb2-30db4f6e6c7c" (UID: "26a462c7-dad6-4b3a-8cb2-30db4f6e6c7c"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:00:48 crc kubenswrapper[4881]: I0126 13:00:48.393753 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26a462c7-dad6-4b3a-8cb2-30db4f6e6c7c-scripts" (OuterVolumeSpecName: "scripts") pod "26a462c7-dad6-4b3a-8cb2-30db4f6e6c7c" (UID: "26a462c7-dad6-4b3a-8cb2-30db4f6e6c7c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:00:48 crc kubenswrapper[4881]: I0126 13:00:48.395278 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26a462c7-dad6-4b3a-8cb2-30db4f6e6c7c-kube-api-access-2dzf6" (OuterVolumeSpecName: "kube-api-access-2dzf6") pod "26a462c7-dad6-4b3a-8cb2-30db4f6e6c7c" (UID: "26a462c7-dad6-4b3a-8cb2-30db4f6e6c7c"). InnerVolumeSpecName "kube-api-access-2dzf6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:00:48 crc kubenswrapper[4881]: I0126 13:00:48.395649 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance") pod "26a462c7-dad6-4b3a-8cb2-30db4f6e6c7c" (UID: "26a462c7-dad6-4b3a-8cb2-30db4f6e6c7c"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 26 13:00:48 crc kubenswrapper[4881]: I0126 13:00:48.415457 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26a462c7-dad6-4b3a-8cb2-30db4f6e6c7c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "26a462c7-dad6-4b3a-8cb2-30db4f6e6c7c" (UID: "26a462c7-dad6-4b3a-8cb2-30db4f6e6c7c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:00:48 crc kubenswrapper[4881]: I0126 13:00:48.458371 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 26 13:00:48 crc kubenswrapper[4881]: I0126 13:00:48.462698 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26a462c7-dad6-4b3a-8cb2-30db4f6e6c7c-config-data" (OuterVolumeSpecName: "config-data") pod "26a462c7-dad6-4b3a-8cb2-30db4f6e6c7c" (UID: "26a462c7-dad6-4b3a-8cb2-30db4f6e6c7c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:00:48 crc kubenswrapper[4881]: I0126 13:00:48.495037 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e89eb411-736e-4df9-894f-f48d783a4b01-config-data\") pod \"e89eb411-736e-4df9-894f-f48d783a4b01\" (UID: \"e89eb411-736e-4df9-894f-f48d783a4b01\") " Jan 26 13:00:48 crc kubenswrapper[4881]: I0126 13:00:48.495083 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e89eb411-736e-4df9-894f-f48d783a4b01-combined-ca-bundle\") pod \"e89eb411-736e-4df9-894f-f48d783a4b01\" (UID: \"e89eb411-736e-4df9-894f-f48d783a4b01\") " Jan 26 13:00:48 crc kubenswrapper[4881]: I0126 13:00:48.495200 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e89eb411-736e-4df9-894f-f48d783a4b01-logs\") pod \"e89eb411-736e-4df9-894f-f48d783a4b01\" (UID: \"e89eb411-736e-4df9-894f-f48d783a4b01\") " Jan 26 13:00:48 crc kubenswrapper[4881]: I0126 13:00:48.495227 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bwx9q\" (UniqueName: \"kubernetes.io/projected/e89eb411-736e-4df9-894f-f48d783a4b01-kube-api-access-bwx9q\") pod \"e89eb411-736e-4df9-894f-f48d783a4b01\" (UID: \"e89eb411-736e-4df9-894f-f48d783a4b01\") " Jan 26 13:00:48 crc kubenswrapper[4881]: I0126 13:00:48.495269 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e89eb411-736e-4df9-894f-f48d783a4b01-scripts\") pod \"e89eb411-736e-4df9-894f-f48d783a4b01\" (UID: \"e89eb411-736e-4df9-894f-f48d783a4b01\") " Jan 26 13:00:48 crc kubenswrapper[4881]: I0126 13:00:48.495287 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"e89eb411-736e-4df9-894f-f48d783a4b01\" (UID: \"e89eb411-736e-4df9-894f-f48d783a4b01\") " Jan 26 13:00:48 crc kubenswrapper[4881]: I0126 13:00:48.495332 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e89eb411-736e-4df9-894f-f48d783a4b01-httpd-run\") pod \"e89eb411-736e-4df9-894f-f48d783a4b01\" (UID: \"e89eb411-736e-4df9-894f-f48d783a4b01\") " Jan 26 13:00:48 crc kubenswrapper[4881]: I0126 13:00:48.495715 4881 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26a462c7-dad6-4b3a-8cb2-30db4f6e6c7c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 13:00:48 crc kubenswrapper[4881]: I0126 13:00:48.495731 4881 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26a462c7-dad6-4b3a-8cb2-30db4f6e6c7c-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 13:00:48 crc kubenswrapper[4881]: I0126 13:00:48.495740 4881 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/26a462c7-dad6-4b3a-8cb2-30db4f6e6c7c-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 26 13:00:48 crc kubenswrapper[4881]: I0126 13:00:48.495750 4881 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/26a462c7-dad6-4b3a-8cb2-30db4f6e6c7c-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 13:00:48 crc kubenswrapper[4881]: I0126 
13:00:48.495766 4881 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Jan 26 13:00:48 crc kubenswrapper[4881]: I0126 13:00:48.495776 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2dzf6\" (UniqueName: \"kubernetes.io/projected/26a462c7-dad6-4b3a-8cb2-30db4f6e6c7c-kube-api-access-2dzf6\") on node \"crc\" DevicePath \"\"" Jan 26 13:00:48 crc kubenswrapper[4881]: I0126 13:00:48.499198 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e89eb411-736e-4df9-894f-f48d783a4b01-logs" (OuterVolumeSpecName: "logs") pod "e89eb411-736e-4df9-894f-f48d783a4b01" (UID: "e89eb411-736e-4df9-894f-f48d783a4b01"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:00:48 crc kubenswrapper[4881]: I0126 13:00:48.506135 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e89eb411-736e-4df9-894f-f48d783a4b01-kube-api-access-bwx9q" (OuterVolumeSpecName: "kube-api-access-bwx9q") pod "e89eb411-736e-4df9-894f-f48d783a4b01" (UID: "e89eb411-736e-4df9-894f-f48d783a4b01"). InnerVolumeSpecName "kube-api-access-bwx9q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:00:48 crc kubenswrapper[4881]: I0126 13:00:48.506864 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e89eb411-736e-4df9-894f-f48d783a4b01-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "e89eb411-736e-4df9-894f-f48d783a4b01" (UID: "e89eb411-736e-4df9-894f-f48d783a4b01"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:00:48 crc kubenswrapper[4881]: I0126 13:00:48.508655 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "glance") pod "e89eb411-736e-4df9-894f-f48d783a4b01" (UID: "e89eb411-736e-4df9-894f-f48d783a4b01"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 26 13:00:48 crc kubenswrapper[4881]: I0126 13:00:48.524679 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e89eb411-736e-4df9-894f-f48d783a4b01-scripts" (OuterVolumeSpecName: "scripts") pod "e89eb411-736e-4df9-894f-f48d783a4b01" (UID: "e89eb411-736e-4df9-894f-f48d783a4b01"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:00:48 crc kubenswrapper[4881]: I0126 13:00:48.531646 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e89eb411-736e-4df9-894f-f48d783a4b01-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e89eb411-736e-4df9-894f-f48d783a4b01" (UID: "e89eb411-736e-4df9-894f-f48d783a4b01"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:00:48 crc kubenswrapper[4881]: I0126 13:00:48.548143 4881 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Jan 26 13:00:48 crc kubenswrapper[4881]: I0126 13:00:48.565256 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e89eb411-736e-4df9-894f-f48d783a4b01-config-data" (OuterVolumeSpecName: "config-data") pod "e89eb411-736e-4df9-894f-f48d783a4b01" (UID: "e89eb411-736e-4df9-894f-f48d783a4b01"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:00:48 crc kubenswrapper[4881]: I0126 13:00:48.597075 4881 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e89eb411-736e-4df9-894f-f48d783a4b01-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 26 13:00:48 crc kubenswrapper[4881]: I0126 13:00:48.597110 4881 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Jan 26 13:00:48 crc kubenswrapper[4881]: I0126 13:00:48.597119 4881 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e89eb411-736e-4df9-894f-f48d783a4b01-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 13:00:48 crc kubenswrapper[4881]: I0126 13:00:48.597128 4881 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e89eb411-736e-4df9-894f-f48d783a4b01-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 13:00:48 crc kubenswrapper[4881]: I0126 13:00:48.597139 4881 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e89eb411-736e-4df9-894f-f48d783a4b01-logs\") on node \"crc\" DevicePath \"\"" Jan 26 13:00:48 crc kubenswrapper[4881]: I0126 13:00:48.597147 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bwx9q\" (UniqueName: \"kubernetes.io/projected/e89eb411-736e-4df9-894f-f48d783a4b01-kube-api-access-bwx9q\") on node \"crc\" DevicePath \"\"" Jan 26 13:00:48 crc kubenswrapper[4881]: I0126 13:00:48.597156 4881 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e89eb411-736e-4df9-894f-f48d783a4b01-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 13:00:48 crc kubenswrapper[4881]: I0126 13:00:48.597186 4881 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Jan 26 13:00:48 crc kubenswrapper[4881]: I0126 13:00:48.616247 4881 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Jan 26 13:00:48 crc kubenswrapper[4881]: I0126 13:00:48.660305 4881 generic.go:334] "Generic (PLEG): container finished" podID="e89eb411-736e-4df9-894f-f48d783a4b01" containerID="1b2bbdb2756ba3441210aeb247ded3f3ae48097b5eaf9c2d51393e4020167e89" exitCode=0 Jan 26 13:00:48 crc kubenswrapper[4881]: I0126 13:00:48.661750 4881 generic.go:334] "Generic (PLEG): container finished" podID="e89eb411-736e-4df9-894f-f48d783a4b01" containerID="dac8ad7b28749d63cbc442434795e507a38f81d9942e1a254fef01e4d29258e1" exitCode=143 Jan 26 13:00:48 crc 
kubenswrapper[4881]: I0126 13:00:48.660549 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e89eb411-736e-4df9-894f-f48d783a4b01","Type":"ContainerDied","Data":"1b2bbdb2756ba3441210aeb247ded3f3ae48097b5eaf9c2d51393e4020167e89"} Jan 26 13:00:48 crc kubenswrapper[4881]: I0126 13:00:48.661945 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e89eb411-736e-4df9-894f-f48d783a4b01","Type":"ContainerDied","Data":"dac8ad7b28749d63cbc442434795e507a38f81d9942e1a254fef01e4d29258e1"} Jan 26 13:00:48 crc kubenswrapper[4881]: I0126 13:00:48.662046 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e89eb411-736e-4df9-894f-f48d783a4b01","Type":"ContainerDied","Data":"db558dce90a97037aea750564131bead1878f4c18b1630e97eb83d44c0ddafed"} Jan 26 13:00:48 crc kubenswrapper[4881]: I0126 13:00:48.662139 4881 scope.go:117] "RemoveContainer" containerID="1b2bbdb2756ba3441210aeb247ded3f3ae48097b5eaf9c2d51393e4020167e89" Jan 26 13:00:48 crc kubenswrapper[4881]: I0126 13:00:48.660582 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 26 13:00:48 crc kubenswrapper[4881]: I0126 13:00:48.667067 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"6bb7934d-1b01-469b-9b72-c601eebbbf98","Type":"ContainerStarted","Data":"5d38d053d1da37ad693d082b3fbbf38f18bc151b0315292f213a02796bb042d4"} Jan 26 13:00:48 crc kubenswrapper[4881]: I0126 13:00:48.672717 4881 generic.go:334] "Generic (PLEG): container finished" podID="26a462c7-dad6-4b3a-8cb2-30db4f6e6c7c" containerID="04c3a33ebc901c55ecfc7ad7bea6a54a659db2ff61a811fbd3680ec2aa14630b" exitCode=0 Jan 26 13:00:48 crc kubenswrapper[4881]: I0126 13:00:48.672747 4881 generic.go:334] "Generic (PLEG): container finished" podID="26a462c7-dad6-4b3a-8cb2-30db4f6e6c7c" containerID="dd38b7ba833843e917b804d2926515c6066fc6ea36f56f5a7c7b6527cbdeb821" exitCode=143 Jan 26 13:00:48 crc kubenswrapper[4881]: I0126 13:00:48.672959 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"26a462c7-dad6-4b3a-8cb2-30db4f6e6c7c","Type":"ContainerDied","Data":"04c3a33ebc901c55ecfc7ad7bea6a54a659db2ff61a811fbd3680ec2aa14630b"} Jan 26 13:00:48 crc kubenswrapper[4881]: I0126 13:00:48.672995 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"26a462c7-dad6-4b3a-8cb2-30db4f6e6c7c","Type":"ContainerDied","Data":"dd38b7ba833843e917b804d2926515c6066fc6ea36f56f5a7c7b6527cbdeb821"} Jan 26 13:00:48 crc kubenswrapper[4881]: I0126 13:00:48.673007 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"26a462c7-dad6-4b3a-8cb2-30db4f6e6c7c","Type":"ContainerDied","Data":"fe68baa8b1f79580a2dec294d03d7b10b4a311d864d787c6985c5ea5fd059355"} Jan 26 13:00:48 crc kubenswrapper[4881]: I0126 13:00:48.673037 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 26 13:00:48 crc kubenswrapper[4881]: I0126 13:00:48.690121 4881 scope.go:117] "RemoveContainer" containerID="dac8ad7b28749d63cbc442434795e507a38f81d9942e1a254fef01e4d29258e1" Jan 26 13:00:48 crc kubenswrapper[4881]: I0126 13:00:48.700702 4881 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Jan 26 13:00:48 crc kubenswrapper[4881]: I0126 13:00:48.721909 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 13:00:48 crc kubenswrapper[4881]: I0126 13:00:48.726385 4881 scope.go:117] "RemoveContainer" containerID="1b2bbdb2756ba3441210aeb247ded3f3ae48097b5eaf9c2d51393e4020167e89" Jan 26 13:00:48 crc kubenswrapper[4881]: E0126 13:00:48.726983 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b2bbdb2756ba3441210aeb247ded3f3ae48097b5eaf9c2d51393e4020167e89\": container with ID starting with 1b2bbdb2756ba3441210aeb247ded3f3ae48097b5eaf9c2d51393e4020167e89 not found: ID does not exist" containerID="1b2bbdb2756ba3441210aeb247ded3f3ae48097b5eaf9c2d51393e4020167e89" Jan 26 13:00:48 crc kubenswrapper[4881]: I0126 13:00:48.727012 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b2bbdb2756ba3441210aeb247ded3f3ae48097b5eaf9c2d51393e4020167e89"} err="failed to get container status \"1b2bbdb2756ba3441210aeb247ded3f3ae48097b5eaf9c2d51393e4020167e89\": rpc error: code = NotFound desc = could not find container \"1b2bbdb2756ba3441210aeb247ded3f3ae48097b5eaf9c2d51393e4020167e89\": container with ID starting with 1b2bbdb2756ba3441210aeb247ded3f3ae48097b5eaf9c2d51393e4020167e89 not found: ID does not exist" Jan 26 13:00:48 crc kubenswrapper[4881]: I0126 13:00:48.727032 4881 scope.go:117] "RemoveContainer" containerID="dac8ad7b28749d63cbc442434795e507a38f81d9942e1a254fef01e4d29258e1" Jan 26 13:00:48 crc kubenswrapper[4881]: E0126 13:00:48.727921 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dac8ad7b28749d63cbc442434795e507a38f81d9942e1a254fef01e4d29258e1\": container with ID starting with dac8ad7b28749d63cbc442434795e507a38f81d9942e1a254fef01e4d29258e1 not found: ID does not exist" containerID="dac8ad7b28749d63cbc442434795e507a38f81d9942e1a254fef01e4d29258e1" Jan 26 13:00:48 crc kubenswrapper[4881]: I0126 13:00:48.727953 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dac8ad7b28749d63cbc442434795e507a38f81d9942e1a254fef01e4d29258e1"} err="failed to get container status \"dac8ad7b28749d63cbc442434795e507a38f81d9942e1a254fef01e4d29258e1\": rpc error: code = NotFound desc = could not find container \"dac8ad7b28749d63cbc442434795e507a38f81d9942e1a254fef01e4d29258e1\": container with ID starting with dac8ad7b28749d63cbc442434795e507a38f81d9942e1a254fef01e4d29258e1 not found: ID does not exist" Jan 26 13:00:48 crc kubenswrapper[4881]: I0126 13:00:48.727970 4881 scope.go:117] "RemoveContainer" containerID="1b2bbdb2756ba3441210aeb247ded3f3ae48097b5eaf9c2d51393e4020167e89" Jan 26 13:00:48 crc kubenswrapper[4881]: I0126 13:00:48.730946 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b2bbdb2756ba3441210aeb247ded3f3ae48097b5eaf9c2d51393e4020167e89"} err="failed 
to get container status \"1b2bbdb2756ba3441210aeb247ded3f3ae48097b5eaf9c2d51393e4020167e89\": rpc error: code = NotFound desc = could not find container \"1b2bbdb2756ba3441210aeb247ded3f3ae48097b5eaf9c2d51393e4020167e89\": container with ID starting with 1b2bbdb2756ba3441210aeb247ded3f3ae48097b5eaf9c2d51393e4020167e89 not found: ID does not exist" Jan 26 13:00:48 crc kubenswrapper[4881]: I0126 13:00:48.730975 4881 scope.go:117] "RemoveContainer" containerID="dac8ad7b28749d63cbc442434795e507a38f81d9942e1a254fef01e4d29258e1" Jan 26 13:00:48 crc kubenswrapper[4881]: I0126 13:00:48.731236 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dac8ad7b28749d63cbc442434795e507a38f81d9942e1a254fef01e4d29258e1"} err="failed to get container status \"dac8ad7b28749d63cbc442434795e507a38f81d9942e1a254fef01e4d29258e1\": rpc error: code = NotFound desc = could not find container \"dac8ad7b28749d63cbc442434795e507a38f81d9942e1a254fef01e4d29258e1\": container with ID starting with dac8ad7b28749d63cbc442434795e507a38f81d9942e1a254fef01e4d29258e1 not found: ID does not exist" Jan 26 13:00:48 crc kubenswrapper[4881]: I0126 13:00:48.731254 4881 scope.go:117] "RemoveContainer" containerID="04c3a33ebc901c55ecfc7ad7bea6a54a659db2ff61a811fbd3680ec2aa14630b" Jan 26 13:00:48 crc kubenswrapper[4881]: I0126 13:00:48.733594 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 13:00:48 crc kubenswrapper[4881]: I0126 13:00:48.745827 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 13:00:48 crc kubenswrapper[4881]: E0126 13:00:48.749162 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26a462c7-dad6-4b3a-8cb2-30db4f6e6c7c" containerName="glance-log" Jan 26 13:00:48 crc kubenswrapper[4881]: I0126 13:00:48.749559 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="26a462c7-dad6-4b3a-8cb2-30db4f6e6c7c" containerName="glance-log" Jan 26 13:00:48 crc kubenswrapper[4881]: E0126 13:00:48.749684 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26a462c7-dad6-4b3a-8cb2-30db4f6e6c7c" containerName="glance-httpd" Jan 26 13:00:48 crc kubenswrapper[4881]: I0126 13:00:48.749701 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="26a462c7-dad6-4b3a-8cb2-30db4f6e6c7c" containerName="glance-httpd" Jan 26 13:00:48 crc kubenswrapper[4881]: E0126 13:00:48.749716 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e89eb411-736e-4df9-894f-f48d783a4b01" containerName="glance-httpd" Jan 26 13:00:48 crc kubenswrapper[4881]: I0126 13:00:48.749723 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="e89eb411-736e-4df9-894f-f48d783a4b01" containerName="glance-httpd" Jan 26 13:00:48 crc kubenswrapper[4881]: E0126 13:00:48.749732 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e89eb411-736e-4df9-894f-f48d783a4b01" containerName="glance-log" Jan 26 13:00:48 crc kubenswrapper[4881]: I0126 13:00:48.749738 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="e89eb411-736e-4df9-894f-f48d783a4b01" containerName="glance-log" Jan 26 13:00:48 crc kubenswrapper[4881]: I0126 13:00:48.749978 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="26a462c7-dad6-4b3a-8cb2-30db4f6e6c7c" containerName="glance-log" Jan 26 13:00:48 crc kubenswrapper[4881]: I0126 13:00:48.749994 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="26a462c7-dad6-4b3a-8cb2-30db4f6e6c7c" 
containerName="glance-httpd" Jan 26 13:00:48 crc kubenswrapper[4881]: I0126 13:00:48.750004 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="e89eb411-736e-4df9-894f-f48d783a4b01" containerName="glance-httpd" Jan 26 13:00:48 crc kubenswrapper[4881]: I0126 13:00:48.750019 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="e89eb411-736e-4df9-894f-f48d783a4b01" containerName="glance-log" Jan 26 13:00:48 crc kubenswrapper[4881]: I0126 13:00:48.751218 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 26 13:00:48 crc kubenswrapper[4881]: I0126 13:00:48.770837 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 13:00:48 crc kubenswrapper[4881]: I0126 13:00:48.770886 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 13:00:48 crc kubenswrapper[4881]: I0126 13:00:48.771346 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 26 13:00:48 crc kubenswrapper[4881]: I0126 13:00:48.771705 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 26 13:00:48 crc kubenswrapper[4881]: I0126 13:00:48.771391 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-jjhvj" Jan 26 13:00:48 crc kubenswrapper[4881]: I0126 13:00:48.771949 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 26 13:00:48 crc kubenswrapper[4881]: I0126 13:00:48.778371 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 13:00:48 crc kubenswrapper[4881]: I0126 13:00:48.786471 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 13:00:48 crc kubenswrapper[4881]: I0126 13:00:48.789641 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 26 13:00:48 crc kubenswrapper[4881]: I0126 13:00:48.801623 4881 scope.go:117] "RemoveContainer" containerID="dd38b7ba833843e917b804d2926515c6066fc6ea36f56f5a7c7b6527cbdeb821" Jan 26 13:00:48 crc kubenswrapper[4881]: I0126 13:00:48.801770 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 13:00:48 crc kubenswrapper[4881]: I0126 13:00:48.801974 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52c77e6a-c529-4730-b519-66fb42f88ae8-config-data\") pod \"glance-default-internal-api-0\" (UID: \"52c77e6a-c529-4730-b519-66fb42f88ae8\") " pod="openstack/glance-default-internal-api-0" Jan 26 13:00:48 crc kubenswrapper[4881]: I0126 13:00:48.802091 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/52c77e6a-c529-4730-b519-66fb42f88ae8-logs\") pod \"glance-default-internal-api-0\" (UID: \"52c77e6a-c529-4730-b519-66fb42f88ae8\") " pod="openstack/glance-default-internal-api-0" Jan 26 13:00:48 crc kubenswrapper[4881]: I0126 13:00:48.802129 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/52c77e6a-c529-4730-b519-66fb42f88ae8-scripts\") pod \"glance-default-internal-api-0\" (UID: \"52c77e6a-c529-4730-b519-66fb42f88ae8\") " pod="openstack/glance-default-internal-api-0" Jan 26 13:00:48 crc kubenswrapper[4881]: I0126 13:00:48.802196 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/52c77e6a-c529-4730-b519-66fb42f88ae8-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"52c77e6a-c529-4730-b519-66fb42f88ae8\") " pod="openstack/glance-default-internal-api-0" Jan 26 13:00:48 crc kubenswrapper[4881]: I0126 13:00:48.802258 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 26 13:00:48 crc kubenswrapper[4881]: I0126 13:00:48.802294 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52c77e6a-c529-4730-b519-66fb42f88ae8-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"52c77e6a-c529-4730-b519-66fb42f88ae8\") " pod="openstack/glance-default-internal-api-0" Jan 26 13:00:48 crc kubenswrapper[4881]: I0126 13:00:48.802322 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 26 13:00:48 crc kubenswrapper[4881]: I0126 13:00:48.802425 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rg7q9\" (UniqueName: \"kubernetes.io/projected/52c77e6a-c529-4730-b519-66fb42f88ae8-kube-api-access-rg7q9\") pod \"glance-default-internal-api-0\" (UID: \"52c77e6a-c529-4730-b519-66fb42f88ae8\") " pod="openstack/glance-default-internal-api-0" Jan 26 13:00:48 crc kubenswrapper[4881]: I0126 13:00:48.802604 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"52c77e6a-c529-4730-b519-66fb42f88ae8\") " 
pod="openstack/glance-default-internal-api-0" Jan 26 13:00:48 crc kubenswrapper[4881]: I0126 13:00:48.802678 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/52c77e6a-c529-4730-b519-66fb42f88ae8-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"52c77e6a-c529-4730-b519-66fb42f88ae8\") " pod="openstack/glance-default-internal-api-0" Jan 26 13:00:48 crc kubenswrapper[4881]: I0126 13:00:48.907967 4881 scope.go:117] "RemoveContainer" containerID="04c3a33ebc901c55ecfc7ad7bea6a54a659db2ff61a811fbd3680ec2aa14630b" Jan 26 13:00:48 crc kubenswrapper[4881]: E0126 13:00:48.908319 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"04c3a33ebc901c55ecfc7ad7bea6a54a659db2ff61a811fbd3680ec2aa14630b\": container with ID starting with 04c3a33ebc901c55ecfc7ad7bea6a54a659db2ff61a811fbd3680ec2aa14630b not found: ID does not exist" containerID="04c3a33ebc901c55ecfc7ad7bea6a54a659db2ff61a811fbd3680ec2aa14630b" Jan 26 13:00:48 crc kubenswrapper[4881]: I0126 13:00:48.908348 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04c3a33ebc901c55ecfc7ad7bea6a54a659db2ff61a811fbd3680ec2aa14630b"} err="failed to get container status \"04c3a33ebc901c55ecfc7ad7bea6a54a659db2ff61a811fbd3680ec2aa14630b\": rpc error: code = NotFound desc = could not find container \"04c3a33ebc901c55ecfc7ad7bea6a54a659db2ff61a811fbd3680ec2aa14630b\": container with ID starting with 04c3a33ebc901c55ecfc7ad7bea6a54a659db2ff61a811fbd3680ec2aa14630b not found: ID does not exist" Jan 26 13:00:48 crc kubenswrapper[4881]: I0126 13:00:48.908369 4881 scope.go:117] "RemoveContainer" containerID="dd38b7ba833843e917b804d2926515c6066fc6ea36f56f5a7c7b6527cbdeb821" Jan 26 13:00:48 crc kubenswrapper[4881]: I0126 13:00:48.908637 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/52c77e6a-c529-4730-b519-66fb42f88ae8-scripts\") pod \"glance-default-internal-api-0\" (UID: \"52c77e6a-c529-4730-b519-66fb42f88ae8\") " pod="openstack/glance-default-internal-api-0" Jan 26 13:00:48 crc kubenswrapper[4881]: I0126 13:00:48.908678 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/52c77e6a-c529-4730-b519-66fb42f88ae8-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"52c77e6a-c529-4730-b519-66fb42f88ae8\") " pod="openstack/glance-default-internal-api-0" Jan 26 13:00:48 crc kubenswrapper[4881]: I0126 13:00:48.908748 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52c77e6a-c529-4730-b519-66fb42f88ae8-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"52c77e6a-c529-4730-b519-66fb42f88ae8\") " pod="openstack/glance-default-internal-api-0" Jan 26 13:00:48 crc kubenswrapper[4881]: I0126 13:00:48.908771 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rg7q9\" (UniqueName: \"kubernetes.io/projected/52c77e6a-c529-4730-b519-66fb42f88ae8-kube-api-access-rg7q9\") pod \"glance-default-internal-api-0\" (UID: \"52c77e6a-c529-4730-b519-66fb42f88ae8\") " pod="openstack/glance-default-internal-api-0" Jan 26 13:00:48 crc kubenswrapper[4881]: I0126 13:00:48.908820 4881 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"52c77e6a-c529-4730-b519-66fb42f88ae8\") " pod="openstack/glance-default-internal-api-0" Jan 26 13:00:48 crc kubenswrapper[4881]: I0126 13:00:48.908852 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/52c77e6a-c529-4730-b519-66fb42f88ae8-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"52c77e6a-c529-4730-b519-66fb42f88ae8\") " pod="openstack/glance-default-internal-api-0" Jan 26 13:00:48 crc kubenswrapper[4881]: I0126 13:00:48.908877 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52c77e6a-c529-4730-b519-66fb42f88ae8-config-data\") pod \"glance-default-internal-api-0\" (UID: \"52c77e6a-c529-4730-b519-66fb42f88ae8\") " pod="openstack/glance-default-internal-api-0" Jan 26 13:00:48 crc kubenswrapper[4881]: I0126 13:00:48.908907 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/52c77e6a-c529-4730-b519-66fb42f88ae8-logs\") pod \"glance-default-internal-api-0\" (UID: \"52c77e6a-c529-4730-b519-66fb42f88ae8\") " pod="openstack/glance-default-internal-api-0" Jan 26 13:00:48 crc kubenswrapper[4881]: I0126 13:00:48.909325 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/52c77e6a-c529-4730-b519-66fb42f88ae8-logs\") pod \"glance-default-internal-api-0\" (UID: \"52c77e6a-c529-4730-b519-66fb42f88ae8\") " pod="openstack/glance-default-internal-api-0" Jan 26 13:00:48 crc kubenswrapper[4881]: I0126 13:00:48.910209 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/52c77e6a-c529-4730-b519-66fb42f88ae8-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"52c77e6a-c529-4730-b519-66fb42f88ae8\") " pod="openstack/glance-default-internal-api-0" Jan 26 13:00:48 crc kubenswrapper[4881]: E0126 13:00:48.911958 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd38b7ba833843e917b804d2926515c6066fc6ea36f56f5a7c7b6527cbdeb821\": container with ID starting with dd38b7ba833843e917b804d2926515c6066fc6ea36f56f5a7c7b6527cbdeb821 not found: ID does not exist" containerID="dd38b7ba833843e917b804d2926515c6066fc6ea36f56f5a7c7b6527cbdeb821" Jan 26 13:00:48 crc kubenswrapper[4881]: I0126 13:00:48.912003 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd38b7ba833843e917b804d2926515c6066fc6ea36f56f5a7c7b6527cbdeb821"} err="failed to get container status \"dd38b7ba833843e917b804d2926515c6066fc6ea36f56f5a7c7b6527cbdeb821\": rpc error: code = NotFound desc = could not find container \"dd38b7ba833843e917b804d2926515c6066fc6ea36f56f5a7c7b6527cbdeb821\": container with ID starting with dd38b7ba833843e917b804d2926515c6066fc6ea36f56f5a7c7b6527cbdeb821 not found: ID does not exist" Jan 26 13:00:48 crc kubenswrapper[4881]: I0126 13:00:48.912029 4881 scope.go:117] "RemoveContainer" containerID="04c3a33ebc901c55ecfc7ad7bea6a54a659db2ff61a811fbd3680ec2aa14630b" Jan 26 13:00:48 crc kubenswrapper[4881]: I0126 13:00:48.913294 4881 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"04c3a33ebc901c55ecfc7ad7bea6a54a659db2ff61a811fbd3680ec2aa14630b"} err="failed to get container status \"04c3a33ebc901c55ecfc7ad7bea6a54a659db2ff61a811fbd3680ec2aa14630b\": rpc error: code = NotFound desc = could not find container \"04c3a33ebc901c55ecfc7ad7bea6a54a659db2ff61a811fbd3680ec2aa14630b\": container with ID starting with 04c3a33ebc901c55ecfc7ad7bea6a54a659db2ff61a811fbd3680ec2aa14630b not found: ID does not exist" Jan 26 13:00:48 crc kubenswrapper[4881]: I0126 13:00:48.913330 4881 scope.go:117] "RemoveContainer" containerID="dd38b7ba833843e917b804d2926515c6066fc6ea36f56f5a7c7b6527cbdeb821" Jan 26 13:00:48 crc kubenswrapper[4881]: I0126 13:00:48.913859 4881 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"52c77e6a-c529-4730-b519-66fb42f88ae8\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/glance-default-internal-api-0" Jan 26 13:00:48 crc kubenswrapper[4881]: I0126 13:00:48.914906 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52c77e6a-c529-4730-b519-66fb42f88ae8-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"52c77e6a-c529-4730-b519-66fb42f88ae8\") " pod="openstack/glance-default-internal-api-0" Jan 26 13:00:48 crc kubenswrapper[4881]: I0126 13:00:48.914899 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd38b7ba833843e917b804d2926515c6066fc6ea36f56f5a7c7b6527cbdeb821"} err="failed to get container status \"dd38b7ba833843e917b804d2926515c6066fc6ea36f56f5a7c7b6527cbdeb821\": rpc error: code = NotFound desc = could not find container \"dd38b7ba833843e917b804d2926515c6066fc6ea36f56f5a7c7b6527cbdeb821\": container with ID starting with dd38b7ba833843e917b804d2926515c6066fc6ea36f56f5a7c7b6527cbdeb821 not found: ID does not exist" Jan 26 13:00:48 crc kubenswrapper[4881]: I0126 13:00:48.915345 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/52c77e6a-c529-4730-b519-66fb42f88ae8-scripts\") pod \"glance-default-internal-api-0\" (UID: \"52c77e6a-c529-4730-b519-66fb42f88ae8\") " pod="openstack/glance-default-internal-api-0" Jan 26 13:00:48 crc kubenswrapper[4881]: I0126 13:00:48.917142 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52c77e6a-c529-4730-b519-66fb42f88ae8-config-data\") pod \"glance-default-internal-api-0\" (UID: \"52c77e6a-c529-4730-b519-66fb42f88ae8\") " pod="openstack/glance-default-internal-api-0" Jan 26 13:00:48 crc kubenswrapper[4881]: I0126 13:00:48.920489 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/52c77e6a-c529-4730-b519-66fb42f88ae8-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"52c77e6a-c529-4730-b519-66fb42f88ae8\") " pod="openstack/glance-default-internal-api-0" Jan 26 13:00:48 crc kubenswrapper[4881]: I0126 13:00:48.928941 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rg7q9\" (UniqueName: \"kubernetes.io/projected/52c77e6a-c529-4730-b519-66fb42f88ae8-kube-api-access-rg7q9\") pod \"glance-default-internal-api-0\" (UID: \"52c77e6a-c529-4730-b519-66fb42f88ae8\") " pod="openstack/glance-default-internal-api-0" 
Jan 26 13:00:48 crc kubenswrapper[4881]: I0126 13:00:48.956116 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"52c77e6a-c529-4730-b519-66fb42f88ae8\") " pod="openstack/glance-default-internal-api-0"
Jan 26 13:00:49 crc kubenswrapper[4881]: I0126 13:00:49.011150 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0866d3c-259b-4ac0-a8e3-79d9da065d7b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c0866d3c-259b-4ac0-a8e3-79d9da065d7b\") " pod="openstack/glance-default-external-api-0"
Jan 26 13:00:49 crc kubenswrapper[4881]: I0126 13:00:49.011209 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c0866d3c-259b-4ac0-a8e3-79d9da065d7b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c0866d3c-259b-4ac0-a8e3-79d9da065d7b\") " pod="openstack/glance-default-external-api-0"
Jan 26 13:00:49 crc kubenswrapper[4881]: I0126 13:00:49.011241 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0866d3c-259b-4ac0-a8e3-79d9da065d7b-config-data\") pod \"glance-default-external-api-0\" (UID: \"c0866d3c-259b-4ac0-a8e3-79d9da065d7b\") " pod="openstack/glance-default-external-api-0"
Jan 26 13:00:49 crc kubenswrapper[4881]: I0126 13:00:49.011279 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c0866d3c-259b-4ac0-a8e3-79d9da065d7b-scripts\") pod \"glance-default-external-api-0\" (UID: \"c0866d3c-259b-4ac0-a8e3-79d9da065d7b\") " pod="openstack/glance-default-external-api-0"
Jan 26 13:00:49 crc kubenswrapper[4881]: I0126 13:00:49.011334 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"c0866d3c-259b-4ac0-a8e3-79d9da065d7b\") " pod="openstack/glance-default-external-api-0"
Jan 26 13:00:49 crc kubenswrapper[4881]: I0126 13:00:49.011380 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c0866d3c-259b-4ac0-a8e3-79d9da065d7b-logs\") pod \"glance-default-external-api-0\" (UID: \"c0866d3c-259b-4ac0-a8e3-79d9da065d7b\") " pod="openstack/glance-default-external-api-0"
Jan 26 13:00:49 crc kubenswrapper[4881]: I0126 13:00:49.011395 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppb9k\" (UniqueName: \"kubernetes.io/projected/c0866d3c-259b-4ac0-a8e3-79d9da065d7b-kube-api-access-ppb9k\") pod \"glance-default-external-api-0\" (UID: \"c0866d3c-259b-4ac0-a8e3-79d9da065d7b\") " pod="openstack/glance-default-external-api-0"
Jan 26 13:00:49 crc kubenswrapper[4881]: I0126 13:00:49.011427 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c0866d3c-259b-4ac0-a8e3-79d9da065d7b-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"c0866d3c-259b-4ac0-a8e3-79d9da065d7b\") " pod="openstack/glance-default-external-api-0"
Jan 26 13:00:49 crc kubenswrapper[4881]: I0126 13:00:49.112569 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c0866d3c-259b-4ac0-a8e3-79d9da065d7b-logs\") pod \"glance-default-external-api-0\" (UID: \"c0866d3c-259b-4ac0-a8e3-79d9da065d7b\") " pod="openstack/glance-default-external-api-0"
Jan 26 13:00:49 crc kubenswrapper[4881]: I0126 13:00:49.112612 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ppb9k\" (UniqueName: \"kubernetes.io/projected/c0866d3c-259b-4ac0-a8e3-79d9da065d7b-kube-api-access-ppb9k\") pod \"glance-default-external-api-0\" (UID: \"c0866d3c-259b-4ac0-a8e3-79d9da065d7b\") " pod="openstack/glance-default-external-api-0"
Jan 26 13:00:49 crc kubenswrapper[4881]: I0126 13:00:49.112653 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c0866d3c-259b-4ac0-a8e3-79d9da065d7b-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"c0866d3c-259b-4ac0-a8e3-79d9da065d7b\") " pod="openstack/glance-default-external-api-0"
Jan 26 13:00:49 crc kubenswrapper[4881]: I0126 13:00:49.112739 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0866d3c-259b-4ac0-a8e3-79d9da065d7b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c0866d3c-259b-4ac0-a8e3-79d9da065d7b\") " pod="openstack/glance-default-external-api-0"
Jan 26 13:00:49 crc kubenswrapper[4881]: I0126 13:00:49.112763 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c0866d3c-259b-4ac0-a8e3-79d9da065d7b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c0866d3c-259b-4ac0-a8e3-79d9da065d7b\") " pod="openstack/glance-default-external-api-0"
Jan 26 13:00:49 crc kubenswrapper[4881]: I0126 13:00:49.112779 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0866d3c-259b-4ac0-a8e3-79d9da065d7b-config-data\") pod \"glance-default-external-api-0\" (UID: \"c0866d3c-259b-4ac0-a8e3-79d9da065d7b\") " pod="openstack/glance-default-external-api-0"
Jan 26 13:00:49 crc kubenswrapper[4881]: I0126 13:00:49.112806 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c0866d3c-259b-4ac0-a8e3-79d9da065d7b-scripts\") pod \"glance-default-external-api-0\" (UID: \"c0866d3c-259b-4ac0-a8e3-79d9da065d7b\") " pod="openstack/glance-default-external-api-0"
Jan 26 13:00:49 crc kubenswrapper[4881]: I0126 13:00:49.112841 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"c0866d3c-259b-4ac0-a8e3-79d9da065d7b\") " pod="openstack/glance-default-external-api-0"
Jan 26 13:00:49 crc kubenswrapper[4881]: I0126 13:00:49.112994 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c0866d3c-259b-4ac0-a8e3-79d9da065d7b-logs\") pod \"glance-default-external-api-0\" (UID: \"c0866d3c-259b-4ac0-a8e3-79d9da065d7b\") " pod="openstack/glance-default-external-api-0"
Jan 26 13:00:49 crc kubenswrapper[4881]: I0126 13:00:49.113102 4881 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"c0866d3c-259b-4ac0-a8e3-79d9da065d7b\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-external-api-0"
Jan 26 13:00:49 crc kubenswrapper[4881]: I0126 13:00:49.113226 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c0866d3c-259b-4ac0-a8e3-79d9da065d7b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c0866d3c-259b-4ac0-a8e3-79d9da065d7b\") " pod="openstack/glance-default-external-api-0"
Jan 26 13:00:49 crc kubenswrapper[4881]: I0126 13:00:49.118179 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c0866d3c-259b-4ac0-a8e3-79d9da065d7b-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"c0866d3c-259b-4ac0-a8e3-79d9da065d7b\") " pod="openstack/glance-default-external-api-0"
Jan 26 13:00:49 crc kubenswrapper[4881]: I0126 13:00:49.122576 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0866d3c-259b-4ac0-a8e3-79d9da065d7b-config-data\") pod \"glance-default-external-api-0\" (UID: \"c0866d3c-259b-4ac0-a8e3-79d9da065d7b\") " pod="openstack/glance-default-external-api-0"
Jan 26 13:00:49 crc kubenswrapper[4881]: I0126 13:00:49.125060 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0866d3c-259b-4ac0-a8e3-79d9da065d7b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c0866d3c-259b-4ac0-a8e3-79d9da065d7b\") " pod="openstack/glance-default-external-api-0"
Jan 26 13:00:49 crc kubenswrapper[4881]: I0126 13:00:49.125448 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c0866d3c-259b-4ac0-a8e3-79d9da065d7b-scripts\") pod \"glance-default-external-api-0\" (UID: \"c0866d3c-259b-4ac0-a8e3-79d9da065d7b\") " pod="openstack/glance-default-external-api-0"
Jan 26 13:00:49 crc kubenswrapper[4881]: I0126 13:00:49.134185 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ppb9k\" (UniqueName: \"kubernetes.io/projected/c0866d3c-259b-4ac0-a8e3-79d9da065d7b-kube-api-access-ppb9k\") pod \"glance-default-external-api-0\" (UID: \"c0866d3c-259b-4ac0-a8e3-79d9da065d7b\") " pod="openstack/glance-default-external-api-0"
Jan 26 13:00:49 crc kubenswrapper[4881]: I0126 13:00:49.145687 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"c0866d3c-259b-4ac0-a8e3-79d9da065d7b\") " pod="openstack/glance-default-external-api-0"
Jan 26 13:00:49 crc kubenswrapper[4881]: I0126 13:00:49.159954 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 26 13:00:49 crc kubenswrapper[4881]: I0126 13:00:49.176931 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 26 13:00:49 crc kubenswrapper[4881]: I0126 13:00:49.669250 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"]
Jan 26 13:00:49 crc kubenswrapper[4881]: I0126 13:00:49.670425 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" podUID="94aabe79-a699-4980-a344-43c629e34627" containerName="watcher-api" containerID="cri-o://fab7c291153a954de871a493b4c43864b62ad49bb5cf5a924a01b96e9f16c4a3" gracePeriod=30
Jan 26 13:00:49 crc kubenswrapper[4881]: I0126 13:00:49.671612 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" podUID="94aabe79-a699-4980-a344-43c629e34627" containerName="watcher-api-log" containerID="cri-o://90918704e3a8ac596751ec6087be0c5e3cdd9f52ee6f366afb89c77c1e4cc088" gracePeriod=30
Jan 26 13:00:49 crc kubenswrapper[4881]: I0126 13:00:49.811881 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 26 13:00:49 crc kubenswrapper[4881]: W0126 13:00:49.824877 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod52c77e6a_c529_4730_b519_66fb42f88ae8.slice/crio-40e680dd312d82ab8d1dfdaf4c94b36b012d9884f00abc97f1cc532d1b1efad5 WatchSource:0}: Error finding container 40e680dd312d82ab8d1dfdaf4c94b36b012d9884f00abc97f1cc532d1b1efad5: Status 404 returned error can't find the container with id 40e680dd312d82ab8d1dfdaf4c94b36b012d9884f00abc97f1cc532d1b1efad5
Jan 26 13:00:49 crc kubenswrapper[4881]: I0126 13:00:49.892794 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 26 13:00:49 crc kubenswrapper[4881]: W0126 13:00:49.905558 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc0866d3c_259b_4ac0_a8e3_79d9da065d7b.slice/crio-1fae2456b29c42506310e0e66e9991fac9ccce2ab6840a5264807124e8c4e3f3 WatchSource:0}: Error finding container 1fae2456b29c42506310e0e66e9991fac9ccce2ab6840a5264807124e8c4e3f3: Status 404 returned error can't find the container with id 1fae2456b29c42506310e0e66e9991fac9ccce2ab6840a5264807124e8c4e3f3
Jan 26 13:00:50 crc kubenswrapper[4881]: I0126 13:00:50.097083 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26a462c7-dad6-4b3a-8cb2-30db4f6e6c7c" path="/var/lib/kubelet/pods/26a462c7-dad6-4b3a-8cb2-30db4f6e6c7c/volumes"
Jan 26 13:00:50 crc kubenswrapper[4881]: I0126 13:00:50.098477 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e89eb411-736e-4df9-894f-f48d783a4b01" path="/var/lib/kubelet/pods/e89eb411-736e-4df9-894f-f48d783a4b01/volumes"
Jan 26 13:00:50 crc kubenswrapper[4881]: I0126 13:00:50.233997 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0"
Jan 26 13:00:50 crc kubenswrapper[4881]: I0126 13:00:50.234041 4881 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/watcher-decision-engine-0"
Jan 26 13:00:50 crc kubenswrapper[4881]: I0126 13:00:50.234760 4881 scope.go:117] "RemoveContainer" containerID="1155fe2ececa788009d295d6b3f0350b265a62f45095f8a599dcda64dc323a0e"
Jan 26 13:00:50 crc kubenswrapper[4881]: I0126 13:00:50.543706 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="94aabe79-a699-4980-a344-43c629e34627"
containerName="watcher-api-log" probeResult="failure" output="Get \"http://10.217.0.171:9322/\": read tcp 10.217.0.2:56118->10.217.0.171:9322: read: connection reset by peer" Jan 26 13:00:50 crc kubenswrapper[4881]: I0126 13:00:50.543718 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="94aabe79-a699-4980-a344-43c629e34627" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.171:9322/\": read tcp 10.217.0.2:56104->10.217.0.171:9322: read: connection reset by peer" Jan 26 13:00:50 crc kubenswrapper[4881]: I0126 13:00:50.733308 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"52c77e6a-c529-4730-b519-66fb42f88ae8","Type":"ContainerStarted","Data":"be720bbf4a81b5b99c989f5e60b74962c38a15117f578a4fdee47725647ca9f5"} Jan 26 13:00:50 crc kubenswrapper[4881]: I0126 13:00:50.733600 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"52c77e6a-c529-4730-b519-66fb42f88ae8","Type":"ContainerStarted","Data":"40e680dd312d82ab8d1dfdaf4c94b36b012d9884f00abc97f1cc532d1b1efad5"} Jan 26 13:00:50 crc kubenswrapper[4881]: I0126 13:00:50.740601 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"25d5f80d-b0a4-4b8f-a8b7-12f0b7296801","Type":"ContainerStarted","Data":"ddc6a54278ec576b876902cfe5b2c98d0a53ca4c5bf2735e778be9809870e4df"} Jan 26 13:00:50 crc kubenswrapper[4881]: I0126 13:00:50.747176 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c0866d3c-259b-4ac0-a8e3-79d9da065d7b","Type":"ContainerStarted","Data":"e83d5a9cb8bdfbd6e365b8ea46f86ac0834611212f168b77f75938bb384bd2b0"} Jan 26 13:00:50 crc kubenswrapper[4881]: I0126 13:00:50.747229 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c0866d3c-259b-4ac0-a8e3-79d9da065d7b","Type":"ContainerStarted","Data":"1fae2456b29c42506310e0e66e9991fac9ccce2ab6840a5264807124e8c4e3f3"} Jan 26 13:00:50 crc kubenswrapper[4881]: I0126 13:00:50.750154 4881 generic.go:334] "Generic (PLEG): container finished" podID="94aabe79-a699-4980-a344-43c629e34627" containerID="fab7c291153a954de871a493b4c43864b62ad49bb5cf5a924a01b96e9f16c4a3" exitCode=0 Jan 26 13:00:50 crc kubenswrapper[4881]: I0126 13:00:50.750190 4881 generic.go:334] "Generic (PLEG): container finished" podID="94aabe79-a699-4980-a344-43c629e34627" containerID="90918704e3a8ac596751ec6087be0c5e3cdd9f52ee6f366afb89c77c1e4cc088" exitCode=143 Jan 26 13:00:50 crc kubenswrapper[4881]: I0126 13:00:50.750205 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"94aabe79-a699-4980-a344-43c629e34627","Type":"ContainerDied","Data":"fab7c291153a954de871a493b4c43864b62ad49bb5cf5a924a01b96e9f16c4a3"} Jan 26 13:00:50 crc kubenswrapper[4881]: I0126 13:00:50.750224 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"94aabe79-a699-4980-a344-43c629e34627","Type":"ContainerDied","Data":"90918704e3a8ac596751ec6087be0c5e3cdd9f52ee6f366afb89c77c1e4cc088"} Jan 26 13:00:51 crc kubenswrapper[4881]: I0126 13:00:51.018093 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Jan 26 13:00:51 crc kubenswrapper[4881]: I0126 13:00:51.157751 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-59wsd\" (UniqueName: \"kubernetes.io/projected/94aabe79-a699-4980-a344-43c629e34627-kube-api-access-59wsd\") pod \"94aabe79-a699-4980-a344-43c629e34627\" (UID: \"94aabe79-a699-4980-a344-43c629e34627\") " Jan 26 13:00:51 crc kubenswrapper[4881]: I0126 13:00:51.157859 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/94aabe79-a699-4980-a344-43c629e34627-custom-prometheus-ca\") pod \"94aabe79-a699-4980-a344-43c629e34627\" (UID: \"94aabe79-a699-4980-a344-43c629e34627\") " Jan 26 13:00:51 crc kubenswrapper[4881]: I0126 13:00:51.157930 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94aabe79-a699-4980-a344-43c629e34627-combined-ca-bundle\") pod \"94aabe79-a699-4980-a344-43c629e34627\" (UID: \"94aabe79-a699-4980-a344-43c629e34627\") " Jan 26 13:00:51 crc kubenswrapper[4881]: I0126 13:00:51.157994 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94aabe79-a699-4980-a344-43c629e34627-config-data\") pod \"94aabe79-a699-4980-a344-43c629e34627\" (UID: \"94aabe79-a699-4980-a344-43c629e34627\") " Jan 26 13:00:51 crc kubenswrapper[4881]: I0126 13:00:51.158066 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/94aabe79-a699-4980-a344-43c629e34627-logs\") pod \"94aabe79-a699-4980-a344-43c629e34627\" (UID: \"94aabe79-a699-4980-a344-43c629e34627\") " Jan 26 13:00:51 crc kubenswrapper[4881]: I0126 13:00:51.168943 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94aabe79-a699-4980-a344-43c629e34627-logs" (OuterVolumeSpecName: "logs") pod "94aabe79-a699-4980-a344-43c629e34627" (UID: "94aabe79-a699-4980-a344-43c629e34627"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:00:51 crc kubenswrapper[4881]: I0126 13:00:51.175872 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94aabe79-a699-4980-a344-43c629e34627-kube-api-access-59wsd" (OuterVolumeSpecName: "kube-api-access-59wsd") pod "94aabe79-a699-4980-a344-43c629e34627" (UID: "94aabe79-a699-4980-a344-43c629e34627"). InnerVolumeSpecName "kube-api-access-59wsd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:00:51 crc kubenswrapper[4881]: I0126 13:00:51.194405 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94aabe79-a699-4980-a344-43c629e34627-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "94aabe79-a699-4980-a344-43c629e34627" (UID: "94aabe79-a699-4980-a344-43c629e34627"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:00:51 crc kubenswrapper[4881]: I0126 13:00:51.234065 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94aabe79-a699-4980-a344-43c629e34627-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "94aabe79-a699-4980-a344-43c629e34627" (UID: "94aabe79-a699-4980-a344-43c629e34627"). InnerVolumeSpecName "custom-prometheus-ca". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:00:51 crc kubenswrapper[4881]: I0126 13:00:51.243723 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94aabe79-a699-4980-a344-43c629e34627-config-data" (OuterVolumeSpecName: "config-data") pod "94aabe79-a699-4980-a344-43c629e34627" (UID: "94aabe79-a699-4980-a344-43c629e34627"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:00:51 crc kubenswrapper[4881]: I0126 13:00:51.265495 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-59wsd\" (UniqueName: \"kubernetes.io/projected/94aabe79-a699-4980-a344-43c629e34627-kube-api-access-59wsd\") on node \"crc\" DevicePath \"\"" Jan 26 13:00:51 crc kubenswrapper[4881]: I0126 13:00:51.265538 4881 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/94aabe79-a699-4980-a344-43c629e34627-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Jan 26 13:00:51 crc kubenswrapper[4881]: I0126 13:00:51.265548 4881 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94aabe79-a699-4980-a344-43c629e34627-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 13:00:51 crc kubenswrapper[4881]: I0126 13:00:51.265558 4881 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94aabe79-a699-4980-a344-43c629e34627-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 13:00:51 crc kubenswrapper[4881]: I0126 13:00:51.265567 4881 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/94aabe79-a699-4980-a344-43c629e34627-logs\") on node \"crc\" DevicePath \"\"" Jan 26 13:00:51 crc kubenswrapper[4881]: I0126 13:00:51.332319 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5d675956bc-pd9zd" Jan 26 13:00:51 crc kubenswrapper[4881]: I0126 13:00:51.427743 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-76ddf7d98c-5pqnx"] Jan 26 13:00:51 crc kubenswrapper[4881]: I0126 13:00:51.428541 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-76ddf7d98c-5pqnx" podUID="0172226c-65c1-4419-a039-aa7a84642c0e" containerName="dnsmasq-dns" containerID="cri-o://684049ef9e9657561d9cc87f14ca15f1202193eec126e530c95736d286039007" gracePeriod=10 Jan 26 13:00:51 crc kubenswrapper[4881]: I0126 13:00:51.765000 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"94aabe79-a699-4980-a344-43c629e34627","Type":"ContainerDied","Data":"3066037a5c9feb20b8b56a184c12c2fb962dfbba6fe3cb0e754e8b429e459a57"} Jan 26 13:00:51 crc kubenswrapper[4881]: I0126 13:00:51.765264 4881 scope.go:117] "RemoveContainer" containerID="fab7c291153a954de871a493b4c43864b62ad49bb5cf5a924a01b96e9f16c4a3" Jan 26 13:00:51 crc kubenswrapper[4881]: I0126 13:00:51.765377 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Jan 26 13:00:51 crc kubenswrapper[4881]: I0126 13:00:51.790872 4881 generic.go:334] "Generic (PLEG): container finished" podID="0172226c-65c1-4419-a039-aa7a84642c0e" containerID="684049ef9e9657561d9cc87f14ca15f1202193eec126e530c95736d286039007" exitCode=0 Jan 26 13:00:51 crc kubenswrapper[4881]: I0126 13:00:51.790947 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76ddf7d98c-5pqnx" event={"ID":"0172226c-65c1-4419-a039-aa7a84642c0e","Type":"ContainerDied","Data":"684049ef9e9657561d9cc87f14ca15f1202193eec126e530c95736d286039007"} Jan 26 13:00:51 crc kubenswrapper[4881]: I0126 13:00:51.819461 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"52c77e6a-c529-4730-b519-66fb42f88ae8","Type":"ContainerStarted","Data":"b1b39d8b2f35ca9389cf49410a2483a78ac5beb27d9d58770e17a9eea4a86958"} Jan 26 13:00:51 crc kubenswrapper[4881]: I0126 13:00:51.841651 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c0866d3c-259b-4ac0-a8e3-79d9da065d7b","Type":"ContainerStarted","Data":"26da4adceccb321e69d4fa0372331e812903bba7f08ebf3026b578a05be23636"} Jan 26 13:00:51 crc kubenswrapper[4881]: I0126 13:00:51.869575 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"] Jan 26 13:00:51 crc kubenswrapper[4881]: I0126 13:00:51.880681 4881 scope.go:117] "RemoveContainer" containerID="90918704e3a8ac596751ec6087be0c5e3cdd9f52ee6f366afb89c77c1e4cc088" Jan 26 13:00:51 crc kubenswrapper[4881]: I0126 13:00:51.889253 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-api-0"] Jan 26 13:00:51 crc kubenswrapper[4881]: I0126 13:00:51.897761 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-api-0"] Jan 26 13:00:51 crc kubenswrapper[4881]: E0126 13:00:51.898107 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94aabe79-a699-4980-a344-43c629e34627" containerName="watcher-api-log" Jan 26 13:00:51 crc kubenswrapper[4881]: I0126 13:00:51.898123 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="94aabe79-a699-4980-a344-43c629e34627" containerName="watcher-api-log" Jan 26 13:00:51 crc kubenswrapper[4881]: E0126 13:00:51.898138 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94aabe79-a699-4980-a344-43c629e34627" containerName="watcher-api" Jan 26 13:00:51 crc kubenswrapper[4881]: I0126 13:00:51.898144 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="94aabe79-a699-4980-a344-43c629e34627" containerName="watcher-api" Jan 26 13:00:51 crc kubenswrapper[4881]: I0126 13:00:51.898335 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="94aabe79-a699-4980-a344-43c629e34627" containerName="watcher-api-log" Jan 26 13:00:51 crc kubenswrapper[4881]: I0126 13:00:51.898358 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="94aabe79-a699-4980-a344-43c629e34627" containerName="watcher-api" Jan 26 13:00:51 crc kubenswrapper[4881]: I0126 13:00:51.899288 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Jan 26 13:00:51 crc kubenswrapper[4881]: I0126 13:00:51.901586 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.9015782 podStartE2EDuration="3.9015782s" podCreationTimestamp="2026-01-26 13:00:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 13:00:51.862381573 +0000 UTC m=+1524.341691599" watchObservedRunningTime="2026-01-26 13:00:51.9015782 +0000 UTC m=+1524.380888226" Jan 26 13:00:51 crc kubenswrapper[4881]: I0126 13:00:51.902346 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-watcher-public-svc" Jan 26 13:00:51 crc kubenswrapper[4881]: I0126 13:00:51.902594 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-watcher-internal-svc" Jan 26 13:00:51 crc kubenswrapper[4881]: I0126 13:00:51.903015 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-api-config-data" Jan 26 13:00:51 crc kubenswrapper[4881]: I0126 13:00:51.914420 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Jan 26 13:00:51 crc kubenswrapper[4881]: I0126 13:00:51.935234 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.935212604 podStartE2EDuration="3.935212604s" podCreationTimestamp="2026-01-26 13:00:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 13:00:51.885775098 +0000 UTC m=+1524.365085124" watchObservedRunningTime="2026-01-26 13:00:51.935212604 +0000 UTC m=+1524.414522640" Jan 26 13:00:52 crc kubenswrapper[4881]: I0126 13:00:51.998679 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bacf9a45-b73a-41bd-9c12-eb112ddcfaf2-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"bacf9a45-b73a-41bd-9c12-eb112ddcfaf2\") " pod="openstack/watcher-api-0" Jan 26 13:00:52 crc kubenswrapper[4881]: I0126 13:00:51.999374 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bacf9a45-b73a-41bd-9c12-eb112ddcfaf2-public-tls-certs\") pod \"watcher-api-0\" (UID: \"bacf9a45-b73a-41bd-9c12-eb112ddcfaf2\") " pod="openstack/watcher-api-0" Jan 26 13:00:52 crc kubenswrapper[4881]: I0126 13:00:51.999439 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bacf9a45-b73a-41bd-9c12-eb112ddcfaf2-logs\") pod \"watcher-api-0\" (UID: \"bacf9a45-b73a-41bd-9c12-eb112ddcfaf2\") " pod="openstack/watcher-api-0" Jan 26 13:00:52 crc kubenswrapper[4881]: I0126 13:00:51.999484 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bacf9a45-b73a-41bd-9c12-eb112ddcfaf2-config-data\") pod \"watcher-api-0\" (UID: \"bacf9a45-b73a-41bd-9c12-eb112ddcfaf2\") " pod="openstack/watcher-api-0" Jan 26 13:00:52 crc kubenswrapper[4881]: I0126 13:00:52.003182 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/bacf9a45-b73a-41bd-9c12-eb112ddcfaf2-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"bacf9a45-b73a-41bd-9c12-eb112ddcfaf2\") " pod="openstack/watcher-api-0" Jan 26 13:00:52 crc kubenswrapper[4881]: I0126 13:00:52.003398 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c46qz\" (UniqueName: \"kubernetes.io/projected/bacf9a45-b73a-41bd-9c12-eb112ddcfaf2-kube-api-access-c46qz\") pod \"watcher-api-0\" (UID: \"bacf9a45-b73a-41bd-9c12-eb112ddcfaf2\") " pod="openstack/watcher-api-0" Jan 26 13:00:52 crc kubenswrapper[4881]: I0126 13:00:52.003425 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/bacf9a45-b73a-41bd-9c12-eb112ddcfaf2-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"bacf9a45-b73a-41bd-9c12-eb112ddcfaf2\") " pod="openstack/watcher-api-0" Jan 26 13:00:52 crc kubenswrapper[4881]: I0126 13:00:52.106739 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bacf9a45-b73a-41bd-9c12-eb112ddcfaf2-public-tls-certs\") pod \"watcher-api-0\" (UID: \"bacf9a45-b73a-41bd-9c12-eb112ddcfaf2\") " pod="openstack/watcher-api-0" Jan 26 13:00:52 crc kubenswrapper[4881]: I0126 13:00:52.106790 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bacf9a45-b73a-41bd-9c12-eb112ddcfaf2-logs\") pod \"watcher-api-0\" (UID: \"bacf9a45-b73a-41bd-9c12-eb112ddcfaf2\") " pod="openstack/watcher-api-0" Jan 26 13:00:52 crc kubenswrapper[4881]: I0126 13:00:52.106838 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bacf9a45-b73a-41bd-9c12-eb112ddcfaf2-config-data\") pod \"watcher-api-0\" (UID: \"bacf9a45-b73a-41bd-9c12-eb112ddcfaf2\") " pod="openstack/watcher-api-0" Jan 26 13:00:52 crc kubenswrapper[4881]: I0126 13:00:52.106879 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bacf9a45-b73a-41bd-9c12-eb112ddcfaf2-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"bacf9a45-b73a-41bd-9c12-eb112ddcfaf2\") " pod="openstack/watcher-api-0" Jan 26 13:00:52 crc kubenswrapper[4881]: I0126 13:00:52.106947 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c46qz\" (UniqueName: \"kubernetes.io/projected/bacf9a45-b73a-41bd-9c12-eb112ddcfaf2-kube-api-access-c46qz\") pod \"watcher-api-0\" (UID: \"bacf9a45-b73a-41bd-9c12-eb112ddcfaf2\") " pod="openstack/watcher-api-0" Jan 26 13:00:52 crc kubenswrapper[4881]: I0126 13:00:52.106969 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/bacf9a45-b73a-41bd-9c12-eb112ddcfaf2-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"bacf9a45-b73a-41bd-9c12-eb112ddcfaf2\") " pod="openstack/watcher-api-0" Jan 26 13:00:52 crc kubenswrapper[4881]: I0126 13:00:52.107093 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bacf9a45-b73a-41bd-9c12-eb112ddcfaf2-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"bacf9a45-b73a-41bd-9c12-eb112ddcfaf2\") " pod="openstack/watcher-api-0" Jan 26 13:00:52 crc kubenswrapper[4881]: I0126 13:00:52.107408 4881 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bacf9a45-b73a-41bd-9c12-eb112ddcfaf2-logs\") pod \"watcher-api-0\" (UID: \"bacf9a45-b73a-41bd-9c12-eb112ddcfaf2\") " pod="openstack/watcher-api-0" Jan 26 13:00:52 crc kubenswrapper[4881]: I0126 13:00:52.145190 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bacf9a45-b73a-41bd-9c12-eb112ddcfaf2-config-data\") pod \"watcher-api-0\" (UID: \"bacf9a45-b73a-41bd-9c12-eb112ddcfaf2\") " pod="openstack/watcher-api-0" Jan 26 13:00:52 crc kubenswrapper[4881]: I0126 13:00:52.145487 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bacf9a45-b73a-41bd-9c12-eb112ddcfaf2-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"bacf9a45-b73a-41bd-9c12-eb112ddcfaf2\") " pod="openstack/watcher-api-0" Jan 26 13:00:52 crc kubenswrapper[4881]: I0126 13:00:52.147727 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bacf9a45-b73a-41bd-9c12-eb112ddcfaf2-public-tls-certs\") pod \"watcher-api-0\" (UID: \"bacf9a45-b73a-41bd-9c12-eb112ddcfaf2\") " pod="openstack/watcher-api-0" Jan 26 13:00:52 crc kubenswrapper[4881]: I0126 13:00:52.153787 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bacf9a45-b73a-41bd-9c12-eb112ddcfaf2-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"bacf9a45-b73a-41bd-9c12-eb112ddcfaf2\") " pod="openstack/watcher-api-0" Jan 26 13:00:52 crc kubenswrapper[4881]: I0126 13:00:52.169407 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c46qz\" (UniqueName: \"kubernetes.io/projected/bacf9a45-b73a-41bd-9c12-eb112ddcfaf2-kube-api-access-c46qz\") pod \"watcher-api-0\" (UID: \"bacf9a45-b73a-41bd-9c12-eb112ddcfaf2\") " pod="openstack/watcher-api-0" Jan 26 13:00:52 crc kubenswrapper[4881]: I0126 13:00:52.169820 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/bacf9a45-b73a-41bd-9c12-eb112ddcfaf2-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"bacf9a45-b73a-41bd-9c12-eb112ddcfaf2\") " pod="openstack/watcher-api-0" Jan 26 13:00:52 crc kubenswrapper[4881]: I0126 13:00:52.176706 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94aabe79-a699-4980-a344-43c629e34627" path="/var/lib/kubelet/pods/94aabe79-a699-4980-a344-43c629e34627/volumes" Jan 26 13:00:52 crc kubenswrapper[4881]: I0126 13:00:52.256939 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Jan 26 13:00:52 crc kubenswrapper[4881]: I0126 13:00:52.263410 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-76ddf7d98c-5pqnx" Jan 26 13:00:52 crc kubenswrapper[4881]: I0126 13:00:52.321113 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0172226c-65c1-4419-a039-aa7a84642c0e-ovsdbserver-sb\") pod \"0172226c-65c1-4419-a039-aa7a84642c0e\" (UID: \"0172226c-65c1-4419-a039-aa7a84642c0e\") " Jan 26 13:00:52 crc kubenswrapper[4881]: I0126 13:00:52.321191 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0172226c-65c1-4419-a039-aa7a84642c0e-dns-swift-storage-0\") pod \"0172226c-65c1-4419-a039-aa7a84642c0e\" (UID: \"0172226c-65c1-4419-a039-aa7a84642c0e\") " Jan 26 13:00:52 crc kubenswrapper[4881]: I0126 13:00:52.321218 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0172226c-65c1-4419-a039-aa7a84642c0e-config\") pod \"0172226c-65c1-4419-a039-aa7a84642c0e\" (UID: \"0172226c-65c1-4419-a039-aa7a84642c0e\") " Jan 26 13:00:52 crc kubenswrapper[4881]: I0126 13:00:52.321281 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0172226c-65c1-4419-a039-aa7a84642c0e-ovsdbserver-nb\") pod \"0172226c-65c1-4419-a039-aa7a84642c0e\" (UID: \"0172226c-65c1-4419-a039-aa7a84642c0e\") " Jan 26 13:00:52 crc kubenswrapper[4881]: I0126 13:00:52.321304 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0172226c-65c1-4419-a039-aa7a84642c0e-dns-svc\") pod \"0172226c-65c1-4419-a039-aa7a84642c0e\" (UID: \"0172226c-65c1-4419-a039-aa7a84642c0e\") " Jan 26 13:00:52 crc kubenswrapper[4881]: I0126 13:00:52.321400 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vhhq9\" (UniqueName: \"kubernetes.io/projected/0172226c-65c1-4419-a039-aa7a84642c0e-kube-api-access-vhhq9\") pod \"0172226c-65c1-4419-a039-aa7a84642c0e\" (UID: \"0172226c-65c1-4419-a039-aa7a84642c0e\") " Jan 26 13:00:52 crc kubenswrapper[4881]: I0126 13:00:52.349825 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0172226c-65c1-4419-a039-aa7a84642c0e-kube-api-access-vhhq9" (OuterVolumeSpecName: "kube-api-access-vhhq9") pod "0172226c-65c1-4419-a039-aa7a84642c0e" (UID: "0172226c-65c1-4419-a039-aa7a84642c0e"). InnerVolumeSpecName "kube-api-access-vhhq9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:00:52 crc kubenswrapper[4881]: I0126 13:00:52.388913 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0172226c-65c1-4419-a039-aa7a84642c0e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "0172226c-65c1-4419-a039-aa7a84642c0e" (UID: "0172226c-65c1-4419-a039-aa7a84642c0e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:00:52 crc kubenswrapper[4881]: I0126 13:00:52.393379 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0172226c-65c1-4419-a039-aa7a84642c0e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0172226c-65c1-4419-a039-aa7a84642c0e" (UID: "0172226c-65c1-4419-a039-aa7a84642c0e"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:00:52 crc kubenswrapper[4881]: I0126 13:00:52.411351 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0172226c-65c1-4419-a039-aa7a84642c0e-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "0172226c-65c1-4419-a039-aa7a84642c0e" (UID: "0172226c-65c1-4419-a039-aa7a84642c0e"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:00:52 crc kubenswrapper[4881]: I0126 13:00:52.419366 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0172226c-65c1-4419-a039-aa7a84642c0e-config" (OuterVolumeSpecName: "config") pod "0172226c-65c1-4419-a039-aa7a84642c0e" (UID: "0172226c-65c1-4419-a039-aa7a84642c0e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:00:52 crc kubenswrapper[4881]: I0126 13:00:52.423731 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0172226c-65c1-4419-a039-aa7a84642c0e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "0172226c-65c1-4419-a039-aa7a84642c0e" (UID: "0172226c-65c1-4419-a039-aa7a84642c0e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:00:52 crc kubenswrapper[4881]: I0126 13:00:52.423769 4881 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0172226c-65c1-4419-a039-aa7a84642c0e-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 13:00:52 crc kubenswrapper[4881]: I0126 13:00:52.423791 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vhhq9\" (UniqueName: \"kubernetes.io/projected/0172226c-65c1-4419-a039-aa7a84642c0e-kube-api-access-vhhq9\") on node \"crc\" DevicePath \"\"" Jan 26 13:00:52 crc kubenswrapper[4881]: I0126 13:00:52.423802 4881 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0172226c-65c1-4419-a039-aa7a84642c0e-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 26 13:00:52 crc kubenswrapper[4881]: I0126 13:00:52.423811 4881 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0172226c-65c1-4419-a039-aa7a84642c0e-config\") on node \"crc\" DevicePath \"\"" Jan 26 13:00:52 crc kubenswrapper[4881]: I0126 13:00:52.423818 4881 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0172226c-65c1-4419-a039-aa7a84642c0e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 13:00:52 crc kubenswrapper[4881]: I0126 13:00:52.525020 4881 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0172226c-65c1-4419-a039-aa7a84642c0e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 13:00:52 crc kubenswrapper[4881]: I0126 13:00:52.778671 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Jan 26 13:00:52 crc kubenswrapper[4881]: W0126 13:00:52.790926 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbacf9a45_b73a_41bd_9c12_eb112ddcfaf2.slice/crio-f76ef9a530512c1f590d78643ce1cc22823c76de638fc58c36a0677f05edb90a WatchSource:0}: Error finding container f76ef9a530512c1f590d78643ce1cc22823c76de638fc58c36a0677f05edb90a: Status 404 
returned error can't find the container with id f76ef9a530512c1f590d78643ce1cc22823c76de638fc58c36a0677f05edb90a Jan 26 13:00:52 crc kubenswrapper[4881]: I0126 13:00:52.863246 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"bacf9a45-b73a-41bd-9c12-eb112ddcfaf2","Type":"ContainerStarted","Data":"f76ef9a530512c1f590d78643ce1cc22823c76de638fc58c36a0677f05edb90a"} Jan 26 13:00:52 crc kubenswrapper[4881]: I0126 13:00:52.865309 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76ddf7d98c-5pqnx" event={"ID":"0172226c-65c1-4419-a039-aa7a84642c0e","Type":"ContainerDied","Data":"dac955abb9be3fa1e11a2b83cc7991fac0fe6f9197065aa06d2f05634b262dd7"} Jan 26 13:00:52 crc kubenswrapper[4881]: I0126 13:00:52.865325 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-76ddf7d98c-5pqnx" Jan 26 13:00:52 crc kubenswrapper[4881]: I0126 13:00:52.865396 4881 scope.go:117] "RemoveContainer" containerID="684049ef9e9657561d9cc87f14ca15f1202193eec126e530c95736d286039007" Jan 26 13:00:52 crc kubenswrapper[4881]: I0126 13:00:52.915230 4881 scope.go:117] "RemoveContainer" containerID="4d8d10a4f5c17d2d83795b434692751057de75a9911ddf19e2aafebd038d1929" Jan 26 13:00:52 crc kubenswrapper[4881]: I0126 13:00:52.920933 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-76ddf7d98c-5pqnx"] Jan 26 13:00:52 crc kubenswrapper[4881]: I0126 13:00:52.931485 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-76ddf7d98c-5pqnx"] Jan 26 13:00:53 crc kubenswrapper[4881]: I0126 13:00:53.376052 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 26 13:00:53 crc kubenswrapper[4881]: I0126 13:00:53.880137 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"bacf9a45-b73a-41bd-9c12-eb112ddcfaf2","Type":"ContainerStarted","Data":"63302237eee7c523bfc683e521e2efe8b1ee422a3680c1355373f7d3ed11faf2"} Jan 26 13:00:53 crc kubenswrapper[4881]: I0126 13:00:53.880179 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"bacf9a45-b73a-41bd-9c12-eb112ddcfaf2","Type":"ContainerStarted","Data":"2e9be3b82c19a7a4ed106f0966fd1c3dd30eb3e17f94fbe681fa014f97c2267b"} Jan 26 13:00:53 crc kubenswrapper[4881]: I0126 13:00:53.880933 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Jan 26 13:00:53 crc kubenswrapper[4881]: I0126 13:00:53.905612 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-api-0" podStartSLOduration=2.905591028 podStartE2EDuration="2.905591028s" podCreationTimestamp="2026-01-26 13:00:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 13:00:53.905480216 +0000 UTC m=+1526.384790232" watchObservedRunningTime="2026-01-26 13:00:53.905591028 +0000 UTC m=+1526.384901054" Jan 26 13:00:54 crc kubenswrapper[4881]: I0126 13:00:54.091853 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0172226c-65c1-4419-a039-aa7a84642c0e" path="/var/lib/kubelet/pods/0172226c-65c1-4419-a039-aa7a84642c0e/volumes" Jan 26 13:00:55 crc kubenswrapper[4881]: I0126 13:00:55.503406 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-55f986558f-qqwqs"] Jan 26 13:00:55 crc kubenswrapper[4881]: E0126 
13:00:55.504011 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0172226c-65c1-4419-a039-aa7a84642c0e" containerName="dnsmasq-dns" Jan 26 13:00:55 crc kubenswrapper[4881]: I0126 13:00:55.504023 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="0172226c-65c1-4419-a039-aa7a84642c0e" containerName="dnsmasq-dns" Jan 26 13:00:55 crc kubenswrapper[4881]: E0126 13:00:55.504045 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0172226c-65c1-4419-a039-aa7a84642c0e" containerName="init" Jan 26 13:00:55 crc kubenswrapper[4881]: I0126 13:00:55.504051 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="0172226c-65c1-4419-a039-aa7a84642c0e" containerName="init" Jan 26 13:00:55 crc kubenswrapper[4881]: I0126 13:00:55.504209 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="0172226c-65c1-4419-a039-aa7a84642c0e" containerName="dnsmasq-dns" Jan 26 13:00:55 crc kubenswrapper[4881]: I0126 13:00:55.505170 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-55f986558f-qqwqs" Jan 26 13:00:55 crc kubenswrapper[4881]: I0126 13:00:55.506981 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Jan 26 13:00:55 crc kubenswrapper[4881]: I0126 13:00:55.507376 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Jan 26 13:00:55 crc kubenswrapper[4881]: I0126 13:00:55.507852 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 26 13:00:55 crc kubenswrapper[4881]: I0126 13:00:55.538224 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-55f986558f-qqwqs"] Jan 26 13:00:55 crc kubenswrapper[4881]: I0126 13:00:55.586986 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4b3ea251-a4e4-4e4d-a21f-a239f80690e1-run-httpd\") pod \"swift-proxy-55f986558f-qqwqs\" (UID: \"4b3ea251-a4e4-4e4d-a21f-a239f80690e1\") " pod="openstack/swift-proxy-55f986558f-qqwqs" Jan 26 13:00:55 crc kubenswrapper[4881]: I0126 13:00:55.587068 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79jxh\" (UniqueName: \"kubernetes.io/projected/4b3ea251-a4e4-4e4d-a21f-a239f80690e1-kube-api-access-79jxh\") pod \"swift-proxy-55f986558f-qqwqs\" (UID: \"4b3ea251-a4e4-4e4d-a21f-a239f80690e1\") " pod="openstack/swift-proxy-55f986558f-qqwqs" Jan 26 13:00:55 crc kubenswrapper[4881]: I0126 13:00:55.587100 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b3ea251-a4e4-4e4d-a21f-a239f80690e1-public-tls-certs\") pod \"swift-proxy-55f986558f-qqwqs\" (UID: \"4b3ea251-a4e4-4e4d-a21f-a239f80690e1\") " pod="openstack/swift-proxy-55f986558f-qqwqs" Jan 26 13:00:55 crc kubenswrapper[4881]: I0126 13:00:55.587129 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b3ea251-a4e4-4e4d-a21f-a239f80690e1-internal-tls-certs\") pod \"swift-proxy-55f986558f-qqwqs\" (UID: \"4b3ea251-a4e4-4e4d-a21f-a239f80690e1\") " pod="openstack/swift-proxy-55f986558f-qqwqs" Jan 26 13:00:55 crc kubenswrapper[4881]: I0126 13:00:55.587147 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4b3ea251-a4e4-4e4d-a21f-a239f80690e1-etc-swift\") pod \"swift-proxy-55f986558f-qqwqs\" (UID: \"4b3ea251-a4e4-4e4d-a21f-a239f80690e1\") " pod="openstack/swift-proxy-55f986558f-qqwqs" Jan 26 13:00:55 crc kubenswrapper[4881]: I0126 13:00:55.587162 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b3ea251-a4e4-4e4d-a21f-a239f80690e1-combined-ca-bundle\") pod \"swift-proxy-55f986558f-qqwqs\" (UID: \"4b3ea251-a4e4-4e4d-a21f-a239f80690e1\") " pod="openstack/swift-proxy-55f986558f-qqwqs" Jan 26 13:00:55 crc kubenswrapper[4881]: I0126 13:00:55.587200 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b3ea251-a4e4-4e4d-a21f-a239f80690e1-config-data\") pod \"swift-proxy-55f986558f-qqwqs\" (UID: \"4b3ea251-a4e4-4e4d-a21f-a239f80690e1\") " pod="openstack/swift-proxy-55f986558f-qqwqs" Jan 26 13:00:55 crc kubenswrapper[4881]: I0126 13:00:55.587232 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4b3ea251-a4e4-4e4d-a21f-a239f80690e1-log-httpd\") pod \"swift-proxy-55f986558f-qqwqs\" (UID: \"4b3ea251-a4e4-4e4d-a21f-a239f80690e1\") " pod="openstack/swift-proxy-55f986558f-qqwqs" Jan 26 13:00:55 crc kubenswrapper[4881]: I0126 13:00:55.687895 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4b3ea251-a4e4-4e4d-a21f-a239f80690e1-run-httpd\") pod \"swift-proxy-55f986558f-qqwqs\" (UID: \"4b3ea251-a4e4-4e4d-a21f-a239f80690e1\") " pod="openstack/swift-proxy-55f986558f-qqwqs" Jan 26 13:00:55 crc kubenswrapper[4881]: I0126 13:00:55.687965 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-79jxh\" (UniqueName: \"kubernetes.io/projected/4b3ea251-a4e4-4e4d-a21f-a239f80690e1-kube-api-access-79jxh\") pod \"swift-proxy-55f986558f-qqwqs\" (UID: \"4b3ea251-a4e4-4e4d-a21f-a239f80690e1\") " pod="openstack/swift-proxy-55f986558f-qqwqs" Jan 26 13:00:55 crc kubenswrapper[4881]: I0126 13:00:55.687997 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b3ea251-a4e4-4e4d-a21f-a239f80690e1-public-tls-certs\") pod \"swift-proxy-55f986558f-qqwqs\" (UID: \"4b3ea251-a4e4-4e4d-a21f-a239f80690e1\") " pod="openstack/swift-proxy-55f986558f-qqwqs" Jan 26 13:00:55 crc kubenswrapper[4881]: I0126 13:00:55.688025 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b3ea251-a4e4-4e4d-a21f-a239f80690e1-internal-tls-certs\") pod \"swift-proxy-55f986558f-qqwqs\" (UID: \"4b3ea251-a4e4-4e4d-a21f-a239f80690e1\") " pod="openstack/swift-proxy-55f986558f-qqwqs" Jan 26 13:00:55 crc kubenswrapper[4881]: I0126 13:00:55.688042 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4b3ea251-a4e4-4e4d-a21f-a239f80690e1-etc-swift\") pod \"swift-proxy-55f986558f-qqwqs\" (UID: \"4b3ea251-a4e4-4e4d-a21f-a239f80690e1\") " pod="openstack/swift-proxy-55f986558f-qqwqs" Jan 26 13:00:55 crc kubenswrapper[4881]: I0126 13:00:55.688058 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b3ea251-a4e4-4e4d-a21f-a239f80690e1-combined-ca-bundle\") pod \"swift-proxy-55f986558f-qqwqs\" (UID: \"4b3ea251-a4e4-4e4d-a21f-a239f80690e1\") " pod="openstack/swift-proxy-55f986558f-qqwqs" Jan 26 13:00:55 crc kubenswrapper[4881]: I0126 13:00:55.688096 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b3ea251-a4e4-4e4d-a21f-a239f80690e1-config-data\") pod \"swift-proxy-55f986558f-qqwqs\" (UID: \"4b3ea251-a4e4-4e4d-a21f-a239f80690e1\") " pod="openstack/swift-proxy-55f986558f-qqwqs" Jan 26 13:00:55 crc kubenswrapper[4881]: I0126 13:00:55.688137 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4b3ea251-a4e4-4e4d-a21f-a239f80690e1-log-httpd\") pod \"swift-proxy-55f986558f-qqwqs\" (UID: \"4b3ea251-a4e4-4e4d-a21f-a239f80690e1\") " pod="openstack/swift-proxy-55f986558f-qqwqs" Jan 26 13:00:55 crc kubenswrapper[4881]: I0126 13:00:55.688669 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4b3ea251-a4e4-4e4d-a21f-a239f80690e1-log-httpd\") pod \"swift-proxy-55f986558f-qqwqs\" (UID: \"4b3ea251-a4e4-4e4d-a21f-a239f80690e1\") " pod="openstack/swift-proxy-55f986558f-qqwqs" Jan 26 13:00:55 crc kubenswrapper[4881]: I0126 13:00:55.688969 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4b3ea251-a4e4-4e4d-a21f-a239f80690e1-run-httpd\") pod \"swift-proxy-55f986558f-qqwqs\" (UID: \"4b3ea251-a4e4-4e4d-a21f-a239f80690e1\") " pod="openstack/swift-proxy-55f986558f-qqwqs" Jan 26 13:00:55 crc kubenswrapper[4881]: I0126 13:00:55.694885 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4b3ea251-a4e4-4e4d-a21f-a239f80690e1-etc-swift\") pod \"swift-proxy-55f986558f-qqwqs\" (UID: \"4b3ea251-a4e4-4e4d-a21f-a239f80690e1\") " pod="openstack/swift-proxy-55f986558f-qqwqs" Jan 26 13:00:55 crc kubenswrapper[4881]: I0126 13:00:55.699097 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b3ea251-a4e4-4e4d-a21f-a239f80690e1-combined-ca-bundle\") pod \"swift-proxy-55f986558f-qqwqs\" (UID: \"4b3ea251-a4e4-4e4d-a21f-a239f80690e1\") " pod="openstack/swift-proxy-55f986558f-qqwqs" Jan 26 13:00:55 crc kubenswrapper[4881]: I0126 13:00:55.703707 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b3ea251-a4e4-4e4d-a21f-a239f80690e1-public-tls-certs\") pod \"swift-proxy-55f986558f-qqwqs\" (UID: \"4b3ea251-a4e4-4e4d-a21f-a239f80690e1\") " pod="openstack/swift-proxy-55f986558f-qqwqs" Jan 26 13:00:55 crc kubenswrapper[4881]: I0126 13:00:55.704192 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b3ea251-a4e4-4e4d-a21f-a239f80690e1-internal-tls-certs\") pod \"swift-proxy-55f986558f-qqwqs\" (UID: \"4b3ea251-a4e4-4e4d-a21f-a239f80690e1\") " pod="openstack/swift-proxy-55f986558f-qqwqs" Jan 26 13:00:55 crc kubenswrapper[4881]: I0126 13:00:55.704417 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b3ea251-a4e4-4e4d-a21f-a239f80690e1-config-data\") pod \"swift-proxy-55f986558f-qqwqs\" 
(UID: \"4b3ea251-a4e4-4e4d-a21f-a239f80690e1\") " pod="openstack/swift-proxy-55f986558f-qqwqs" Jan 26 13:00:55 crc kubenswrapper[4881]: I0126 13:00:55.706464 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-79jxh\" (UniqueName: \"kubernetes.io/projected/4b3ea251-a4e4-4e4d-a21f-a239f80690e1-kube-api-access-79jxh\") pod \"swift-proxy-55f986558f-qqwqs\" (UID: \"4b3ea251-a4e4-4e4d-a21f-a239f80690e1\") " pod="openstack/swift-proxy-55f986558f-qqwqs" Jan 26 13:00:55 crc kubenswrapper[4881]: I0126 13:00:55.827263 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-55f986558f-qqwqs" Jan 26 13:00:56 crc kubenswrapper[4881]: I0126 13:00:56.136186 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Jan 26 13:00:57 crc kubenswrapper[4881]: I0126 13:00:57.257723 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Jan 26 13:00:58 crc kubenswrapper[4881]: I0126 13:00:58.007346 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 13:00:58 crc kubenswrapper[4881]: I0126 13:00:58.008792 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9df4625b-a11a-4524-890a-48e2307edddb" containerName="sg-core" containerID="cri-o://7509c95cfa9b8ca9bd4786703a0355719d70ece332775a0618983991eeed60e1" gracePeriod=30 Jan 26 13:00:58 crc kubenswrapper[4881]: I0126 13:00:58.008854 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9df4625b-a11a-4524-890a-48e2307edddb" containerName="proxy-httpd" containerID="cri-o://d767779ab1ddd53cb13094f375fd7ec10e5420bbb2964d6e583e6f43fe75376c" gracePeriod=30 Jan 26 13:00:58 crc kubenswrapper[4881]: I0126 13:00:58.008891 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9df4625b-a11a-4524-890a-48e2307edddb" containerName="ceilometer-notification-agent" containerID="cri-o://5f8c2e81baaa9f02854a6302bb6e297e08854046e0345c962084c5f72bd64761" gracePeriod=30 Jan 26 13:00:58 crc kubenswrapper[4881]: I0126 13:00:58.009033 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9df4625b-a11a-4524-890a-48e2307edddb" containerName="ceilometer-central-agent" containerID="cri-o://889dd9c30f4e4df926bff6bed7dbf1423d28ea1cac1506bfb53404e078ed03e9" gracePeriod=30 Jan 26 13:00:58 crc kubenswrapper[4881]: I0126 13:00:58.017958 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="9df4625b-a11a-4524-890a-48e2307edddb" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.176:3000/\": EOF" Jan 26 13:00:58 crc kubenswrapper[4881]: I0126 13:00:58.856407 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="9df4625b-a11a-4524-890a-48e2307edddb" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.176:3000/\": dial tcp 10.217.0.176:3000: connect: connection refused" Jan 26 13:00:58 crc kubenswrapper[4881]: I0126 13:00:58.939680 4881 generic.go:334] "Generic (PLEG): container finished" podID="9df4625b-a11a-4524-890a-48e2307edddb" containerID="d767779ab1ddd53cb13094f375fd7ec10e5420bbb2964d6e583e6f43fe75376c" exitCode=0 Jan 26 13:00:58 crc kubenswrapper[4881]: I0126 13:00:58.939714 4881 generic.go:334] "Generic (PLEG): container 
finished" podID="9df4625b-a11a-4524-890a-48e2307edddb" containerID="7509c95cfa9b8ca9bd4786703a0355719d70ece332775a0618983991eeed60e1" exitCode=2 Jan 26 13:00:58 crc kubenswrapper[4881]: I0126 13:00:58.939722 4881 generic.go:334] "Generic (PLEG): container finished" podID="9df4625b-a11a-4524-890a-48e2307edddb" containerID="889dd9c30f4e4df926bff6bed7dbf1423d28ea1cac1506bfb53404e078ed03e9" exitCode=0 Jan 26 13:00:58 crc kubenswrapper[4881]: I0126 13:00:58.939753 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9df4625b-a11a-4524-890a-48e2307edddb","Type":"ContainerDied","Data":"d767779ab1ddd53cb13094f375fd7ec10e5420bbb2964d6e583e6f43fe75376c"} Jan 26 13:00:58 crc kubenswrapper[4881]: I0126 13:00:58.939780 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9df4625b-a11a-4524-890a-48e2307edddb","Type":"ContainerDied","Data":"7509c95cfa9b8ca9bd4786703a0355719d70ece332775a0618983991eeed60e1"} Jan 26 13:00:58 crc kubenswrapper[4881]: I0126 13:00:58.939789 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9df4625b-a11a-4524-890a-48e2307edddb","Type":"ContainerDied","Data":"889dd9c30f4e4df926bff6bed7dbf1423d28ea1cac1506bfb53404e078ed03e9"} Jan 26 13:00:59 crc kubenswrapper[4881]: I0126 13:00:59.160918 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 26 13:00:59 crc kubenswrapper[4881]: I0126 13:00:59.160986 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 26 13:00:59 crc kubenswrapper[4881]: I0126 13:00:59.178004 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 26 13:00:59 crc kubenswrapper[4881]: I0126 13:00:59.178069 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 26 13:00:59 crc kubenswrapper[4881]: I0126 13:00:59.198160 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 26 13:00:59 crc kubenswrapper[4881]: I0126 13:00:59.220837 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 26 13:00:59 crc kubenswrapper[4881]: I0126 13:00:59.226266 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 26 13:00:59 crc kubenswrapper[4881]: I0126 13:00:59.236111 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 26 13:00:59 crc kubenswrapper[4881]: I0126 13:00:59.960890 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 26 13:00:59 crc kubenswrapper[4881]: I0126 13:00:59.961674 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 26 13:00:59 crc kubenswrapper[4881]: I0126 13:00:59.961769 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 26 13:00:59 crc kubenswrapper[4881]: I0126 13:00:59.961852 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 26 13:01:00 crc kubenswrapper[4881]: I0126 13:01:00.145391 4881 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29490541-c6h96"] Jan 26 13:01:00 crc kubenswrapper[4881]: I0126 13:01:00.146970 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29490541-c6h96" Jan 26 13:01:00 crc kubenswrapper[4881]: I0126 13:01:00.154946 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29490541-c6h96"] Jan 26 13:01:00 crc kubenswrapper[4881]: I0126 13:01:00.186994 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c2380fd-0233-4942-8e8a-433cc3b15925-combined-ca-bundle\") pod \"keystone-cron-29490541-c6h96\" (UID: \"2c2380fd-0233-4942-8e8a-433cc3b15925\") " pod="openstack/keystone-cron-29490541-c6h96" Jan 26 13:01:00 crc kubenswrapper[4881]: I0126 13:01:00.187167 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2c2380fd-0233-4942-8e8a-433cc3b15925-fernet-keys\") pod \"keystone-cron-29490541-c6h96\" (UID: \"2c2380fd-0233-4942-8e8a-433cc3b15925\") " pod="openstack/keystone-cron-29490541-c6h96" Jan 26 13:01:00 crc kubenswrapper[4881]: I0126 13:01:00.187267 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxb7b\" (UniqueName: \"kubernetes.io/projected/2c2380fd-0233-4942-8e8a-433cc3b15925-kube-api-access-sxb7b\") pod \"keystone-cron-29490541-c6h96\" (UID: \"2c2380fd-0233-4942-8e8a-433cc3b15925\") " pod="openstack/keystone-cron-29490541-c6h96" Jan 26 13:01:00 crc kubenswrapper[4881]: I0126 13:01:00.187348 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c2380fd-0233-4942-8e8a-433cc3b15925-config-data\") pod \"keystone-cron-29490541-c6h96\" (UID: \"2c2380fd-0233-4942-8e8a-433cc3b15925\") " pod="openstack/keystone-cron-29490541-c6h96" Jan 26 13:01:00 crc kubenswrapper[4881]: I0126 13:01:00.235725 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Jan 26 13:01:00 crc kubenswrapper[4881]: I0126 13:01:00.252009 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 13:01:00 crc kubenswrapper[4881]: I0126 13:01:00.291655 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c2380fd-0233-4942-8e8a-433cc3b15925-config-data\") pod \"keystone-cron-29490541-c6h96\" (UID: \"2c2380fd-0233-4942-8e8a-433cc3b15925\") " pod="openstack/keystone-cron-29490541-c6h96" Jan 26 13:01:00 crc kubenswrapper[4881]: I0126 13:01:00.291855 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c2380fd-0233-4942-8e8a-433cc3b15925-combined-ca-bundle\") pod \"keystone-cron-29490541-c6h96\" (UID: \"2c2380fd-0233-4942-8e8a-433cc3b15925\") " pod="openstack/keystone-cron-29490541-c6h96" Jan 26 13:01:00 crc kubenswrapper[4881]: I0126 13:01:00.291914 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2c2380fd-0233-4942-8e8a-433cc3b15925-fernet-keys\") pod \"keystone-cron-29490541-c6h96\" (UID: \"2c2380fd-0233-4942-8e8a-433cc3b15925\") " 
Jan 26 13:01:00 crc kubenswrapper[4881]: I0126 13:01:00.291943 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sxb7b\" (UniqueName: \"kubernetes.io/projected/2c2380fd-0233-4942-8e8a-433cc3b15925-kube-api-access-sxb7b\") pod \"keystone-cron-29490541-c6h96\" (UID: \"2c2380fd-0233-4942-8e8a-433cc3b15925\") " pod="openstack/keystone-cron-29490541-c6h96"
Jan 26 13:01:00 crc kubenswrapper[4881]: I0126 13:01:00.302499 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c2380fd-0233-4942-8e8a-433cc3b15925-config-data\") pod \"keystone-cron-29490541-c6h96\" (UID: \"2c2380fd-0233-4942-8e8a-433cc3b15925\") " pod="openstack/keystone-cron-29490541-c6h96"
Jan 26 13:01:00 crc kubenswrapper[4881]: I0126 13:01:00.302996 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c2380fd-0233-4942-8e8a-433cc3b15925-combined-ca-bundle\") pod \"keystone-cron-29490541-c6h96\" (UID: \"2c2380fd-0233-4942-8e8a-433cc3b15925\") " pod="openstack/keystone-cron-29490541-c6h96"
Jan 26 13:01:00 crc kubenswrapper[4881]: I0126 13:01:00.312265 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2c2380fd-0233-4942-8e8a-433cc3b15925-fernet-keys\") pod \"keystone-cron-29490541-c6h96\" (UID: \"2c2380fd-0233-4942-8e8a-433cc3b15925\") " pod="openstack/keystone-cron-29490541-c6h96"
Jan 26 13:01:00 crc kubenswrapper[4881]: I0126 13:01:00.324076 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxb7b\" (UniqueName: \"kubernetes.io/projected/2c2380fd-0233-4942-8e8a-433cc3b15925-kube-api-access-sxb7b\") pod \"keystone-cron-29490541-c6h96\" (UID: \"2c2380fd-0233-4942-8e8a-433cc3b15925\") " pod="openstack/keystone-cron-29490541-c6h96"
Jan 26 13:01:00 crc kubenswrapper[4881]: I0126 13:01:00.366834 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-decision-engine-0"
Jan 26 13:01:00 crc kubenswrapper[4881]: I0126 13:01:00.490664 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29490541-c6h96"
Jan 26 13:01:00 crc kubenswrapper[4881]: I0126 13:01:00.964777 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0"
Jan 26 13:01:00 crc kubenswrapper[4881]: I0126 13:01:00.997892 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-decision-engine-0"
Jan 26 13:01:01 crc kubenswrapper[4881]: I0126 13:01:01.680080 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 26 13:01:01 crc kubenswrapper[4881]: I0126 13:01:01.977999 4881 generic.go:334] "Generic (PLEG): container finished" podID="9df4625b-a11a-4524-890a-48e2307edddb" containerID="5f8c2e81baaa9f02854a6302bb6e297e08854046e0345c962084c5f72bd64761" exitCode=0
Jan 26 13:01:01 crc kubenswrapper[4881]: I0126 13:01:01.978711 4881 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 26 13:01:01 crc kubenswrapper[4881]: I0126 13:01:01.978735 4881 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 26 13:01:01 crc kubenswrapper[4881]: I0126 13:01:01.978854 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="52c77e6a-c529-4730-b519-66fb42f88ae8" containerName="glance-log" containerID="cri-o://be720bbf4a81b5b99c989f5e60b74962c38a15117f578a4fdee47725647ca9f5" gracePeriod=30
Jan 26 13:01:01 crc kubenswrapper[4881]: I0126 13:01:01.978936 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9df4625b-a11a-4524-890a-48e2307edddb","Type":"ContainerDied","Data":"5f8c2e81baaa9f02854a6302bb6e297e08854046e0345c962084c5f72bd64761"}
Jan 26 13:01:01 crc kubenswrapper[4881]: I0126 13:01:01.978954 4881 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 26 13:01:01 crc kubenswrapper[4881]: I0126 13:01:01.978966 4881 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 26 13:01:01 crc kubenswrapper[4881]: I0126 13:01:01.979297 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="52c77e6a-c529-4730-b519-66fb42f88ae8" containerName="glance-httpd" containerID="cri-o://b1b39d8b2f35ca9389cf49410a2483a78ac5beb27d9d58770e17a9eea4a86958" gracePeriod=30
Jan 26 13:01:01 crc kubenswrapper[4881]: I0126 13:01:01.989574 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="52c77e6a-c529-4730-b519-66fb42f88ae8" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.0.187:9292/healthcheck\": EOF"
Jan 26 13:01:01 crc kubenswrapper[4881]: I0126 13:01:01.989579 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="52c77e6a-c529-4730-b519-66fb42f88ae8" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.0.187:9292/healthcheck\": EOF"
Jan 26 13:01:02 crc kubenswrapper[4881]: I0126 13:01:02.257616 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-api-0"
Jan 26 13:01:02 crc kubenswrapper[4881]: I0126 13:01:02.327507 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-api-0"
Jan 26 13:01:02 crc kubenswrapper[4881]: I0126 13:01:02.605600 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Jan 26 13:01:02 crc kubenswrapper[4881]: I0126 13:01:02.606753 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Jan 26 13:01:02 crc kubenswrapper[4881]: I0126 13:01:02.994257 4881 generic.go:334] "Generic (PLEG): container finished" podID="52c77e6a-c529-4730-b519-66fb42f88ae8" containerID="be720bbf4a81b5b99c989f5e60b74962c38a15117f578a4fdee47725647ca9f5" exitCode=143
Jan 26 13:01:02 crc kubenswrapper[4881]: I0126 13:01:02.994329 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"52c77e6a-c529-4730-b519-66fb42f88ae8","Type":"ContainerDied","Data":"be720bbf4a81b5b99c989f5e60b74962c38a15117f578a4fdee47725647ca9f5"}
Jan 26 13:01:02 crc kubenswrapper[4881]: I0126 13:01:02.995411 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="c0866d3c-259b-4ac0-a8e3-79d9da065d7b" containerName="glance-log" containerID="cri-o://e83d5a9cb8bdfbd6e365b8ea46f86ac0834611212f168b77f75938bb384bd2b0" gracePeriod=30
Jan 26 13:01:02 crc kubenswrapper[4881]: I0126 13:01:02.995716 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="c0866d3c-259b-4ac0-a8e3-79d9da065d7b" containerName="glance-httpd" containerID="cri-o://26da4adceccb321e69d4fa0372331e812903bba7f08ebf3026b578a05be23636" gracePeriod=30
Jan 26 13:01:03 crc kubenswrapper[4881]: I0126 13:01:03.015365 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0"
Jan 26 13:01:03 crc kubenswrapper[4881]: I0126 13:01:03.921982 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 26 13:01:04 crc kubenswrapper[4881]: I0126 13:01:04.012139 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9qsf4\" (UniqueName: \"kubernetes.io/projected/9df4625b-a11a-4524-890a-48e2307edddb-kube-api-access-9qsf4\") pod \"9df4625b-a11a-4524-890a-48e2307edddb\" (UID: \"9df4625b-a11a-4524-890a-48e2307edddb\") "
Jan 26 13:01:04 crc kubenswrapper[4881]: I0126 13:01:04.012173 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9df4625b-a11a-4524-890a-48e2307edddb-config-data\") pod \"9df4625b-a11a-4524-890a-48e2307edddb\" (UID: \"9df4625b-a11a-4524-890a-48e2307edddb\") "
Jan 26 13:01:04 crc kubenswrapper[4881]: I0126 13:01:04.012224 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9df4625b-a11a-4524-890a-48e2307edddb-combined-ca-bundle\") pod \"9df4625b-a11a-4524-890a-48e2307edddb\" (UID: \"9df4625b-a11a-4524-890a-48e2307edddb\") "
Jan 26 13:01:04 crc kubenswrapper[4881]: I0126 13:01:04.012255 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9df4625b-a11a-4524-890a-48e2307edddb-scripts\") pod \"9df4625b-a11a-4524-890a-48e2307edddb\" (UID: \"9df4625b-a11a-4524-890a-48e2307edddb\") "
Jan 26 13:01:04 crc kubenswrapper[4881]: I0126 13:01:04.012271 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9df4625b-a11a-4524-890a-48e2307edddb-sg-core-conf-yaml\") pod \"9df4625b-a11a-4524-890a-48e2307edddb\" (UID: \"9df4625b-a11a-4524-890a-48e2307edddb\") "
Jan 26 13:01:04 crc kubenswrapper[4881]: I0126 13:01:04.012317 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9df4625b-a11a-4524-890a-48e2307edddb-log-httpd\") pod \"9df4625b-a11a-4524-890a-48e2307edddb\" (UID: \"9df4625b-a11a-4524-890a-48e2307edddb\") "
Jan 26 13:01:04 crc kubenswrapper[4881]: I0126 13:01:04.012340 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9df4625b-a11a-4524-890a-48e2307edddb-run-httpd\") pod \"9df4625b-a11a-4524-890a-48e2307edddb\" (UID: \"9df4625b-a11a-4524-890a-48e2307edddb\") "
Jan 26 13:01:04 crc kubenswrapper[4881]: I0126 13:01:04.014809 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9df4625b-a11a-4524-890a-48e2307edddb-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "9df4625b-a11a-4524-890a-48e2307edddb" (UID: "9df4625b-a11a-4524-890a-48e2307edddb"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 13:01:04 crc kubenswrapper[4881]: I0126 13:01:04.015101 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9df4625b-a11a-4524-890a-48e2307edddb-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "9df4625b-a11a-4524-890a-48e2307edddb" (UID: "9df4625b-a11a-4524-890a-48e2307edddb"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 13:01:04 crc kubenswrapper[4881]: I0126 13:01:04.016016 4881 generic.go:334] "Generic (PLEG): container finished" podID="c0866d3c-259b-4ac0-a8e3-79d9da065d7b" containerID="e83d5a9cb8bdfbd6e365b8ea46f86ac0834611212f168b77f75938bb384bd2b0" exitCode=143
Jan 26 13:01:04 crc kubenswrapper[4881]: I0126 13:01:04.016089 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c0866d3c-259b-4ac0-a8e3-79d9da065d7b","Type":"ContainerDied","Data":"e83d5a9cb8bdfbd6e365b8ea46f86ac0834611212f168b77f75938bb384bd2b0"}
Jan 26 13:01:04 crc kubenswrapper[4881]: I0126 13:01:04.022009 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9df4625b-a11a-4524-890a-48e2307edddb","Type":"ContainerDied","Data":"5599e49064e1b6c4300508468a6d625815bd372ca51aa72e1ec1ee3df0586c44"}
Jan 26 13:01:04 crc kubenswrapper[4881]: I0126 13:01:04.022051 4881 scope.go:117] "RemoveContainer" containerID="d767779ab1ddd53cb13094f375fd7ec10e5420bbb2964d6e583e6f43fe75376c"
Jan 26 13:01:04 crc kubenswrapper[4881]: I0126 13:01:04.022182 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 26 13:01:04 crc kubenswrapper[4881]: I0126 13:01:04.026823 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9df4625b-a11a-4524-890a-48e2307edddb-kube-api-access-9qsf4" (OuterVolumeSpecName: "kube-api-access-9qsf4") pod "9df4625b-a11a-4524-890a-48e2307edddb" (UID: "9df4625b-a11a-4524-890a-48e2307edddb"). InnerVolumeSpecName "kube-api-access-9qsf4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 13:01:04 crc kubenswrapper[4881]: I0126 13:01:04.029662 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9df4625b-a11a-4524-890a-48e2307edddb-scripts" (OuterVolumeSpecName: "scripts") pod "9df4625b-a11a-4524-890a-48e2307edddb" (UID: "9df4625b-a11a-4524-890a-48e2307edddb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 13:01:04 crc kubenswrapper[4881]: I0126 13:01:04.039335 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"6bb7934d-1b01-469b-9b72-c601eebbbf98","Type":"ContainerStarted","Data":"c41fbdb1b0b0295414ce0fce405b342ef0d7eedad997e30b396c68529f7acc4a"}
Jan 26 13:01:04 crc kubenswrapper[4881]: I0126 13:01:04.068351 4881 scope.go:117] "RemoveContainer" containerID="7509c95cfa9b8ca9bd4786703a0355719d70ece332775a0618983991eeed60e1"
Jan 26 13:01:04 crc kubenswrapper[4881]: I0126 13:01:04.070002 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9df4625b-a11a-4524-890a-48e2307edddb-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "9df4625b-a11a-4524-890a-48e2307edddb" (UID: "9df4625b-a11a-4524-890a-48e2307edddb"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 13:01:04 crc kubenswrapper[4881]: I0126 13:01:04.074461 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.113812758 podStartE2EDuration="18.074437662s" podCreationTimestamp="2026-01-26 13:00:46 +0000 UTC" firstStartedPulling="2026-01-26 13:00:47.67444089 +0000 UTC m=+1520.153750916" lastFinishedPulling="2026-01-26 13:01:03.635065784 +0000 UTC m=+1536.114375820" observedRunningTime="2026-01-26 13:01:04.054441268 +0000 UTC m=+1536.533751294" watchObservedRunningTime="2026-01-26 13:01:04.074437662 +0000 UTC m=+1536.553747688"
Jan 26 13:01:04 crc kubenswrapper[4881]: I0126 13:01:04.091431 4881 scope.go:117] "RemoveContainer" containerID="5f8c2e81baaa9f02854a6302bb6e297e08854046e0345c962084c5f72bd64761"
Jan 26 13:01:04 crc kubenswrapper[4881]: I0126 13:01:04.117364 4881 scope.go:117] "RemoveContainer" containerID="889dd9c30f4e4df926bff6bed7dbf1423d28ea1cac1506bfb53404e078ed03e9"
Jan 26 13:01:04 crc kubenswrapper[4881]: I0126 13:01:04.117831 4881 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9df4625b-a11a-4524-890a-48e2307edddb-log-httpd\") on node \"crc\" DevicePath \"\""
Jan 26 13:01:04 crc kubenswrapper[4881]: I0126 13:01:04.117859 4881 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9df4625b-a11a-4524-890a-48e2307edddb-run-httpd\") on node \"crc\" DevicePath \"\""
Jan 26 13:01:04 crc kubenswrapper[4881]: I0126 13:01:04.117868 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9qsf4\" (UniqueName: \"kubernetes.io/projected/9df4625b-a11a-4524-890a-48e2307edddb-kube-api-access-9qsf4\") on node \"crc\" DevicePath \"\""
Jan 26 13:01:04 crc kubenswrapper[4881]: I0126 13:01:04.117879 4881 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9df4625b-a11a-4524-890a-48e2307edddb-scripts\") on node \"crc\" DevicePath \"\""
Jan 26 13:01:04 crc kubenswrapper[4881]: I0126 13:01:04.117887 4881 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9df4625b-a11a-4524-890a-48e2307edddb-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Jan 26 13:01:04 crc kubenswrapper[4881]: I0126 13:01:04.147466 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9df4625b-a11a-4524-890a-48e2307edddb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9df4625b-a11a-4524-890a-48e2307edddb" (UID: "9df4625b-a11a-4524-890a-48e2307edddb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 13:01:04 crc kubenswrapper[4881]: I0126 13:01:04.165162 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="52c77e6a-c529-4730-b519-66fb42f88ae8" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.0.187:9292/healthcheck\": read tcp 10.217.0.2:39236->10.217.0.187:9292: read: connection reset by peer"
Jan 26 13:01:04 crc kubenswrapper[4881]: I0126 13:01:04.165246 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="52c77e6a-c529-4730-b519-66fb42f88ae8" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.0.187:9292/healthcheck\": read tcp 10.217.0.2:39220->10.217.0.187:9292: read: connection reset by peer"
Jan 26 13:01:04 crc kubenswrapper[4881]: I0126 13:01:04.182741 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9df4625b-a11a-4524-890a-48e2307edddb-config-data" (OuterVolumeSpecName: "config-data") pod "9df4625b-a11a-4524-890a-48e2307edddb" (UID: "9df4625b-a11a-4524-890a-48e2307edddb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 13:01:04 crc kubenswrapper[4881]: I0126 13:01:04.219260 4881 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9df4625b-a11a-4524-890a-48e2307edddb-config-data\") on node \"crc\" DevicePath \"\""
Jan 26 13:01:04 crc kubenswrapper[4881]: I0126 13:01:04.219699 4881 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9df4625b-a11a-4524-890a-48e2307edddb-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 13:01:04 crc kubenswrapper[4881]: I0126 13:01:04.231891 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-55f986558f-qqwqs"]
Jan 26 13:01:04 crc kubenswrapper[4881]: I0126 13:01:04.250840 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29490541-c6h96"]
Jan 26 13:01:04 crc kubenswrapper[4881]: W0126 13:01:04.268739 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2c2380fd_0233_4942_8e8a_433cc3b15925.slice/crio-211dce3a837f318265cc73bd9644da9d9c80463da7fe32d08a275ebb57aa48d8 WatchSource:0}: Error finding container 211dce3a837f318265cc73bd9644da9d9c80463da7fe32d08a275ebb57aa48d8: Status 404 returned error can't find the container with id 211dce3a837f318265cc73bd9644da9d9c80463da7fe32d08a275ebb57aa48d8
Jan 26 13:01:04 crc kubenswrapper[4881]: I0126 13:01:04.402409 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 26 13:01:04 crc kubenswrapper[4881]: I0126 13:01:04.425704 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Jan 26 13:01:04 crc kubenswrapper[4881]: I0126 13:01:04.431773 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Jan 26 13:01:04 crc kubenswrapper[4881]: E0126 13:01:04.432287 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9df4625b-a11a-4524-890a-48e2307edddb" containerName="ceilometer-notification-agent"
Jan 26 13:01:04 crc kubenswrapper[4881]: I0126 13:01:04.432637 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="9df4625b-a11a-4524-890a-48e2307edddb" containerName="ceilometer-notification-agent"
Jan 26 13:01:04 crc kubenswrapper[4881]: E0126 13:01:04.432702 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9df4625b-a11a-4524-890a-48e2307edddb" containerName="proxy-httpd"
Jan 26 13:01:04 crc kubenswrapper[4881]: I0126 13:01:04.432752 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="9df4625b-a11a-4524-890a-48e2307edddb" containerName="proxy-httpd"
Jan 26 13:01:04 crc kubenswrapper[4881]: E0126 13:01:04.432825 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9df4625b-a11a-4524-890a-48e2307edddb" containerName="ceilometer-central-agent"
Jan 26 13:01:04 crc kubenswrapper[4881]: I0126 13:01:04.432875 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="9df4625b-a11a-4524-890a-48e2307edddb" containerName="ceilometer-central-agent"
Jan 26 13:01:04 crc kubenswrapper[4881]: E0126 13:01:04.432948 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9df4625b-a11a-4524-890a-48e2307edddb" containerName="sg-core"
Jan 26 13:01:04 crc kubenswrapper[4881]: I0126 13:01:04.432997 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="9df4625b-a11a-4524-890a-48e2307edddb" containerName="sg-core"
Jan 26 13:01:04 crc kubenswrapper[4881]: I0126 13:01:04.433228 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="9df4625b-a11a-4524-890a-48e2307edddb" containerName="ceilometer-notification-agent"
Jan 26 13:01:04 crc kubenswrapper[4881]: I0126 13:01:04.434787 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="9df4625b-a11a-4524-890a-48e2307edddb" containerName="sg-core"
Jan 26 13:01:04 crc kubenswrapper[4881]: I0126 13:01:04.434811 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="9df4625b-a11a-4524-890a-48e2307edddb" containerName="ceilometer-central-agent"
Jan 26 13:01:04 crc kubenswrapper[4881]: I0126 13:01:04.434826 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="9df4625b-a11a-4524-890a-48e2307edddb" containerName="proxy-httpd"
Jan 26 13:01:04 crc kubenswrapper[4881]: I0126 13:01:04.437109 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 26 13:01:04 crc kubenswrapper[4881]: I0126 13:01:04.439785 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 26 13:01:04 crc kubenswrapper[4881]: I0126 13:01:04.440979 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 26 13:01:04 crc kubenswrapper[4881]: I0126 13:01:04.449372 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 26 13:01:04 crc kubenswrapper[4881]: I0126 13:01:04.528964 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c41995a8-1c28-4783-849e-868a97831825-log-httpd\") pod \"ceilometer-0\" (UID: \"c41995a8-1c28-4783-849e-868a97831825\") " pod="openstack/ceilometer-0"
Jan 26 13:01:04 crc kubenswrapper[4881]: I0126 13:01:04.534886 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c41995a8-1c28-4783-849e-868a97831825-config-data\") pod \"ceilometer-0\" (UID: \"c41995a8-1c28-4783-849e-868a97831825\") " pod="openstack/ceilometer-0"
Jan 26 13:01:04 crc kubenswrapper[4881]: I0126 13:01:04.534959 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c41995a8-1c28-4783-849e-868a97831825-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c41995a8-1c28-4783-849e-868a97831825\") " pod="openstack/ceilometer-0"
Jan 26 13:01:04 crc kubenswrapper[4881]: I0126 13:01:04.535064 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxwwg\" (UniqueName: \"kubernetes.io/projected/c41995a8-1c28-4783-849e-868a97831825-kube-api-access-pxwwg\") pod \"ceilometer-0\" (UID: \"c41995a8-1c28-4783-849e-868a97831825\") " pod="openstack/ceilometer-0"
Jan 26 13:01:04 crc kubenswrapper[4881]: I0126 13:01:04.535151 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c41995a8-1c28-4783-849e-868a97831825-run-httpd\") pod \"ceilometer-0\" (UID: \"c41995a8-1c28-4783-849e-868a97831825\") " pod="openstack/ceilometer-0"
Jan 26 13:01:04 crc kubenswrapper[4881]: I0126 13:01:04.535258 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c41995a8-1c28-4783-849e-868a97831825-scripts\") pod \"ceilometer-0\" (UID: \"c41995a8-1c28-4783-849e-868a97831825\") " pod="openstack/ceilometer-0"
Jan 26 13:01:04 crc kubenswrapper[4881]: I0126 13:01:04.535331 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c41995a8-1c28-4783-849e-868a97831825-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c41995a8-1c28-4783-849e-868a97831825\") " pod="openstack/ceilometer-0"
Jan 26 13:01:04 crc kubenswrapper[4881]: I0126 13:01:04.638679 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c41995a8-1c28-4783-849e-868a97831825-config-data\") pod \"ceilometer-0\" (UID: \"c41995a8-1c28-4783-849e-868a97831825\") " pod="openstack/ceilometer-0"
Jan 26 13:01:04 crc kubenswrapper[4881]: I0126 13:01:04.639201 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c41995a8-1c28-4783-849e-868a97831825-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c41995a8-1c28-4783-849e-868a97831825\") " pod="openstack/ceilometer-0"
Jan 26 13:01:04 crc kubenswrapper[4881]: I0126 13:01:04.639256 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pxwwg\" (UniqueName: \"kubernetes.io/projected/c41995a8-1c28-4783-849e-868a97831825-kube-api-access-pxwwg\") pod \"ceilometer-0\" (UID: \"c41995a8-1c28-4783-849e-868a97831825\") " pod="openstack/ceilometer-0"
Jan 26 13:01:04 crc kubenswrapper[4881]: I0126 13:01:04.639295 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c41995a8-1c28-4783-849e-868a97831825-run-httpd\") pod \"ceilometer-0\" (UID: \"c41995a8-1c28-4783-849e-868a97831825\") " pod="openstack/ceilometer-0"
Jan 26 13:01:04 crc kubenswrapper[4881]: I0126 13:01:04.639333 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c41995a8-1c28-4783-849e-868a97831825-scripts\") pod \"ceilometer-0\" (UID: \"c41995a8-1c28-4783-849e-868a97831825\") " pod="openstack/ceilometer-0"
Jan 26 13:01:04 crc kubenswrapper[4881]: I0126 13:01:04.639366 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c41995a8-1c28-4783-849e-868a97831825-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c41995a8-1c28-4783-849e-868a97831825\") " pod="openstack/ceilometer-0"
Jan 26 13:01:04 crc kubenswrapper[4881]: I0126 13:01:04.639437 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c41995a8-1c28-4783-849e-868a97831825-log-httpd\") pod \"ceilometer-0\" (UID: \"c41995a8-1c28-4783-849e-868a97831825\") " pod="openstack/ceilometer-0"
Jan 26 13:01:04 crc kubenswrapper[4881]: I0126 13:01:04.640359 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c41995a8-1c28-4783-849e-868a97831825-log-httpd\") pod \"ceilometer-0\" (UID: \"c41995a8-1c28-4783-849e-868a97831825\") " pod="openstack/ceilometer-0"
Jan 26 13:01:04 crc kubenswrapper[4881]: I0126 13:01:04.640728 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c41995a8-1c28-4783-849e-868a97831825-run-httpd\") pod \"ceilometer-0\" (UID: \"c41995a8-1c28-4783-849e-868a97831825\") " pod="openstack/ceilometer-0"
Jan 26 13:01:04 crc kubenswrapper[4881]: I0126 13:01:04.646295 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c41995a8-1c28-4783-849e-868a97831825-config-data\") pod \"ceilometer-0\" (UID: \"c41995a8-1c28-4783-849e-868a97831825\") " pod="openstack/ceilometer-0"
Jan 26 13:01:04 crc kubenswrapper[4881]: I0126 13:01:04.646943 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c41995a8-1c28-4783-849e-868a97831825-scripts\") pod \"ceilometer-0\" (UID: \"c41995a8-1c28-4783-849e-868a97831825\") " pod="openstack/ceilometer-0"
Jan 26 13:01:04 crc kubenswrapper[4881]: I0126 13:01:04.651295 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c41995a8-1c28-4783-849e-868a97831825-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c41995a8-1c28-4783-849e-868a97831825\") " pod="openstack/ceilometer-0"
Jan 26 13:01:04 crc kubenswrapper[4881]: I0126 13:01:04.652863 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c41995a8-1c28-4783-849e-868a97831825-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c41995a8-1c28-4783-849e-868a97831825\") " pod="openstack/ceilometer-0"
Jan 26 13:01:04 crc kubenswrapper[4881]: I0126 13:01:04.660009 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pxwwg\" (UniqueName: \"kubernetes.io/projected/c41995a8-1c28-4783-849e-868a97831825-kube-api-access-pxwwg\") pod \"ceilometer-0\" (UID: \"c41995a8-1c28-4783-849e-868a97831825\") " pod="openstack/ceilometer-0"
Jan 26 13:01:04 crc kubenswrapper[4881]: I0126 13:01:04.785734 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 26 13:01:04 crc kubenswrapper[4881]: I0126 13:01:04.789029 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 26 13:01:04 crc kubenswrapper[4881]: I0126 13:01:04.956067 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52c77e6a-c529-4730-b519-66fb42f88ae8-config-data\") pod \"52c77e6a-c529-4730-b519-66fb42f88ae8\" (UID: \"52c77e6a-c529-4730-b519-66fb42f88ae8\") "
Jan 26 13:01:04 crc kubenswrapper[4881]: I0126 13:01:04.956136 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/52c77e6a-c529-4730-b519-66fb42f88ae8-logs\") pod \"52c77e6a-c529-4730-b519-66fb42f88ae8\" (UID: \"52c77e6a-c529-4730-b519-66fb42f88ae8\") "
Jan 26 13:01:04 crc kubenswrapper[4881]: I0126 13:01:04.956183 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"52c77e6a-c529-4730-b519-66fb42f88ae8\" (UID: \"52c77e6a-c529-4730-b519-66fb42f88ae8\") "
Jan 26 13:01:04 crc kubenswrapper[4881]: I0126 13:01:04.956216 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/52c77e6a-c529-4730-b519-66fb42f88ae8-internal-tls-certs\") pod \"52c77e6a-c529-4730-b519-66fb42f88ae8\" (UID: \"52c77e6a-c529-4730-b519-66fb42f88ae8\") "
Jan 26 13:01:04 crc kubenswrapper[4881]: I0126 13:01:04.956414 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/52c77e6a-c529-4730-b519-66fb42f88ae8-scripts\") pod \"52c77e6a-c529-4730-b519-66fb42f88ae8\" (UID: \"52c77e6a-c529-4730-b519-66fb42f88ae8\") "
Jan 26 13:01:04 crc kubenswrapper[4881]: I0126 13:01:04.956501 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rg7q9\" (UniqueName: \"kubernetes.io/projected/52c77e6a-c529-4730-b519-66fb42f88ae8-kube-api-access-rg7q9\") pod \"52c77e6a-c529-4730-b519-66fb42f88ae8\" (UID: \"52c77e6a-c529-4730-b519-66fb42f88ae8\") "
Jan 26 13:01:04 crc kubenswrapper[4881]: I0126 13:01:04.956561 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52c77e6a-c529-4730-b519-66fb42f88ae8-combined-ca-bundle\") pod \"52c77e6a-c529-4730-b519-66fb42f88ae8\" (UID: \"52c77e6a-c529-4730-b519-66fb42f88ae8\") "
Jan 26 13:01:04 crc kubenswrapper[4881]: I0126 13:01:04.956599 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/52c77e6a-c529-4730-b519-66fb42f88ae8-httpd-run\") pod \"52c77e6a-c529-4730-b519-66fb42f88ae8\" (UID: \"52c77e6a-c529-4730-b519-66fb42f88ae8\") "
Jan 26 13:01:04 crc kubenswrapper[4881]: I0126 13:01:04.960023 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/52c77e6a-c529-4730-b519-66fb42f88ae8-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "52c77e6a-c529-4730-b519-66fb42f88ae8" (UID: "52c77e6a-c529-4730-b519-66fb42f88ae8"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 13:01:04 crc kubenswrapper[4881]: I0126 13:01:04.960387 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/52c77e6a-c529-4730-b519-66fb42f88ae8-logs" (OuterVolumeSpecName: "logs") pod "52c77e6a-c529-4730-b519-66fb42f88ae8" (UID: "52c77e6a-c529-4730-b519-66fb42f88ae8"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 13:01:04 crc kubenswrapper[4881]: I0126 13:01:04.966677 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52c77e6a-c529-4730-b519-66fb42f88ae8-kube-api-access-rg7q9" (OuterVolumeSpecName: "kube-api-access-rg7q9") pod "52c77e6a-c529-4730-b519-66fb42f88ae8" (UID: "52c77e6a-c529-4730-b519-66fb42f88ae8"). InnerVolumeSpecName "kube-api-access-rg7q9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 13:01:04 crc kubenswrapper[4881]: I0126 13:01:04.966781 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52c77e6a-c529-4730-b519-66fb42f88ae8-scripts" (OuterVolumeSpecName: "scripts") pod "52c77e6a-c529-4730-b519-66fb42f88ae8" (UID: "52c77e6a-c529-4730-b519-66fb42f88ae8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 13:01:04 crc kubenswrapper[4881]: I0126 13:01:04.967960 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "glance") pod "52c77e6a-c529-4730-b519-66fb42f88ae8" (UID: "52c77e6a-c529-4730-b519-66fb42f88ae8"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Jan 26 13:01:05 crc kubenswrapper[4881]: I0126 13:01:05.009199 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52c77e6a-c529-4730-b519-66fb42f88ae8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "52c77e6a-c529-4730-b519-66fb42f88ae8" (UID: "52c77e6a-c529-4730-b519-66fb42f88ae8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 13:01:05 crc kubenswrapper[4881]: I0126 13:01:05.017397 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52c77e6a-c529-4730-b519-66fb42f88ae8-config-data" (OuterVolumeSpecName: "config-data") pod "52c77e6a-c529-4730-b519-66fb42f88ae8" (UID: "52c77e6a-c529-4730-b519-66fb42f88ae8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 13:01:05 crc kubenswrapper[4881]: I0126 13:01:05.050646 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52c77e6a-c529-4730-b519-66fb42f88ae8-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "52c77e6a-c529-4730-b519-66fb42f88ae8" (UID: "52c77e6a-c529-4730-b519-66fb42f88ae8"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 13:01:05 crc kubenswrapper[4881]: I0126 13:01:05.059156 4881 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/52c77e6a-c529-4730-b519-66fb42f88ae8-scripts\") on node \"crc\" DevicePath \"\""
Jan 26 13:01:05 crc kubenswrapper[4881]: I0126 13:01:05.059183 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rg7q9\" (UniqueName: \"kubernetes.io/projected/52c77e6a-c529-4730-b519-66fb42f88ae8-kube-api-access-rg7q9\") on node \"crc\" DevicePath \"\""
Jan 26 13:01:05 crc kubenswrapper[4881]: I0126 13:01:05.059193 4881 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52c77e6a-c529-4730-b519-66fb42f88ae8-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 13:01:05 crc kubenswrapper[4881]: I0126 13:01:05.059204 4881 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/52c77e6a-c529-4730-b519-66fb42f88ae8-httpd-run\") on node \"crc\" DevicePath \"\""
Jan 26 13:01:05 crc kubenswrapper[4881]: I0126 13:01:05.059213 4881 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52c77e6a-c529-4730-b519-66fb42f88ae8-config-data\") on node \"crc\" DevicePath \"\""
Jan 26 13:01:05 crc kubenswrapper[4881]: I0126 13:01:05.059222 4881 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/52c77e6a-c529-4730-b519-66fb42f88ae8-logs\") on node \"crc\" DevicePath \"\""
Jan 26 13:01:05 crc kubenswrapper[4881]: I0126 13:01:05.059249 4881 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" "
Jan 26 13:01:05 crc kubenswrapper[4881]: I0126 13:01:05.059258 4881 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/52c77e6a-c529-4730-b519-66fb42f88ae8-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 26 13:01:05 crc kubenswrapper[4881]: I0126 13:01:05.074557 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29490541-c6h96" event={"ID":"2c2380fd-0233-4942-8e8a-433cc3b15925","Type":"ContainerStarted","Data":"119c6b1ba63b18998cc7087dbf56ea3921e9c455032dd2dd37eb01f747a142eb"}
Jan 26 13:01:05 crc kubenswrapper[4881]: I0126 13:01:05.074600 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29490541-c6h96" event={"ID":"2c2380fd-0233-4942-8e8a-433cc3b15925","Type":"ContainerStarted","Data":"211dce3a837f318265cc73bd9644da9d9c80463da7fe32d08a275ebb57aa48d8"}
Jan 26 13:01:05 crc kubenswrapper[4881]: I0126 13:01:05.085172 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-55f986558f-qqwqs" event={"ID":"4b3ea251-a4e4-4e4d-a21f-a239f80690e1","Type":"ContainerStarted","Data":"aab133ee3076554192d5f3866f068191ab89a56694d7ecb7c9e7adb1ef038b27"}
Jan 26 13:01:05 crc kubenswrapper[4881]: I0126 13:01:05.085211 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-55f986558f-qqwqs" event={"ID":"4b3ea251-a4e4-4e4d-a21f-a239f80690e1","Type":"ContainerStarted","Data":"e5c9d80b39c7ddc0588f4ea258592cd5f243aa9fef3453a56788587dc444cd63"}
Jan 26 13:01:05 crc kubenswrapper[4881]: I0126 13:01:05.085226 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-55f986558f-qqwqs" event={"ID":"4b3ea251-a4e4-4e4d-a21f-a239f80690e1","Type":"ContainerStarted","Data":"2640955f629bf044288c70759cc24b6c2a2378a761fe2078eca0431876c88fc1"}
Jan 26 13:01:05 crc kubenswrapper[4881]: I0126 13:01:05.086223 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-55f986558f-qqwqs"
Jan 26 13:01:05 crc kubenswrapper[4881]: I0126 13:01:05.086257 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-55f986558f-qqwqs"
Jan 26 13:01:05 crc kubenswrapper[4881]: I0126 13:01:05.091233 4881 generic.go:334] "Generic (PLEG): container finished" podID="52c77e6a-c529-4730-b519-66fb42f88ae8" containerID="b1b39d8b2f35ca9389cf49410a2483a78ac5beb27d9d58770e17a9eea4a86958" exitCode=0
Jan 26 13:01:05 crc kubenswrapper[4881]: I0126 13:01:05.091295 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"52c77e6a-c529-4730-b519-66fb42f88ae8","Type":"ContainerDied","Data":"b1b39d8b2f35ca9389cf49410a2483a78ac5beb27d9d58770e17a9eea4a86958"}
Jan 26 13:01:05 crc kubenswrapper[4881]: I0126 13:01:05.091312 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"52c77e6a-c529-4730-b519-66fb42f88ae8","Type":"ContainerDied","Data":"40e680dd312d82ab8d1dfdaf4c94b36b012d9884f00abc97f1cc532d1b1efad5"}
Jan 26 13:01:05 crc kubenswrapper[4881]: I0126 13:01:05.091328 4881 scope.go:117] "RemoveContainer" containerID="b1b39d8b2f35ca9389cf49410a2483a78ac5beb27d9d58770e17a9eea4a86958"
Jan 26 13:01:05 crc kubenswrapper[4881]: I0126 13:01:05.091348 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 26 13:01:05 crc kubenswrapper[4881]: I0126 13:01:05.092428 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29490541-c6h96" podStartSLOduration=5.092411711 podStartE2EDuration="5.092411711s" podCreationTimestamp="2026-01-26 13:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 13:01:05.087243596 +0000 UTC m=+1537.566553622" watchObservedRunningTime="2026-01-26 13:01:05.092411711 +0000 UTC m=+1537.571721737"
Jan 26 13:01:05 crc kubenswrapper[4881]: I0126 13:01:05.094308 4881 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc"
Jan 26 13:01:05 crc kubenswrapper[4881]: I0126 13:01:05.095338 4881 generic.go:334] "Generic (PLEG): container finished" podID="c0866d3c-259b-4ac0-a8e3-79d9da065d7b" containerID="26da4adceccb321e69d4fa0372331e812903bba7f08ebf3026b578a05be23636" exitCode=0
Jan 26 13:01:05 crc kubenswrapper[4881]: I0126 13:01:05.095419 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c0866d3c-259b-4ac0-a8e3-79d9da065d7b","Type":"ContainerDied","Data":"26da4adceccb321e69d4fa0372331e812903bba7f08ebf3026b578a05be23636"}
Jan 26 13:01:05 crc kubenswrapper[4881]: I0126 13:01:05.116551 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-55f986558f-qqwqs" podStartSLOduration=10.116531274 podStartE2EDuration="10.116531274s" podCreationTimestamp="2026-01-26 13:00:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 13:01:05.10393448 +0000 UTC m=+1537.583244496" watchObservedRunningTime="2026-01-26 13:01:05.116531274 +0000 UTC m=+1537.595841300"
Jan 26 13:01:05 crc kubenswrapper[4881]: I0126 13:01:05.157973 4881 scope.go:117] "RemoveContainer" containerID="be720bbf4a81b5b99c989f5e60b74962c38a15117f578a4fdee47725647ca9f5"
Jan 26 13:01:05 crc kubenswrapper[4881]: I0126 13:01:05.161589 4881 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\""
Jan 26 13:01:05 crc kubenswrapper[4881]: I0126 13:01:05.173084 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 26 13:01:05 crc kubenswrapper[4881]: I0126 13:01:05.193349 4881 scope.go:117] "RemoveContainer" containerID="b1b39d8b2f35ca9389cf49410a2483a78ac5beb27d9d58770e17a9eea4a86958"
Jan 26 13:01:05 crc kubenswrapper[4881]: E0126 13:01:05.193755 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b1b39d8b2f35ca9389cf49410a2483a78ac5beb27d9d58770e17a9eea4a86958\": container with ID starting with b1b39d8b2f35ca9389cf49410a2483a78ac5beb27d9d58770e17a9eea4a86958 not found: ID does not exist" containerID="b1b39d8b2f35ca9389cf49410a2483a78ac5beb27d9d58770e17a9eea4a86958"
Jan 26 13:01:05 crc kubenswrapper[4881]: I0126 13:01:05.193780 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1b39d8b2f35ca9389cf49410a2483a78ac5beb27d9d58770e17a9eea4a86958"} err="failed to get container status \"b1b39d8b2f35ca9389cf49410a2483a78ac5beb27d9d58770e17a9eea4a86958\": rpc error: code = NotFound desc = could not find container \"b1b39d8b2f35ca9389cf49410a2483a78ac5beb27d9d58770e17a9eea4a86958\": container with ID starting with b1b39d8b2f35ca9389cf49410a2483a78ac5beb27d9d58770e17a9eea4a86958 not found: ID does not exist"
Jan 26 13:01:05 crc kubenswrapper[4881]: I0126 13:01:05.193801 4881 scope.go:117] "RemoveContainer" containerID="be720bbf4a81b5b99c989f5e60b74962c38a15117f578a4fdee47725647ca9f5"
Jan 26 13:01:05 crc kubenswrapper[4881]: E0126 13:01:05.194634 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be720bbf4a81b5b99c989f5e60b74962c38a15117f578a4fdee47725647ca9f5\": container with ID starting with be720bbf4a81b5b99c989f5e60b74962c38a15117f578a4fdee47725647ca9f5 not found: ID does not exist" containerID="be720bbf4a81b5b99c989f5e60b74962c38a15117f578a4fdee47725647ca9f5"
Jan 26 13:01:05 crc kubenswrapper[4881]: I0126 13:01:05.194657 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be720bbf4a81b5b99c989f5e60b74962c38a15117f578a4fdee47725647ca9f5"} err="failed to get container status \"be720bbf4a81b5b99c989f5e60b74962c38a15117f578a4fdee47725647ca9f5\": rpc error: code = NotFound desc = could not find container \"be720bbf4a81b5b99c989f5e60b74962c38a15117f578a4fdee47725647ca9f5\": container with ID starting with be720bbf4a81b5b99c989f5e60b74962c38a15117f578a4fdee47725647ca9f5 not found: ID does not exist"
Jan 26 13:01:05 crc kubenswrapper[4881]: I0126 13:01:05.223583 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 26 13:01:05 crc kubenswrapper[4881]: I0126 13:01:05.235730 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 26 13:01:05 crc kubenswrapper[4881]: E0126 13:01:05.236214 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52c77e6a-c529-4730-b519-66fb42f88ae8" containerName="glance-log"
Jan 26 13:01:05 crc kubenswrapper[4881]: I0126 13:01:05.236225 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="52c77e6a-c529-4730-b519-66fb42f88ae8" containerName="glance-log"
Jan 26 13:01:05 crc kubenswrapper[4881]: E0126 13:01:05.236238 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52c77e6a-c529-4730-b519-66fb42f88ae8" containerName="glance-httpd"
Jan 26 13:01:05 crc kubenswrapper[4881]: I0126 13:01:05.236244 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="52c77e6a-c529-4730-b519-66fb42f88ae8" containerName="glance-httpd"
Jan 26 13:01:05 crc kubenswrapper[4881]: I0126 13:01:05.236409 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="52c77e6a-c529-4730-b519-66fb42f88ae8" containerName="glance-httpd"
Jan 26 13:01:05 crc kubenswrapper[4881]: I0126 13:01:05.236425 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="52c77e6a-c529-4730-b519-66fb42f88ae8" containerName="glance-log"
Jan 26 13:01:05 crc kubenswrapper[4881]: I0126 13:01:05.238948 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 26 13:01:05 crc kubenswrapper[4881]: I0126 13:01:05.245122 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc"
Jan 26 13:01:05 crc kubenswrapper[4881]: I0126 13:01:05.245530 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Jan 26 13:01:05 crc kubenswrapper[4881]: I0126 13:01:05.251121 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 26 13:01:05 crc kubenswrapper[4881]: I0126 13:01:05.311118 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 26 13:01:05 crc kubenswrapper[4881]: I0126 13:01:05.368847 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82a0349b-c23f-4b1d-991e-f738d5c1ecee-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"82a0349b-c23f-4b1d-991e-f738d5c1ecee\") " pod="openstack/glance-default-internal-api-0"
Jan 26 13:01:05 crc kubenswrapper[4881]: I0126 13:01:05.368936 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/82a0349b-c23f-4b1d-991e-f738d5c1ecee-logs\") pod \"glance-default-internal-api-0\" (UID: \"82a0349b-c23f-4b1d-991e-f738d5c1ecee\") " pod="openstack/glance-default-internal-api-0"
Jan 26 13:01:05 crc kubenswrapper[4881]: I0126 13:01:05.368989 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"82a0349b-c23f-4b1d-991e-f738d5c1ecee\") " pod="openstack/glance-default-internal-api-0"
Jan 26 13:01:05 crc kubenswrapper[4881]: I0126 13:01:05.369016 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-265sx\" (UniqueName: \"kubernetes.io/projected/82a0349b-c23f-4b1d-991e-f738d5c1ecee-kube-api-access-265sx\") pod \"glance-default-internal-api-0\" (UID: \"82a0349b-c23f-4b1d-991e-f738d5c1ecee\") " pod="openstack/glance-default-internal-api-0"
Jan 26 13:01:05 crc kubenswrapper[4881]: I0126 13:01:05.369036 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/82a0349b-c23f-4b1d-991e-f738d5c1ecee-scripts\") pod \"glance-default-internal-api-0\" (UID: \"82a0349b-c23f-4b1d-991e-f738d5c1ecee\") " pod="openstack/glance-default-internal-api-0"
Jan 26 13:01:05 crc kubenswrapper[4881]: I0126 13:01:05.369053 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/82a0349b-c23f-4b1d-991e-f738d5c1ecee-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"82a0349b-c23f-4b1d-991e-f738d5c1ecee\") " pod="openstack/glance-default-internal-api-0"
Jan 26 13:01:05 crc kubenswrapper[4881]: I0126 13:01:05.369098 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82a0349b-c23f-4b1d-991e-f738d5c1ecee-config-data\") pod \"glance-default-internal-api-0\" (UID: \"82a0349b-c23f-4b1d-991e-f738d5c1ecee\") " pod="openstack/glance-default-internal-api-0"
Jan 26 13:01:05 crc kubenswrapper[4881]: I0126 13:01:05.369124 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/82a0349b-c23f-4b1d-991e-f738d5c1ecee-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"82a0349b-c23f-4b1d-991e-f738d5c1ecee\") " pod="openstack/glance-default-internal-api-0"
Jan 26 13:01:05 crc kubenswrapper[4881]: I0126 13:01:05.470573 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0866d3c-259b-4ac0-a8e3-79d9da065d7b-config-data\") pod \"c0866d3c-259b-4ac0-a8e3-79d9da065d7b\" (UID: \"c0866d3c-259b-4ac0-a8e3-79d9da065d7b\") "
Jan 26 13:01:05 crc kubenswrapper[4881]: I0126 13:01:05.470609 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"c0866d3c-259b-4ac0-a8e3-79d9da065d7b\" (UID: \"c0866d3c-259b-4ac0-a8e3-79d9da065d7b\") "
Jan 26 13:01:05 crc kubenswrapper[4881]: I0126 13:01:05.470666 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c0866d3c-259b-4ac0-a8e3-79d9da065d7b-logs\") pod \"c0866d3c-259b-4ac0-a8e3-79d9da065d7b\" (UID: \"c0866d3c-259b-4ac0-a8e3-79d9da065d7b\") "
Jan 26 13:01:05 crc kubenswrapper[4881]: I0126 13:01:05.470690 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0866d3c-259b-4ac0-a8e3-79d9da065d7b-combined-ca-bundle\") pod \"c0866d3c-259b-4ac0-a8e3-79d9da065d7b\" (UID: \"c0866d3c-259b-4ac0-a8e3-79d9da065d7b\") "
Jan 26 13:01:05 crc kubenswrapper[4881]: I0126 13:01:05.470761 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c0866d3c-259b-4ac0-a8e3-79d9da065d7b-scripts\") pod \"c0866d3c-259b-4ac0-a8e3-79d9da065d7b\" (UID: \"c0866d3c-259b-4ac0-a8e3-79d9da065d7b\") "
Jan 26 13:01:05 crc kubenswrapper[4881]: I0126 13:01:05.470839 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ppb9k\" (UniqueName: \"kubernetes.io/projected/c0866d3c-259b-4ac0-a8e3-79d9da065d7b-kube-api-access-ppb9k\") pod \"c0866d3c-259b-4ac0-a8e3-79d9da065d7b\" (UID: \"c0866d3c-259b-4ac0-a8e3-79d9da065d7b\") "
Jan 26 13:01:05 crc kubenswrapper[4881]: I0126 13:01:05.470905 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c0866d3c-259b-4ac0-a8e3-79d9da065d7b-httpd-run\") pod \"c0866d3c-259b-4ac0-a8e3-79d9da065d7b\" (UID: \"c0866d3c-259b-4ac0-a8e3-79d9da065d7b\") "
Jan 26 13:01:05 crc kubenswrapper[4881]: I0126 13:01:05.470972 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c0866d3c-259b-4ac0-a8e3-79d9da065d7b-public-tls-certs\") pod \"c0866d3c-259b-4ac0-a8e3-79d9da065d7b\" (UID: \"c0866d3c-259b-4ac0-a8e3-79d9da065d7b\") "
Jan 26 13:01:05 crc kubenswrapper[4881]: I0126 13:01:05.471217 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-265sx\" (UniqueName: \"kubernetes.io/projected/82a0349b-c23f-4b1d-991e-f738d5c1ecee-kube-api-access-265sx\") pod \"glance-default-internal-api-0\" (UID: \"82a0349b-c23f-4b1d-991e-f738d5c1ecee\") " pod="openstack/glance-default-internal-api-0"
Jan 26 13:01:05 crc kubenswrapper[4881]: I0126 13:01:05.471249 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/82a0349b-c23f-4b1d-991e-f738d5c1ecee-scripts\") pod \"glance-default-internal-api-0\" (UID: \"82a0349b-c23f-4b1d-991e-f738d5c1ecee\") " pod="openstack/glance-default-internal-api-0"
Jan 26 13:01:05 crc kubenswrapper[4881]: I0126 13:01:05.471270 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/82a0349b-c23f-4b1d-991e-f738d5c1ecee-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"82a0349b-c23f-4b1d-991e-f738d5c1ecee\") " pod="openstack/glance-default-internal-api-0"
Jan 26 13:01:05 crc kubenswrapper[4881]: I0126 13:01:05.471318 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82a0349b-c23f-4b1d-991e-f738d5c1ecee-config-data\") pod \"glance-default-internal-api-0\" (UID: \"82a0349b-c23f-4b1d-991e-f738d5c1ecee\") " pod="openstack/glance-default-internal-api-0"
Jan 26 13:01:05 crc kubenswrapper[4881]: I0126 13:01:05.471343 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/82a0349b-c23f-4b1d-991e-f738d5c1ecee-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"82a0349b-c23f-4b1d-991e-f738d5c1ecee\") " pod="openstack/glance-default-internal-api-0"
Jan 26 13:01:05 crc kubenswrapper[4881]: I0126 13:01:05.471366 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82a0349b-c23f-4b1d-991e-f738d5c1ecee-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"82a0349b-c23f-4b1d-991e-f738d5c1ecee\") " pod="openstack/glance-default-internal-api-0"
Jan 26 13:01:05 crc kubenswrapper[4881]: I0126 13:01:05.471425 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/82a0349b-c23f-4b1d-991e-f738d5c1ecee-logs\") pod \"glance-default-internal-api-0\" (UID: \"82a0349b-c23f-4b1d-991e-f738d5c1ecee\") " pod="openstack/glance-default-internal-api-0"
Jan 26 13:01:05 crc kubenswrapper[4881]: I0126 13:01:05.471474 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"82a0349b-c23f-4b1d-991e-f738d5c1ecee\") " pod="openstack/glance-default-internal-api-0"
Jan 26 13:01:05 crc kubenswrapper[4881]: I0126 13:01:05.471615 4881 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"82a0349b-c23f-4b1d-991e-f738d5c1ecee\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/glance-default-internal-api-0"
Jan 26 13:01:05 crc kubenswrapper[4881]: I0126 13:01:05.473111 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/82a0349b-c23f-4b1d-991e-f738d5c1ecee-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"82a0349b-c23f-4b1d-991e-f738d5c1ecee\") " pod="openstack/glance-default-internal-api-0"
Jan 26 13:01:05 crc kubenswrapper[4881]: I0126 13:01:05.473422 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/82a0349b-c23f-4b1d-991e-f738d5c1ecee-logs\") pod \"glance-default-internal-api-0\" (UID: \"82a0349b-c23f-4b1d-991e-f738d5c1ecee\") " pod="openstack/glance-default-internal-api-0"
crc kubenswrapper[4881]: I0126 13:01:05.473422 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/82a0349b-c23f-4b1d-991e-f738d5c1ecee-logs\") pod \"glance-default-internal-api-0\" (UID: \"82a0349b-c23f-4b1d-991e-f738d5c1ecee\") " pod="openstack/glance-default-internal-api-0" Jan 26 13:01:05 crc kubenswrapper[4881]: I0126 13:01:05.475752 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c0866d3c-259b-4ac0-a8e3-79d9da065d7b-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "c0866d3c-259b-4ac0-a8e3-79d9da065d7b" (UID: "c0866d3c-259b-4ac0-a8e3-79d9da065d7b"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:01:05 crc kubenswrapper[4881]: I0126 13:01:05.476865 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c0866d3c-259b-4ac0-a8e3-79d9da065d7b-logs" (OuterVolumeSpecName: "logs") pod "c0866d3c-259b-4ac0-a8e3-79d9da065d7b" (UID: "c0866d3c-259b-4ac0-a8e3-79d9da065d7b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:01:05 crc kubenswrapper[4881]: I0126 13:01:05.485335 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance") pod "c0866d3c-259b-4ac0-a8e3-79d9da065d7b" (UID: "c0866d3c-259b-4ac0-a8e3-79d9da065d7b"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 26 13:01:05 crc kubenswrapper[4881]: I0126 13:01:05.486042 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/82a0349b-c23f-4b1d-991e-f738d5c1ecee-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"82a0349b-c23f-4b1d-991e-f738d5c1ecee\") " pod="openstack/glance-default-internal-api-0" Jan 26 13:01:05 crc kubenswrapper[4881]: I0126 13:01:05.499693 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-265sx\" (UniqueName: \"kubernetes.io/projected/82a0349b-c23f-4b1d-991e-f738d5c1ecee-kube-api-access-265sx\") pod \"glance-default-internal-api-0\" (UID: \"82a0349b-c23f-4b1d-991e-f738d5c1ecee\") " pod="openstack/glance-default-internal-api-0" Jan 26 13:01:05 crc kubenswrapper[4881]: I0126 13:01:05.500084 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82a0349b-c23f-4b1d-991e-f738d5c1ecee-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"82a0349b-c23f-4b1d-991e-f738d5c1ecee\") " pod="openstack/glance-default-internal-api-0" Jan 26 13:01:05 crc kubenswrapper[4881]: I0126 13:01:05.502090 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82a0349b-c23f-4b1d-991e-f738d5c1ecee-config-data\") pod \"glance-default-internal-api-0\" (UID: \"82a0349b-c23f-4b1d-991e-f738d5c1ecee\") " pod="openstack/glance-default-internal-api-0" Jan 26 13:01:05 crc kubenswrapper[4881]: I0126 13:01:05.504078 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0866d3c-259b-4ac0-a8e3-79d9da065d7b-scripts" (OuterVolumeSpecName: "scripts") pod "c0866d3c-259b-4ac0-a8e3-79d9da065d7b" (UID: "c0866d3c-259b-4ac0-a8e3-79d9da065d7b"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:01:05 crc kubenswrapper[4881]: I0126 13:01:05.507072 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0866d3c-259b-4ac0-a8e3-79d9da065d7b-kube-api-access-ppb9k" (OuterVolumeSpecName: "kube-api-access-ppb9k") pod "c0866d3c-259b-4ac0-a8e3-79d9da065d7b" (UID: "c0866d3c-259b-4ac0-a8e3-79d9da065d7b"). InnerVolumeSpecName "kube-api-access-ppb9k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:01:05 crc kubenswrapper[4881]: I0126 13:01:05.541101 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/82a0349b-c23f-4b1d-991e-f738d5c1ecee-scripts\") pod \"glance-default-internal-api-0\" (UID: \"82a0349b-c23f-4b1d-991e-f738d5c1ecee\") " pod="openstack/glance-default-internal-api-0" Jan 26 13:01:05 crc kubenswrapper[4881]: I0126 13:01:05.542350 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 13:01:05 crc kubenswrapper[4881]: I0126 13:01:05.573080 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0866d3c-259b-4ac0-a8e3-79d9da065d7b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c0866d3c-259b-4ac0-a8e3-79d9da065d7b" (UID: "c0866d3c-259b-4ac0-a8e3-79d9da065d7b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:01:05 crc kubenswrapper[4881]: I0126 13:01:05.573324 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0866d3c-259b-4ac0-a8e3-79d9da065d7b-combined-ca-bundle\") pod \"c0866d3c-259b-4ac0-a8e3-79d9da065d7b\" (UID: \"c0866d3c-259b-4ac0-a8e3-79d9da065d7b\") " Jan 26 13:01:05 crc kubenswrapper[4881]: W0126 13:01:05.573656 4881 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/c0866d3c-259b-4ac0-a8e3-79d9da065d7b/volumes/kubernetes.io~secret/combined-ca-bundle Jan 26 13:01:05 crc kubenswrapper[4881]: I0126 13:01:05.573705 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0866d3c-259b-4ac0-a8e3-79d9da065d7b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c0866d3c-259b-4ac0-a8e3-79d9da065d7b" (UID: "c0866d3c-259b-4ac0-a8e3-79d9da065d7b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:01:05 crc kubenswrapper[4881]: I0126 13:01:05.574933 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ppb9k\" (UniqueName: \"kubernetes.io/projected/c0866d3c-259b-4ac0-a8e3-79d9da065d7b-kube-api-access-ppb9k\") on node \"crc\" DevicePath \"\"" Jan 26 13:01:05 crc kubenswrapper[4881]: I0126 13:01:05.575015 4881 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c0866d3c-259b-4ac0-a8e3-79d9da065d7b-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 26 13:01:05 crc kubenswrapper[4881]: I0126 13:01:05.575090 4881 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Jan 26 13:01:05 crc kubenswrapper[4881]: I0126 13:01:05.575145 4881 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c0866d3c-259b-4ac0-a8e3-79d9da065d7b-logs\") on node \"crc\" DevicePath \"\"" Jan 26 13:01:05 crc kubenswrapper[4881]: I0126 13:01:05.575215 4881 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0866d3c-259b-4ac0-a8e3-79d9da065d7b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 13:01:05 crc kubenswrapper[4881]: I0126 13:01:05.575280 4881 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c0866d3c-259b-4ac0-a8e3-79d9da065d7b-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 13:01:05 crc kubenswrapper[4881]: I0126 13:01:05.593734 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"82a0349b-c23f-4b1d-991e-f738d5c1ecee\") " pod="openstack/glance-default-internal-api-0" Jan 26 13:01:05 crc kubenswrapper[4881]: I0126 13:01:05.609292 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0866d3c-259b-4ac0-a8e3-79d9da065d7b-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "c0866d3c-259b-4ac0-a8e3-79d9da065d7b" (UID: "c0866d3c-259b-4ac0-a8e3-79d9da065d7b"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:01:05 crc kubenswrapper[4881]: I0126 13:01:05.613206 4881 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Jan 26 13:01:05 crc kubenswrapper[4881]: I0126 13:01:05.622685 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0866d3c-259b-4ac0-a8e3-79d9da065d7b-config-data" (OuterVolumeSpecName: "config-data") pod "c0866d3c-259b-4ac0-a8e3-79d9da065d7b" (UID: "c0866d3c-259b-4ac0-a8e3-79d9da065d7b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:01:05 crc kubenswrapper[4881]: I0126 13:01:05.677604 4881 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c0866d3c-259b-4ac0-a8e3-79d9da065d7b-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 13:01:05 crc kubenswrapper[4881]: I0126 13:01:05.677638 4881 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0866d3c-259b-4ac0-a8e3-79d9da065d7b-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 13:01:05 crc kubenswrapper[4881]: I0126 13:01:05.677649 4881 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Jan 26 13:01:05 crc kubenswrapper[4881]: I0126 13:01:05.869112 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 26 13:01:06 crc kubenswrapper[4881]: I0126 13:01:06.100827 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52c77e6a-c529-4730-b519-66fb42f88ae8" path="/var/lib/kubelet/pods/52c77e6a-c529-4730-b519-66fb42f88ae8/volumes" Jan 26 13:01:06 crc kubenswrapper[4881]: I0126 13:01:06.101778 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9df4625b-a11a-4524-890a-48e2307edddb" path="/var/lib/kubelet/pods/9df4625b-a11a-4524-890a-48e2307edddb/volumes" Jan 26 13:01:06 crc kubenswrapper[4881]: I0126 13:01:06.121008 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c41995a8-1c28-4783-849e-868a97831825","Type":"ContainerStarted","Data":"e78c5c09f6eb44a7a38bc3c2bcbb10090e7dc8ec372cf0c6ec049235e536a71e"} Jan 26 13:01:06 crc kubenswrapper[4881]: I0126 13:01:06.121060 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c41995a8-1c28-4783-849e-868a97831825","Type":"ContainerStarted","Data":"16f0a6b74070d623cb7d58ff45c1ec1e0c428d5322fe7a5f0c55b38b609ed3ff"} Jan 26 13:01:06 crc kubenswrapper[4881]: I0126 13:01:06.121471 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c41995a8-1c28-4783-849e-868a97831825","Type":"ContainerStarted","Data":"25b4c13a203486f206e5a75b97eae61956af0f70f886b7e839cf983b0ba264fc"} Jan 26 13:01:06 crc kubenswrapper[4881]: I0126 13:01:06.124497 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 26 13:01:06 crc kubenswrapper[4881]: I0126 13:01:06.125220 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c0866d3c-259b-4ac0-a8e3-79d9da065d7b","Type":"ContainerDied","Data":"1fae2456b29c42506310e0e66e9991fac9ccce2ab6840a5264807124e8c4e3f3"} Jan 26 13:01:06 crc kubenswrapper[4881]: I0126 13:01:06.125255 4881 scope.go:117] "RemoveContainer" containerID="26da4adceccb321e69d4fa0372331e812903bba7f08ebf3026b578a05be23636" Jan 26 13:01:06 crc kubenswrapper[4881]: I0126 13:01:06.153014 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 13:01:06 crc kubenswrapper[4881]: I0126 13:01:06.166957 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 13:01:06 crc kubenswrapper[4881]: I0126 13:01:06.173778 4881 scope.go:117] "RemoveContainer" containerID="e83d5a9cb8bdfbd6e365b8ea46f86ac0834611212f168b77f75938bb384bd2b0" Jan 26 13:01:06 crc kubenswrapper[4881]: I0126 13:01:06.180104 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 13:01:06 crc kubenswrapper[4881]: E0126 13:01:06.180647 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0866d3c-259b-4ac0-a8e3-79d9da065d7b" containerName="glance-log" Jan 26 13:01:06 crc kubenswrapper[4881]: I0126 13:01:06.180672 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0866d3c-259b-4ac0-a8e3-79d9da065d7b" containerName="glance-log" Jan 26 13:01:06 crc kubenswrapper[4881]: E0126 13:01:06.180695 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0866d3c-259b-4ac0-a8e3-79d9da065d7b" containerName="glance-httpd" Jan 26 13:01:06 crc kubenswrapper[4881]: I0126 13:01:06.180705 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0866d3c-259b-4ac0-a8e3-79d9da065d7b" containerName="glance-httpd" Jan 26 13:01:06 crc kubenswrapper[4881]: I0126 13:01:06.180948 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0866d3c-259b-4ac0-a8e3-79d9da065d7b" containerName="glance-log" Jan 26 13:01:06 crc kubenswrapper[4881]: I0126 13:01:06.180976 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0866d3c-259b-4ac0-a8e3-79d9da065d7b" containerName="glance-httpd" Jan 26 13:01:06 crc kubenswrapper[4881]: I0126 13:01:06.182280 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 26 13:01:06 crc kubenswrapper[4881]: I0126 13:01:06.185637 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 26 13:01:06 crc kubenswrapper[4881]: I0126 13:01:06.185937 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 26 13:01:06 crc kubenswrapper[4881]: I0126 13:01:06.189678 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 13:01:06 crc kubenswrapper[4881]: I0126 13:01:06.298715 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"cbfc472b-0aa5-4053-88cb-6efd65de5e79\") " pod="openstack/glance-default-external-api-0" Jan 26 13:01:06 crc kubenswrapper[4881]: I0126 13:01:06.298851 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cbfc472b-0aa5-4053-88cb-6efd65de5e79-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"cbfc472b-0aa5-4053-88cb-6efd65de5e79\") " pod="openstack/glance-default-external-api-0" Jan 26 13:01:06 crc kubenswrapper[4881]: I0126 13:01:06.298890 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kqkm\" (UniqueName: \"kubernetes.io/projected/cbfc472b-0aa5-4053-88cb-6efd65de5e79-kube-api-access-5kqkm\") pod \"glance-default-external-api-0\" (UID: \"cbfc472b-0aa5-4053-88cb-6efd65de5e79\") " pod="openstack/glance-default-external-api-0" Jan 26 13:01:06 crc kubenswrapper[4881]: I0126 13:01:06.298925 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbfc472b-0aa5-4053-88cb-6efd65de5e79-config-data\") pod \"glance-default-external-api-0\" (UID: \"cbfc472b-0aa5-4053-88cb-6efd65de5e79\") " pod="openstack/glance-default-external-api-0" Jan 26 13:01:06 crc kubenswrapper[4881]: I0126 13:01:06.298952 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cbfc472b-0aa5-4053-88cb-6efd65de5e79-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"cbfc472b-0aa5-4053-88cb-6efd65de5e79\") " pod="openstack/glance-default-external-api-0" Jan 26 13:01:06 crc kubenswrapper[4881]: I0126 13:01:06.299005 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cbfc472b-0aa5-4053-88cb-6efd65de5e79-scripts\") pod \"glance-default-external-api-0\" (UID: \"cbfc472b-0aa5-4053-88cb-6efd65de5e79\") " pod="openstack/glance-default-external-api-0" Jan 26 13:01:06 crc kubenswrapper[4881]: I0126 13:01:06.299032 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbfc472b-0aa5-4053-88cb-6efd65de5e79-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"cbfc472b-0aa5-4053-88cb-6efd65de5e79\") " pod="openstack/glance-default-external-api-0" Jan 26 13:01:06 crc kubenswrapper[4881]: I0126 13:01:06.299070 4881 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cbfc472b-0aa5-4053-88cb-6efd65de5e79-logs\") pod \"glance-default-external-api-0\" (UID: \"cbfc472b-0aa5-4053-88cb-6efd65de5e79\") " pod="openstack/glance-default-external-api-0" Jan 26 13:01:06 crc kubenswrapper[4881]: I0126 13:01:06.400831 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbfc472b-0aa5-4053-88cb-6efd65de5e79-config-data\") pod \"glance-default-external-api-0\" (UID: \"cbfc472b-0aa5-4053-88cb-6efd65de5e79\") " pod="openstack/glance-default-external-api-0" Jan 26 13:01:06 crc kubenswrapper[4881]: I0126 13:01:06.400890 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cbfc472b-0aa5-4053-88cb-6efd65de5e79-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"cbfc472b-0aa5-4053-88cb-6efd65de5e79\") " pod="openstack/glance-default-external-api-0" Jan 26 13:01:06 crc kubenswrapper[4881]: I0126 13:01:06.400928 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cbfc472b-0aa5-4053-88cb-6efd65de5e79-scripts\") pod \"glance-default-external-api-0\" (UID: \"cbfc472b-0aa5-4053-88cb-6efd65de5e79\") " pod="openstack/glance-default-external-api-0" Jan 26 13:01:06 crc kubenswrapper[4881]: I0126 13:01:06.400955 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbfc472b-0aa5-4053-88cb-6efd65de5e79-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"cbfc472b-0aa5-4053-88cb-6efd65de5e79\") " pod="openstack/glance-default-external-api-0" Jan 26 13:01:06 crc kubenswrapper[4881]: I0126 13:01:06.400984 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cbfc472b-0aa5-4053-88cb-6efd65de5e79-logs\") pod \"glance-default-external-api-0\" (UID: \"cbfc472b-0aa5-4053-88cb-6efd65de5e79\") " pod="openstack/glance-default-external-api-0" Jan 26 13:01:06 crc kubenswrapper[4881]: I0126 13:01:06.401021 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"cbfc472b-0aa5-4053-88cb-6efd65de5e79\") " pod="openstack/glance-default-external-api-0" Jan 26 13:01:06 crc kubenswrapper[4881]: I0126 13:01:06.401099 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cbfc472b-0aa5-4053-88cb-6efd65de5e79-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"cbfc472b-0aa5-4053-88cb-6efd65de5e79\") " pod="openstack/glance-default-external-api-0" Jan 26 13:01:06 crc kubenswrapper[4881]: I0126 13:01:06.401122 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5kqkm\" (UniqueName: \"kubernetes.io/projected/cbfc472b-0aa5-4053-88cb-6efd65de5e79-kube-api-access-5kqkm\") pod \"glance-default-external-api-0\" (UID: \"cbfc472b-0aa5-4053-88cb-6efd65de5e79\") " pod="openstack/glance-default-external-api-0" Jan 26 13:01:06 crc kubenswrapper[4881]: I0126 13:01:06.401586 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/cbfc472b-0aa5-4053-88cb-6efd65de5e79-logs\") pod \"glance-default-external-api-0\" (UID: \"cbfc472b-0aa5-4053-88cb-6efd65de5e79\") " pod="openstack/glance-default-external-api-0" Jan 26 13:01:06 crc kubenswrapper[4881]: I0126 13:01:06.401647 4881 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"cbfc472b-0aa5-4053-88cb-6efd65de5e79\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-external-api-0" Jan 26 13:01:06 crc kubenswrapper[4881]: I0126 13:01:06.401779 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cbfc472b-0aa5-4053-88cb-6efd65de5e79-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"cbfc472b-0aa5-4053-88cb-6efd65de5e79\") " pod="openstack/glance-default-external-api-0" Jan 26 13:01:06 crc kubenswrapper[4881]: I0126 13:01:06.416212 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cbfc472b-0aa5-4053-88cb-6efd65de5e79-scripts\") pod \"glance-default-external-api-0\" (UID: \"cbfc472b-0aa5-4053-88cb-6efd65de5e79\") " pod="openstack/glance-default-external-api-0" Jan 26 13:01:06 crc kubenswrapper[4881]: I0126 13:01:06.418644 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cbfc472b-0aa5-4053-88cb-6efd65de5e79-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"cbfc472b-0aa5-4053-88cb-6efd65de5e79\") " pod="openstack/glance-default-external-api-0" Jan 26 13:01:06 crc kubenswrapper[4881]: I0126 13:01:06.420728 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbfc472b-0aa5-4053-88cb-6efd65de5e79-config-data\") pod \"glance-default-external-api-0\" (UID: \"cbfc472b-0aa5-4053-88cb-6efd65de5e79\") " pod="openstack/glance-default-external-api-0" Jan 26 13:01:06 crc kubenswrapper[4881]: I0126 13:01:06.421998 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbfc472b-0aa5-4053-88cb-6efd65de5e79-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"cbfc472b-0aa5-4053-88cb-6efd65de5e79\") " pod="openstack/glance-default-external-api-0" Jan 26 13:01:06 crc kubenswrapper[4881]: I0126 13:01:06.456125 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5kqkm\" (UniqueName: \"kubernetes.io/projected/cbfc472b-0aa5-4053-88cb-6efd65de5e79-kube-api-access-5kqkm\") pod \"glance-default-external-api-0\" (UID: \"cbfc472b-0aa5-4053-88cb-6efd65de5e79\") " pod="openstack/glance-default-external-api-0" Jan 26 13:01:06 crc kubenswrapper[4881]: I0126 13:01:06.495415 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"cbfc472b-0aa5-4053-88cb-6efd65de5e79\") " pod="openstack/glance-default-external-api-0" Jan 26 13:01:06 crc kubenswrapper[4881]: I0126 13:01:06.509323 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 26 13:01:06 crc kubenswrapper[4881]: I0126 13:01:06.611275 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 13:01:07 crc kubenswrapper[4881]: I0126 13:01:07.154259 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c41995a8-1c28-4783-849e-868a97831825","Type":"ContainerStarted","Data":"d71980f032d9ff369a541eaee5c3a7853506d699bec36c75ed1c70c132feece3"} Jan 26 13:01:07 crc kubenswrapper[4881]: I0126 13:01:07.156317 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"82a0349b-c23f-4b1d-991e-f738d5c1ecee","Type":"ContainerStarted","Data":"6e74c23df940f32750430488a5af533e5a6129b0c5f9f8b1bfbb3e8ad205c984"} Jan 26 13:01:07 crc kubenswrapper[4881]: I0126 13:01:07.214866 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 13:01:08 crc kubenswrapper[4881]: I0126 13:01:08.101835 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0866d3c-259b-4ac0-a8e3-79d9da065d7b" path="/var/lib/kubelet/pods/c0866d3c-259b-4ac0-a8e3-79d9da065d7b/volumes" Jan 26 13:01:08 crc kubenswrapper[4881]: I0126 13:01:08.196323 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"cbfc472b-0aa5-4053-88cb-6efd65de5e79","Type":"ContainerStarted","Data":"2bf2c8509d38d73d9fe7a0e01507f8233fe0823f31c577d50d18a217cbdb250a"} Jan 26 13:01:08 crc kubenswrapper[4881]: I0126 13:01:08.198147 4881 generic.go:334] "Generic (PLEG): container finished" podID="2c2380fd-0233-4942-8e8a-433cc3b15925" containerID="119c6b1ba63b18998cc7087dbf56ea3921e9c455032dd2dd37eb01f747a142eb" exitCode=0 Jan 26 13:01:08 crc kubenswrapper[4881]: I0126 13:01:08.198194 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29490541-c6h96" event={"ID":"2c2380fd-0233-4942-8e8a-433cc3b15925","Type":"ContainerDied","Data":"119c6b1ba63b18998cc7087dbf56ea3921e9c455032dd2dd37eb01f747a142eb"} Jan 26 13:01:08 crc kubenswrapper[4881]: I0126 13:01:08.200148 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"82a0349b-c23f-4b1d-991e-f738d5c1ecee","Type":"ContainerStarted","Data":"74f2e19c2bcf47e8af0db0191455cd184ccdd8bab8e79c14da3dbcab4346a594"} Jan 26 13:01:09 crc kubenswrapper[4881]: I0126 13:01:09.214935 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"cbfc472b-0aa5-4053-88cb-6efd65de5e79","Type":"ContainerStarted","Data":"0fa12778e8dad557bb357141a76209c24dd46df7143f5f7c46db3faf58e4ec64"} Jan 26 13:01:09 crc kubenswrapper[4881]: I0126 13:01:09.219155 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c41995a8-1c28-4783-849e-868a97831825","Type":"ContainerStarted","Data":"4af2766e6320876b88ac05b575600e702e35926668162121e6167defb7965361"} Jan 26 13:01:09 crc kubenswrapper[4881]: I0126 13:01:09.219205 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 26 13:01:09 crc kubenswrapper[4881]: I0126 13:01:09.238931 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.8946224470000002 podStartE2EDuration="5.238916013s" podCreationTimestamp="2026-01-26 13:01:04 +0000 UTC" 
firstStartedPulling="2026-01-26 13:01:05.551745881 +0000 UTC m=+1538.031055907" lastFinishedPulling="2026-01-26 13:01:08.896039447 +0000 UTC m=+1541.375349473" observedRunningTime="2026-01-26 13:01:09.238036951 +0000 UTC m=+1541.717346977" watchObservedRunningTime="2026-01-26 13:01:09.238916013 +0000 UTC m=+1541.718226039" Jan 26 13:01:09 crc kubenswrapper[4881]: I0126 13:01:09.605140 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29490541-c6h96" Jan 26 13:01:09 crc kubenswrapper[4881]: I0126 13:01:09.776695 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sxb7b\" (UniqueName: \"kubernetes.io/projected/2c2380fd-0233-4942-8e8a-433cc3b15925-kube-api-access-sxb7b\") pod \"2c2380fd-0233-4942-8e8a-433cc3b15925\" (UID: \"2c2380fd-0233-4942-8e8a-433cc3b15925\") " Jan 26 13:01:09 crc kubenswrapper[4881]: I0126 13:01:09.777036 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c2380fd-0233-4942-8e8a-433cc3b15925-combined-ca-bundle\") pod \"2c2380fd-0233-4942-8e8a-433cc3b15925\" (UID: \"2c2380fd-0233-4942-8e8a-433cc3b15925\") " Jan 26 13:01:09 crc kubenswrapper[4881]: I0126 13:01:09.777124 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c2380fd-0233-4942-8e8a-433cc3b15925-config-data\") pod \"2c2380fd-0233-4942-8e8a-433cc3b15925\" (UID: \"2c2380fd-0233-4942-8e8a-433cc3b15925\") " Jan 26 13:01:09 crc kubenswrapper[4881]: I0126 13:01:09.777872 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2c2380fd-0233-4942-8e8a-433cc3b15925-fernet-keys\") pod \"2c2380fd-0233-4942-8e8a-433cc3b15925\" (UID: \"2c2380fd-0233-4942-8e8a-433cc3b15925\") " Jan 26 13:01:09 crc kubenswrapper[4881]: I0126 13:01:09.782575 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c2380fd-0233-4942-8e8a-433cc3b15925-kube-api-access-sxb7b" (OuterVolumeSpecName: "kube-api-access-sxb7b") pod "2c2380fd-0233-4942-8e8a-433cc3b15925" (UID: "2c2380fd-0233-4942-8e8a-433cc3b15925"). InnerVolumeSpecName "kube-api-access-sxb7b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:01:09 crc kubenswrapper[4881]: I0126 13:01:09.783313 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c2380fd-0233-4942-8e8a-433cc3b15925-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "2c2380fd-0233-4942-8e8a-433cc3b15925" (UID: "2c2380fd-0233-4942-8e8a-433cc3b15925"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:01:09 crc kubenswrapper[4881]: I0126 13:01:09.824617 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c2380fd-0233-4942-8e8a-433cc3b15925-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2c2380fd-0233-4942-8e8a-433cc3b15925" (UID: "2c2380fd-0233-4942-8e8a-433cc3b15925"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:01:09 crc kubenswrapper[4881]: I0126 13:01:09.861970 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c2380fd-0233-4942-8e8a-433cc3b15925-config-data" (OuterVolumeSpecName: "config-data") pod "2c2380fd-0233-4942-8e8a-433cc3b15925" (UID: "2c2380fd-0233-4942-8e8a-433cc3b15925"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:01:09 crc kubenswrapper[4881]: I0126 13:01:09.880203 4881 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c2380fd-0233-4942-8e8a-433cc3b15925-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 13:01:09 crc kubenswrapper[4881]: I0126 13:01:09.880242 4881 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2c2380fd-0233-4942-8e8a-433cc3b15925-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 26 13:01:09 crc kubenswrapper[4881]: I0126 13:01:09.880255 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sxb7b\" (UniqueName: \"kubernetes.io/projected/2c2380fd-0233-4942-8e8a-433cc3b15925-kube-api-access-sxb7b\") on node \"crc\" DevicePath \"\"" Jan 26 13:01:09 crc kubenswrapper[4881]: I0126 13:01:09.880267 4881 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c2380fd-0233-4942-8e8a-433cc3b15925-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 13:01:09 crc kubenswrapper[4881]: I0126 13:01:09.920937 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-5795fd4b4d-xdxj4" Jan 26 13:01:10 crc kubenswrapper[4881]: I0126 13:01:10.234753 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29490541-c6h96" Jan 26 13:01:10 crc kubenswrapper[4881]: I0126 13:01:10.234950 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29490541-c6h96" event={"ID":"2c2380fd-0233-4942-8e8a-433cc3b15925","Type":"ContainerDied","Data":"211dce3a837f318265cc73bd9644da9d9c80463da7fe32d08a275ebb57aa48d8"} Jan 26 13:01:10 crc kubenswrapper[4881]: I0126 13:01:10.235642 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="211dce3a837f318265cc73bd9644da9d9c80463da7fe32d08a275ebb57aa48d8" Jan 26 13:01:10 crc kubenswrapper[4881]: I0126 13:01:10.238312 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"82a0349b-c23f-4b1d-991e-f738d5c1ecee","Type":"ContainerStarted","Data":"8faa9149a1b45e4928447e8e940630e01ecacd7e61ff850c3862ff83cfebc3a7"} Jan 26 13:01:10 crc kubenswrapper[4881]: I0126 13:01:10.258992 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"cbfc472b-0aa5-4053-88cb-6efd65de5e79","Type":"ContainerStarted","Data":"50a0b93793a8120123ad49d0a8879c978504564c852c03c05859e027706cccb2"} Jan 26 13:01:10 crc kubenswrapper[4881]: I0126 13:01:10.280946 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=5.280930383 podStartE2EDuration="5.280930383s" podCreationTimestamp="2026-01-26 13:01:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 13:01:10.264193909 +0000 UTC m=+1542.743503935" watchObservedRunningTime="2026-01-26 13:01:10.280930383 +0000 UTC m=+1542.760240409" Jan 26 13:01:10 crc kubenswrapper[4881]: I0126 13:01:10.310619 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.31060396 podStartE2EDuration="4.31060396s" podCreationTimestamp="2026-01-26 13:01:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 13:01:10.300305371 +0000 UTC m=+1542.779615397" watchObservedRunningTime="2026-01-26 13:01:10.31060396 +0000 UTC m=+1542.789913986" Jan 26 13:01:10 crc kubenswrapper[4881]: I0126 13:01:10.841768 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-55f986558f-qqwqs" Jan 26 13:01:10 crc kubenswrapper[4881]: I0126 13:01:10.844347 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-55f986558f-qqwqs" Jan 26 13:01:11 crc kubenswrapper[4881]: I0126 13:01:11.692195 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 13:01:12 crc kubenswrapper[4881]: I0126 13:01:12.275218 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c41995a8-1c28-4783-849e-868a97831825" containerName="ceilometer-central-agent" containerID="cri-o://16f0a6b74070d623cb7d58ff45c1ec1e0c428d5322fe7a5f0c55b38b609ed3ff" gracePeriod=30 Jan 26 13:01:12 crc kubenswrapper[4881]: I0126 13:01:12.275279 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c41995a8-1c28-4783-849e-868a97831825" containerName="proxy-httpd" 
containerID="cri-o://4af2766e6320876b88ac05b575600e702e35926668162121e6167defb7965361" gracePeriod=30 Jan 26 13:01:12 crc kubenswrapper[4881]: I0126 13:01:12.275324 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c41995a8-1c28-4783-849e-868a97831825" containerName="sg-core" containerID="cri-o://d71980f032d9ff369a541eaee5c3a7853506d699bec36c75ed1c70c132feece3" gracePeriod=30 Jan 26 13:01:12 crc kubenswrapper[4881]: I0126 13:01:12.275365 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c41995a8-1c28-4783-849e-868a97831825" containerName="ceilometer-notification-agent" containerID="cri-o://e78c5c09f6eb44a7a38bc3c2bcbb10090e7dc8ec372cf0c6ec049235e536a71e" gracePeriod=30 Jan 26 13:01:12 crc kubenswrapper[4881]: I0126 13:01:12.547724 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 26 13:01:12 crc kubenswrapper[4881]: I0126 13:01:12.548237 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-decision-engine-0" podUID="25d5f80d-b0a4-4b8f-a8b7-12f0b7296801" containerName="watcher-decision-engine" containerID="cri-o://ddc6a54278ec576b876902cfe5b2c98d0a53ca4c5bf2735e778be9809870e4df" gracePeriod=30 Jan 26 13:01:12 crc kubenswrapper[4881]: E0126 13:01:12.762391 4881 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc41995a8_1c28_4783_849e_868a97831825.slice/crio-conmon-e78c5c09f6eb44a7a38bc3c2bcbb10090e7dc8ec372cf0c6ec049235e536a71e.scope\": RecentStats: unable to find data in memory cache]" Jan 26 13:01:13 crc kubenswrapper[4881]: I0126 13:01:13.209798 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 13:01:13 crc kubenswrapper[4881]: I0126 13:01:13.321874 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-kqmwz"] Jan 26 13:01:13 crc kubenswrapper[4881]: E0126 13:01:13.322278 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c41995a8-1c28-4783-849e-868a97831825" containerName="sg-core" Jan 26 13:01:13 crc kubenswrapper[4881]: I0126 13:01:13.322297 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="c41995a8-1c28-4783-849e-868a97831825" containerName="sg-core" Jan 26 13:01:13 crc kubenswrapper[4881]: E0126 13:01:13.322313 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c2380fd-0233-4942-8e8a-433cc3b15925" containerName="keystone-cron" Jan 26 13:01:13 crc kubenswrapper[4881]: I0126 13:01:13.322320 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c2380fd-0233-4942-8e8a-433cc3b15925" containerName="keystone-cron" Jan 26 13:01:13 crc kubenswrapper[4881]: E0126 13:01:13.322348 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c41995a8-1c28-4783-849e-868a97831825" containerName="ceilometer-notification-agent" Jan 26 13:01:13 crc kubenswrapper[4881]: I0126 13:01:13.322355 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="c41995a8-1c28-4783-849e-868a97831825" containerName="ceilometer-notification-agent" Jan 26 13:01:13 crc kubenswrapper[4881]: E0126 13:01:13.322367 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c41995a8-1c28-4783-849e-868a97831825" containerName="ceilometer-central-agent" Jan 26 13:01:13 crc kubenswrapper[4881]: I0126 13:01:13.322372 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="c41995a8-1c28-4783-849e-868a97831825" containerName="ceilometer-central-agent" Jan 26 13:01:13 crc kubenswrapper[4881]: E0126 13:01:13.322384 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c41995a8-1c28-4783-849e-868a97831825" containerName="proxy-httpd" Jan 26 13:01:13 crc kubenswrapper[4881]: I0126 13:01:13.322389 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="c41995a8-1c28-4783-849e-868a97831825" containerName="proxy-httpd" Jan 26 13:01:13 crc kubenswrapper[4881]: I0126 13:01:13.322572 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c2380fd-0233-4942-8e8a-433cc3b15925" containerName="keystone-cron" Jan 26 13:01:13 crc kubenswrapper[4881]: I0126 13:01:13.322589 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="c41995a8-1c28-4783-849e-868a97831825" containerName="ceilometer-notification-agent" Jan 26 13:01:13 crc kubenswrapper[4881]: I0126 13:01:13.322601 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="c41995a8-1c28-4783-849e-868a97831825" containerName="ceilometer-central-agent" Jan 26 13:01:13 crc kubenswrapper[4881]: I0126 13:01:13.322611 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="c41995a8-1c28-4783-849e-868a97831825" containerName="proxy-httpd" Jan 26 13:01:13 crc kubenswrapper[4881]: I0126 13:01:13.322627 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="c41995a8-1c28-4783-849e-868a97831825" containerName="sg-core" Jan 26 13:01:13 crc kubenswrapper[4881]: I0126 13:01:13.323264 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-kqmwz" Jan 26 13:01:13 crc kubenswrapper[4881]: I0126 13:01:13.334290 4881 generic.go:334] "Generic (PLEG): container finished" podID="c41995a8-1c28-4783-849e-868a97831825" containerID="4af2766e6320876b88ac05b575600e702e35926668162121e6167defb7965361" exitCode=0 Jan 26 13:01:13 crc kubenswrapper[4881]: I0126 13:01:13.334332 4881 generic.go:334] "Generic (PLEG): container finished" podID="c41995a8-1c28-4783-849e-868a97831825" containerID="d71980f032d9ff369a541eaee5c3a7853506d699bec36c75ed1c70c132feece3" exitCode=2 Jan 26 13:01:13 crc kubenswrapper[4881]: I0126 13:01:13.334342 4881 generic.go:334] "Generic (PLEG): container finished" podID="c41995a8-1c28-4783-849e-868a97831825" containerID="e78c5c09f6eb44a7a38bc3c2bcbb10090e7dc8ec372cf0c6ec049235e536a71e" exitCode=0 Jan 26 13:01:13 crc kubenswrapper[4881]: I0126 13:01:13.334351 4881 generic.go:334] "Generic (PLEG): container finished" podID="c41995a8-1c28-4783-849e-868a97831825" containerID="16f0a6b74070d623cb7d58ff45c1ec1e0c428d5322fe7a5f0c55b38b609ed3ff" exitCode=0 Jan 26 13:01:13 crc kubenswrapper[4881]: I0126 13:01:13.334376 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c41995a8-1c28-4783-849e-868a97831825","Type":"ContainerDied","Data":"4af2766e6320876b88ac05b575600e702e35926668162121e6167defb7965361"} Jan 26 13:01:13 crc kubenswrapper[4881]: I0126 13:01:13.334406 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c41995a8-1c28-4783-849e-868a97831825","Type":"ContainerDied","Data":"d71980f032d9ff369a541eaee5c3a7853506d699bec36c75ed1c70c132feece3"} Jan 26 13:01:13 crc kubenswrapper[4881]: I0126 13:01:13.334420 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c41995a8-1c28-4783-849e-868a97831825","Type":"ContainerDied","Data":"e78c5c09f6eb44a7a38bc3c2bcbb10090e7dc8ec372cf0c6ec049235e536a71e"} Jan 26 13:01:13 crc kubenswrapper[4881]: I0126 13:01:13.334432 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c41995a8-1c28-4783-849e-868a97831825","Type":"ContainerDied","Data":"16f0a6b74070d623cb7d58ff45c1ec1e0c428d5322fe7a5f0c55b38b609ed3ff"} Jan 26 13:01:13 crc kubenswrapper[4881]: I0126 13:01:13.334443 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c41995a8-1c28-4783-849e-868a97831825","Type":"ContainerDied","Data":"25b4c13a203486f206e5a75b97eae61956af0f70f886b7e839cf983b0ba264fc"} Jan 26 13:01:13 crc kubenswrapper[4881]: I0126 13:01:13.334460 4881 scope.go:117] "RemoveContainer" containerID="4af2766e6320876b88ac05b575600e702e35926668162121e6167defb7965361" Jan 26 13:01:13 crc kubenswrapper[4881]: I0126 13:01:13.334680 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 13:01:13 crc kubenswrapper[4881]: I0126 13:01:13.338844 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-kqmwz"] Jan 26 13:01:13 crc kubenswrapper[4881]: I0126 13:01:13.357977 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c41995a8-1c28-4783-849e-868a97831825-log-httpd\") pod \"c41995a8-1c28-4783-849e-868a97831825\" (UID: \"c41995a8-1c28-4783-849e-868a97831825\") " Jan 26 13:01:13 crc kubenswrapper[4881]: I0126 13:01:13.358454 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c41995a8-1c28-4783-849e-868a97831825-scripts\") pod \"c41995a8-1c28-4783-849e-868a97831825\" (UID: \"c41995a8-1c28-4783-849e-868a97831825\") " Jan 26 13:01:13 crc kubenswrapper[4881]: I0126 13:01:13.358477 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c41995a8-1c28-4783-849e-868a97831825-combined-ca-bundle\") pod \"c41995a8-1c28-4783-849e-868a97831825\" (UID: \"c41995a8-1c28-4783-849e-868a97831825\") " Jan 26 13:01:13 crc kubenswrapper[4881]: I0126 13:01:13.358523 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c41995a8-1c28-4783-849e-868a97831825-run-httpd\") pod \"c41995a8-1c28-4783-849e-868a97831825\" (UID: \"c41995a8-1c28-4783-849e-868a97831825\") " Jan 26 13:01:13 crc kubenswrapper[4881]: I0126 13:01:13.358588 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pxwwg\" (UniqueName: \"kubernetes.io/projected/c41995a8-1c28-4783-849e-868a97831825-kube-api-access-pxwwg\") pod \"c41995a8-1c28-4783-849e-868a97831825\" (UID: \"c41995a8-1c28-4783-849e-868a97831825\") " Jan 26 13:01:13 crc kubenswrapper[4881]: I0126 13:01:13.358610 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c41995a8-1c28-4783-849e-868a97831825-config-data\") pod \"c41995a8-1c28-4783-849e-868a97831825\" (UID: \"c41995a8-1c28-4783-849e-868a97831825\") " Jan 26 13:01:13 crc kubenswrapper[4881]: I0126 13:01:13.358709 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c41995a8-1c28-4783-849e-868a97831825-sg-core-conf-yaml\") pod \"c41995a8-1c28-4783-849e-868a97831825\" (UID: \"c41995a8-1c28-4783-849e-868a97831825\") " Jan 26 13:01:13 crc kubenswrapper[4881]: I0126 13:01:13.369562 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c41995a8-1c28-4783-849e-868a97831825-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "c41995a8-1c28-4783-849e-868a97831825" (UID: "c41995a8-1c28-4783-849e-868a97831825"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:01:13 crc kubenswrapper[4881]: I0126 13:01:13.369649 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c41995a8-1c28-4783-849e-868a97831825-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "c41995a8-1c28-4783-849e-868a97831825" (UID: "c41995a8-1c28-4783-849e-868a97831825"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:01:13 crc kubenswrapper[4881]: I0126 13:01:13.388533 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c41995a8-1c28-4783-849e-868a97831825-scripts" (OuterVolumeSpecName: "scripts") pod "c41995a8-1c28-4783-849e-868a97831825" (UID: "c41995a8-1c28-4783-849e-868a97831825"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:01:13 crc kubenswrapper[4881]: I0126 13:01:13.389792 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c41995a8-1c28-4783-849e-868a97831825-kube-api-access-pxwwg" (OuterVolumeSpecName: "kube-api-access-pxwwg") pod "c41995a8-1c28-4783-849e-868a97831825" (UID: "c41995a8-1c28-4783-849e-868a97831825"). InnerVolumeSpecName "kube-api-access-pxwwg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:01:13 crc kubenswrapper[4881]: I0126 13:01:13.428339 4881 scope.go:117] "RemoveContainer" containerID="d71980f032d9ff369a541eaee5c3a7853506d699bec36c75ed1c70c132feece3" Jan 26 13:01:13 crc kubenswrapper[4881]: I0126 13:01:13.451325 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-2kfpk"] Jan 26 13:01:13 crc kubenswrapper[4881]: I0126 13:01:13.452498 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-2kfpk" Jan 26 13:01:13 crc kubenswrapper[4881]: I0126 13:01:13.462705 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/66bc1d35-a96d-4cce-98be-5d65886c6f83-operator-scripts\") pod \"nova-api-db-create-kqmwz\" (UID: \"66bc1d35-a96d-4cce-98be-5d65886c6f83\") " pod="openstack/nova-api-db-create-kqmwz" Jan 26 13:01:13 crc kubenswrapper[4881]: I0126 13:01:13.462816 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6qmb\" (UniqueName: \"kubernetes.io/projected/66bc1d35-a96d-4cce-98be-5d65886c6f83-kube-api-access-t6qmb\") pod \"nova-api-db-create-kqmwz\" (UID: \"66bc1d35-a96d-4cce-98be-5d65886c6f83\") " pod="openstack/nova-api-db-create-kqmwz" Jan 26 13:01:13 crc kubenswrapper[4881]: I0126 13:01:13.462893 4881 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c41995a8-1c28-4783-849e-868a97831825-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 13:01:13 crc kubenswrapper[4881]: I0126 13:01:13.462911 4881 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c41995a8-1c28-4783-849e-868a97831825-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 13:01:13 crc kubenswrapper[4881]: I0126 13:01:13.462921 4881 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c41995a8-1c28-4783-849e-868a97831825-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 13:01:13 crc kubenswrapper[4881]: I0126 13:01:13.462929 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pxwwg\" (UniqueName: \"kubernetes.io/projected/c41995a8-1c28-4783-849e-868a97831825-kube-api-access-pxwwg\") on node \"crc\" DevicePath \"\"" Jan 26 13:01:13 crc kubenswrapper[4881]: I0126 13:01:13.482887 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-d401-account-create-update-6knzf"] Jan 26 13:01:13 crc kubenswrapper[4881]: I0126 
13:01:13.484091 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-d401-account-create-update-6knzf"
Jan 26 13:01:13 crc kubenswrapper[4881]: I0126 13:01:13.487876 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret"
Jan 26 13:01:13 crc kubenswrapper[4881]: I0126 13:01:13.525606 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-2kfpk"]
Jan 26 13:01:13 crc kubenswrapper[4881]: I0126 13:01:13.540738 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-d401-account-create-update-6knzf"]
Jan 26 13:01:13 crc kubenswrapper[4881]: I0126 13:01:13.546611 4881 scope.go:117] "RemoveContainer" containerID="e78c5c09f6eb44a7a38bc3c2bcbb10090e7dc8ec372cf0c6ec049235e536a71e"
Jan 26 13:01:13 crc kubenswrapper[4881]: I0126 13:01:13.567690 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6qmb\" (UniqueName: \"kubernetes.io/projected/66bc1d35-a96d-4cce-98be-5d65886c6f83-kube-api-access-t6qmb\") pod \"nova-api-db-create-kqmwz\" (UID: \"66bc1d35-a96d-4cce-98be-5d65886c6f83\") " pod="openstack/nova-api-db-create-kqmwz"
Jan 26 13:01:13 crc kubenswrapper[4881]: I0126 13:01:13.567758 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nz54\" (UniqueName: \"kubernetes.io/projected/d3bca628-c4a8-4a09-bc59-3b0f2627adf4-kube-api-access-7nz54\") pod \"nova-cell0-db-create-2kfpk\" (UID: \"d3bca628-c4a8-4a09-bc59-3b0f2627adf4\") " pod="openstack/nova-cell0-db-create-2kfpk"
Jan 26 13:01:13 crc kubenswrapper[4881]: I0126 13:01:13.567782 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d3bca628-c4a8-4a09-bc59-3b0f2627adf4-operator-scripts\") pod \"nova-cell0-db-create-2kfpk\" (UID: \"d3bca628-c4a8-4a09-bc59-3b0f2627adf4\") " pod="openstack/nova-cell0-db-create-2kfpk"
Jan 26 13:01:13 crc kubenswrapper[4881]: I0126 13:01:13.567817 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c9089843-0624-44fe-a41f-78746490b5be-operator-scripts\") pod \"nova-api-d401-account-create-update-6knzf\" (UID: \"c9089843-0624-44fe-a41f-78746490b5be\") " pod="openstack/nova-api-d401-account-create-update-6knzf"
Jan 26 13:01:13 crc kubenswrapper[4881]: I0126 13:01:13.567874 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/66bc1d35-a96d-4cce-98be-5d65886c6f83-operator-scripts\") pod \"nova-api-db-create-kqmwz\" (UID: \"66bc1d35-a96d-4cce-98be-5d65886c6f83\") " pod="openstack/nova-api-db-create-kqmwz"
Jan 26 13:01:13 crc kubenswrapper[4881]: I0126 13:01:13.567917 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swbgc\" (UniqueName: \"kubernetes.io/projected/c9089843-0624-44fe-a41f-78746490b5be-kube-api-access-swbgc\") pod \"nova-api-d401-account-create-update-6knzf\" (UID: \"c9089843-0624-44fe-a41f-78746490b5be\") " pod="openstack/nova-api-d401-account-create-update-6knzf"
Jan 26 13:01:13 crc kubenswrapper[4881]: I0126 13:01:13.569124 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/66bc1d35-a96d-4cce-98be-5d65886c6f83-operator-scripts\") pod \"nova-api-db-create-kqmwz\" (UID: \"66bc1d35-a96d-4cce-98be-5d65886c6f83\") " pod="openstack/nova-api-db-create-kqmwz"
Jan 26 13:01:13 crc kubenswrapper[4881]: I0126 13:01:13.580689 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c41995a8-1c28-4783-849e-868a97831825-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "c41995a8-1c28-4783-849e-868a97831825" (UID: "c41995a8-1c28-4783-849e-868a97831825"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 13:01:13 crc kubenswrapper[4881]: I0126 13:01:13.591309 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-nzs96"]
Jan 26 13:01:13 crc kubenswrapper[4881]: I0126 13:01:13.592527 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-nzs96"
Jan 26 13:01:13 crc kubenswrapper[4881]: I0126 13:01:13.616527 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6qmb\" (UniqueName: \"kubernetes.io/projected/66bc1d35-a96d-4cce-98be-5d65886c6f83-kube-api-access-t6qmb\") pod \"nova-api-db-create-kqmwz\" (UID: \"66bc1d35-a96d-4cce-98be-5d65886c6f83\") " pod="openstack/nova-api-db-create-kqmwz"
Jan 26 13:01:13 crc kubenswrapper[4881]: I0126 13:01:13.634468 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-nzs96"]
Jan 26 13:01:13 crc kubenswrapper[4881]: I0126 13:01:13.667175 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-kqmwz"
Jan 26 13:01:13 crc kubenswrapper[4881]: I0126 13:01:13.669649 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7nz54\" (UniqueName: \"kubernetes.io/projected/d3bca628-c4a8-4a09-bc59-3b0f2627adf4-kube-api-access-7nz54\") pod \"nova-cell0-db-create-2kfpk\" (UID: \"d3bca628-c4a8-4a09-bc59-3b0f2627adf4\") " pod="openstack/nova-cell0-db-create-2kfpk"
Jan 26 13:01:13 crc kubenswrapper[4881]: I0126 13:01:13.669689 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d3bca628-c4a8-4a09-bc59-3b0f2627adf4-operator-scripts\") pod \"nova-cell0-db-create-2kfpk\" (UID: \"d3bca628-c4a8-4a09-bc59-3b0f2627adf4\") " pod="openstack/nova-cell0-db-create-2kfpk"
Jan 26 13:01:13 crc kubenswrapper[4881]: I0126 13:01:13.669722 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c9089843-0624-44fe-a41f-78746490b5be-operator-scripts\") pod \"nova-api-d401-account-create-update-6knzf\" (UID: \"c9089843-0624-44fe-a41f-78746490b5be\") " pod="openstack/nova-api-d401-account-create-update-6knzf"
Jan 26 13:01:13 crc kubenswrapper[4881]: I0126 13:01:13.669752 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fncgh\" (UniqueName: \"kubernetes.io/projected/4376f3bd-20c9-41f8-a1d1-eae76560d137-kube-api-access-fncgh\") pod \"nova-cell1-db-create-nzs96\" (UID: \"4376f3bd-20c9-41f8-a1d1-eae76560d137\") " pod="openstack/nova-cell1-db-create-nzs96"
Jan 26 13:01:13 crc kubenswrapper[4881]: I0126 13:01:13.669822 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4376f3bd-20c9-41f8-a1d1-eae76560d137-operator-scripts\") pod \"nova-cell1-db-create-nzs96\" (UID: \"4376f3bd-20c9-41f8-a1d1-eae76560d137\") " pod="openstack/nova-cell1-db-create-nzs96"
Jan 26 13:01:13 crc kubenswrapper[4881]: I0126 13:01:13.669850 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-swbgc\" (UniqueName: \"kubernetes.io/projected/c9089843-0624-44fe-a41f-78746490b5be-kube-api-access-swbgc\") pod \"nova-api-d401-account-create-update-6knzf\" (UID: \"c9089843-0624-44fe-a41f-78746490b5be\") " pod="openstack/nova-api-d401-account-create-update-6knzf"
Jan 26 13:01:13 crc kubenswrapper[4881]: I0126 13:01:13.669949 4881 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c41995a8-1c28-4783-849e-868a97831825-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Jan 26 13:01:13 crc kubenswrapper[4881]: I0126 13:01:13.671246 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d3bca628-c4a8-4a09-bc59-3b0f2627adf4-operator-scripts\") pod \"nova-cell0-db-create-2kfpk\" (UID: \"d3bca628-c4a8-4a09-bc59-3b0f2627adf4\") " pod="openstack/nova-cell0-db-create-2kfpk"
Jan 26 13:01:13 crc kubenswrapper[4881]: I0126 13:01:13.671702 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c9089843-0624-44fe-a41f-78746490b5be-operator-scripts\") pod \"nova-api-d401-account-create-update-6knzf\" (UID: \"c9089843-0624-44fe-a41f-78746490b5be\") " pod="openstack/nova-api-d401-account-create-update-6knzf"
Jan 26 13:01:13 crc kubenswrapper[4881]: I0126 13:01:13.695437 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c41995a8-1c28-4783-849e-868a97831825-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c41995a8-1c28-4783-849e-868a97831825" (UID: "c41995a8-1c28-4783-849e-868a97831825"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 13:01:13 crc kubenswrapper[4881]: I0126 13:01:13.699346 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-swbgc\" (UniqueName: \"kubernetes.io/projected/c9089843-0624-44fe-a41f-78746490b5be-kube-api-access-swbgc\") pod \"nova-api-d401-account-create-update-6knzf\" (UID: \"c9089843-0624-44fe-a41f-78746490b5be\") " pod="openstack/nova-api-d401-account-create-update-6knzf"
Jan 26 13:01:13 crc kubenswrapper[4881]: I0126 13:01:13.702996 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7nz54\" (UniqueName: \"kubernetes.io/projected/d3bca628-c4a8-4a09-bc59-3b0f2627adf4-kube-api-access-7nz54\") pod \"nova-cell0-db-create-2kfpk\" (UID: \"d3bca628-c4a8-4a09-bc59-3b0f2627adf4\") " pod="openstack/nova-cell0-db-create-2kfpk"
Jan 26 13:01:13 crc kubenswrapper[4881]: I0126 13:01:13.703033 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-70ca-account-create-update-r686d"]
Jan 26 13:01:13 crc kubenswrapper[4881]: I0126 13:01:13.712751 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-70ca-account-create-update-r686d"
Jan 26 13:01:13 crc kubenswrapper[4881]: I0126 13:01:13.714484 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret"
Jan 26 13:01:13 crc kubenswrapper[4881]: I0126 13:01:13.715760 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-70ca-account-create-update-r686d"]
Jan 26 13:01:13 crc kubenswrapper[4881]: I0126 13:01:13.716622 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c41995a8-1c28-4783-849e-868a97831825-config-data" (OuterVolumeSpecName: "config-data") pod "c41995a8-1c28-4783-849e-868a97831825" (UID: "c41995a8-1c28-4783-849e-868a97831825"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 13:01:13 crc kubenswrapper[4881]: I0126 13:01:13.719565 4881 scope.go:117] "RemoveContainer" containerID="16f0a6b74070d623cb7d58ff45c1ec1e0c428d5322fe7a5f0c55b38b609ed3ff"
Jan 26 13:01:13 crc kubenswrapper[4881]: I0126 13:01:13.750950 4881 scope.go:117] "RemoveContainer" containerID="4af2766e6320876b88ac05b575600e702e35926668162121e6167defb7965361"
Jan 26 13:01:13 crc kubenswrapper[4881]: E0126 13:01:13.751993 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4af2766e6320876b88ac05b575600e702e35926668162121e6167defb7965361\": container with ID starting with 4af2766e6320876b88ac05b575600e702e35926668162121e6167defb7965361 not found: ID does not exist" containerID="4af2766e6320876b88ac05b575600e702e35926668162121e6167defb7965361"
Jan 26 13:01:13 crc kubenswrapper[4881]: I0126 13:01:13.752029 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4af2766e6320876b88ac05b575600e702e35926668162121e6167defb7965361"} err="failed to get container status \"4af2766e6320876b88ac05b575600e702e35926668162121e6167defb7965361\": rpc error: code = NotFound desc = could not find container \"4af2766e6320876b88ac05b575600e702e35926668162121e6167defb7965361\": container with ID starting with 4af2766e6320876b88ac05b575600e702e35926668162121e6167defb7965361 not found: ID does not exist"
Jan 26 13:01:13 crc kubenswrapper[4881]: I0126 13:01:13.752051 4881 scope.go:117] "RemoveContainer" containerID="d71980f032d9ff369a541eaee5c3a7853506d699bec36c75ed1c70c132feece3"
Jan 26 13:01:13 crc kubenswrapper[4881]: E0126 13:01:13.752757 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d71980f032d9ff369a541eaee5c3a7853506d699bec36c75ed1c70c132feece3\": container with ID starting with d71980f032d9ff369a541eaee5c3a7853506d699bec36c75ed1c70c132feece3 not found: ID does not exist" containerID="d71980f032d9ff369a541eaee5c3a7853506d699bec36c75ed1c70c132feece3"
Jan 26 13:01:13 crc kubenswrapper[4881]: I0126 13:01:13.752783 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d71980f032d9ff369a541eaee5c3a7853506d699bec36c75ed1c70c132feece3"} err="failed to get container status \"d71980f032d9ff369a541eaee5c3a7853506d699bec36c75ed1c70c132feece3\": rpc error: code = NotFound desc = could not find container \"d71980f032d9ff369a541eaee5c3a7853506d699bec36c75ed1c70c132feece3\": container with ID starting with d71980f032d9ff369a541eaee5c3a7853506d699bec36c75ed1c70c132feece3 not found: ID does not exist"
Jan 26 13:01:13 crc kubenswrapper[4881]: I0126 13:01:13.752798 4881 scope.go:117] "RemoveContainer" containerID="e78c5c09f6eb44a7a38bc3c2bcbb10090e7dc8ec372cf0c6ec049235e536a71e"
Jan 26 13:01:13 crc kubenswrapper[4881]: E0126 13:01:13.752981 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e78c5c09f6eb44a7a38bc3c2bcbb10090e7dc8ec372cf0c6ec049235e536a71e\": container with ID starting with e78c5c09f6eb44a7a38bc3c2bcbb10090e7dc8ec372cf0c6ec049235e536a71e not found: ID does not exist" containerID="e78c5c09f6eb44a7a38bc3c2bcbb10090e7dc8ec372cf0c6ec049235e536a71e"
Jan 26 13:01:13 crc kubenswrapper[4881]: I0126 13:01:13.752998 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e78c5c09f6eb44a7a38bc3c2bcbb10090e7dc8ec372cf0c6ec049235e536a71e"} err="failed to get container status \"e78c5c09f6eb44a7a38bc3c2bcbb10090e7dc8ec372cf0c6ec049235e536a71e\": rpc error: code = NotFound desc = could not find container \"e78c5c09f6eb44a7a38bc3c2bcbb10090e7dc8ec372cf0c6ec049235e536a71e\": container with ID starting with e78c5c09f6eb44a7a38bc3c2bcbb10090e7dc8ec372cf0c6ec049235e536a71e not found: ID does not exist"
Jan 26 13:01:13 crc kubenswrapper[4881]: I0126 13:01:13.753185 4881 scope.go:117] "RemoveContainer" containerID="16f0a6b74070d623cb7d58ff45c1ec1e0c428d5322fe7a5f0c55b38b609ed3ff"
Jan 26 13:01:13 crc kubenswrapper[4881]: E0126 13:01:13.753346 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"16f0a6b74070d623cb7d58ff45c1ec1e0c428d5322fe7a5f0c55b38b609ed3ff\": container with ID starting with 16f0a6b74070d623cb7d58ff45c1ec1e0c428d5322fe7a5f0c55b38b609ed3ff not found: ID does not exist" containerID="16f0a6b74070d623cb7d58ff45c1ec1e0c428d5322fe7a5f0c55b38b609ed3ff"
Jan 26 13:01:13 crc kubenswrapper[4881]: I0126 13:01:13.753366 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16f0a6b74070d623cb7d58ff45c1ec1e0c428d5322fe7a5f0c55b38b609ed3ff"} err="failed to get container status \"16f0a6b74070d623cb7d58ff45c1ec1e0c428d5322fe7a5f0c55b38b609ed3ff\": rpc error: code = NotFound desc = could not find container \"16f0a6b74070d623cb7d58ff45c1ec1e0c428d5322fe7a5f0c55b38b609ed3ff\": container with ID starting with 16f0a6b74070d623cb7d58ff45c1ec1e0c428d5322fe7a5f0c55b38b609ed3ff not found: ID does not exist"
Jan 26 13:01:13 crc kubenswrapper[4881]: I0126 13:01:13.753379 4881 scope.go:117] "RemoveContainer" containerID="4af2766e6320876b88ac05b575600e702e35926668162121e6167defb7965361"
Jan 26 13:01:13 crc kubenswrapper[4881]: I0126 13:01:13.753627 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4af2766e6320876b88ac05b575600e702e35926668162121e6167defb7965361"} err="failed to get container status \"4af2766e6320876b88ac05b575600e702e35926668162121e6167defb7965361\": rpc error: code = NotFound desc = could not find container \"4af2766e6320876b88ac05b575600e702e35926668162121e6167defb7965361\": container with ID starting with 4af2766e6320876b88ac05b575600e702e35926668162121e6167defb7965361 not found: ID does not exist"
Jan 26 13:01:13 crc kubenswrapper[4881]: I0126 13:01:13.753645 4881 scope.go:117] "RemoveContainer" containerID="d71980f032d9ff369a541eaee5c3a7853506d699bec36c75ed1c70c132feece3"
Jan 26 13:01:13 crc kubenswrapper[4881]: I0126 13:01:13.753807 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d71980f032d9ff369a541eaee5c3a7853506d699bec36c75ed1c70c132feece3"} err="failed to get container status \"d71980f032d9ff369a541eaee5c3a7853506d699bec36c75ed1c70c132feece3\": rpc error: code = NotFound desc = could not find container \"d71980f032d9ff369a541eaee5c3a7853506d699bec36c75ed1c70c132feece3\": container with ID starting with d71980f032d9ff369a541eaee5c3a7853506d699bec36c75ed1c70c132feece3 not found: ID does not exist"
Jan 26 13:01:13 crc kubenswrapper[4881]: I0126 13:01:13.753827 4881 scope.go:117] "RemoveContainer" containerID="e78c5c09f6eb44a7a38bc3c2bcbb10090e7dc8ec372cf0c6ec049235e536a71e"
Jan 26 13:01:13 crc kubenswrapper[4881]: I0126 13:01:13.753995 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e78c5c09f6eb44a7a38bc3c2bcbb10090e7dc8ec372cf0c6ec049235e536a71e"} err="failed to get container status \"e78c5c09f6eb44a7a38bc3c2bcbb10090e7dc8ec372cf0c6ec049235e536a71e\": rpc error: code = NotFound desc = could not find container \"e78c5c09f6eb44a7a38bc3c2bcbb10090e7dc8ec372cf0c6ec049235e536a71e\": container with ID starting with e78c5c09f6eb44a7a38bc3c2bcbb10090e7dc8ec372cf0c6ec049235e536a71e not found: ID does not exist"
Jan 26 13:01:13 crc kubenswrapper[4881]: I0126 13:01:13.754013 4881 scope.go:117] "RemoveContainer" containerID="16f0a6b74070d623cb7d58ff45c1ec1e0c428d5322fe7a5f0c55b38b609ed3ff"
Jan 26 13:01:13 crc kubenswrapper[4881]: I0126 13:01:13.754261 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16f0a6b74070d623cb7d58ff45c1ec1e0c428d5322fe7a5f0c55b38b609ed3ff"} err="failed to get container status \"16f0a6b74070d623cb7d58ff45c1ec1e0c428d5322fe7a5f0c55b38b609ed3ff\": rpc error: code = NotFound desc = could not find container \"16f0a6b74070d623cb7d58ff45c1ec1e0c428d5322fe7a5f0c55b38b609ed3ff\": container with ID starting with 16f0a6b74070d623cb7d58ff45c1ec1e0c428d5322fe7a5f0c55b38b609ed3ff not found: ID does not exist"
Jan 26 13:01:13 crc kubenswrapper[4881]: I0126 13:01:13.754281 4881 scope.go:117] "RemoveContainer" containerID="4af2766e6320876b88ac05b575600e702e35926668162121e6167defb7965361"
Jan 26 13:01:13 crc kubenswrapper[4881]: I0126 13:01:13.754479 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4af2766e6320876b88ac05b575600e702e35926668162121e6167defb7965361"} err="failed to get container status \"4af2766e6320876b88ac05b575600e702e35926668162121e6167defb7965361\": rpc error: code = NotFound desc = could not find container \"4af2766e6320876b88ac05b575600e702e35926668162121e6167defb7965361\": container with ID starting with 4af2766e6320876b88ac05b575600e702e35926668162121e6167defb7965361 not found: ID does not exist"
Jan 26 13:01:13 crc kubenswrapper[4881]: I0126 13:01:13.754496 4881 scope.go:117] "RemoveContainer" containerID="d71980f032d9ff369a541eaee5c3a7853506d699bec36c75ed1c70c132feece3"
Jan 26 13:01:13 crc kubenswrapper[4881]: I0126 13:01:13.755207 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d71980f032d9ff369a541eaee5c3a7853506d699bec36c75ed1c70c132feece3"} err="failed to get container status \"d71980f032d9ff369a541eaee5c3a7853506d699bec36c75ed1c70c132feece3\": rpc error: code = NotFound desc = could not find container \"d71980f032d9ff369a541eaee5c3a7853506d699bec36c75ed1c70c132feece3\": container with ID starting with d71980f032d9ff369a541eaee5c3a7853506d699bec36c75ed1c70c132feece3 not found: ID does not exist"
Jan 26 13:01:13 crc kubenswrapper[4881]: I0126 13:01:13.755243 4881 scope.go:117] "RemoveContainer" containerID="e78c5c09f6eb44a7a38bc3c2bcbb10090e7dc8ec372cf0c6ec049235e536a71e"
Jan 26 13:01:13 crc kubenswrapper[4881]: I0126 13:01:13.756014 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e78c5c09f6eb44a7a38bc3c2bcbb10090e7dc8ec372cf0c6ec049235e536a71e"} err="failed to get container status \"e78c5c09f6eb44a7a38bc3c2bcbb10090e7dc8ec372cf0c6ec049235e536a71e\": rpc error: code = NotFound desc = could not find container \"e78c5c09f6eb44a7a38bc3c2bcbb10090e7dc8ec372cf0c6ec049235e536a71e\": container with ID starting with e78c5c09f6eb44a7a38bc3c2bcbb10090e7dc8ec372cf0c6ec049235e536a71e not found: ID does not exist"
Jan 26 13:01:13 crc kubenswrapper[4881]: I0126 13:01:13.756036 4881 scope.go:117] "RemoveContainer" containerID="16f0a6b74070d623cb7d58ff45c1ec1e0c428d5322fe7a5f0c55b38b609ed3ff"
Jan 26 13:01:13 crc kubenswrapper[4881]: I0126 13:01:13.756238 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16f0a6b74070d623cb7d58ff45c1ec1e0c428d5322fe7a5f0c55b38b609ed3ff"} err="failed to get container status \"16f0a6b74070d623cb7d58ff45c1ec1e0c428d5322fe7a5f0c55b38b609ed3ff\": rpc error: code = NotFound desc = could not find container \"16f0a6b74070d623cb7d58ff45c1ec1e0c428d5322fe7a5f0c55b38b609ed3ff\": container with ID starting with 16f0a6b74070d623cb7d58ff45c1ec1e0c428d5322fe7a5f0c55b38b609ed3ff not found: ID does not exist"
Jan 26 13:01:13 crc kubenswrapper[4881]: I0126 13:01:13.756266 4881 scope.go:117] "RemoveContainer" containerID="4af2766e6320876b88ac05b575600e702e35926668162121e6167defb7965361"
Jan 26 13:01:13 crc kubenswrapper[4881]: I0126 13:01:13.756440 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4af2766e6320876b88ac05b575600e702e35926668162121e6167defb7965361"} err="failed to get container status \"4af2766e6320876b88ac05b575600e702e35926668162121e6167defb7965361\": rpc error: code = NotFound desc = could not find container \"4af2766e6320876b88ac05b575600e702e35926668162121e6167defb7965361\": container with ID starting with 4af2766e6320876b88ac05b575600e702e35926668162121e6167defb7965361 not found: ID does not exist"
Jan 26 13:01:13 crc kubenswrapper[4881]: I0126 13:01:13.756456 4881 scope.go:117] "RemoveContainer" containerID="d71980f032d9ff369a541eaee5c3a7853506d699bec36c75ed1c70c132feece3"
Jan 26 13:01:13 crc kubenswrapper[4881]: I0126 13:01:13.756670 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d71980f032d9ff369a541eaee5c3a7853506d699bec36c75ed1c70c132feece3"} err="failed to get container status \"d71980f032d9ff369a541eaee5c3a7853506d699bec36c75ed1c70c132feece3\": rpc error: code = NotFound desc = could not find container \"d71980f032d9ff369a541eaee5c3a7853506d699bec36c75ed1c70c132feece3\": container with ID starting with d71980f032d9ff369a541eaee5c3a7853506d699bec36c75ed1c70c132feece3 not found: ID does not exist"
Jan 26 13:01:13 crc kubenswrapper[4881]: I0126 13:01:13.756737 4881 scope.go:117] "RemoveContainer" containerID="e78c5c09f6eb44a7a38bc3c2bcbb10090e7dc8ec372cf0c6ec049235e536a71e"
Jan 26 13:01:13 crc kubenswrapper[4881]: I0126 13:01:13.756899 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e78c5c09f6eb44a7a38bc3c2bcbb10090e7dc8ec372cf0c6ec049235e536a71e"} err="failed to get container status \"e78c5c09f6eb44a7a38bc3c2bcbb10090e7dc8ec372cf0c6ec049235e536a71e\": rpc error: code = NotFound desc = could not find container \"e78c5c09f6eb44a7a38bc3c2bcbb10090e7dc8ec372cf0c6ec049235e536a71e\": container with ID starting with e78c5c09f6eb44a7a38bc3c2bcbb10090e7dc8ec372cf0c6ec049235e536a71e not found: ID does not exist"
Jan 26 13:01:13 crc kubenswrapper[4881]: I0126 13:01:13.756918 4881 scope.go:117] "RemoveContainer" containerID="16f0a6b74070d623cb7d58ff45c1ec1e0c428d5322fe7a5f0c55b38b609ed3ff"
Jan 26 13:01:13 crc kubenswrapper[4881]: I0126 13:01:13.757064 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16f0a6b74070d623cb7d58ff45c1ec1e0c428d5322fe7a5f0c55b38b609ed3ff"} err="failed to get container status \"16f0a6b74070d623cb7d58ff45c1ec1e0c428d5322fe7a5f0c55b38b609ed3ff\": rpc error: code = NotFound desc = could not find container \"16f0a6b74070d623cb7d58ff45c1ec1e0c428d5322fe7a5f0c55b38b609ed3ff\": container with ID starting with 16f0a6b74070d623cb7d58ff45c1ec1e0c428d5322fe7a5f0c55b38b609ed3ff not found: ID does not exist"
Jan 26 13:01:13 crc kubenswrapper[4881]: I0126 13:01:13.773994 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4376f3bd-20c9-41f8-a1d1-eae76560d137-operator-scripts\") pod \"nova-cell1-db-create-nzs96\" (UID: \"4376f3bd-20c9-41f8-a1d1-eae76560d137\") " pod="openstack/nova-cell1-db-create-nzs96"
Jan 26 13:01:13 crc kubenswrapper[4881]: I0126 13:01:13.774799 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fncgh\" (UniqueName: \"kubernetes.io/projected/4376f3bd-20c9-41f8-a1d1-eae76560d137-kube-api-access-fncgh\") pod \"nova-cell1-db-create-nzs96\" (UID: \"4376f3bd-20c9-41f8-a1d1-eae76560d137\") " pod="openstack/nova-cell1-db-create-nzs96"
Jan 26 13:01:13 crc kubenswrapper[4881]: I0126 13:01:13.774846 4881 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c41995a8-1c28-4783-849e-868a97831825-config-data\") on node \"crc\" DevicePath \"\""
Jan 26 13:01:13 crc kubenswrapper[4881]: I0126 13:01:13.774665 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4376f3bd-20c9-41f8-a1d1-eae76560d137-operator-scripts\") pod \"nova-cell1-db-create-nzs96\" (UID: \"4376f3bd-20c9-41f8-a1d1-eae76560d137\") " pod="openstack/nova-cell1-db-create-nzs96"
Jan 26 13:01:13 crc kubenswrapper[4881]: I0126 13:01:13.774860 4881 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c41995a8-1c28-4783-849e-868a97831825-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 13:01:13 crc kubenswrapper[4881]: I0126 13:01:13.788381 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fncgh\" (UniqueName: \"kubernetes.io/projected/4376f3bd-20c9-41f8-a1d1-eae76560d137-kube-api-access-fncgh\") pod \"nova-cell1-db-create-nzs96\" (UID: \"4376f3bd-20c9-41f8-a1d1-eae76560d137\") " pod="openstack/nova-cell1-db-create-nzs96"
Jan 26 13:01:13 crc kubenswrapper[4881]: I0126 13:01:13.845671 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-2kfpk"
Need to start a new one" pod="openstack/nova-api-d401-account-create-update-6knzf" Jan 26 13:01:13 crc kubenswrapper[4881]: I0126 13:01:13.876192 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/98008dc5-f3ef-436d-af31-cec258fe5743-operator-scripts\") pod \"nova-cell0-70ca-account-create-update-r686d\" (UID: \"98008dc5-f3ef-436d-af31-cec258fe5743\") " pod="openstack/nova-cell0-70ca-account-create-update-r686d" Jan 26 13:01:13 crc kubenswrapper[4881]: I0126 13:01:13.876270 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56dh4\" (UniqueName: \"kubernetes.io/projected/98008dc5-f3ef-436d-af31-cec258fe5743-kube-api-access-56dh4\") pod \"nova-cell0-70ca-account-create-update-r686d\" (UID: \"98008dc5-f3ef-436d-af31-cec258fe5743\") " pod="openstack/nova-cell0-70ca-account-create-update-r686d" Jan 26 13:01:13 crc kubenswrapper[4881]: I0126 13:01:13.910576 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-0e1c-account-create-update-t8hhb"] Jan 26 13:01:13 crc kubenswrapper[4881]: I0126 13:01:13.911891 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-0e1c-account-create-update-t8hhb" Jan 26 13:01:13 crc kubenswrapper[4881]: I0126 13:01:13.915038 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Jan 26 13:01:13 crc kubenswrapper[4881]: I0126 13:01:13.934591 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-0e1c-account-create-update-t8hhb"] Jan 26 13:01:13 crc kubenswrapper[4881]: I0126 13:01:13.978027 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-56dh4\" (UniqueName: \"kubernetes.io/projected/98008dc5-f3ef-436d-af31-cec258fe5743-kube-api-access-56dh4\") pod \"nova-cell0-70ca-account-create-update-r686d\" (UID: \"98008dc5-f3ef-436d-af31-cec258fe5743\") " pod="openstack/nova-cell0-70ca-account-create-update-r686d" Jan 26 13:01:13 crc kubenswrapper[4881]: I0126 13:01:13.978294 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/96bc90dc-d5c2-412e-8ac8-a60fb254cd7e-operator-scripts\") pod \"nova-cell1-0e1c-account-create-update-t8hhb\" (UID: \"96bc90dc-d5c2-412e-8ac8-a60fb254cd7e\") " pod="openstack/nova-cell1-0e1c-account-create-update-t8hhb" Jan 26 13:01:13 crc kubenswrapper[4881]: I0126 13:01:13.978887 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpxhw\" (UniqueName: \"kubernetes.io/projected/96bc90dc-d5c2-412e-8ac8-a60fb254cd7e-kube-api-access-jpxhw\") pod \"nova-cell1-0e1c-account-create-update-t8hhb\" (UID: \"96bc90dc-d5c2-412e-8ac8-a60fb254cd7e\") " pod="openstack/nova-cell1-0e1c-account-create-update-t8hhb" Jan 26 13:01:13 crc kubenswrapper[4881]: I0126 13:01:13.979123 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/98008dc5-f3ef-436d-af31-cec258fe5743-operator-scripts\") pod \"nova-cell0-70ca-account-create-update-r686d\" (UID: \"98008dc5-f3ef-436d-af31-cec258fe5743\") " pod="openstack/nova-cell0-70ca-account-create-update-r686d" Jan 26 13:01:13 crc kubenswrapper[4881]: I0126 13:01:13.980081 4881 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/98008dc5-f3ef-436d-af31-cec258fe5743-operator-scripts\") pod \"nova-cell0-70ca-account-create-update-r686d\" (UID: \"98008dc5-f3ef-436d-af31-cec258fe5743\") " pod="openstack/nova-cell0-70ca-account-create-update-r686d" Jan 26 13:01:14 crc kubenswrapper[4881]: I0126 13:01:14.002552 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-nzs96" Jan 26 13:01:14 crc kubenswrapper[4881]: I0126 13:01:14.007732 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-56dh4\" (UniqueName: \"kubernetes.io/projected/98008dc5-f3ef-436d-af31-cec258fe5743-kube-api-access-56dh4\") pod \"nova-cell0-70ca-account-create-update-r686d\" (UID: \"98008dc5-f3ef-436d-af31-cec258fe5743\") " pod="openstack/nova-cell0-70ca-account-create-update-r686d" Jan 26 13:01:14 crc kubenswrapper[4881]: I0126 13:01:14.026216 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 13:01:14 crc kubenswrapper[4881]: I0126 13:01:14.051719 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-70ca-account-create-update-r686d" Jan 26 13:01:14 crc kubenswrapper[4881]: I0126 13:01:14.055613 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 26 13:01:14 crc kubenswrapper[4881]: I0126 13:01:14.062564 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 26 13:01:14 crc kubenswrapper[4881]: I0126 13:01:14.064814 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 13:01:14 crc kubenswrapper[4881]: I0126 13:01:14.068918 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 26 13:01:14 crc kubenswrapper[4881]: I0126 13:01:14.069015 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 26 13:01:14 crc kubenswrapper[4881]: I0126 13:01:14.072043 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 13:01:14 crc kubenswrapper[4881]: I0126 13:01:14.081622 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jpxhw\" (UniqueName: \"kubernetes.io/projected/96bc90dc-d5c2-412e-8ac8-a60fb254cd7e-kube-api-access-jpxhw\") pod \"nova-cell1-0e1c-account-create-update-t8hhb\" (UID: \"96bc90dc-d5c2-412e-8ac8-a60fb254cd7e\") " pod="openstack/nova-cell1-0e1c-account-create-update-t8hhb" Jan 26 13:01:14 crc kubenswrapper[4881]: I0126 13:01:14.081801 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/96bc90dc-d5c2-412e-8ac8-a60fb254cd7e-operator-scripts\") pod \"nova-cell1-0e1c-account-create-update-t8hhb\" (UID: \"96bc90dc-d5c2-412e-8ac8-a60fb254cd7e\") " pod="openstack/nova-cell1-0e1c-account-create-update-t8hhb" Jan 26 13:01:14 crc kubenswrapper[4881]: I0126 13:01:14.082416 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/96bc90dc-d5c2-412e-8ac8-a60fb254cd7e-operator-scripts\") pod \"nova-cell1-0e1c-account-create-update-t8hhb\" (UID: \"96bc90dc-d5c2-412e-8ac8-a60fb254cd7e\") " pod="openstack/nova-cell1-0e1c-account-create-update-t8hhb" Jan 26 13:01:14 crc kubenswrapper[4881]: I0126 13:01:14.104766 4881 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c41995a8-1c28-4783-849e-868a97831825" path="/var/lib/kubelet/pods/c41995a8-1c28-4783-849e-868a97831825/volumes" Jan 26 13:01:14 crc kubenswrapper[4881]: I0126 13:01:14.107359 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jpxhw\" (UniqueName: \"kubernetes.io/projected/96bc90dc-d5c2-412e-8ac8-a60fb254cd7e-kube-api-access-jpxhw\") pod \"nova-cell1-0e1c-account-create-update-t8hhb\" (UID: \"96bc90dc-d5c2-412e-8ac8-a60fb254cd7e\") " pod="openstack/nova-cell1-0e1c-account-create-update-t8hhb" Jan 26 13:01:14 crc kubenswrapper[4881]: I0126 13:01:14.178813 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-kqmwz"] Jan 26 13:01:14 crc kubenswrapper[4881]: I0126 13:01:14.191165 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/128b30ee-b173-421c-b871-12b2e4fb25b7-scripts\") pod \"ceilometer-0\" (UID: \"128b30ee-b173-421c-b871-12b2e4fb25b7\") " pod="openstack/ceilometer-0" Jan 26 13:01:14 crc kubenswrapper[4881]: I0126 13:01:14.191211 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjxvj\" (UniqueName: \"kubernetes.io/projected/128b30ee-b173-421c-b871-12b2e4fb25b7-kube-api-access-kjxvj\") pod \"ceilometer-0\" (UID: \"128b30ee-b173-421c-b871-12b2e4fb25b7\") " pod="openstack/ceilometer-0" Jan 26 13:01:14 crc kubenswrapper[4881]: I0126 13:01:14.191253 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/128b30ee-b173-421c-b871-12b2e4fb25b7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"128b30ee-b173-421c-b871-12b2e4fb25b7\") " pod="openstack/ceilometer-0" Jan 26 13:01:14 crc kubenswrapper[4881]: I0126 13:01:14.191273 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/128b30ee-b173-421c-b871-12b2e4fb25b7-run-httpd\") pod \"ceilometer-0\" (UID: \"128b30ee-b173-421c-b871-12b2e4fb25b7\") " pod="openstack/ceilometer-0" Jan 26 13:01:14 crc kubenswrapper[4881]: I0126 13:01:14.191325 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/128b30ee-b173-421c-b871-12b2e4fb25b7-config-data\") pod \"ceilometer-0\" (UID: \"128b30ee-b173-421c-b871-12b2e4fb25b7\") " pod="openstack/ceilometer-0" Jan 26 13:01:14 crc kubenswrapper[4881]: I0126 13:01:14.191341 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/128b30ee-b173-421c-b871-12b2e4fb25b7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"128b30ee-b173-421c-b871-12b2e4fb25b7\") " pod="openstack/ceilometer-0" Jan 26 13:01:14 crc kubenswrapper[4881]: I0126 13:01:14.191365 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/128b30ee-b173-421c-b871-12b2e4fb25b7-log-httpd\") pod \"ceilometer-0\" (UID: \"128b30ee-b173-421c-b871-12b2e4fb25b7\") " pod="openstack/ceilometer-0" Jan 26 13:01:14 crc kubenswrapper[4881]: I0126 13:01:14.289262 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-0e1c-account-create-update-t8hhb" Jan 26 13:01:14 crc kubenswrapper[4881]: I0126 13:01:14.294257 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/128b30ee-b173-421c-b871-12b2e4fb25b7-config-data\") pod \"ceilometer-0\" (UID: \"128b30ee-b173-421c-b871-12b2e4fb25b7\") " pod="openstack/ceilometer-0" Jan 26 13:01:14 crc kubenswrapper[4881]: I0126 13:01:14.294364 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/128b30ee-b173-421c-b871-12b2e4fb25b7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"128b30ee-b173-421c-b871-12b2e4fb25b7\") " pod="openstack/ceilometer-0" Jan 26 13:01:14 crc kubenswrapper[4881]: I0126 13:01:14.294457 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/128b30ee-b173-421c-b871-12b2e4fb25b7-log-httpd\") pod \"ceilometer-0\" (UID: \"128b30ee-b173-421c-b871-12b2e4fb25b7\") " pod="openstack/ceilometer-0" Jan 26 13:01:14 crc kubenswrapper[4881]: I0126 13:01:14.294772 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/128b30ee-b173-421c-b871-12b2e4fb25b7-scripts\") pod \"ceilometer-0\" (UID: \"128b30ee-b173-421c-b871-12b2e4fb25b7\") " pod="openstack/ceilometer-0" Jan 26 13:01:14 crc kubenswrapper[4881]: I0126 13:01:14.294859 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kjxvj\" (UniqueName: \"kubernetes.io/projected/128b30ee-b173-421c-b871-12b2e4fb25b7-kube-api-access-kjxvj\") pod \"ceilometer-0\" (UID: \"128b30ee-b173-421c-b871-12b2e4fb25b7\") " pod="openstack/ceilometer-0" Jan 26 13:01:14 crc kubenswrapper[4881]: I0126 13:01:14.295240 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/128b30ee-b173-421c-b871-12b2e4fb25b7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"128b30ee-b173-421c-b871-12b2e4fb25b7\") " pod="openstack/ceilometer-0" Jan 26 13:01:14 crc kubenswrapper[4881]: I0126 13:01:14.295383 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/128b30ee-b173-421c-b871-12b2e4fb25b7-run-httpd\") pod \"ceilometer-0\" (UID: \"128b30ee-b173-421c-b871-12b2e4fb25b7\") " pod="openstack/ceilometer-0" Jan 26 13:01:14 crc kubenswrapper[4881]: I0126 13:01:14.295255 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/128b30ee-b173-421c-b871-12b2e4fb25b7-log-httpd\") pod \"ceilometer-0\" (UID: \"128b30ee-b173-421c-b871-12b2e4fb25b7\") " pod="openstack/ceilometer-0" Jan 26 13:01:14 crc kubenswrapper[4881]: I0126 13:01:14.295924 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/128b30ee-b173-421c-b871-12b2e4fb25b7-run-httpd\") pod \"ceilometer-0\" (UID: \"128b30ee-b173-421c-b871-12b2e4fb25b7\") " pod="openstack/ceilometer-0" Jan 26 13:01:14 crc kubenswrapper[4881]: I0126 13:01:14.300891 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/128b30ee-b173-421c-b871-12b2e4fb25b7-config-data\") pod \"ceilometer-0\" (UID: \"128b30ee-b173-421c-b871-12b2e4fb25b7\") " 
pod="openstack/ceilometer-0" Jan 26 13:01:14 crc kubenswrapper[4881]: I0126 13:01:14.302474 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/128b30ee-b173-421c-b871-12b2e4fb25b7-scripts\") pod \"ceilometer-0\" (UID: \"128b30ee-b173-421c-b871-12b2e4fb25b7\") " pod="openstack/ceilometer-0" Jan 26 13:01:14 crc kubenswrapper[4881]: I0126 13:01:14.302942 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/128b30ee-b173-421c-b871-12b2e4fb25b7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"128b30ee-b173-421c-b871-12b2e4fb25b7\") " pod="openstack/ceilometer-0" Jan 26 13:01:14 crc kubenswrapper[4881]: I0126 13:01:14.312915 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/128b30ee-b173-421c-b871-12b2e4fb25b7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"128b30ee-b173-421c-b871-12b2e4fb25b7\") " pod="openstack/ceilometer-0" Jan 26 13:01:14 crc kubenswrapper[4881]: I0126 13:01:14.315147 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjxvj\" (UniqueName: \"kubernetes.io/projected/128b30ee-b173-421c-b871-12b2e4fb25b7-kube-api-access-kjxvj\") pod \"ceilometer-0\" (UID: \"128b30ee-b173-421c-b871-12b2e4fb25b7\") " pod="openstack/ceilometer-0" Jan 26 13:01:14 crc kubenswrapper[4881]: I0126 13:01:14.367238 4881 generic.go:334] "Generic (PLEG): container finished" podID="25d5f80d-b0a4-4b8f-a8b7-12f0b7296801" containerID="ddc6a54278ec576b876902cfe5b2c98d0a53ca4c5bf2735e778be9809870e4df" exitCode=0 Jan 26 13:01:14 crc kubenswrapper[4881]: I0126 13:01:14.367295 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"25d5f80d-b0a4-4b8f-a8b7-12f0b7296801","Type":"ContainerDied","Data":"ddc6a54278ec576b876902cfe5b2c98d0a53ca4c5bf2735e778be9809870e4df"} Jan 26 13:01:14 crc kubenswrapper[4881]: I0126 13:01:14.367329 4881 scope.go:117] "RemoveContainer" containerID="1155fe2ececa788009d295d6b3f0350b265a62f45095f8a599dcda64dc323a0e" Jan 26 13:01:14 crc kubenswrapper[4881]: I0126 13:01:14.370953 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-kqmwz" event={"ID":"66bc1d35-a96d-4cce-98be-5d65886c6f83","Type":"ContainerStarted","Data":"22338695173d9809f5bca252939e45050e97f6c4ea9891da98ff6acf68ebee65"} Jan 26 13:01:14 crc kubenswrapper[4881]: I0126 13:01:14.406398 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 13:01:14 crc kubenswrapper[4881]: I0126 13:01:14.455204 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-2kfpk"] Jan 26 13:01:14 crc kubenswrapper[4881]: I0126 13:01:14.623098 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-d401-account-create-update-6knzf"] Jan 26 13:01:14 crc kubenswrapper[4881]: W0126 13:01:14.626845 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc9089843_0624_44fe_a41f_78746490b5be.slice/crio-8517271bfd1d36e51c7bb153756f254f7ecabdf43a128ad143e3276e69d39ac5 WatchSource:0}: Error finding container 8517271bfd1d36e51c7bb153756f254f7ecabdf43a128ad143e3276e69d39ac5: Status 404 returned error can't find the container with id 8517271bfd1d36e51c7bb153756f254f7ecabdf43a128ad143e3276e69d39ac5 Jan 26 13:01:14 crc kubenswrapper[4881]: I0126 13:01:14.740644 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-nzs96"] Jan 26 13:01:14 crc kubenswrapper[4881]: I0126 13:01:14.760626 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-70ca-account-create-update-r686d"] Jan 26 13:01:14 crc kubenswrapper[4881]: I0126 13:01:14.784219 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Jan 26 13:01:14 crc kubenswrapper[4881]: I0126 13:01:14.910172 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/25d5f80d-b0a4-4b8f-a8b7-12f0b7296801-logs\") pod \"25d5f80d-b0a4-4b8f-a8b7-12f0b7296801\" (UID: \"25d5f80d-b0a4-4b8f-a8b7-12f0b7296801\") " Jan 26 13:01:14 crc kubenswrapper[4881]: I0126 13:01:14.910593 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25d5f80d-b0a4-4b8f-a8b7-12f0b7296801-config-data\") pod \"25d5f80d-b0a4-4b8f-a8b7-12f0b7296801\" (UID: \"25d5f80d-b0a4-4b8f-a8b7-12f0b7296801\") " Jan 26 13:01:14 crc kubenswrapper[4881]: I0126 13:01:14.910700 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/25d5f80d-b0a4-4b8f-a8b7-12f0b7296801-logs" (OuterVolumeSpecName: "logs") pod "25d5f80d-b0a4-4b8f-a8b7-12f0b7296801" (UID: "25d5f80d-b0a4-4b8f-a8b7-12f0b7296801"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:01:14 crc kubenswrapper[4881]: I0126 13:01:14.910717 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/25d5f80d-b0a4-4b8f-a8b7-12f0b7296801-custom-prometheus-ca\") pod \"25d5f80d-b0a4-4b8f-a8b7-12f0b7296801\" (UID: \"25d5f80d-b0a4-4b8f-a8b7-12f0b7296801\") " Jan 26 13:01:14 crc kubenswrapper[4881]: I0126 13:01:14.910863 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-58m7d\" (UniqueName: \"kubernetes.io/projected/25d5f80d-b0a4-4b8f-a8b7-12f0b7296801-kube-api-access-58m7d\") pod \"25d5f80d-b0a4-4b8f-a8b7-12f0b7296801\" (UID: \"25d5f80d-b0a4-4b8f-a8b7-12f0b7296801\") " Jan 26 13:01:14 crc kubenswrapper[4881]: I0126 13:01:14.911009 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25d5f80d-b0a4-4b8f-a8b7-12f0b7296801-combined-ca-bundle\") pod \"25d5f80d-b0a4-4b8f-a8b7-12f0b7296801\" (UID: \"25d5f80d-b0a4-4b8f-a8b7-12f0b7296801\") " Jan 26 13:01:14 crc kubenswrapper[4881]: I0126 13:01:14.911732 4881 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/25d5f80d-b0a4-4b8f-a8b7-12f0b7296801-logs\") on node \"crc\" DevicePath \"\"" Jan 26 13:01:14 crc kubenswrapper[4881]: I0126 13:01:14.950403 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25d5f80d-b0a4-4b8f-a8b7-12f0b7296801-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "25d5f80d-b0a4-4b8f-a8b7-12f0b7296801" (UID: "25d5f80d-b0a4-4b8f-a8b7-12f0b7296801"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:01:14 crc kubenswrapper[4881]: I0126 13:01:14.950958 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25d5f80d-b0a4-4b8f-a8b7-12f0b7296801-kube-api-access-58m7d" (OuterVolumeSpecName: "kube-api-access-58m7d") pod "25d5f80d-b0a4-4b8f-a8b7-12f0b7296801" (UID: "25d5f80d-b0a4-4b8f-a8b7-12f0b7296801"). InnerVolumeSpecName "kube-api-access-58m7d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:01:14 crc kubenswrapper[4881]: I0126 13:01:14.957835 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25d5f80d-b0a4-4b8f-a8b7-12f0b7296801-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "25d5f80d-b0a4-4b8f-a8b7-12f0b7296801" (UID: "25d5f80d-b0a4-4b8f-a8b7-12f0b7296801"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:01:14 crc kubenswrapper[4881]: I0126 13:01:14.998640 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-0e1c-account-create-update-t8hhb"] Jan 26 13:01:15 crc kubenswrapper[4881]: I0126 13:01:15.013261 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-58m7d\" (UniqueName: \"kubernetes.io/projected/25d5f80d-b0a4-4b8f-a8b7-12f0b7296801-kube-api-access-58m7d\") on node \"crc\" DevicePath \"\"" Jan 26 13:01:15 crc kubenswrapper[4881]: I0126 13:01:15.013288 4881 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25d5f80d-b0a4-4b8f-a8b7-12f0b7296801-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 13:01:15 crc kubenswrapper[4881]: I0126 13:01:15.013298 4881 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/25d5f80d-b0a4-4b8f-a8b7-12f0b7296801-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Jan 26 13:01:15 crc kubenswrapper[4881]: I0126 13:01:15.022183 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 13:01:15 crc kubenswrapper[4881]: I0126 13:01:15.024748 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25d5f80d-b0a4-4b8f-a8b7-12f0b7296801-config-data" (OuterVolumeSpecName: "config-data") pod "25d5f80d-b0a4-4b8f-a8b7-12f0b7296801" (UID: "25d5f80d-b0a4-4b8f-a8b7-12f0b7296801"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:01:15 crc kubenswrapper[4881]: I0126 13:01:15.115337 4881 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25d5f80d-b0a4-4b8f-a8b7-12f0b7296801-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 13:01:15 crc kubenswrapper[4881]: I0126 13:01:15.199741 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-5555bb9565-2bdtt" Jan 26 13:01:15 crc kubenswrapper[4881]: I0126 13:01:15.274637 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5795fd4b4d-xdxj4"] Jan 26 13:01:15 crc kubenswrapper[4881]: I0126 13:01:15.275186 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5795fd4b4d-xdxj4" podUID="4065cb1b-b1ab-4fef-b77f-64ec87d80d99" containerName="neutron-api" containerID="cri-o://de44f1ef54490250ab4357497e56ac4e7101c2cb24e3df8c818bc7d17fe58aed" gracePeriod=30 Jan 26 13:01:15 crc kubenswrapper[4881]: I0126 13:01:15.275583 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5795fd4b4d-xdxj4" podUID="4065cb1b-b1ab-4fef-b77f-64ec87d80d99" containerName="neutron-httpd" containerID="cri-o://9506b68cbaa936f8a0db5d31c37f0f0fa2350fb1d37fdceafba34836e736b40f" gracePeriod=30 Jan 26 13:01:15 crc kubenswrapper[4881]: I0126 13:01:15.402623 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-nzs96" event={"ID":"4376f3bd-20c9-41f8-a1d1-eae76560d137","Type":"ContainerStarted","Data":"89b12c9203c78ecd99ca06a673b823dcc2ab211b39c7d2451ee234f590e867db"} Jan 26 13:01:15 crc kubenswrapper[4881]: I0126 13:01:15.402684 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-nzs96" 
event={"ID":"4376f3bd-20c9-41f8-a1d1-eae76560d137","Type":"ContainerStarted","Data":"9af11dfbdeb21f2f969f2c59d58771564af2f89ebd8d40602899b5326f892f45"} Jan 26 13:01:15 crc kubenswrapper[4881]: I0126 13:01:15.406902 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-0e1c-account-create-update-t8hhb" event={"ID":"96bc90dc-d5c2-412e-8ac8-a60fb254cd7e","Type":"ContainerStarted","Data":"4f61cd5e59bc8b32b6617da04a49ec4937697b918b0b86e12a132c54e2b66346"} Jan 26 13:01:15 crc kubenswrapper[4881]: I0126 13:01:15.406971 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-0e1c-account-create-update-t8hhb" event={"ID":"96bc90dc-d5c2-412e-8ac8-a60fb254cd7e","Type":"ContainerStarted","Data":"a4143f33553094826c67bf98b1de87989464266ad8311cd029f15bcf181001e9"} Jan 26 13:01:15 crc kubenswrapper[4881]: I0126 13:01:15.417195 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-nzs96" podStartSLOduration=2.417176523 podStartE2EDuration="2.417176523s" podCreationTimestamp="2026-01-26 13:01:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 13:01:15.415485151 +0000 UTC m=+1547.894795177" watchObservedRunningTime="2026-01-26 13:01:15.417176523 +0000 UTC m=+1547.896486549" Jan 26 13:01:15 crc kubenswrapper[4881]: I0126 13:01:15.417327 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Jan 26 13:01:15 crc kubenswrapper[4881]: I0126 13:01:15.417619 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"25d5f80d-b0a4-4b8f-a8b7-12f0b7296801","Type":"ContainerDied","Data":"b982f0f12961dd95857b60f4b7aea85726ec88affe7251bf794adc2cf5acb312"} Jan 26 13:01:15 crc kubenswrapper[4881]: I0126 13:01:15.417671 4881 scope.go:117] "RemoveContainer" containerID="ddc6a54278ec576b876902cfe5b2c98d0a53ca4c5bf2735e778be9809870e4df" Jan 26 13:01:15 crc kubenswrapper[4881]: I0126 13:01:15.434794 4881 generic.go:334] "Generic (PLEG): container finished" podID="d3bca628-c4a8-4a09-bc59-3b0f2627adf4" containerID="6f30d0c94b89e9971cf376705fc4f984a75d6ab61254e0a573d568139d3f18a9" exitCode=0 Jan 26 13:01:15 crc kubenswrapper[4881]: I0126 13:01:15.434875 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-2kfpk" event={"ID":"d3bca628-c4a8-4a09-bc59-3b0f2627adf4","Type":"ContainerDied","Data":"6f30d0c94b89e9971cf376705fc4f984a75d6ab61254e0a573d568139d3f18a9"} Jan 26 13:01:15 crc kubenswrapper[4881]: I0126 13:01:15.434906 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-2kfpk" event={"ID":"d3bca628-c4a8-4a09-bc59-3b0f2627adf4","Type":"ContainerStarted","Data":"793cec8e5d12e9bb0b0cfd6e955727ec49eb996539611f94184060539d3dc5fd"} Jan 26 13:01:15 crc kubenswrapper[4881]: I0126 13:01:15.459020 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-0e1c-account-create-update-t8hhb" podStartSLOduration=2.459001314 podStartE2EDuration="2.459001314s" podCreationTimestamp="2026-01-26 13:01:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 13:01:15.434098932 +0000 UTC m=+1547.913408988" watchObservedRunningTime="2026-01-26 13:01:15.459001314 +0000 UTC m=+1547.938311340" Jan 26 13:01:15 crc 
Jan 26 13:01:15 crc kubenswrapper[4881]: I0126 13:01:15.462930 4881 generic.go:334] "Generic (PLEG): container finished" podID="66bc1d35-a96d-4cce-98be-5d65886c6f83" containerID="b187ed03cef959cc7d3b78e596f9e215e4dbea8a9ca9f53bbaf12e2e28caa6fc" exitCode=0
Jan 26 13:01:15 crc kubenswrapper[4881]: I0126 13:01:15.463028 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-kqmwz" event={"ID":"66bc1d35-a96d-4cce-98be-5d65886c6f83","Type":"ContainerDied","Data":"b187ed03cef959cc7d3b78e596f9e215e4dbea8a9ca9f53bbaf12e2e28caa6fc"}
Jan 26 13:01:15 crc kubenswrapper[4881]: I0126 13:01:15.473499 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-70ca-account-create-update-r686d" event={"ID":"98008dc5-f3ef-436d-af31-cec258fe5743","Type":"ContainerStarted","Data":"d27fd28e1895233826fd4ccf94574a6e5ae357e98dd77d9c8f6d1d9c896f4791"}
Jan 26 13:01:15 crc kubenswrapper[4881]: I0126 13:01:15.473565 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-70ca-account-create-update-r686d" event={"ID":"98008dc5-f3ef-436d-af31-cec258fe5743","Type":"ContainerStarted","Data":"4f2ed3eee49b9d21d7d83110af3bfcda19c6320534c3d6f963bd68c78835390e"}
Jan 26 13:01:15 crc kubenswrapper[4881]: I0126 13:01:15.475850 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-d401-account-create-update-6knzf" event={"ID":"c9089843-0624-44fe-a41f-78746490b5be","Type":"ContainerStarted","Data":"a2be2e490346aaf1e6e2bdedeb964943e63d69910dbdc49f232182b9095f1b29"}
Jan 26 13:01:15 crc kubenswrapper[4881]: I0126 13:01:15.475873 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-d401-account-create-update-6knzf" event={"ID":"c9089843-0624-44fe-a41f-78746490b5be","Type":"ContainerStarted","Data":"8517271bfd1d36e51c7bb153756f254f7ecabdf43a128ad143e3276e69d39ac5"}
Jan 26 13:01:15 crc kubenswrapper[4881]: I0126 13:01:15.477958 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"128b30ee-b173-421c-b871-12b2e4fb25b7","Type":"ContainerStarted","Data":"8ce65feac66f532e39e849def16b9db69758f160afa157545897ca25694c0f2c"}
Jan 26 13:01:15 crc kubenswrapper[4881]: I0126 13:01:15.477985 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"128b30ee-b173-421c-b871-12b2e4fb25b7","Type":"ContainerStarted","Data":"18441c62a1f3b0ce6a28b06b8ba5f2615f04ef427c8240a217f77bc394fdb14e"}
Jan 26 13:01:15 crc kubenswrapper[4881]: I0126 13:01:15.522585 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-70ca-account-create-update-r686d" podStartSLOduration=2.522563269 podStartE2EDuration="2.522563269s" podCreationTimestamp="2026-01-26 13:01:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 13:01:15.510956279 +0000 UTC m=+1547.990266305" watchObservedRunningTime="2026-01-26 13:01:15.522563269 +0000 UTC m=+1548.001873295"
Jan 26 13:01:15 crc kubenswrapper[4881]: I0126 13:01:15.822781 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-decision-engine-0"]
Jan 26 13:01:15 crc kubenswrapper[4881]: I0126 13:01:15.875210 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-decision-engine-0"]
Jan 26 13:01:15 crc kubenswrapper[4881]: I0126 13:01:15.881799 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Jan 26 13:01:15 crc kubenswrapper[4881]: I0126 13:01:15.882809 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Jan 26 13:01:15 crc kubenswrapper[4881]: I0126 13:01:15.887504 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-decision-engine-0"]
Jan 26 13:01:15 crc kubenswrapper[4881]: E0126 13:01:15.887986 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25d5f80d-b0a4-4b8f-a8b7-12f0b7296801" containerName="watcher-decision-engine"
Jan 26 13:01:15 crc kubenswrapper[4881]: I0126 13:01:15.887999 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="25d5f80d-b0a4-4b8f-a8b7-12f0b7296801" containerName="watcher-decision-engine"
Jan 26 13:01:15 crc kubenswrapper[4881]: E0126 13:01:15.888022 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25d5f80d-b0a4-4b8f-a8b7-12f0b7296801" containerName="watcher-decision-engine"
Jan 26 13:01:15 crc kubenswrapper[4881]: I0126 13:01:15.888028 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="25d5f80d-b0a4-4b8f-a8b7-12f0b7296801" containerName="watcher-decision-engine"
Jan 26 13:01:15 crc kubenswrapper[4881]: E0126 13:01:15.888042 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25d5f80d-b0a4-4b8f-a8b7-12f0b7296801" containerName="watcher-decision-engine"
Jan 26 13:01:15 crc kubenswrapper[4881]: I0126 13:01:15.888048 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="25d5f80d-b0a4-4b8f-a8b7-12f0b7296801" containerName="watcher-decision-engine"
Jan 26 13:01:15 crc kubenswrapper[4881]: I0126 13:01:15.888247 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="25d5f80d-b0a4-4b8f-a8b7-12f0b7296801" containerName="watcher-decision-engine"
Jan 26 13:01:15 crc kubenswrapper[4881]: I0126 13:01:15.888261 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="25d5f80d-b0a4-4b8f-a8b7-12f0b7296801" containerName="watcher-decision-engine"
Jan 26 13:01:15 crc kubenswrapper[4881]: I0126 13:01:15.888279 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="25d5f80d-b0a4-4b8f-a8b7-12f0b7296801" containerName="watcher-decision-engine"
Jan 26 13:01:15 crc kubenswrapper[4881]: I0126 13:01:15.889701 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0"
Jan 26 13:01:15 crc kubenswrapper[4881]: I0126 13:01:15.893234 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-decision-engine-config-data"
Jan 26 13:01:15 crc kubenswrapper[4881]: I0126 13:01:15.896546 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"]
Jan 26 13:01:15 crc kubenswrapper[4881]: I0126 13:01:15.931748 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Jan 26 13:01:15 crc kubenswrapper[4881]: I0126 13:01:15.932486 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e321419e-1316-442e-b8f1-2f4a2451203f-config-data\") pod \"watcher-decision-engine-0\" (UID: \"e321419e-1316-442e-b8f1-2f4a2451203f\") " pod="openstack/watcher-decision-engine-0"
Jan 26 13:01:15 crc kubenswrapper[4881]: I0126 13:01:15.932602 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-th8wj\" (UniqueName: \"kubernetes.io/projected/e321419e-1316-442e-b8f1-2f4a2451203f-kube-api-access-th8wj\") pod \"watcher-decision-engine-0\" (UID: \"e321419e-1316-442e-b8f1-2f4a2451203f\") " pod="openstack/watcher-decision-engine-0"
Jan 26 13:01:15 crc kubenswrapper[4881]: I0126 13:01:15.932629 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e321419e-1316-442e-b8f1-2f4a2451203f-logs\") pod \"watcher-decision-engine-0\" (UID: \"e321419e-1316-442e-b8f1-2f4a2451203f\") " pod="openstack/watcher-decision-engine-0"
Jan 26 13:01:15 crc kubenswrapper[4881]: I0126 13:01:15.932651 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/e321419e-1316-442e-b8f1-2f4a2451203f-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"e321419e-1316-442e-b8f1-2f4a2451203f\") " pod="openstack/watcher-decision-engine-0"
Jan 26 13:01:15 crc kubenswrapper[4881]: I0126 13:01:15.932695 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e321419e-1316-442e-b8f1-2f4a2451203f-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"e321419e-1316-442e-b8f1-2f4a2451203f\") " pod="openstack/watcher-decision-engine-0"
Jan 26 13:01:15 crc kubenswrapper[4881]: I0126 13:01:15.942796 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Jan 26 13:01:16 crc kubenswrapper[4881]: I0126 13:01:16.035159 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/e321419e-1316-442e-b8f1-2f4a2451203f-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"e321419e-1316-442e-b8f1-2f4a2451203f\") " pod="openstack/watcher-decision-engine-0"
Jan 26 13:01:16 crc kubenswrapper[4881]: I0126 13:01:16.035226 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e321419e-1316-442e-b8f1-2f4a2451203f-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"e321419e-1316-442e-b8f1-2f4a2451203f\") " pod="openstack/watcher-decision-engine-0"
Jan 26 13:01:16 crc kubenswrapper[4881]: I0126 13:01:16.035320 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e321419e-1316-442e-b8f1-2f4a2451203f-config-data\") pod \"watcher-decision-engine-0\" (UID: \"e321419e-1316-442e-b8f1-2f4a2451203f\") " pod="openstack/watcher-decision-engine-0"
Jan 26 13:01:16 crc kubenswrapper[4881]: I0126 13:01:16.035381 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-th8wj\" (UniqueName: \"kubernetes.io/projected/e321419e-1316-442e-b8f1-2f4a2451203f-kube-api-access-th8wj\") pod \"watcher-decision-engine-0\" (UID: \"e321419e-1316-442e-b8f1-2f4a2451203f\") " pod="openstack/watcher-decision-engine-0"
Jan 26 13:01:16 crc kubenswrapper[4881]: I0126 13:01:16.035406 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e321419e-1316-442e-b8f1-2f4a2451203f-logs\") pod \"watcher-decision-engine-0\" (UID: \"e321419e-1316-442e-b8f1-2f4a2451203f\") " pod="openstack/watcher-decision-engine-0"
Jan 26 13:01:16 crc kubenswrapper[4881]: I0126 13:01:16.035823 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e321419e-1316-442e-b8f1-2f4a2451203f-logs\") pod \"watcher-decision-engine-0\" (UID: \"e321419e-1316-442e-b8f1-2f4a2451203f\") " pod="openstack/watcher-decision-engine-0"
Jan 26 13:01:16 crc kubenswrapper[4881]: I0126 13:01:16.041373 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e321419e-1316-442e-b8f1-2f4a2451203f-config-data\") pod \"watcher-decision-engine-0\" (UID: \"e321419e-1316-442e-b8f1-2f4a2451203f\") " pod="openstack/watcher-decision-engine-0"
Jan 26 13:01:16 crc kubenswrapper[4881]: I0126 13:01:16.045322 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e321419e-1316-442e-b8f1-2f4a2451203f-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"e321419e-1316-442e-b8f1-2f4a2451203f\") " pod="openstack/watcher-decision-engine-0"
Jan 26 13:01:16 crc kubenswrapper[4881]: I0126 13:01:16.056036 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-th8wj\" (UniqueName: \"kubernetes.io/projected/e321419e-1316-442e-b8f1-2f4a2451203f-kube-api-access-th8wj\") pod \"watcher-decision-engine-0\" (UID: \"e321419e-1316-442e-b8f1-2f4a2451203f\") " pod="openstack/watcher-decision-engine-0"
Jan 26 13:01:16 crc kubenswrapper[4881]: I0126 13:01:16.058768 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/e321419e-1316-442e-b8f1-2f4a2451203f-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"e321419e-1316-442e-b8f1-2f4a2451203f\") " pod="openstack/watcher-decision-engine-0"
Jan 26 13:01:16 crc kubenswrapper[4881]: I0126 13:01:16.093590 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25d5f80d-b0a4-4b8f-a8b7-12f0b7296801" path="/var/lib/kubelet/pods/25d5f80d-b0a4-4b8f-a8b7-12f0b7296801/volumes"
Jan 26 13:01:16 crc kubenswrapper[4881]: I0126 13:01:16.224697 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0"
Jan 26 13:01:16 crc kubenswrapper[4881]: I0126 13:01:16.517796 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Jan 26 13:01:16 crc kubenswrapper[4881]: I0126 13:01:16.518129 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Jan 26 13:01:16 crc kubenswrapper[4881]: I0126 13:01:16.543850 4881 generic.go:334] "Generic (PLEG): container finished" podID="98008dc5-f3ef-436d-af31-cec258fe5743" containerID="d27fd28e1895233826fd4ccf94574a6e5ae357e98dd77d9c8f6d1d9c896f4791" exitCode=0
Jan 26 13:01:16 crc kubenswrapper[4881]: I0126 13:01:16.543936 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-70ca-account-create-update-r686d" event={"ID":"98008dc5-f3ef-436d-af31-cec258fe5743","Type":"ContainerDied","Data":"d27fd28e1895233826fd4ccf94574a6e5ae357e98dd77d9c8f6d1d9c896f4791"}
Jan 26 13:01:16 crc kubenswrapper[4881]: I0126 13:01:16.558107 4881 generic.go:334] "Generic (PLEG): container finished" podID="c9089843-0624-44fe-a41f-78746490b5be" containerID="a2be2e490346aaf1e6e2bdedeb964943e63d69910dbdc49f232182b9095f1b29" exitCode=0
Jan 26 13:01:16 crc kubenswrapper[4881]: I0126 13:01:16.558165 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-d401-account-create-update-6knzf" event={"ID":"c9089843-0624-44fe-a41f-78746490b5be","Type":"ContainerDied","Data":"a2be2e490346aaf1e6e2bdedeb964943e63d69910dbdc49f232182b9095f1b29"}
Jan 26 13:01:16 crc kubenswrapper[4881]: I0126 13:01:16.607581 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Jan 26 13:01:16 crc kubenswrapper[4881]: I0126 13:01:16.618818 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"128b30ee-b173-421c-b871-12b2e4fb25b7","Type":"ContainerStarted","Data":"48ea1fe15d4c389b8bd189f5cc5ff0b485903e3c0dc80ac99eee71d936f019a5"}
Jan 26 13:01:16 crc kubenswrapper[4881]: I0126 13:01:16.625234 4881 generic.go:334] "Generic (PLEG): container finished" podID="4376f3bd-20c9-41f8-a1d1-eae76560d137" containerID="89b12c9203c78ecd99ca06a673b823dcc2ab211b39c7d2451ee234f590e867db" exitCode=0
Jan 26 13:01:16 crc kubenswrapper[4881]: I0126 13:01:16.625291 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-nzs96" event={"ID":"4376f3bd-20c9-41f8-a1d1-eae76560d137","Type":"ContainerDied","Data":"89b12c9203c78ecd99ca06a673b823dcc2ab211b39c7d2451ee234f590e867db"}
Jan 26 13:01:16 crc kubenswrapper[4881]: I0126 13:01:16.626394 4881 generic.go:334] "Generic (PLEG): container finished" podID="96bc90dc-d5c2-412e-8ac8-a60fb254cd7e" containerID="4f61cd5e59bc8b32b6617da04a49ec4937697b918b0b86e12a132c54e2b66346" exitCode=0
Jan 26 13:01:16 crc kubenswrapper[4881]: I0126 13:01:16.626435 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-0e1c-account-create-update-t8hhb" event={"ID":"96bc90dc-d5c2-412e-8ac8-a60fb254cd7e","Type":"ContainerDied","Data":"4f61cd5e59bc8b32b6617da04a49ec4937697b918b0b86e12a132c54e2b66346"}
Jan 26 13:01:16 crc kubenswrapper[4881]: I0126 13:01:16.640289 4881 generic.go:334] "Generic (PLEG): container finished" podID="4065cb1b-b1ab-4fef-b77f-64ec87d80d99" containerID="9506b68cbaa936f8a0db5d31c37f0f0fa2350fb1d37fdceafba34836e736b40f" exitCode=0
Jan 26 13:01:16 crc kubenswrapper[4881]: I0126 13:01:16.640449
4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5795fd4b4d-xdxj4" event={"ID":"4065cb1b-b1ab-4fef-b77f-64ec87d80d99","Type":"ContainerDied","Data":"9506b68cbaa936f8a0db5d31c37f0f0fa2350fb1d37fdceafba34836e736b40f"} Jan 26 13:01:16 crc kubenswrapper[4881]: I0126 13:01:16.641924 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 26 13:01:16 crc kubenswrapper[4881]: I0126 13:01:16.641944 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 26 13:01:16 crc kubenswrapper[4881]: I0126 13:01:16.668859 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 26 13:01:16 crc kubenswrapper[4881]: I0126 13:01:16.747702 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 26 13:01:17 crc kubenswrapper[4881]: I0126 13:01:17.102401 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-d401-account-create-update-6knzf" Jan 26 13:01:17 crc kubenswrapper[4881]: I0126 13:01:17.243996 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-kqmwz" Jan 26 13:01:17 crc kubenswrapper[4881]: I0126 13:01:17.274024 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c9089843-0624-44fe-a41f-78746490b5be-operator-scripts\") pod \"c9089843-0624-44fe-a41f-78746490b5be\" (UID: \"c9089843-0624-44fe-a41f-78746490b5be\") " Jan 26 13:01:17 crc kubenswrapper[4881]: I0126 13:01:17.274200 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-swbgc\" (UniqueName: \"kubernetes.io/projected/c9089843-0624-44fe-a41f-78746490b5be-kube-api-access-swbgc\") pod \"c9089843-0624-44fe-a41f-78746490b5be\" (UID: \"c9089843-0624-44fe-a41f-78746490b5be\") " Jan 26 13:01:17 crc kubenswrapper[4881]: I0126 13:01:17.279018 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9089843-0624-44fe-a41f-78746490b5be-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c9089843-0624-44fe-a41f-78746490b5be" (UID: "c9089843-0624-44fe-a41f-78746490b5be"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:01:17 crc kubenswrapper[4881]: I0126 13:01:17.304251 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-2kfpk" Jan 26 13:01:17 crc kubenswrapper[4881]: I0126 13:01:17.304300 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9089843-0624-44fe-a41f-78746490b5be-kube-api-access-swbgc" (OuterVolumeSpecName: "kube-api-access-swbgc") pod "c9089843-0624-44fe-a41f-78746490b5be" (UID: "c9089843-0624-44fe-a41f-78746490b5be"). InnerVolumeSpecName "kube-api-access-swbgc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:01:17 crc kubenswrapper[4881]: I0126 13:01:17.376105 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/66bc1d35-a96d-4cce-98be-5d65886c6f83-operator-scripts\") pod \"66bc1d35-a96d-4cce-98be-5d65886c6f83\" (UID: \"66bc1d35-a96d-4cce-98be-5d65886c6f83\") " Jan 26 13:01:17 crc kubenswrapper[4881]: I0126 13:01:17.376292 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t6qmb\" (UniqueName: \"kubernetes.io/projected/66bc1d35-a96d-4cce-98be-5d65886c6f83-kube-api-access-t6qmb\") pod \"66bc1d35-a96d-4cce-98be-5d65886c6f83\" (UID: \"66bc1d35-a96d-4cce-98be-5d65886c6f83\") " Jan 26 13:01:17 crc kubenswrapper[4881]: I0126 13:01:17.376683 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-swbgc\" (UniqueName: \"kubernetes.io/projected/c9089843-0624-44fe-a41f-78746490b5be-kube-api-access-swbgc\") on node \"crc\" DevicePath \"\"" Jan 26 13:01:17 crc kubenswrapper[4881]: I0126 13:01:17.376699 4881 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c9089843-0624-44fe-a41f-78746490b5be-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 13:01:17 crc kubenswrapper[4881]: I0126 13:01:17.377273 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/66bc1d35-a96d-4cce-98be-5d65886c6f83-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "66bc1d35-a96d-4cce-98be-5d65886c6f83" (UID: "66bc1d35-a96d-4cce-98be-5d65886c6f83"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:01:17 crc kubenswrapper[4881]: I0126 13:01:17.379785 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66bc1d35-a96d-4cce-98be-5d65886c6f83-kube-api-access-t6qmb" (OuterVolumeSpecName: "kube-api-access-t6qmb") pod "66bc1d35-a96d-4cce-98be-5d65886c6f83" (UID: "66bc1d35-a96d-4cce-98be-5d65886c6f83"). InnerVolumeSpecName "kube-api-access-t6qmb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:01:17 crc kubenswrapper[4881]: I0126 13:01:17.478204 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7nz54\" (UniqueName: \"kubernetes.io/projected/d3bca628-c4a8-4a09-bc59-3b0f2627adf4-kube-api-access-7nz54\") pod \"d3bca628-c4a8-4a09-bc59-3b0f2627adf4\" (UID: \"d3bca628-c4a8-4a09-bc59-3b0f2627adf4\") " Jan 26 13:01:17 crc kubenswrapper[4881]: I0126 13:01:17.478273 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d3bca628-c4a8-4a09-bc59-3b0f2627adf4-operator-scripts\") pod \"d3bca628-c4a8-4a09-bc59-3b0f2627adf4\" (UID: \"d3bca628-c4a8-4a09-bc59-3b0f2627adf4\") " Jan 26 13:01:17 crc kubenswrapper[4881]: I0126 13:01:17.478747 4881 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/66bc1d35-a96d-4cce-98be-5d65886c6f83-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 13:01:17 crc kubenswrapper[4881]: I0126 13:01:17.478783 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t6qmb\" (UniqueName: \"kubernetes.io/projected/66bc1d35-a96d-4cce-98be-5d65886c6f83-kube-api-access-t6qmb\") on node \"crc\" DevicePath \"\"" Jan 26 13:01:17 crc kubenswrapper[4881]: I0126 13:01:17.478741 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3bca628-c4a8-4a09-bc59-3b0f2627adf4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d3bca628-c4a8-4a09-bc59-3b0f2627adf4" (UID: "d3bca628-c4a8-4a09-bc59-3b0f2627adf4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:01:17 crc kubenswrapper[4881]: I0126 13:01:17.483356 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3bca628-c4a8-4a09-bc59-3b0f2627adf4-kube-api-access-7nz54" (OuterVolumeSpecName: "kube-api-access-7nz54") pod "d3bca628-c4a8-4a09-bc59-3b0f2627adf4" (UID: "d3bca628-c4a8-4a09-bc59-3b0f2627adf4"). InnerVolumeSpecName "kube-api-access-7nz54". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:01:17 crc kubenswrapper[4881]: I0126 13:01:17.580082 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7nz54\" (UniqueName: \"kubernetes.io/projected/d3bca628-c4a8-4a09-bc59-3b0f2627adf4-kube-api-access-7nz54\") on node \"crc\" DevicePath \"\"" Jan 26 13:01:17 crc kubenswrapper[4881]: I0126 13:01:17.580117 4881 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d3bca628-c4a8-4a09-bc59-3b0f2627adf4-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 13:01:17 crc kubenswrapper[4881]: I0126 13:01:17.652335 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-2kfpk" event={"ID":"d3bca628-c4a8-4a09-bc59-3b0f2627adf4","Type":"ContainerDied","Data":"793cec8e5d12e9bb0b0cfd6e955727ec49eb996539611f94184060539d3dc5fd"} Jan 26 13:01:17 crc kubenswrapper[4881]: I0126 13:01:17.652382 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="793cec8e5d12e9bb0b0cfd6e955727ec49eb996539611f94184060539d3dc5fd" Jan 26 13:01:17 crc kubenswrapper[4881]: I0126 13:01:17.652342 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-2kfpk" Jan 26 13:01:17 crc kubenswrapper[4881]: I0126 13:01:17.654775 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-kqmwz" event={"ID":"66bc1d35-a96d-4cce-98be-5d65886c6f83","Type":"ContainerDied","Data":"22338695173d9809f5bca252939e45050e97f6c4ea9891da98ff6acf68ebee65"} Jan 26 13:01:17 crc kubenswrapper[4881]: I0126 13:01:17.654813 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="22338695173d9809f5bca252939e45050e97f6c4ea9891da98ff6acf68ebee65" Jan 26 13:01:17 crc kubenswrapper[4881]: I0126 13:01:17.654793 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-kqmwz" Jan 26 13:01:17 crc kubenswrapper[4881]: I0126 13:01:17.656582 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-d401-account-create-update-6knzf" event={"ID":"c9089843-0624-44fe-a41f-78746490b5be","Type":"ContainerDied","Data":"8517271bfd1d36e51c7bb153756f254f7ecabdf43a128ad143e3276e69d39ac5"} Jan 26 13:01:17 crc kubenswrapper[4881]: I0126 13:01:17.656622 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8517271bfd1d36e51c7bb153756f254f7ecabdf43a128ad143e3276e69d39ac5" Jan 26 13:01:17 crc kubenswrapper[4881]: I0126 13:01:17.656688 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-d401-account-create-update-6knzf" Jan 26 13:01:17 crc kubenswrapper[4881]: I0126 13:01:17.660067 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"e321419e-1316-442e-b8f1-2f4a2451203f","Type":"ContainerStarted","Data":"7945f72edc64b88c40c4636bea5e2287cc2dce62abc0dfd1f59e6c1979cc570c"} Jan 26 13:01:17 crc kubenswrapper[4881]: I0126 13:01:17.660115 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"e321419e-1316-442e-b8f1-2f4a2451203f","Type":"ContainerStarted","Data":"2ac082f312ed4a78cd5e697a2ed0fe4a52caac0a4dc233114b447d9f726ee63d"} Jan 26 13:01:17 crc kubenswrapper[4881]: I0126 13:01:17.663018 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"128b30ee-b173-421c-b871-12b2e4fb25b7","Type":"ContainerStarted","Data":"3e7de252cc3dd52e265cd828b34b5b278d452de2da4653d07e749674c48dc29a"} Jan 26 13:01:17 crc kubenswrapper[4881]: I0126 13:01:17.666109 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 26 13:01:17 crc kubenswrapper[4881]: I0126 13:01:17.666149 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 26 13:01:17 crc kubenswrapper[4881]: I0126 13:01:17.699332 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-decision-engine-0" podStartSLOduration=2.699314962 podStartE2EDuration="2.699314962s" podCreationTimestamp="2026-01-26 13:01:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 13:01:17.688625833 +0000 UTC m=+1550.167935859" watchObservedRunningTime="2026-01-26 13:01:17.699314962 +0000 UTC m=+1550.178624988" Jan 26 13:01:18 crc kubenswrapper[4881]: I0126 13:01:18.116453 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-nzs96" Jan 26 13:01:18 crc kubenswrapper[4881]: I0126 13:01:18.191410 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4376f3bd-20c9-41f8-a1d1-eae76560d137-operator-scripts\") pod \"4376f3bd-20c9-41f8-a1d1-eae76560d137\" (UID: \"4376f3bd-20c9-41f8-a1d1-eae76560d137\") " Jan 26 13:01:18 crc kubenswrapper[4881]: I0126 13:01:18.191664 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fncgh\" (UniqueName: \"kubernetes.io/projected/4376f3bd-20c9-41f8-a1d1-eae76560d137-kube-api-access-fncgh\") pod \"4376f3bd-20c9-41f8-a1d1-eae76560d137\" (UID: \"4376f3bd-20c9-41f8-a1d1-eae76560d137\") " Jan 26 13:01:18 crc kubenswrapper[4881]: I0126 13:01:18.195985 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4376f3bd-20c9-41f8-a1d1-eae76560d137-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4376f3bd-20c9-41f8-a1d1-eae76560d137" (UID: "4376f3bd-20c9-41f8-a1d1-eae76560d137"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:01:18 crc kubenswrapper[4881]: I0126 13:01:18.199825 4881 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4376f3bd-20c9-41f8-a1d1-eae76560d137-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 13:01:18 crc kubenswrapper[4881]: I0126 13:01:18.219827 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4376f3bd-20c9-41f8-a1d1-eae76560d137-kube-api-access-fncgh" (OuterVolumeSpecName: "kube-api-access-fncgh") pod "4376f3bd-20c9-41f8-a1d1-eae76560d137" (UID: "4376f3bd-20c9-41f8-a1d1-eae76560d137"). InnerVolumeSpecName "kube-api-access-fncgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:01:18 crc kubenswrapper[4881]: I0126 13:01:18.247626 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-70ca-account-create-update-r686d" Jan 26 13:01:18 crc kubenswrapper[4881]: I0126 13:01:18.273355 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-0e1c-account-create-update-t8hhb" Jan 26 13:01:18 crc kubenswrapper[4881]: I0126 13:01:18.301251 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fncgh\" (UniqueName: \"kubernetes.io/projected/4376f3bd-20c9-41f8-a1d1-eae76560d137-kube-api-access-fncgh\") on node \"crc\" DevicePath \"\"" Jan 26 13:01:18 crc kubenswrapper[4881]: I0126 13:01:18.402387 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/96bc90dc-d5c2-412e-8ac8-a60fb254cd7e-operator-scripts\") pod \"96bc90dc-d5c2-412e-8ac8-a60fb254cd7e\" (UID: \"96bc90dc-d5c2-412e-8ac8-a60fb254cd7e\") " Jan 26 13:01:18 crc kubenswrapper[4881]: I0126 13:01:18.402448 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jpxhw\" (UniqueName: \"kubernetes.io/projected/96bc90dc-d5c2-412e-8ac8-a60fb254cd7e-kube-api-access-jpxhw\") pod \"96bc90dc-d5c2-412e-8ac8-a60fb254cd7e\" (UID: \"96bc90dc-d5c2-412e-8ac8-a60fb254cd7e\") " Jan 26 13:01:18 crc kubenswrapper[4881]: I0126 13:01:18.402473 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/98008dc5-f3ef-436d-af31-cec258fe5743-operator-scripts\") pod \"98008dc5-f3ef-436d-af31-cec258fe5743\" (UID: \"98008dc5-f3ef-436d-af31-cec258fe5743\") " Jan 26 13:01:18 crc kubenswrapper[4881]: I0126 13:01:18.402626 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-56dh4\" (UniqueName: \"kubernetes.io/projected/98008dc5-f3ef-436d-af31-cec258fe5743-kube-api-access-56dh4\") pod \"98008dc5-f3ef-436d-af31-cec258fe5743\" (UID: \"98008dc5-f3ef-436d-af31-cec258fe5743\") " Jan 26 13:01:18 crc kubenswrapper[4881]: I0126 13:01:18.402880 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/96bc90dc-d5c2-412e-8ac8-a60fb254cd7e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "96bc90dc-d5c2-412e-8ac8-a60fb254cd7e" (UID: "96bc90dc-d5c2-412e-8ac8-a60fb254cd7e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:01:18 crc kubenswrapper[4881]: I0126 13:01:18.403138 4881 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/96bc90dc-d5c2-412e-8ac8-a60fb254cd7e-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 13:01:18 crc kubenswrapper[4881]: I0126 13:01:18.403968 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98008dc5-f3ef-436d-af31-cec258fe5743-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "98008dc5-f3ef-436d-af31-cec258fe5743" (UID: "98008dc5-f3ef-436d-af31-cec258fe5743"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:01:18 crc kubenswrapper[4881]: I0126 13:01:18.406928 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96bc90dc-d5c2-412e-8ac8-a60fb254cd7e-kube-api-access-jpxhw" (OuterVolumeSpecName: "kube-api-access-jpxhw") pod "96bc90dc-d5c2-412e-8ac8-a60fb254cd7e" (UID: "96bc90dc-d5c2-412e-8ac8-a60fb254cd7e"). InnerVolumeSpecName "kube-api-access-jpxhw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:01:18 crc kubenswrapper[4881]: I0126 13:01:18.407941 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98008dc5-f3ef-436d-af31-cec258fe5743-kube-api-access-56dh4" (OuterVolumeSpecName: "kube-api-access-56dh4") pod "98008dc5-f3ef-436d-af31-cec258fe5743" (UID: "98008dc5-f3ef-436d-af31-cec258fe5743"). InnerVolumeSpecName "kube-api-access-56dh4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:01:18 crc kubenswrapper[4881]: I0126 13:01:18.504911 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jpxhw\" (UniqueName: \"kubernetes.io/projected/96bc90dc-d5c2-412e-8ac8-a60fb254cd7e-kube-api-access-jpxhw\") on node \"crc\" DevicePath \"\"" Jan 26 13:01:18 crc kubenswrapper[4881]: I0126 13:01:18.504947 4881 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/98008dc5-f3ef-436d-af31-cec258fe5743-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 13:01:18 crc kubenswrapper[4881]: I0126 13:01:18.504956 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-56dh4\" (UniqueName: \"kubernetes.io/projected/98008dc5-f3ef-436d-af31-cec258fe5743-kube-api-access-56dh4\") on node \"crc\" DevicePath \"\"" Jan 26 13:01:18 crc kubenswrapper[4881]: I0126 13:01:18.673046 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-0e1c-account-create-update-t8hhb" event={"ID":"96bc90dc-d5c2-412e-8ac8-a60fb254cd7e","Type":"ContainerDied","Data":"a4143f33553094826c67bf98b1de87989464266ad8311cd029f15bcf181001e9"} Jan 26 13:01:18 crc kubenswrapper[4881]: I0126 13:01:18.673080 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-0e1c-account-create-update-t8hhb" Jan 26 13:01:18 crc kubenswrapper[4881]: I0126 13:01:18.673100 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a4143f33553094826c67bf98b1de87989464266ad8311cd029f15bcf181001e9" Jan 26 13:01:18 crc kubenswrapper[4881]: I0126 13:01:18.674496 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-70ca-account-create-update-r686d" event={"ID":"98008dc5-f3ef-436d-af31-cec258fe5743","Type":"ContainerDied","Data":"4f2ed3eee49b9d21d7d83110af3bfcda19c6320534c3d6f963bd68c78835390e"} Jan 26 13:01:18 crc kubenswrapper[4881]: I0126 13:01:18.674594 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4f2ed3eee49b9d21d7d83110af3bfcda19c6320534c3d6f963bd68c78835390e" Jan 26 13:01:18 crc kubenswrapper[4881]: I0126 13:01:18.674510 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-70ca-account-create-update-r686d" Jan 26 13:01:18 crc kubenswrapper[4881]: I0126 13:01:18.676092 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-nzs96" event={"ID":"4376f3bd-20c9-41f8-a1d1-eae76560d137","Type":"ContainerDied","Data":"9af11dfbdeb21f2f969f2c59d58771564af2f89ebd8d40602899b5326f892f45"} Jan 26 13:01:18 crc kubenswrapper[4881]: I0126 13:01:18.676136 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9af11dfbdeb21f2f969f2c59d58771564af2f89ebd8d40602899b5326f892f45" Jan 26 13:01:18 crc kubenswrapper[4881]: I0126 13:01:18.676163 4881 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 26 13:01:18 crc kubenswrapper[4881]: I0126 13:01:18.676179 4881 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 26 13:01:18 crc kubenswrapper[4881]: I0126 13:01:18.676333 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-nzs96" Jan 26 13:01:18 crc kubenswrapper[4881]: I0126 13:01:18.945468 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 13:01:19 crc kubenswrapper[4881]: I0126 13:01:19.148002 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 26 13:01:19 crc kubenswrapper[4881]: I0126 13:01:19.231985 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 26 13:01:19 crc kubenswrapper[4881]: I0126 13:01:19.546327 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5795fd4b4d-xdxj4" Jan 26 13:01:19 crc kubenswrapper[4881]: I0126 13:01:19.644056 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/4065cb1b-b1ab-4fef-b77f-64ec87d80d99-httpd-config\") pod \"4065cb1b-b1ab-4fef-b77f-64ec87d80d99\" (UID: \"4065cb1b-b1ab-4fef-b77f-64ec87d80d99\") " Jan 26 13:01:19 crc kubenswrapper[4881]: I0126 13:01:19.644092 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4065cb1b-b1ab-4fef-b77f-64ec87d80d99-config\") pod \"4065cb1b-b1ab-4fef-b77f-64ec87d80d99\" (UID: \"4065cb1b-b1ab-4fef-b77f-64ec87d80d99\") " Jan 26 13:01:19 crc kubenswrapper[4881]: I0126 13:01:19.644180 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4065cb1b-b1ab-4fef-b77f-64ec87d80d99-ovndb-tls-certs\") pod \"4065cb1b-b1ab-4fef-b77f-64ec87d80d99\" (UID: \"4065cb1b-b1ab-4fef-b77f-64ec87d80d99\") " Jan 26 13:01:19 crc kubenswrapper[4881]: I0126 13:01:19.644199 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4065cb1b-b1ab-4fef-b77f-64ec87d80d99-combined-ca-bundle\") pod \"4065cb1b-b1ab-4fef-b77f-64ec87d80d99\" (UID: \"4065cb1b-b1ab-4fef-b77f-64ec87d80d99\") " Jan 26 13:01:19 crc kubenswrapper[4881]: I0126 13:01:19.644275 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sjdbz\" (UniqueName: \"kubernetes.io/projected/4065cb1b-b1ab-4fef-b77f-64ec87d80d99-kube-api-access-sjdbz\") pod \"4065cb1b-b1ab-4fef-b77f-64ec87d80d99\" (UID: \"4065cb1b-b1ab-4fef-b77f-64ec87d80d99\") " Jan 26 
13:01:19 crc kubenswrapper[4881]: I0126 13:01:19.652729 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4065cb1b-b1ab-4fef-b77f-64ec87d80d99-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "4065cb1b-b1ab-4fef-b77f-64ec87d80d99" (UID: "4065cb1b-b1ab-4fef-b77f-64ec87d80d99"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:01:19 crc kubenswrapper[4881]: I0126 13:01:19.655618 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4065cb1b-b1ab-4fef-b77f-64ec87d80d99-kube-api-access-sjdbz" (OuterVolumeSpecName: "kube-api-access-sjdbz") pod "4065cb1b-b1ab-4fef-b77f-64ec87d80d99" (UID: "4065cb1b-b1ab-4fef-b77f-64ec87d80d99"). InnerVolumeSpecName "kube-api-access-sjdbz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:01:19 crc kubenswrapper[4881]: I0126 13:01:19.696454 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"128b30ee-b173-421c-b871-12b2e4fb25b7","Type":"ContainerStarted","Data":"00827b6c4844903d7bf83fcea3ece951352f279ab9e180746aa5b54236f6ebf7"} Jan 26 13:01:19 crc kubenswrapper[4881]: I0126 13:01:19.697505 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 26 13:01:19 crc kubenswrapper[4881]: I0126 13:01:19.699040 4881 generic.go:334] "Generic (PLEG): container finished" podID="4065cb1b-b1ab-4fef-b77f-64ec87d80d99" containerID="de44f1ef54490250ab4357497e56ac4e7101c2cb24e3df8c818bc7d17fe58aed" exitCode=0 Jan 26 13:01:19 crc kubenswrapper[4881]: I0126 13:01:19.699093 4881 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 26 13:01:19 crc kubenswrapper[4881]: I0126 13:01:19.699101 4881 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 26 13:01:19 crc kubenswrapper[4881]: I0126 13:01:19.699370 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5795fd4b4d-xdxj4" Jan 26 13:01:19 crc kubenswrapper[4881]: I0126 13:01:19.699843 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5795fd4b4d-xdxj4" event={"ID":"4065cb1b-b1ab-4fef-b77f-64ec87d80d99","Type":"ContainerDied","Data":"de44f1ef54490250ab4357497e56ac4e7101c2cb24e3df8c818bc7d17fe58aed"} Jan 26 13:01:19 crc kubenswrapper[4881]: I0126 13:01:19.699868 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5795fd4b4d-xdxj4" event={"ID":"4065cb1b-b1ab-4fef-b77f-64ec87d80d99","Type":"ContainerDied","Data":"06e230abe3a16291d2f4ba2e9d202472185a52c2482a9e046df62bcfff1e27f1"} Jan 26 13:01:19 crc kubenswrapper[4881]: I0126 13:01:19.699885 4881 scope.go:117] "RemoveContainer" containerID="9506b68cbaa936f8a0db5d31c37f0f0fa2350fb1d37fdceafba34836e736b40f" Jan 26 13:01:19 crc kubenswrapper[4881]: I0126 13:01:19.723822 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4065cb1b-b1ab-4fef-b77f-64ec87d80d99-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4065cb1b-b1ab-4fef-b77f-64ec87d80d99" (UID: "4065cb1b-b1ab-4fef-b77f-64ec87d80d99"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:01:19 crc kubenswrapper[4881]: I0126 13:01:19.746178 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.362802153 podStartE2EDuration="5.746156123s" podCreationTimestamp="2026-01-26 13:01:14 +0000 UTC" firstStartedPulling="2026-01-26 13:01:15.017080774 +0000 UTC m=+1547.496390800" lastFinishedPulling="2026-01-26 13:01:18.400434744 +0000 UTC m=+1550.879744770" observedRunningTime="2026-01-26 13:01:19.731698984 +0000 UTC m=+1552.211009020" watchObservedRunningTime="2026-01-26 13:01:19.746156123 +0000 UTC m=+1552.225466149" Jan 26 13:01:19 crc kubenswrapper[4881]: I0126 13:01:19.748318 4881 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/4065cb1b-b1ab-4fef-b77f-64ec87d80d99-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 26 13:01:19 crc kubenswrapper[4881]: I0126 13:01:19.748349 4881 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4065cb1b-b1ab-4fef-b77f-64ec87d80d99-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 13:01:19 crc kubenswrapper[4881]: I0126 13:01:19.748360 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sjdbz\" (UniqueName: \"kubernetes.io/projected/4065cb1b-b1ab-4fef-b77f-64ec87d80d99-kube-api-access-sjdbz\") on node \"crc\" DevicePath \"\"" Jan 26 13:01:19 crc kubenswrapper[4881]: I0126 13:01:19.757603 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4065cb1b-b1ab-4fef-b77f-64ec87d80d99-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "4065cb1b-b1ab-4fef-b77f-64ec87d80d99" (UID: "4065cb1b-b1ab-4fef-b77f-64ec87d80d99"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:01:19 crc kubenswrapper[4881]: I0126 13:01:19.790803 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4065cb1b-b1ab-4fef-b77f-64ec87d80d99-config" (OuterVolumeSpecName: "config") pod "4065cb1b-b1ab-4fef-b77f-64ec87d80d99" (UID: "4065cb1b-b1ab-4fef-b77f-64ec87d80d99"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:01:19 crc kubenswrapper[4881]: I0126 13:01:19.801596 4881 scope.go:117] "RemoveContainer" containerID="de44f1ef54490250ab4357497e56ac4e7101c2cb24e3df8c818bc7d17fe58aed" Jan 26 13:01:19 crc kubenswrapper[4881]: I0126 13:01:19.832292 4881 scope.go:117] "RemoveContainer" containerID="9506b68cbaa936f8a0db5d31c37f0f0fa2350fb1d37fdceafba34836e736b40f" Jan 26 13:01:19 crc kubenswrapper[4881]: E0126 13:01:19.835929 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9506b68cbaa936f8a0db5d31c37f0f0fa2350fb1d37fdceafba34836e736b40f\": container with ID starting with 9506b68cbaa936f8a0db5d31c37f0f0fa2350fb1d37fdceafba34836e736b40f not found: ID does not exist" containerID="9506b68cbaa936f8a0db5d31c37f0f0fa2350fb1d37fdceafba34836e736b40f" Jan 26 13:01:19 crc kubenswrapper[4881]: I0126 13:01:19.836057 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9506b68cbaa936f8a0db5d31c37f0f0fa2350fb1d37fdceafba34836e736b40f"} err="failed to get container status \"9506b68cbaa936f8a0db5d31c37f0f0fa2350fb1d37fdceafba34836e736b40f\": rpc error: code = NotFound desc = could not find container \"9506b68cbaa936f8a0db5d31c37f0f0fa2350fb1d37fdceafba34836e736b40f\": container with ID starting with 9506b68cbaa936f8a0db5d31c37f0f0fa2350fb1d37fdceafba34836e736b40f not found: ID does not exist" Jan 26 13:01:19 crc kubenswrapper[4881]: I0126 13:01:19.836143 4881 scope.go:117] "RemoveContainer" containerID="de44f1ef54490250ab4357497e56ac4e7101c2cb24e3df8c818bc7d17fe58aed" Jan 26 13:01:19 crc kubenswrapper[4881]: E0126 13:01:19.836523 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de44f1ef54490250ab4357497e56ac4e7101c2cb24e3df8c818bc7d17fe58aed\": container with ID starting with de44f1ef54490250ab4357497e56ac4e7101c2cb24e3df8c818bc7d17fe58aed not found: ID does not exist" containerID="de44f1ef54490250ab4357497e56ac4e7101c2cb24e3df8c818bc7d17fe58aed" Jan 26 13:01:19 crc kubenswrapper[4881]: I0126 13:01:19.836554 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de44f1ef54490250ab4357497e56ac4e7101c2cb24e3df8c818bc7d17fe58aed"} err="failed to get container status \"de44f1ef54490250ab4357497e56ac4e7101c2cb24e3df8c818bc7d17fe58aed\": rpc error: code = NotFound desc = could not find container \"de44f1ef54490250ab4357497e56ac4e7101c2cb24e3df8c818bc7d17fe58aed\": container with ID starting with de44f1ef54490250ab4357497e56ac4e7101c2cb24e3df8c818bc7d17fe58aed not found: ID does not exist" Jan 26 13:01:19 crc kubenswrapper[4881]: I0126 13:01:19.849700 4881 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/4065cb1b-b1ab-4fef-b77f-64ec87d80d99-config\") on node \"crc\" DevicePath \"\"" Jan 26 13:01:19 crc kubenswrapper[4881]: I0126 13:01:19.849730 4881 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4065cb1b-b1ab-4fef-b77f-64ec87d80d99-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 13:01:20 crc kubenswrapper[4881]: I0126 13:01:20.026810 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5795fd4b4d-xdxj4"] Jan 26 13:01:20 crc kubenswrapper[4881]: I0126 13:01:20.034543 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-5795fd4b4d-xdxj4"] Jan 26 
13:01:20 crc kubenswrapper[4881]: I0126 13:01:20.093285 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4065cb1b-b1ab-4fef-b77f-64ec87d80d99" path="/var/lib/kubelet/pods/4065cb1b-b1ab-4fef-b77f-64ec87d80d99/volumes" Jan 26 13:01:20 crc kubenswrapper[4881]: I0126 13:01:20.342631 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 26 13:01:20 crc kubenswrapper[4881]: I0126 13:01:20.504603 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 26 13:01:20 crc kubenswrapper[4881]: I0126 13:01:20.708243 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="128b30ee-b173-421c-b871-12b2e4fb25b7" containerName="ceilometer-central-agent" containerID="cri-o://8ce65feac66f532e39e849def16b9db69758f160afa157545897ca25694c0f2c" gracePeriod=30 Jan 26 13:01:20 crc kubenswrapper[4881]: I0126 13:01:20.708331 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="128b30ee-b173-421c-b871-12b2e4fb25b7" containerName="ceilometer-notification-agent" containerID="cri-o://48ea1fe15d4c389b8bd189f5cc5ff0b485903e3c0dc80ac99eee71d936f019a5" gracePeriod=30 Jan 26 13:01:20 crc kubenswrapper[4881]: I0126 13:01:20.708372 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="128b30ee-b173-421c-b871-12b2e4fb25b7" containerName="proxy-httpd" containerID="cri-o://00827b6c4844903d7bf83fcea3ece951352f279ab9e180746aa5b54236f6ebf7" gracePeriod=30 Jan 26 13:01:20 crc kubenswrapper[4881]: I0126 13:01:20.708376 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="128b30ee-b173-421c-b871-12b2e4fb25b7" containerName="sg-core" containerID="cri-o://3e7de252cc3dd52e265cd828b34b5b278d452de2da4653d07e749674c48dc29a" gracePeriod=30 Jan 26 13:01:21 crc kubenswrapper[4881]: I0126 13:01:21.724501 4881 generic.go:334] "Generic (PLEG): container finished" podID="128b30ee-b173-421c-b871-12b2e4fb25b7" containerID="00827b6c4844903d7bf83fcea3ece951352f279ab9e180746aa5b54236f6ebf7" exitCode=0 Jan 26 13:01:21 crc kubenswrapper[4881]: I0126 13:01:21.724780 4881 generic.go:334] "Generic (PLEG): container finished" podID="128b30ee-b173-421c-b871-12b2e4fb25b7" containerID="3e7de252cc3dd52e265cd828b34b5b278d452de2da4653d07e749674c48dc29a" exitCode=2 Jan 26 13:01:21 crc kubenswrapper[4881]: I0126 13:01:21.724791 4881 generic.go:334] "Generic (PLEG): container finished" podID="128b30ee-b173-421c-b871-12b2e4fb25b7" containerID="48ea1fe15d4c389b8bd189f5cc5ff0b485903e3c0dc80ac99eee71d936f019a5" exitCode=0 Jan 26 13:01:21 crc kubenswrapper[4881]: I0126 13:01:21.724587 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"128b30ee-b173-421c-b871-12b2e4fb25b7","Type":"ContainerDied","Data":"00827b6c4844903d7bf83fcea3ece951352f279ab9e180746aa5b54236f6ebf7"} Jan 26 13:01:21 crc kubenswrapper[4881]: I0126 13:01:21.724935 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"128b30ee-b173-421c-b871-12b2e4fb25b7","Type":"ContainerDied","Data":"3e7de252cc3dd52e265cd828b34b5b278d452de2da4653d07e749674c48dc29a"} Jan 26 13:01:21 crc kubenswrapper[4881]: I0126 13:01:21.724961 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"128b30ee-b173-421c-b871-12b2e4fb25b7","Type":"ContainerDied","Data":"48ea1fe15d4c389b8bd189f5cc5ff0b485903e3c0dc80ac99eee71d936f019a5"} Jan 26 13:01:24 crc kubenswrapper[4881]: I0126 13:01:24.032741 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-vbjb6"] Jan 26 13:01:24 crc kubenswrapper[4881]: E0126 13:01:24.034287 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9089843-0624-44fe-a41f-78746490b5be" containerName="mariadb-account-create-update" Jan 26 13:01:24 crc kubenswrapper[4881]: I0126 13:01:24.034319 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9089843-0624-44fe-a41f-78746490b5be" containerName="mariadb-account-create-update" Jan 26 13:01:24 crc kubenswrapper[4881]: E0126 13:01:24.034353 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96bc90dc-d5c2-412e-8ac8-a60fb254cd7e" containerName="mariadb-account-create-update" Jan 26 13:01:24 crc kubenswrapper[4881]: I0126 13:01:24.034366 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="96bc90dc-d5c2-412e-8ac8-a60fb254cd7e" containerName="mariadb-account-create-update" Jan 26 13:01:24 crc kubenswrapper[4881]: E0126 13:01:24.034395 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4376f3bd-20c9-41f8-a1d1-eae76560d137" containerName="mariadb-database-create" Jan 26 13:01:24 crc kubenswrapper[4881]: I0126 13:01:24.034408 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="4376f3bd-20c9-41f8-a1d1-eae76560d137" containerName="mariadb-database-create" Jan 26 13:01:24 crc kubenswrapper[4881]: E0126 13:01:24.034453 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4065cb1b-b1ab-4fef-b77f-64ec87d80d99" containerName="neutron-api" Jan 26 13:01:24 crc kubenswrapper[4881]: I0126 13:01:24.034467 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="4065cb1b-b1ab-4fef-b77f-64ec87d80d99" containerName="neutron-api" Jan 26 13:01:24 crc kubenswrapper[4881]: E0126 13:01:24.034506 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98008dc5-f3ef-436d-af31-cec258fe5743" containerName="mariadb-account-create-update" Jan 26 13:01:24 crc kubenswrapper[4881]: I0126 13:01:24.034542 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="98008dc5-f3ef-436d-af31-cec258fe5743" containerName="mariadb-account-create-update" Jan 26 13:01:24 crc kubenswrapper[4881]: E0126 13:01:24.034589 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3bca628-c4a8-4a09-bc59-3b0f2627adf4" containerName="mariadb-database-create" Jan 26 13:01:24 crc kubenswrapper[4881]: I0126 13:01:24.034603 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3bca628-c4a8-4a09-bc59-3b0f2627adf4" containerName="mariadb-database-create" Jan 26 13:01:24 crc kubenswrapper[4881]: E0126 13:01:24.034620 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66bc1d35-a96d-4cce-98be-5d65886c6f83" containerName="mariadb-database-create" Jan 26 13:01:24 crc kubenswrapper[4881]: I0126 13:01:24.034633 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="66bc1d35-a96d-4cce-98be-5d65886c6f83" containerName="mariadb-database-create" Jan 26 13:01:24 crc kubenswrapper[4881]: E0126 13:01:24.034694 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4065cb1b-b1ab-4fef-b77f-64ec87d80d99" containerName="neutron-httpd" Jan 26 13:01:24 crc kubenswrapper[4881]: I0126 13:01:24.034708 4881 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="4065cb1b-b1ab-4fef-b77f-64ec87d80d99" containerName="neutron-httpd" Jan 26 13:01:24 crc kubenswrapper[4881]: I0126 13:01:24.035416 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="66bc1d35-a96d-4cce-98be-5d65886c6f83" containerName="mariadb-database-create" Jan 26 13:01:24 crc kubenswrapper[4881]: I0126 13:01:24.035463 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3bca628-c4a8-4a09-bc59-3b0f2627adf4" containerName="mariadb-database-create" Jan 26 13:01:24 crc kubenswrapper[4881]: I0126 13:01:24.035492 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9089843-0624-44fe-a41f-78746490b5be" containerName="mariadb-account-create-update" Jan 26 13:01:24 crc kubenswrapper[4881]: I0126 13:01:24.035565 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="98008dc5-f3ef-436d-af31-cec258fe5743" containerName="mariadb-account-create-update" Jan 26 13:01:24 crc kubenswrapper[4881]: I0126 13:01:24.035585 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="4376f3bd-20c9-41f8-a1d1-eae76560d137" containerName="mariadb-database-create" Jan 26 13:01:24 crc kubenswrapper[4881]: I0126 13:01:24.035640 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="4065cb1b-b1ab-4fef-b77f-64ec87d80d99" containerName="neutron-httpd" Jan 26 13:01:24 crc kubenswrapper[4881]: I0126 13:01:24.035671 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="96bc90dc-d5c2-412e-8ac8-a60fb254cd7e" containerName="mariadb-account-create-update" Jan 26 13:01:24 crc kubenswrapper[4881]: I0126 13:01:24.035688 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="4065cb1b-b1ab-4fef-b77f-64ec87d80d99" containerName="neutron-api" Jan 26 13:01:24 crc kubenswrapper[4881]: I0126 13:01:24.037151 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-vbjb6" Jan 26 13:01:24 crc kubenswrapper[4881]: I0126 13:01:24.042147 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Jan 26 13:01:24 crc kubenswrapper[4881]: I0126 13:01:24.044319 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-qlk79" Jan 26 13:01:24 crc kubenswrapper[4881]: I0126 13:01:24.047579 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 26 13:01:24 crc kubenswrapper[4881]: I0126 13:01:24.104248 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-vbjb6"] Jan 26 13:01:24 crc kubenswrapper[4881]: I0126 13:01:24.224393 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hs4kg\" (UniqueName: \"kubernetes.io/projected/6e0ba53f-5583-413c-bd4d-beb5b5c803ea-kube-api-access-hs4kg\") pod \"nova-cell0-conductor-db-sync-vbjb6\" (UID: \"6e0ba53f-5583-413c-bd4d-beb5b5c803ea\") " pod="openstack/nova-cell0-conductor-db-sync-vbjb6" Jan 26 13:01:24 crc kubenswrapper[4881]: I0126 13:01:24.224787 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e0ba53f-5583-413c-bd4d-beb5b5c803ea-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-vbjb6\" (UID: \"6e0ba53f-5583-413c-bd4d-beb5b5c803ea\") " pod="openstack/nova-cell0-conductor-db-sync-vbjb6" Jan 26 13:01:24 crc kubenswrapper[4881]: I0126 13:01:24.224942 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e0ba53f-5583-413c-bd4d-beb5b5c803ea-scripts\") pod \"nova-cell0-conductor-db-sync-vbjb6\" (UID: \"6e0ba53f-5583-413c-bd4d-beb5b5c803ea\") " pod="openstack/nova-cell0-conductor-db-sync-vbjb6" Jan 26 13:01:24 crc kubenswrapper[4881]: I0126 13:01:24.225089 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e0ba53f-5583-413c-bd4d-beb5b5c803ea-config-data\") pod \"nova-cell0-conductor-db-sync-vbjb6\" (UID: \"6e0ba53f-5583-413c-bd4d-beb5b5c803ea\") " pod="openstack/nova-cell0-conductor-db-sync-vbjb6" Jan 26 13:01:24 crc kubenswrapper[4881]: I0126 13:01:24.326834 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e0ba53f-5583-413c-bd4d-beb5b5c803ea-config-data\") pod \"nova-cell0-conductor-db-sync-vbjb6\" (UID: \"6e0ba53f-5583-413c-bd4d-beb5b5c803ea\") " pod="openstack/nova-cell0-conductor-db-sync-vbjb6" Jan 26 13:01:24 crc kubenswrapper[4881]: I0126 13:01:24.326941 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hs4kg\" (UniqueName: \"kubernetes.io/projected/6e0ba53f-5583-413c-bd4d-beb5b5c803ea-kube-api-access-hs4kg\") pod \"nova-cell0-conductor-db-sync-vbjb6\" (UID: \"6e0ba53f-5583-413c-bd4d-beb5b5c803ea\") " pod="openstack/nova-cell0-conductor-db-sync-vbjb6" Jan 26 13:01:24 crc kubenswrapper[4881]: I0126 13:01:24.327057 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e0ba53f-5583-413c-bd4d-beb5b5c803ea-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-vbjb6\" 
(UID: \"6e0ba53f-5583-413c-bd4d-beb5b5c803ea\") " pod="openstack/nova-cell0-conductor-db-sync-vbjb6" Jan 26 13:01:24 crc kubenswrapper[4881]: I0126 13:01:24.327141 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e0ba53f-5583-413c-bd4d-beb5b5c803ea-scripts\") pod \"nova-cell0-conductor-db-sync-vbjb6\" (UID: \"6e0ba53f-5583-413c-bd4d-beb5b5c803ea\") " pod="openstack/nova-cell0-conductor-db-sync-vbjb6" Jan 26 13:01:24 crc kubenswrapper[4881]: I0126 13:01:24.333231 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e0ba53f-5583-413c-bd4d-beb5b5c803ea-scripts\") pod \"nova-cell0-conductor-db-sync-vbjb6\" (UID: \"6e0ba53f-5583-413c-bd4d-beb5b5c803ea\") " pod="openstack/nova-cell0-conductor-db-sync-vbjb6" Jan 26 13:01:24 crc kubenswrapper[4881]: I0126 13:01:24.333374 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e0ba53f-5583-413c-bd4d-beb5b5c803ea-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-vbjb6\" (UID: \"6e0ba53f-5583-413c-bd4d-beb5b5c803ea\") " pod="openstack/nova-cell0-conductor-db-sync-vbjb6" Jan 26 13:01:24 crc kubenswrapper[4881]: I0126 13:01:24.333475 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e0ba53f-5583-413c-bd4d-beb5b5c803ea-config-data\") pod \"nova-cell0-conductor-db-sync-vbjb6\" (UID: \"6e0ba53f-5583-413c-bd4d-beb5b5c803ea\") " pod="openstack/nova-cell0-conductor-db-sync-vbjb6" Jan 26 13:01:24 crc kubenswrapper[4881]: I0126 13:01:24.347423 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hs4kg\" (UniqueName: \"kubernetes.io/projected/6e0ba53f-5583-413c-bd4d-beb5b5c803ea-kube-api-access-hs4kg\") pod \"nova-cell0-conductor-db-sync-vbjb6\" (UID: \"6e0ba53f-5583-413c-bd4d-beb5b5c803ea\") " pod="openstack/nova-cell0-conductor-db-sync-vbjb6" Jan 26 13:01:24 crc kubenswrapper[4881]: I0126 13:01:24.360008 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-vbjb6" Jan 26 13:01:24 crc kubenswrapper[4881]: I0126 13:01:24.842852 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-vbjb6"] Jan 26 13:01:24 crc kubenswrapper[4881]: W0126 13:01:24.848875 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6e0ba53f_5583_413c_bd4d_beb5b5c803ea.slice/crio-e01c3a4ab842acdb87bdb347c8f46d6834d03fa0b8d211ab089641527e466d57 WatchSource:0}: Error finding container e01c3a4ab842acdb87bdb347c8f46d6834d03fa0b8d211ab089641527e466d57: Status 404 returned error can't find the container with id e01c3a4ab842acdb87bdb347c8f46d6834d03fa0b8d211ab089641527e466d57 Jan 26 13:01:25 crc kubenswrapper[4881]: I0126 13:01:25.772418 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-vbjb6" event={"ID":"6e0ba53f-5583-413c-bd4d-beb5b5c803ea","Type":"ContainerStarted","Data":"e01c3a4ab842acdb87bdb347c8f46d6834d03fa0b8d211ab089641527e466d57"} Jan 26 13:01:26 crc kubenswrapper[4881]: I0126 13:01:26.225386 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Jan 26 13:01:26 crc kubenswrapper[4881]: I0126 13:01:26.262087 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-decision-engine-0" Jan 26 13:01:26 crc kubenswrapper[4881]: I0126 13:01:26.783546 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Jan 26 13:01:26 crc kubenswrapper[4881]: I0126 13:01:26.814106 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-decision-engine-0" Jan 26 13:01:27 crc kubenswrapper[4881]: I0126 13:01:27.714449 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 13:01:27 crc kubenswrapper[4881]: I0126 13:01:27.796959 4881 generic.go:334] "Generic (PLEG): container finished" podID="128b30ee-b173-421c-b871-12b2e4fb25b7" containerID="8ce65feac66f532e39e849def16b9db69758f160afa157545897ca25694c0f2c" exitCode=0 Jan 26 13:01:27 crc kubenswrapper[4881]: I0126 13:01:27.797025 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 13:01:27 crc kubenswrapper[4881]: I0126 13:01:27.797046 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"128b30ee-b173-421c-b871-12b2e4fb25b7","Type":"ContainerDied","Data":"8ce65feac66f532e39e849def16b9db69758f160afa157545897ca25694c0f2c"} Jan 26 13:01:27 crc kubenswrapper[4881]: I0126 13:01:27.797092 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"128b30ee-b173-421c-b871-12b2e4fb25b7","Type":"ContainerDied","Data":"18441c62a1f3b0ce6a28b06b8ba5f2615f04ef427c8240a217f77bc394fdb14e"} Jan 26 13:01:27 crc kubenswrapper[4881]: I0126 13:01:27.797114 4881 scope.go:117] "RemoveContainer" containerID="00827b6c4844903d7bf83fcea3ece951352f279ab9e180746aa5b54236f6ebf7" Jan 26 13:01:27 crc kubenswrapper[4881]: I0126 13:01:27.821165 4881 scope.go:117] "RemoveContainer" containerID="3e7de252cc3dd52e265cd828b34b5b278d452de2da4653d07e749674c48dc29a" Jan 26 13:01:27 crc kubenswrapper[4881]: I0126 13:01:27.855167 4881 scope.go:117] "RemoveContainer" containerID="48ea1fe15d4c389b8bd189f5cc5ff0b485903e3c0dc80ac99eee71d936f019a5" Jan 26 13:01:27 crc kubenswrapper[4881]: I0126 13:01:27.879931 4881 scope.go:117] "RemoveContainer" containerID="8ce65feac66f532e39e849def16b9db69758f160afa157545897ca25694c0f2c" Jan 26 13:01:27 crc kubenswrapper[4881]: I0126 13:01:27.895272 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kjxvj\" (UniqueName: \"kubernetes.io/projected/128b30ee-b173-421c-b871-12b2e4fb25b7-kube-api-access-kjxvj\") pod \"128b30ee-b173-421c-b871-12b2e4fb25b7\" (UID: \"128b30ee-b173-421c-b871-12b2e4fb25b7\") " Jan 26 13:01:27 crc kubenswrapper[4881]: I0126 13:01:27.895342 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/128b30ee-b173-421c-b871-12b2e4fb25b7-run-httpd\") pod \"128b30ee-b173-421c-b871-12b2e4fb25b7\" (UID: \"128b30ee-b173-421c-b871-12b2e4fb25b7\") " Jan 26 13:01:27 crc kubenswrapper[4881]: I0126 13:01:27.895426 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/128b30ee-b173-421c-b871-12b2e4fb25b7-sg-core-conf-yaml\") pod \"128b30ee-b173-421c-b871-12b2e4fb25b7\" (UID: \"128b30ee-b173-421c-b871-12b2e4fb25b7\") " Jan 26 13:01:27 crc kubenswrapper[4881]: I0126 13:01:27.895501 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/128b30ee-b173-421c-b871-12b2e4fb25b7-combined-ca-bundle\") pod \"128b30ee-b173-421c-b871-12b2e4fb25b7\" (UID: \"128b30ee-b173-421c-b871-12b2e4fb25b7\") " Jan 26 13:01:27 crc kubenswrapper[4881]: I0126 13:01:27.895585 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/128b30ee-b173-421c-b871-12b2e4fb25b7-scripts\") pod \"128b30ee-b173-421c-b871-12b2e4fb25b7\" (UID: \"128b30ee-b173-421c-b871-12b2e4fb25b7\") " Jan 26 13:01:27 crc kubenswrapper[4881]: I0126 13:01:27.895710 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/128b30ee-b173-421c-b871-12b2e4fb25b7-log-httpd\") pod \"128b30ee-b173-421c-b871-12b2e4fb25b7\" (UID: \"128b30ee-b173-421c-b871-12b2e4fb25b7\") " Jan 26 13:01:27 crc kubenswrapper[4881]: I0126 13:01:27.895738 4881 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/128b30ee-b173-421c-b871-12b2e4fb25b7-config-data\") pod \"128b30ee-b173-421c-b871-12b2e4fb25b7\" (UID: \"128b30ee-b173-421c-b871-12b2e4fb25b7\") " Jan 26 13:01:27 crc kubenswrapper[4881]: I0126 13:01:27.895876 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/128b30ee-b173-421c-b871-12b2e4fb25b7-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "128b30ee-b173-421c-b871-12b2e4fb25b7" (UID: "128b30ee-b173-421c-b871-12b2e4fb25b7"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:01:27 crc kubenswrapper[4881]: I0126 13:01:27.896238 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/128b30ee-b173-421c-b871-12b2e4fb25b7-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "128b30ee-b173-421c-b871-12b2e4fb25b7" (UID: "128b30ee-b173-421c-b871-12b2e4fb25b7"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:01:27 crc kubenswrapper[4881]: I0126 13:01:27.896334 4881 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/128b30ee-b173-421c-b871-12b2e4fb25b7-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 13:01:27 crc kubenswrapper[4881]: I0126 13:01:27.901340 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/128b30ee-b173-421c-b871-12b2e4fb25b7-scripts" (OuterVolumeSpecName: "scripts") pod "128b30ee-b173-421c-b871-12b2e4fb25b7" (UID: "128b30ee-b173-421c-b871-12b2e4fb25b7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:01:27 crc kubenswrapper[4881]: I0126 13:01:27.907248 4881 scope.go:117] "RemoveContainer" containerID="00827b6c4844903d7bf83fcea3ece951352f279ab9e180746aa5b54236f6ebf7" Jan 26 13:01:27 crc kubenswrapper[4881]: E0126 13:01:27.907849 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"00827b6c4844903d7bf83fcea3ece951352f279ab9e180746aa5b54236f6ebf7\": container with ID starting with 00827b6c4844903d7bf83fcea3ece951352f279ab9e180746aa5b54236f6ebf7 not found: ID does not exist" containerID="00827b6c4844903d7bf83fcea3ece951352f279ab9e180746aa5b54236f6ebf7" Jan 26 13:01:27 crc kubenswrapper[4881]: I0126 13:01:27.907880 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00827b6c4844903d7bf83fcea3ece951352f279ab9e180746aa5b54236f6ebf7"} err="failed to get container status \"00827b6c4844903d7bf83fcea3ece951352f279ab9e180746aa5b54236f6ebf7\": rpc error: code = NotFound desc = could not find container \"00827b6c4844903d7bf83fcea3ece951352f279ab9e180746aa5b54236f6ebf7\": container with ID starting with 00827b6c4844903d7bf83fcea3ece951352f279ab9e180746aa5b54236f6ebf7 not found: ID does not exist" Jan 26 13:01:27 crc kubenswrapper[4881]: I0126 13:01:27.907905 4881 scope.go:117] "RemoveContainer" containerID="3e7de252cc3dd52e265cd828b34b5b278d452de2da4653d07e749674c48dc29a" Jan 26 13:01:27 crc kubenswrapper[4881]: E0126 13:01:27.908195 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e7de252cc3dd52e265cd828b34b5b278d452de2da4653d07e749674c48dc29a\": container with ID starting with 
3e7de252cc3dd52e265cd828b34b5b278d452de2da4653d07e749674c48dc29a not found: ID does not exist" containerID="3e7de252cc3dd52e265cd828b34b5b278d452de2da4653d07e749674c48dc29a" Jan 26 13:01:27 crc kubenswrapper[4881]: I0126 13:01:27.908245 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e7de252cc3dd52e265cd828b34b5b278d452de2da4653d07e749674c48dc29a"} err="failed to get container status \"3e7de252cc3dd52e265cd828b34b5b278d452de2da4653d07e749674c48dc29a\": rpc error: code = NotFound desc = could not find container \"3e7de252cc3dd52e265cd828b34b5b278d452de2da4653d07e749674c48dc29a\": container with ID starting with 3e7de252cc3dd52e265cd828b34b5b278d452de2da4653d07e749674c48dc29a not found: ID does not exist" Jan 26 13:01:27 crc kubenswrapper[4881]: I0126 13:01:27.908280 4881 scope.go:117] "RemoveContainer" containerID="48ea1fe15d4c389b8bd189f5cc5ff0b485903e3c0dc80ac99eee71d936f019a5" Jan 26 13:01:27 crc kubenswrapper[4881]: E0126 13:01:27.908567 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48ea1fe15d4c389b8bd189f5cc5ff0b485903e3c0dc80ac99eee71d936f019a5\": container with ID starting with 48ea1fe15d4c389b8bd189f5cc5ff0b485903e3c0dc80ac99eee71d936f019a5 not found: ID does not exist" containerID="48ea1fe15d4c389b8bd189f5cc5ff0b485903e3c0dc80ac99eee71d936f019a5" Jan 26 13:01:27 crc kubenswrapper[4881]: I0126 13:01:27.908595 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48ea1fe15d4c389b8bd189f5cc5ff0b485903e3c0dc80ac99eee71d936f019a5"} err="failed to get container status \"48ea1fe15d4c389b8bd189f5cc5ff0b485903e3c0dc80ac99eee71d936f019a5\": rpc error: code = NotFound desc = could not find container \"48ea1fe15d4c389b8bd189f5cc5ff0b485903e3c0dc80ac99eee71d936f019a5\": container with ID starting with 48ea1fe15d4c389b8bd189f5cc5ff0b485903e3c0dc80ac99eee71d936f019a5 not found: ID does not exist" Jan 26 13:01:27 crc kubenswrapper[4881]: I0126 13:01:27.908612 4881 scope.go:117] "RemoveContainer" containerID="8ce65feac66f532e39e849def16b9db69758f160afa157545897ca25694c0f2c" Jan 26 13:01:27 crc kubenswrapper[4881]: E0126 13:01:27.908818 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ce65feac66f532e39e849def16b9db69758f160afa157545897ca25694c0f2c\": container with ID starting with 8ce65feac66f532e39e849def16b9db69758f160afa157545897ca25694c0f2c not found: ID does not exist" containerID="8ce65feac66f532e39e849def16b9db69758f160afa157545897ca25694c0f2c" Jan 26 13:01:27 crc kubenswrapper[4881]: I0126 13:01:27.908843 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ce65feac66f532e39e849def16b9db69758f160afa157545897ca25694c0f2c"} err="failed to get container status \"8ce65feac66f532e39e849def16b9db69758f160afa157545897ca25694c0f2c\": rpc error: code = NotFound desc = could not find container \"8ce65feac66f532e39e849def16b9db69758f160afa157545897ca25694c0f2c\": container with ID starting with 8ce65feac66f532e39e849def16b9db69758f160afa157545897ca25694c0f2c not found: ID does not exist" Jan 26 13:01:27 crc kubenswrapper[4881]: I0126 13:01:27.916821 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/128b30ee-b173-421c-b871-12b2e4fb25b7-kube-api-access-kjxvj" (OuterVolumeSpecName: "kube-api-access-kjxvj") pod 
"128b30ee-b173-421c-b871-12b2e4fb25b7" (UID: "128b30ee-b173-421c-b871-12b2e4fb25b7"). InnerVolumeSpecName "kube-api-access-kjxvj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:01:27 crc kubenswrapper[4881]: I0126 13:01:27.955397 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/128b30ee-b173-421c-b871-12b2e4fb25b7-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "128b30ee-b173-421c-b871-12b2e4fb25b7" (UID: "128b30ee-b173-421c-b871-12b2e4fb25b7"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:01:27 crc kubenswrapper[4881]: I0126 13:01:27.999336 4881 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/128b30ee-b173-421c-b871-12b2e4fb25b7-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 26 13:01:27 crc kubenswrapper[4881]: I0126 13:01:27.999391 4881 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/128b30ee-b173-421c-b871-12b2e4fb25b7-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 13:01:28 crc kubenswrapper[4881]: I0126 13:01:28.000390 4881 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/128b30ee-b173-421c-b871-12b2e4fb25b7-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 13:01:28 crc kubenswrapper[4881]: I0126 13:01:28.000419 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kjxvj\" (UniqueName: \"kubernetes.io/projected/128b30ee-b173-421c-b871-12b2e4fb25b7-kube-api-access-kjxvj\") on node \"crc\" DevicePath \"\"" Jan 26 13:01:28 crc kubenswrapper[4881]: I0126 13:01:28.058768 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/128b30ee-b173-421c-b871-12b2e4fb25b7-config-data" (OuterVolumeSpecName: "config-data") pod "128b30ee-b173-421c-b871-12b2e4fb25b7" (UID: "128b30ee-b173-421c-b871-12b2e4fb25b7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:01:28 crc kubenswrapper[4881]: I0126 13:01:28.066489 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/128b30ee-b173-421c-b871-12b2e4fb25b7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "128b30ee-b173-421c-b871-12b2e4fb25b7" (UID: "128b30ee-b173-421c-b871-12b2e4fb25b7"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:01:28 crc kubenswrapper[4881]: I0126 13:01:28.104673 4881 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/128b30ee-b173-421c-b871-12b2e4fb25b7-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 13:01:28 crc kubenswrapper[4881]: I0126 13:01:28.105026 4881 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/128b30ee-b173-421c-b871-12b2e4fb25b7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 13:01:28 crc kubenswrapper[4881]: I0126 13:01:28.161584 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 13:01:28 crc kubenswrapper[4881]: I0126 13:01:28.174445 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 26 13:01:28 crc kubenswrapper[4881]: I0126 13:01:28.198813 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 26 13:01:28 crc kubenswrapper[4881]: E0126 13:01:28.199228 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="128b30ee-b173-421c-b871-12b2e4fb25b7" containerName="proxy-httpd" Jan 26 13:01:28 crc kubenswrapper[4881]: I0126 13:01:28.199246 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="128b30ee-b173-421c-b871-12b2e4fb25b7" containerName="proxy-httpd" Jan 26 13:01:28 crc kubenswrapper[4881]: E0126 13:01:28.199281 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="128b30ee-b173-421c-b871-12b2e4fb25b7" containerName="ceilometer-notification-agent" Jan 26 13:01:28 crc kubenswrapper[4881]: I0126 13:01:28.199311 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="128b30ee-b173-421c-b871-12b2e4fb25b7" containerName="ceilometer-notification-agent" Jan 26 13:01:28 crc kubenswrapper[4881]: E0126 13:01:28.199330 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="128b30ee-b173-421c-b871-12b2e4fb25b7" containerName="ceilometer-central-agent" Jan 26 13:01:28 crc kubenswrapper[4881]: I0126 13:01:28.199335 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="128b30ee-b173-421c-b871-12b2e4fb25b7" containerName="ceilometer-central-agent" Jan 26 13:01:28 crc kubenswrapper[4881]: E0126 13:01:28.199344 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="128b30ee-b173-421c-b871-12b2e4fb25b7" containerName="sg-core" Jan 26 13:01:28 crc kubenswrapper[4881]: I0126 13:01:28.199350 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="128b30ee-b173-421c-b871-12b2e4fb25b7" containerName="sg-core" Jan 26 13:01:28 crc kubenswrapper[4881]: I0126 13:01:28.199582 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="128b30ee-b173-421c-b871-12b2e4fb25b7" containerName="ceilometer-notification-agent" Jan 26 13:01:28 crc kubenswrapper[4881]: I0126 13:01:28.199604 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="128b30ee-b173-421c-b871-12b2e4fb25b7" containerName="proxy-httpd" Jan 26 13:01:28 crc kubenswrapper[4881]: I0126 13:01:28.199613 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="128b30ee-b173-421c-b871-12b2e4fb25b7" containerName="sg-core" Jan 26 13:01:28 crc kubenswrapper[4881]: I0126 13:01:28.199642 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="128b30ee-b173-421c-b871-12b2e4fb25b7" containerName="ceilometer-central-agent" Jan 26 13:01:28 crc kubenswrapper[4881]: I0126 13:01:28.201587 4881 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 13:01:28 crc kubenswrapper[4881]: I0126 13:01:28.204270 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 26 13:01:28 crc kubenswrapper[4881]: I0126 13:01:28.204804 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 26 13:01:28 crc kubenswrapper[4881]: I0126 13:01:28.206163 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/23de4f49-5164-41a9-a94c-ef0199801723-run-httpd\") pod \"ceilometer-0\" (UID: \"23de4f49-5164-41a9-a94c-ef0199801723\") " pod="openstack/ceilometer-0" Jan 26 13:01:28 crc kubenswrapper[4881]: I0126 13:01:28.206402 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/23de4f49-5164-41a9-a94c-ef0199801723-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"23de4f49-5164-41a9-a94c-ef0199801723\") " pod="openstack/ceilometer-0" Jan 26 13:01:28 crc kubenswrapper[4881]: I0126 13:01:28.206573 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6g758\" (UniqueName: \"kubernetes.io/projected/23de4f49-5164-41a9-a94c-ef0199801723-kube-api-access-6g758\") pod \"ceilometer-0\" (UID: \"23de4f49-5164-41a9-a94c-ef0199801723\") " pod="openstack/ceilometer-0" Jan 26 13:01:28 crc kubenswrapper[4881]: I0126 13:01:28.206717 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/23de4f49-5164-41a9-a94c-ef0199801723-scripts\") pod \"ceilometer-0\" (UID: \"23de4f49-5164-41a9-a94c-ef0199801723\") " pod="openstack/ceilometer-0" Jan 26 13:01:28 crc kubenswrapper[4881]: I0126 13:01:28.206879 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/23de4f49-5164-41a9-a94c-ef0199801723-log-httpd\") pod \"ceilometer-0\" (UID: \"23de4f49-5164-41a9-a94c-ef0199801723\") " pod="openstack/ceilometer-0" Jan 26 13:01:28 crc kubenswrapper[4881]: I0126 13:01:28.207052 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23de4f49-5164-41a9-a94c-ef0199801723-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"23de4f49-5164-41a9-a94c-ef0199801723\") " pod="openstack/ceilometer-0" Jan 26 13:01:28 crc kubenswrapper[4881]: I0126 13:01:28.207192 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23de4f49-5164-41a9-a94c-ef0199801723-config-data\") pod \"ceilometer-0\" (UID: \"23de4f49-5164-41a9-a94c-ef0199801723\") " pod="openstack/ceilometer-0" Jan 26 13:01:28 crc kubenswrapper[4881]: I0126 13:01:28.207710 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 13:01:28 crc kubenswrapper[4881]: I0126 13:01:28.311903 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/23de4f49-5164-41a9-a94c-ef0199801723-run-httpd\") pod \"ceilometer-0\" (UID: \"23de4f49-5164-41a9-a94c-ef0199801723\") " pod="openstack/ceilometer-0" Jan 26 13:01:28 crc kubenswrapper[4881]: I0126 
13:01:28.311995 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/23de4f49-5164-41a9-a94c-ef0199801723-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"23de4f49-5164-41a9-a94c-ef0199801723\") " pod="openstack/ceilometer-0" Jan 26 13:01:28 crc kubenswrapper[4881]: I0126 13:01:28.312028 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6g758\" (UniqueName: \"kubernetes.io/projected/23de4f49-5164-41a9-a94c-ef0199801723-kube-api-access-6g758\") pod \"ceilometer-0\" (UID: \"23de4f49-5164-41a9-a94c-ef0199801723\") " pod="openstack/ceilometer-0" Jan 26 13:01:28 crc kubenswrapper[4881]: I0126 13:01:28.312053 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/23de4f49-5164-41a9-a94c-ef0199801723-scripts\") pod \"ceilometer-0\" (UID: \"23de4f49-5164-41a9-a94c-ef0199801723\") " pod="openstack/ceilometer-0" Jan 26 13:01:28 crc kubenswrapper[4881]: I0126 13:01:28.312085 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/23de4f49-5164-41a9-a94c-ef0199801723-log-httpd\") pod \"ceilometer-0\" (UID: \"23de4f49-5164-41a9-a94c-ef0199801723\") " pod="openstack/ceilometer-0" Jan 26 13:01:28 crc kubenswrapper[4881]: I0126 13:01:28.312123 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23de4f49-5164-41a9-a94c-ef0199801723-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"23de4f49-5164-41a9-a94c-ef0199801723\") " pod="openstack/ceilometer-0" Jan 26 13:01:28 crc kubenswrapper[4881]: I0126 13:01:28.312142 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23de4f49-5164-41a9-a94c-ef0199801723-config-data\") pod \"ceilometer-0\" (UID: \"23de4f49-5164-41a9-a94c-ef0199801723\") " pod="openstack/ceilometer-0" Jan 26 13:01:28 crc kubenswrapper[4881]: I0126 13:01:28.313103 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/23de4f49-5164-41a9-a94c-ef0199801723-log-httpd\") pod \"ceilometer-0\" (UID: \"23de4f49-5164-41a9-a94c-ef0199801723\") " pod="openstack/ceilometer-0" Jan 26 13:01:28 crc kubenswrapper[4881]: I0126 13:01:28.313101 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/23de4f49-5164-41a9-a94c-ef0199801723-run-httpd\") pod \"ceilometer-0\" (UID: \"23de4f49-5164-41a9-a94c-ef0199801723\") " pod="openstack/ceilometer-0" Jan 26 13:01:28 crc kubenswrapper[4881]: I0126 13:01:28.318344 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/23de4f49-5164-41a9-a94c-ef0199801723-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"23de4f49-5164-41a9-a94c-ef0199801723\") " pod="openstack/ceilometer-0" Jan 26 13:01:28 crc kubenswrapper[4881]: I0126 13:01:28.318408 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23de4f49-5164-41a9-a94c-ef0199801723-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"23de4f49-5164-41a9-a94c-ef0199801723\") " pod="openstack/ceilometer-0" Jan 26 13:01:28 crc kubenswrapper[4881]: I0126 13:01:28.318484 4881 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23de4f49-5164-41a9-a94c-ef0199801723-config-data\") pod \"ceilometer-0\" (UID: \"23de4f49-5164-41a9-a94c-ef0199801723\") " pod="openstack/ceilometer-0" Jan 26 13:01:28 crc kubenswrapper[4881]: I0126 13:01:28.318643 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/23de4f49-5164-41a9-a94c-ef0199801723-scripts\") pod \"ceilometer-0\" (UID: \"23de4f49-5164-41a9-a94c-ef0199801723\") " pod="openstack/ceilometer-0" Jan 26 13:01:28 crc kubenswrapper[4881]: I0126 13:01:28.337401 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6g758\" (UniqueName: \"kubernetes.io/projected/23de4f49-5164-41a9-a94c-ef0199801723-kube-api-access-6g758\") pod \"ceilometer-0\" (UID: \"23de4f49-5164-41a9-a94c-ef0199801723\") " pod="openstack/ceilometer-0" Jan 26 13:01:28 crc kubenswrapper[4881]: I0126 13:01:28.518366 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 13:01:30 crc kubenswrapper[4881]: I0126 13:01:30.094870 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="128b30ee-b173-421c-b871-12b2e4fb25b7" path="/var/lib/kubelet/pods/128b30ee-b173-421c-b871-12b2e4fb25b7/volumes" Jan 26 13:01:35 crc kubenswrapper[4881]: I0126 13:01:35.379391 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 13:01:35 crc kubenswrapper[4881]: I0126 13:01:35.897224 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-vbjb6" event={"ID":"6e0ba53f-5583-413c-bd4d-beb5b5c803ea","Type":"ContainerStarted","Data":"4558be3a5c19909c4d4ff23b5349f75d511a9f14407815e6423933f0f927d7bd"} Jan 26 13:01:35 crc kubenswrapper[4881]: I0126 13:01:35.900441 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"23de4f49-5164-41a9-a94c-ef0199801723","Type":"ContainerStarted","Data":"0952eb3c0672ad1a35c083736755f5b8e1080e6579cd5490612199d5f6c92cba"} Jan 26 13:01:35 crc kubenswrapper[4881]: I0126 13:01:35.900502 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"23de4f49-5164-41a9-a94c-ef0199801723","Type":"ContainerStarted","Data":"2dcdd67a7b4d835c9e7d4eebe95d7d63ecdf9ca2186c8780457ce3f4c87b631c"} Jan 26 13:01:35 crc kubenswrapper[4881]: I0126 13:01:35.920657 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-vbjb6" podStartSLOduration=1.819656451 podStartE2EDuration="11.920634884s" podCreationTimestamp="2026-01-26 13:01:24 +0000 UTC" firstStartedPulling="2026-01-26 13:01:24.852870388 +0000 UTC m=+1557.332180424" lastFinishedPulling="2026-01-26 13:01:34.953848831 +0000 UTC m=+1567.433158857" observedRunningTime="2026-01-26 13:01:35.911905042 +0000 UTC m=+1568.391215088" watchObservedRunningTime="2026-01-26 13:01:35.920634884 +0000 UTC m=+1568.399944920" Jan 26 13:01:36 crc kubenswrapper[4881]: I0126 13:01:36.910971 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"23de4f49-5164-41a9-a94c-ef0199801723","Type":"ContainerStarted","Data":"64dd634f6e163e4b22c8f796e5145fac6385f5e81fb0b90f079fa8d1159330e5"} Jan 26 13:01:36 crc kubenswrapper[4881]: I0126 13:01:36.911494 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"23de4f49-5164-41a9-a94c-ef0199801723","Type":"ContainerStarted","Data":"8bbcf32208c854f415247cab060da9b73166ce2a8e8638723fbc704f1897300c"} Jan 26 13:01:38 crc kubenswrapper[4881]: I0126 13:01:38.959122 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"23de4f49-5164-41a9-a94c-ef0199801723","Type":"ContainerStarted","Data":"67730dfc763196cefd269986c9f1291193f57a2bea98c3ee23af056f25d9db8c"} Jan 26 13:01:38 crc kubenswrapper[4881]: I0126 13:01:38.959725 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 26 13:01:39 crc kubenswrapper[4881]: I0126 13:01:39.006434 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=8.536441705 podStartE2EDuration="11.006412642s" podCreationTimestamp="2026-01-26 13:01:28 +0000 UTC" firstStartedPulling="2026-01-26 13:01:35.395650658 +0000 UTC m=+1567.874960684" lastFinishedPulling="2026-01-26 13:01:37.865621595 +0000 UTC m=+1570.344931621" observedRunningTime="2026-01-26 13:01:38.988220123 +0000 UTC m=+1571.467530159" watchObservedRunningTime="2026-01-26 13:01:39.006412642 +0000 UTC m=+1571.485722668" Jan 26 13:01:47 crc kubenswrapper[4881]: I0126 13:01:47.052990 4881 generic.go:334] "Generic (PLEG): container finished" podID="6e0ba53f-5583-413c-bd4d-beb5b5c803ea" containerID="4558be3a5c19909c4d4ff23b5349f75d511a9f14407815e6423933f0f927d7bd" exitCode=0 Jan 26 13:01:47 crc kubenswrapper[4881]: I0126 13:01:47.053073 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-vbjb6" event={"ID":"6e0ba53f-5583-413c-bd4d-beb5b5c803ea","Type":"ContainerDied","Data":"4558be3a5c19909c4d4ff23b5349f75d511a9f14407815e6423933f0f927d7bd"} Jan 26 13:01:48 crc kubenswrapper[4881]: I0126 13:01:48.429016 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-vbjb6" Jan 26 13:01:48 crc kubenswrapper[4881]: I0126 13:01:48.523128 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hs4kg\" (UniqueName: \"kubernetes.io/projected/6e0ba53f-5583-413c-bd4d-beb5b5c803ea-kube-api-access-hs4kg\") pod \"6e0ba53f-5583-413c-bd4d-beb5b5c803ea\" (UID: \"6e0ba53f-5583-413c-bd4d-beb5b5c803ea\") " Jan 26 13:01:48 crc kubenswrapper[4881]: I0126 13:01:48.523201 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e0ba53f-5583-413c-bd4d-beb5b5c803ea-combined-ca-bundle\") pod \"6e0ba53f-5583-413c-bd4d-beb5b5c803ea\" (UID: \"6e0ba53f-5583-413c-bd4d-beb5b5c803ea\") " Jan 26 13:01:48 crc kubenswrapper[4881]: I0126 13:01:48.523314 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e0ba53f-5583-413c-bd4d-beb5b5c803ea-config-data\") pod \"6e0ba53f-5583-413c-bd4d-beb5b5c803ea\" (UID: \"6e0ba53f-5583-413c-bd4d-beb5b5c803ea\") " Jan 26 13:01:48 crc kubenswrapper[4881]: I0126 13:01:48.523338 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e0ba53f-5583-413c-bd4d-beb5b5c803ea-scripts\") pod \"6e0ba53f-5583-413c-bd4d-beb5b5c803ea\" (UID: \"6e0ba53f-5583-413c-bd4d-beb5b5c803ea\") " Jan 26 13:01:48 crc kubenswrapper[4881]: I0126 13:01:48.539255 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e0ba53f-5583-413c-bd4d-beb5b5c803ea-kube-api-access-hs4kg" (OuterVolumeSpecName: "kube-api-access-hs4kg") pod "6e0ba53f-5583-413c-bd4d-beb5b5c803ea" (UID: "6e0ba53f-5583-413c-bd4d-beb5b5c803ea"). InnerVolumeSpecName "kube-api-access-hs4kg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:01:48 crc kubenswrapper[4881]: I0126 13:01:48.539363 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e0ba53f-5583-413c-bd4d-beb5b5c803ea-scripts" (OuterVolumeSpecName: "scripts") pod "6e0ba53f-5583-413c-bd4d-beb5b5c803ea" (UID: "6e0ba53f-5583-413c-bd4d-beb5b5c803ea"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:01:48 crc kubenswrapper[4881]: I0126 13:01:48.555005 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e0ba53f-5583-413c-bd4d-beb5b5c803ea-config-data" (OuterVolumeSpecName: "config-data") pod "6e0ba53f-5583-413c-bd4d-beb5b5c803ea" (UID: "6e0ba53f-5583-413c-bd4d-beb5b5c803ea"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:01:48 crc kubenswrapper[4881]: I0126 13:01:48.566250 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e0ba53f-5583-413c-bd4d-beb5b5c803ea-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6e0ba53f-5583-413c-bd4d-beb5b5c803ea" (UID: "6e0ba53f-5583-413c-bd4d-beb5b5c803ea"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:01:48 crc kubenswrapper[4881]: I0126 13:01:48.625428 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hs4kg\" (UniqueName: \"kubernetes.io/projected/6e0ba53f-5583-413c-bd4d-beb5b5c803ea-kube-api-access-hs4kg\") on node \"crc\" DevicePath \"\"" Jan 26 13:01:48 crc kubenswrapper[4881]: I0126 13:01:48.625455 4881 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e0ba53f-5583-413c-bd4d-beb5b5c803ea-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 13:01:48 crc kubenswrapper[4881]: I0126 13:01:48.625465 4881 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e0ba53f-5583-413c-bd4d-beb5b5c803ea-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 13:01:48 crc kubenswrapper[4881]: I0126 13:01:48.625474 4881 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e0ba53f-5583-413c-bd4d-beb5b5c803ea-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 13:01:49 crc kubenswrapper[4881]: I0126 13:01:49.074602 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-vbjb6" event={"ID":"6e0ba53f-5583-413c-bd4d-beb5b5c803ea","Type":"ContainerDied","Data":"e01c3a4ab842acdb87bdb347c8f46d6834d03fa0b8d211ab089641527e466d57"} Jan 26 13:01:49 crc kubenswrapper[4881]: I0126 13:01:49.075014 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e01c3a4ab842acdb87bdb347c8f46d6834d03fa0b8d211ab089641527e466d57" Jan 26 13:01:49 crc kubenswrapper[4881]: I0126 13:01:49.074845 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-vbjb6" Jan 26 13:01:49 crc kubenswrapper[4881]: I0126 13:01:49.165001 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 26 13:01:49 crc kubenswrapper[4881]: E0126 13:01:49.165365 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e0ba53f-5583-413c-bd4d-beb5b5c803ea" containerName="nova-cell0-conductor-db-sync" Jan 26 13:01:49 crc kubenswrapper[4881]: I0126 13:01:49.165379 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e0ba53f-5583-413c-bd4d-beb5b5c803ea" containerName="nova-cell0-conductor-db-sync" Jan 26 13:01:49 crc kubenswrapper[4881]: I0126 13:01:49.165593 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e0ba53f-5583-413c-bd4d-beb5b5c803ea" containerName="nova-cell0-conductor-db-sync" Jan 26 13:01:49 crc kubenswrapper[4881]: I0126 13:01:49.166242 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 26 13:01:49 crc kubenswrapper[4881]: I0126 13:01:49.168294 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 26 13:01:49 crc kubenswrapper[4881]: I0126 13:01:49.173108 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-qlk79" Jan 26 13:01:49 crc kubenswrapper[4881]: I0126 13:01:49.179043 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 26 13:01:49 crc kubenswrapper[4881]: I0126 13:01:49.338560 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52e24417-da70-449b-847d-3a0f1516ac9f-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"52e24417-da70-449b-847d-3a0f1516ac9f\") " pod="openstack/nova-cell0-conductor-0" Jan 26 13:01:49 crc kubenswrapper[4881]: I0126 13:01:49.338716 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52e24417-da70-449b-847d-3a0f1516ac9f-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"52e24417-da70-449b-847d-3a0f1516ac9f\") " pod="openstack/nova-cell0-conductor-0" Jan 26 13:01:49 crc kubenswrapper[4881]: I0126 13:01:49.339046 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jf2mv\" (UniqueName: \"kubernetes.io/projected/52e24417-da70-449b-847d-3a0f1516ac9f-kube-api-access-jf2mv\") pod \"nova-cell0-conductor-0\" (UID: \"52e24417-da70-449b-847d-3a0f1516ac9f\") " pod="openstack/nova-cell0-conductor-0" Jan 26 13:01:49 crc kubenswrapper[4881]: I0126 13:01:49.440714 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jf2mv\" (UniqueName: \"kubernetes.io/projected/52e24417-da70-449b-847d-3a0f1516ac9f-kube-api-access-jf2mv\") pod \"nova-cell0-conductor-0\" (UID: \"52e24417-da70-449b-847d-3a0f1516ac9f\") " pod="openstack/nova-cell0-conductor-0" Jan 26 13:01:49 crc kubenswrapper[4881]: I0126 13:01:49.440825 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52e24417-da70-449b-847d-3a0f1516ac9f-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"52e24417-da70-449b-847d-3a0f1516ac9f\") " pod="openstack/nova-cell0-conductor-0" Jan 26 13:01:49 crc kubenswrapper[4881]: I0126 13:01:49.440918 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52e24417-da70-449b-847d-3a0f1516ac9f-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"52e24417-da70-449b-847d-3a0f1516ac9f\") " pod="openstack/nova-cell0-conductor-0" Jan 26 13:01:49 crc kubenswrapper[4881]: I0126 13:01:49.447074 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52e24417-da70-449b-847d-3a0f1516ac9f-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"52e24417-da70-449b-847d-3a0f1516ac9f\") " pod="openstack/nova-cell0-conductor-0" Jan 26 13:01:49 crc kubenswrapper[4881]: I0126 13:01:49.448280 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52e24417-da70-449b-847d-3a0f1516ac9f-config-data\") pod \"nova-cell0-conductor-0\" 
(UID: \"52e24417-da70-449b-847d-3a0f1516ac9f\") " pod="openstack/nova-cell0-conductor-0" Jan 26 13:01:49 crc kubenswrapper[4881]: I0126 13:01:49.467182 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jf2mv\" (UniqueName: \"kubernetes.io/projected/52e24417-da70-449b-847d-3a0f1516ac9f-kube-api-access-jf2mv\") pod \"nova-cell0-conductor-0\" (UID: \"52e24417-da70-449b-847d-3a0f1516ac9f\") " pod="openstack/nova-cell0-conductor-0" Jan 26 13:01:49 crc kubenswrapper[4881]: I0126 13:01:49.488541 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 26 13:01:49 crc kubenswrapper[4881]: I0126 13:01:49.987421 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 26 13:01:50 crc kubenswrapper[4881]: I0126 13:01:50.094632 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"52e24417-da70-449b-847d-3a0f1516ac9f","Type":"ContainerStarted","Data":"fce3bf48f24e7bb74c69d574cf1916025f966d2f410dfae9bf132bf8dc35bf36"} Jan 26 13:01:50 crc kubenswrapper[4881]: I0126 13:01:50.705451 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-26k9v"] Jan 26 13:01:50 crc kubenswrapper[4881]: I0126 13:01:50.707737 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-26k9v" Jan 26 13:01:50 crc kubenswrapper[4881]: I0126 13:01:50.729736 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-26k9v"] Jan 26 13:01:50 crc kubenswrapper[4881]: I0126 13:01:50.869640 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a98f888-9933-47bb-9955-b5a7f32e9c82-catalog-content\") pod \"community-operators-26k9v\" (UID: \"5a98f888-9933-47bb-9955-b5a7f32e9c82\") " pod="openshift-marketplace/community-operators-26k9v" Jan 26 13:01:50 crc kubenswrapper[4881]: I0126 13:01:50.869721 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfxs5\" (UniqueName: \"kubernetes.io/projected/5a98f888-9933-47bb-9955-b5a7f32e9c82-kube-api-access-qfxs5\") pod \"community-operators-26k9v\" (UID: \"5a98f888-9933-47bb-9955-b5a7f32e9c82\") " pod="openshift-marketplace/community-operators-26k9v" Jan 26 13:01:50 crc kubenswrapper[4881]: I0126 13:01:50.869832 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a98f888-9933-47bb-9955-b5a7f32e9c82-utilities\") pod \"community-operators-26k9v\" (UID: \"5a98f888-9933-47bb-9955-b5a7f32e9c82\") " pod="openshift-marketplace/community-operators-26k9v" Jan 26 13:01:50 crc kubenswrapper[4881]: I0126 13:01:50.971325 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qfxs5\" (UniqueName: \"kubernetes.io/projected/5a98f888-9933-47bb-9955-b5a7f32e9c82-kube-api-access-qfxs5\") pod \"community-operators-26k9v\" (UID: \"5a98f888-9933-47bb-9955-b5a7f32e9c82\") " pod="openshift-marketplace/community-operators-26k9v" Jan 26 13:01:50 crc kubenswrapper[4881]: I0126 13:01:50.971458 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/5a98f888-9933-47bb-9955-b5a7f32e9c82-utilities\") pod \"community-operators-26k9v\" (UID: \"5a98f888-9933-47bb-9955-b5a7f32e9c82\") " pod="openshift-marketplace/community-operators-26k9v" Jan 26 13:01:50 crc kubenswrapper[4881]: I0126 13:01:50.971506 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a98f888-9933-47bb-9955-b5a7f32e9c82-catalog-content\") pod \"community-operators-26k9v\" (UID: \"5a98f888-9933-47bb-9955-b5a7f32e9c82\") " pod="openshift-marketplace/community-operators-26k9v" Jan 26 13:01:50 crc kubenswrapper[4881]: I0126 13:01:50.972033 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a98f888-9933-47bb-9955-b5a7f32e9c82-catalog-content\") pod \"community-operators-26k9v\" (UID: \"5a98f888-9933-47bb-9955-b5a7f32e9c82\") " pod="openshift-marketplace/community-operators-26k9v" Jan 26 13:01:50 crc kubenswrapper[4881]: I0126 13:01:50.972060 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a98f888-9933-47bb-9955-b5a7f32e9c82-utilities\") pod \"community-operators-26k9v\" (UID: \"5a98f888-9933-47bb-9955-b5a7f32e9c82\") " pod="openshift-marketplace/community-operators-26k9v" Jan 26 13:01:50 crc kubenswrapper[4881]: I0126 13:01:50.993590 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qfxs5\" (UniqueName: \"kubernetes.io/projected/5a98f888-9933-47bb-9955-b5a7f32e9c82-kube-api-access-qfxs5\") pod \"community-operators-26k9v\" (UID: \"5a98f888-9933-47bb-9955-b5a7f32e9c82\") " pod="openshift-marketplace/community-operators-26k9v" Jan 26 13:01:51 crc kubenswrapper[4881]: I0126 13:01:51.085753 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-26k9v" Jan 26 13:01:51 crc kubenswrapper[4881]: I0126 13:01:51.102904 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"52e24417-da70-449b-847d-3a0f1516ac9f","Type":"ContainerStarted","Data":"2d5290c74517e6bb1d39a6d18fb87e4641b1477b2bdbe74190f823100c4e851f"} Jan 26 13:01:51 crc kubenswrapper[4881]: I0126 13:01:51.104071 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 26 13:01:51 crc kubenswrapper[4881]: I0126 13:01:51.130240 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.130226498 podStartE2EDuration="2.130226498s" podCreationTimestamp="2026-01-26 13:01:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 13:01:51.123136887 +0000 UTC m=+1583.602446913" watchObservedRunningTime="2026-01-26 13:01:51.130226498 +0000 UTC m=+1583.609536524" Jan 26 13:01:51 crc kubenswrapper[4881]: I0126 13:01:51.599098 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-26k9v"] Jan 26 13:01:52 crc kubenswrapper[4881]: I0126 13:01:52.117055 4881 generic.go:334] "Generic (PLEG): container finished" podID="5a98f888-9933-47bb-9955-b5a7f32e9c82" containerID="ec69ac758526093c24556dac70a7278ee74852b7bd3b7d6f225d7c56e66c23ac" exitCode=0 Jan 26 13:01:52 crc kubenswrapper[4881]: I0126 13:01:52.117112 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-26k9v" event={"ID":"5a98f888-9933-47bb-9955-b5a7f32e9c82","Type":"ContainerDied","Data":"ec69ac758526093c24556dac70a7278ee74852b7bd3b7d6f225d7c56e66c23ac"} Jan 26 13:01:52 crc kubenswrapper[4881]: I0126 13:01:52.118478 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-26k9v" event={"ID":"5a98f888-9933-47bb-9955-b5a7f32e9c82","Type":"ContainerStarted","Data":"bdab6c13fe099a9010f8348ba6774ba3aee86b80dac8e6aff2256897fe7034ee"} Jan 26 13:01:54 crc kubenswrapper[4881]: I0126 13:01:54.143158 4881 generic.go:334] "Generic (PLEG): container finished" podID="5a98f888-9933-47bb-9955-b5a7f32e9c82" containerID="a1ae0b285eae7a18856feb8cf499a6c28eb0011e703b3121a965d368cd2dd321" exitCode=0 Jan 26 13:01:54 crc kubenswrapper[4881]: I0126 13:01:54.143244 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-26k9v" event={"ID":"5a98f888-9933-47bb-9955-b5a7f32e9c82","Type":"ContainerDied","Data":"a1ae0b285eae7a18856feb8cf499a6c28eb0011e703b3121a965d368cd2dd321"} Jan 26 13:01:56 crc kubenswrapper[4881]: I0126 13:01:56.170881 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-26k9v" event={"ID":"5a98f888-9933-47bb-9955-b5a7f32e9c82","Type":"ContainerStarted","Data":"15f1ee63ea9efd594d7b80d7c887ad927d4cf8000fe201aa6d7a6dfcce0730d7"} Jan 26 13:01:56 crc kubenswrapper[4881]: I0126 13:01:56.196952 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-26k9v" podStartSLOduration=3.538403173 podStartE2EDuration="6.196923627s" podCreationTimestamp="2026-01-26 13:01:50 +0000 UTC" firstStartedPulling="2026-01-26 13:01:52.119475684 +0000 UTC m=+1584.598785710" lastFinishedPulling="2026-01-26 13:01:54.777996098 +0000 UTC 
m=+1587.257306164" observedRunningTime="2026-01-26 13:01:56.1953911 +0000 UTC m=+1588.674701166" watchObservedRunningTime="2026-01-26 13:01:56.196923627 +0000 UTC m=+1588.676233693" Jan 26 13:01:58 crc kubenswrapper[4881]: I0126 13:01:58.525240 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 26 13:01:59 crc kubenswrapper[4881]: I0126 13:01:59.520733 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Jan 26 13:02:00 crc kubenswrapper[4881]: I0126 13:02:00.170157 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-nzdjt"] Jan 26 13:02:00 crc kubenswrapper[4881]: I0126 13:02:00.172068 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-nzdjt" Jan 26 13:02:00 crc kubenswrapper[4881]: I0126 13:02:00.174702 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Jan 26 13:02:00 crc kubenswrapper[4881]: I0126 13:02:00.175459 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Jan 26 13:02:00 crc kubenswrapper[4881]: I0126 13:02:00.180802 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-nzdjt"] Jan 26 13:02:00 crc kubenswrapper[4881]: I0126 13:02:00.257773 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36c2d906-18af-4025-ac9d-b142b34586f3-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-nzdjt\" (UID: \"36c2d906-18af-4025-ac9d-b142b34586f3\") " pod="openstack/nova-cell0-cell-mapping-nzdjt" Jan 26 13:02:00 crc kubenswrapper[4881]: I0126 13:02:00.257851 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vjgf\" (UniqueName: \"kubernetes.io/projected/36c2d906-18af-4025-ac9d-b142b34586f3-kube-api-access-9vjgf\") pod \"nova-cell0-cell-mapping-nzdjt\" (UID: \"36c2d906-18af-4025-ac9d-b142b34586f3\") " pod="openstack/nova-cell0-cell-mapping-nzdjt" Jan 26 13:02:00 crc kubenswrapper[4881]: I0126 13:02:00.257949 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/36c2d906-18af-4025-ac9d-b142b34586f3-scripts\") pod \"nova-cell0-cell-mapping-nzdjt\" (UID: \"36c2d906-18af-4025-ac9d-b142b34586f3\") " pod="openstack/nova-cell0-cell-mapping-nzdjt" Jan 26 13:02:00 crc kubenswrapper[4881]: I0126 13:02:00.257989 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36c2d906-18af-4025-ac9d-b142b34586f3-config-data\") pod \"nova-cell0-cell-mapping-nzdjt\" (UID: \"36c2d906-18af-4025-ac9d-b142b34586f3\") " pod="openstack/nova-cell0-cell-mapping-nzdjt" Jan 26 13:02:00 crc kubenswrapper[4881]: I0126 13:02:00.360624 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36c2d906-18af-4025-ac9d-b142b34586f3-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-nzdjt\" (UID: \"36c2d906-18af-4025-ac9d-b142b34586f3\") " pod="openstack/nova-cell0-cell-mapping-nzdjt" Jan 26 13:02:00 crc kubenswrapper[4881]: I0126 13:02:00.360688 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-9vjgf\" (UniqueName: \"kubernetes.io/projected/36c2d906-18af-4025-ac9d-b142b34586f3-kube-api-access-9vjgf\") pod \"nova-cell0-cell-mapping-nzdjt\" (UID: \"36c2d906-18af-4025-ac9d-b142b34586f3\") " pod="openstack/nova-cell0-cell-mapping-nzdjt" Jan 26 13:02:00 crc kubenswrapper[4881]: I0126 13:02:00.360754 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/36c2d906-18af-4025-ac9d-b142b34586f3-scripts\") pod \"nova-cell0-cell-mapping-nzdjt\" (UID: \"36c2d906-18af-4025-ac9d-b142b34586f3\") " pod="openstack/nova-cell0-cell-mapping-nzdjt" Jan 26 13:02:00 crc kubenswrapper[4881]: I0126 13:02:00.360781 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36c2d906-18af-4025-ac9d-b142b34586f3-config-data\") pod \"nova-cell0-cell-mapping-nzdjt\" (UID: \"36c2d906-18af-4025-ac9d-b142b34586f3\") " pod="openstack/nova-cell0-cell-mapping-nzdjt" Jan 26 13:02:00 crc kubenswrapper[4881]: I0126 13:02:00.367549 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36c2d906-18af-4025-ac9d-b142b34586f3-config-data\") pod \"nova-cell0-cell-mapping-nzdjt\" (UID: \"36c2d906-18af-4025-ac9d-b142b34586f3\") " pod="openstack/nova-cell0-cell-mapping-nzdjt" Jan 26 13:02:00 crc kubenswrapper[4881]: I0126 13:02:00.383111 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36c2d906-18af-4025-ac9d-b142b34586f3-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-nzdjt\" (UID: \"36c2d906-18af-4025-ac9d-b142b34586f3\") " pod="openstack/nova-cell0-cell-mapping-nzdjt" Jan 26 13:02:00 crc kubenswrapper[4881]: I0126 13:02:00.406081 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/36c2d906-18af-4025-ac9d-b142b34586f3-scripts\") pod \"nova-cell0-cell-mapping-nzdjt\" (UID: \"36c2d906-18af-4025-ac9d-b142b34586f3\") " pod="openstack/nova-cell0-cell-mapping-nzdjt" Jan 26 13:02:00 crc kubenswrapper[4881]: I0126 13:02:00.423255 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vjgf\" (UniqueName: \"kubernetes.io/projected/36c2d906-18af-4025-ac9d-b142b34586f3-kube-api-access-9vjgf\") pod \"nova-cell0-cell-mapping-nzdjt\" (UID: \"36c2d906-18af-4025-ac9d-b142b34586f3\") " pod="openstack/nova-cell0-cell-mapping-nzdjt" Jan 26 13:02:00 crc kubenswrapper[4881]: I0126 13:02:00.451151 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 26 13:02:00 crc kubenswrapper[4881]: I0126 13:02:00.452392 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 26 13:02:00 crc kubenswrapper[4881]: I0126 13:02:00.459927 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 26 13:02:00 crc kubenswrapper[4881]: I0126 13:02:00.485151 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 26 13:02:00 crc kubenswrapper[4881]: I0126 13:02:00.508324 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-nzdjt" Jan 26 13:02:00 crc kubenswrapper[4881]: I0126 13:02:00.565069 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/046a94f5-8350-4db1-9bc4-565b40b3c9bc-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"046a94f5-8350-4db1-9bc4-565b40b3c9bc\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 13:02:00 crc kubenswrapper[4881]: I0126 13:02:00.565215 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jx2s7\" (UniqueName: \"kubernetes.io/projected/046a94f5-8350-4db1-9bc4-565b40b3c9bc-kube-api-access-jx2s7\") pod \"nova-cell1-novncproxy-0\" (UID: \"046a94f5-8350-4db1-9bc4-565b40b3c9bc\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 13:02:00 crc kubenswrapper[4881]: I0126 13:02:00.565272 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/046a94f5-8350-4db1-9bc4-565b40b3c9bc-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"046a94f5-8350-4db1-9bc4-565b40b3c9bc\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 13:02:00 crc kubenswrapper[4881]: I0126 13:02:00.621688 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 13:02:00 crc kubenswrapper[4881]: I0126 13:02:00.622919 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 26 13:02:00 crc kubenswrapper[4881]: I0126 13:02:00.632291 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 26 13:02:00 crc kubenswrapper[4881]: I0126 13:02:00.672027 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/046a94f5-8350-4db1-9bc4-565b40b3c9bc-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"046a94f5-8350-4db1-9bc4-565b40b3c9bc\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 13:02:00 crc kubenswrapper[4881]: I0126 13:02:00.672086 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/046a94f5-8350-4db1-9bc4-565b40b3c9bc-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"046a94f5-8350-4db1-9bc4-565b40b3c9bc\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 13:02:00 crc kubenswrapper[4881]: I0126 13:02:00.672196 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jx2s7\" (UniqueName: \"kubernetes.io/projected/046a94f5-8350-4db1-9bc4-565b40b3c9bc-kube-api-access-jx2s7\") pod \"nova-cell1-novncproxy-0\" (UID: \"046a94f5-8350-4db1-9bc4-565b40b3c9bc\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 13:02:00 crc kubenswrapper[4881]: I0126 13:02:00.678647 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 13:02:00 crc kubenswrapper[4881]: I0126 13:02:00.705953 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/046a94f5-8350-4db1-9bc4-565b40b3c9bc-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"046a94f5-8350-4db1-9bc4-565b40b3c9bc\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 13:02:00 crc kubenswrapper[4881]: I0126 13:02:00.707116 4881 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-jx2s7\" (UniqueName: \"kubernetes.io/projected/046a94f5-8350-4db1-9bc4-565b40b3c9bc-kube-api-access-jx2s7\") pod \"nova-cell1-novncproxy-0\" (UID: \"046a94f5-8350-4db1-9bc4-565b40b3c9bc\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 13:02:00 crc kubenswrapper[4881]: I0126 13:02:00.714387 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/046a94f5-8350-4db1-9bc4-565b40b3c9bc-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"046a94f5-8350-4db1-9bc4-565b40b3c9bc\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 13:02:00 crc kubenswrapper[4881]: I0126 13:02:00.737385 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 26 13:02:00 crc kubenswrapper[4881]: I0126 13:02:00.739099 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 13:02:00 crc kubenswrapper[4881]: I0126 13:02:00.755690 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 26 13:02:00 crc kubenswrapper[4881]: I0126 13:02:00.773558 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfkd8\" (UniqueName: \"kubernetes.io/projected/b5a078a6-15d0-4f45-a167-d5ec218210ef-kube-api-access-cfkd8\") pod \"nova-scheduler-0\" (UID: \"b5a078a6-15d0-4f45-a167-d5ec218210ef\") " pod="openstack/nova-scheduler-0" Jan 26 13:02:00 crc kubenswrapper[4881]: I0126 13:02:00.773626 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5a078a6-15d0-4f45-a167-d5ec218210ef-config-data\") pod \"nova-scheduler-0\" (UID: \"b5a078a6-15d0-4f45-a167-d5ec218210ef\") " pod="openstack/nova-scheduler-0" Jan 26 13:02:00 crc kubenswrapper[4881]: I0126 13:02:00.773693 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5a078a6-15d0-4f45-a167-d5ec218210ef-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"b5a078a6-15d0-4f45-a167-d5ec218210ef\") " pod="openstack/nova-scheduler-0" Jan 26 13:02:00 crc kubenswrapper[4881]: I0126 13:02:00.812434 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 13:02:00 crc kubenswrapper[4881]: I0126 13:02:00.822585 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 26 13:02:00 crc kubenswrapper[4881]: I0126 13:02:00.824241 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 26 13:02:00 crc kubenswrapper[4881]: I0126 13:02:00.831712 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7777964479-66ccm"] Jan 26 13:02:00 crc kubenswrapper[4881]: I0126 13:02:00.832835 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 26 13:02:00 crc kubenswrapper[4881]: I0126 13:02:00.833278 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7777964479-66ccm" Jan 26 13:02:00 crc kubenswrapper[4881]: I0126 13:02:00.833779 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 26 13:02:00 crc kubenswrapper[4881]: I0126 13:02:00.840677 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 26 13:02:00 crc kubenswrapper[4881]: I0126 13:02:00.883657 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8b08f600-8f58-4e91-8f6f-46628a6030c9-logs\") pod \"nova-metadata-0\" (UID: \"8b08f600-8f58-4e91-8f6f-46628a6030c9\") " pod="openstack/nova-metadata-0" Jan 26 13:02:00 crc kubenswrapper[4881]: I0126 13:02:00.883712 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b08f600-8f58-4e91-8f6f-46628a6030c9-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"8b08f600-8f58-4e91-8f6f-46628a6030c9\") " pod="openstack/nova-metadata-0" Jan 26 13:02:00 crc kubenswrapper[4881]: I0126 13:02:00.883743 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5a078a6-15d0-4f45-a167-d5ec218210ef-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"b5a078a6-15d0-4f45-a167-d5ec218210ef\") " pod="openstack/nova-scheduler-0" Jan 26 13:02:00 crc kubenswrapper[4881]: I0126 13:02:00.883931 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgl2q\" (UniqueName: \"kubernetes.io/projected/8b08f600-8f58-4e91-8f6f-46628a6030c9-kube-api-access-xgl2q\") pod \"nova-metadata-0\" (UID: \"8b08f600-8f58-4e91-8f6f-46628a6030c9\") " pod="openstack/nova-metadata-0" Jan 26 13:02:00 crc kubenswrapper[4881]: I0126 13:02:00.884078 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cfkd8\" (UniqueName: \"kubernetes.io/projected/b5a078a6-15d0-4f45-a167-d5ec218210ef-kube-api-access-cfkd8\") pod \"nova-scheduler-0\" (UID: \"b5a078a6-15d0-4f45-a167-d5ec218210ef\") " pod="openstack/nova-scheduler-0" Jan 26 13:02:00 crc kubenswrapper[4881]: I0126 13:02:00.884147 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5a078a6-15d0-4f45-a167-d5ec218210ef-config-data\") pod \"nova-scheduler-0\" (UID: \"b5a078a6-15d0-4f45-a167-d5ec218210ef\") " pod="openstack/nova-scheduler-0" Jan 26 13:02:00 crc kubenswrapper[4881]: I0126 13:02:00.884236 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b08f600-8f58-4e91-8f6f-46628a6030c9-config-data\") pod \"nova-metadata-0\" (UID: \"8b08f600-8f58-4e91-8f6f-46628a6030c9\") " pod="openstack/nova-metadata-0" Jan 26 13:02:00 crc kubenswrapper[4881]: I0126 13:02:00.887641 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7777964479-66ccm"] Jan 26 13:02:00 crc kubenswrapper[4881]: I0126 13:02:00.894561 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5a078a6-15d0-4f45-a167-d5ec218210ef-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"b5a078a6-15d0-4f45-a167-d5ec218210ef\") " pod="openstack/nova-scheduler-0" Jan 26 13:02:00 crc 
kubenswrapper[4881]: I0126 13:02:00.907841 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cfkd8\" (UniqueName: \"kubernetes.io/projected/b5a078a6-15d0-4f45-a167-d5ec218210ef-kube-api-access-cfkd8\") pod \"nova-scheduler-0\" (UID: \"b5a078a6-15d0-4f45-a167-d5ec218210ef\") " pod="openstack/nova-scheduler-0" Jan 26 13:02:00 crc kubenswrapper[4881]: I0126 13:02:00.922096 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5a078a6-15d0-4f45-a167-d5ec218210ef-config-data\") pod \"nova-scheduler-0\" (UID: \"b5a078a6-15d0-4f45-a167-d5ec218210ef\") " pod="openstack/nova-scheduler-0" Jan 26 13:02:00 crc kubenswrapper[4881]: I0126 13:02:00.998858 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w87t7\" (UniqueName: \"kubernetes.io/projected/ff1ef10b-9838-42f7-a2d9-b4907440336c-kube-api-access-w87t7\") pod \"nova-api-0\" (UID: \"ff1ef10b-9838-42f7-a2d9-b4907440336c\") " pod="openstack/nova-api-0" Jan 26 13:02:00 crc kubenswrapper[4881]: I0126 13:02:00.998905 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff1ef10b-9838-42f7-a2d9-b4907440336c-config-data\") pod \"nova-api-0\" (UID: \"ff1ef10b-9838-42f7-a2d9-b4907440336c\") " pod="openstack/nova-api-0" Jan 26 13:02:00 crc kubenswrapper[4881]: I0126 13:02:00.998981 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/24649036-906f-4ba5-a838-aa36ccee3760-dns-svc\") pod \"dnsmasq-dns-7777964479-66ccm\" (UID: \"24649036-906f-4ba5-a838-aa36ccee3760\") " pod="openstack/dnsmasq-dns-7777964479-66ccm" Jan 26 13:02:00 crc kubenswrapper[4881]: I0126 13:02:00.998997 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xn7g\" (UniqueName: \"kubernetes.io/projected/24649036-906f-4ba5-a838-aa36ccee3760-kube-api-access-2xn7g\") pod \"dnsmasq-dns-7777964479-66ccm\" (UID: \"24649036-906f-4ba5-a838-aa36ccee3760\") " pod="openstack/dnsmasq-dns-7777964479-66ccm" Jan 26 13:02:00 crc kubenswrapper[4881]: I0126 13:02:00.999053 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xgl2q\" (UniqueName: \"kubernetes.io/projected/8b08f600-8f58-4e91-8f6f-46628a6030c9-kube-api-access-xgl2q\") pod \"nova-metadata-0\" (UID: \"8b08f600-8f58-4e91-8f6f-46628a6030c9\") " pod="openstack/nova-metadata-0" Jan 26 13:02:00 crc kubenswrapper[4881]: I0126 13:02:00.999121 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/24649036-906f-4ba5-a838-aa36ccee3760-ovsdbserver-nb\") pod \"dnsmasq-dns-7777964479-66ccm\" (UID: \"24649036-906f-4ba5-a838-aa36ccee3760\") " pod="openstack/dnsmasq-dns-7777964479-66ccm" Jan 26 13:02:00 crc kubenswrapper[4881]: I0126 13:02:00.999139 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/24649036-906f-4ba5-a838-aa36ccee3760-dns-swift-storage-0\") pod \"dnsmasq-dns-7777964479-66ccm\" (UID: \"24649036-906f-4ba5-a838-aa36ccee3760\") " pod="openstack/dnsmasq-dns-7777964479-66ccm" Jan 26 13:02:00 crc kubenswrapper[4881]: I0126 13:02:00.999264 4881 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b08f600-8f58-4e91-8f6f-46628a6030c9-config-data\") pod \"nova-metadata-0\" (UID: \"8b08f600-8f58-4e91-8f6f-46628a6030c9\") " pod="openstack/nova-metadata-0" Jan 26 13:02:00 crc kubenswrapper[4881]: I0126 13:02:00.999296 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24649036-906f-4ba5-a838-aa36ccee3760-config\") pod \"dnsmasq-dns-7777964479-66ccm\" (UID: \"24649036-906f-4ba5-a838-aa36ccee3760\") " pod="openstack/dnsmasq-dns-7777964479-66ccm" Jan 26 13:02:00 crc kubenswrapper[4881]: I0126 13:02:00.999343 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff1ef10b-9838-42f7-a2d9-b4907440336c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ff1ef10b-9838-42f7-a2d9-b4907440336c\") " pod="openstack/nova-api-0" Jan 26 13:02:00 crc kubenswrapper[4881]: I0126 13:02:00.999377 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8b08f600-8f58-4e91-8f6f-46628a6030c9-logs\") pod \"nova-metadata-0\" (UID: \"8b08f600-8f58-4e91-8f6f-46628a6030c9\") " pod="openstack/nova-metadata-0" Jan 26 13:02:00 crc kubenswrapper[4881]: I0126 13:02:00.999395 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b08f600-8f58-4e91-8f6f-46628a6030c9-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"8b08f600-8f58-4e91-8f6f-46628a6030c9\") " pod="openstack/nova-metadata-0" Jan 26 13:02:00 crc kubenswrapper[4881]: I0126 13:02:00.999435 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/24649036-906f-4ba5-a838-aa36ccee3760-ovsdbserver-sb\") pod \"dnsmasq-dns-7777964479-66ccm\" (UID: \"24649036-906f-4ba5-a838-aa36ccee3760\") " pod="openstack/dnsmasq-dns-7777964479-66ccm" Jan 26 13:02:00 crc kubenswrapper[4881]: I0126 13:02:00.999460 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff1ef10b-9838-42f7-a2d9-b4907440336c-logs\") pod \"nova-api-0\" (UID: \"ff1ef10b-9838-42f7-a2d9-b4907440336c\") " pod="openstack/nova-api-0" Jan 26 13:02:01 crc kubenswrapper[4881]: I0126 13:02:01.002050 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8b08f600-8f58-4e91-8f6f-46628a6030c9-logs\") pod \"nova-metadata-0\" (UID: \"8b08f600-8f58-4e91-8f6f-46628a6030c9\") " pod="openstack/nova-metadata-0" Jan 26 13:02:01 crc kubenswrapper[4881]: I0126 13:02:01.009206 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b08f600-8f58-4e91-8f6f-46628a6030c9-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"8b08f600-8f58-4e91-8f6f-46628a6030c9\") " pod="openstack/nova-metadata-0" Jan 26 13:02:01 crc kubenswrapper[4881]: I0126 13:02:01.012135 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b08f600-8f58-4e91-8f6f-46628a6030c9-config-data\") pod \"nova-metadata-0\" (UID: \"8b08f600-8f58-4e91-8f6f-46628a6030c9\") " 
pod="openstack/nova-metadata-0" Jan 26 13:02:01 crc kubenswrapper[4881]: I0126 13:02:01.017830 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xgl2q\" (UniqueName: \"kubernetes.io/projected/8b08f600-8f58-4e91-8f6f-46628a6030c9-kube-api-access-xgl2q\") pod \"nova-metadata-0\" (UID: \"8b08f600-8f58-4e91-8f6f-46628a6030c9\") " pod="openstack/nova-metadata-0" Jan 26 13:02:01 crc kubenswrapper[4881]: I0126 13:02:01.060646 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 26 13:02:01 crc kubenswrapper[4881]: I0126 13:02:01.087579 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-26k9v" Jan 26 13:02:01 crc kubenswrapper[4881]: I0126 13:02:01.087614 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-26k9v" Jan 26 13:02:01 crc kubenswrapper[4881]: I0126 13:02:01.099966 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 13:02:01 crc kubenswrapper[4881]: I0126 13:02:01.106628 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24649036-906f-4ba5-a838-aa36ccee3760-config\") pod \"dnsmasq-dns-7777964479-66ccm\" (UID: \"24649036-906f-4ba5-a838-aa36ccee3760\") " pod="openstack/dnsmasq-dns-7777964479-66ccm" Jan 26 13:02:01 crc kubenswrapper[4881]: I0126 13:02:01.106677 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff1ef10b-9838-42f7-a2d9-b4907440336c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ff1ef10b-9838-42f7-a2d9-b4907440336c\") " pod="openstack/nova-api-0" Jan 26 13:02:01 crc kubenswrapper[4881]: I0126 13:02:01.106714 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/24649036-906f-4ba5-a838-aa36ccee3760-ovsdbserver-sb\") pod \"dnsmasq-dns-7777964479-66ccm\" (UID: \"24649036-906f-4ba5-a838-aa36ccee3760\") " pod="openstack/dnsmasq-dns-7777964479-66ccm" Jan 26 13:02:01 crc kubenswrapper[4881]: I0126 13:02:01.106737 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff1ef10b-9838-42f7-a2d9-b4907440336c-logs\") pod \"nova-api-0\" (UID: \"ff1ef10b-9838-42f7-a2d9-b4907440336c\") " pod="openstack/nova-api-0" Jan 26 13:02:01 crc kubenswrapper[4881]: I0126 13:02:01.106773 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w87t7\" (UniqueName: \"kubernetes.io/projected/ff1ef10b-9838-42f7-a2d9-b4907440336c-kube-api-access-w87t7\") pod \"nova-api-0\" (UID: \"ff1ef10b-9838-42f7-a2d9-b4907440336c\") " pod="openstack/nova-api-0" Jan 26 13:02:01 crc kubenswrapper[4881]: I0126 13:02:01.106788 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff1ef10b-9838-42f7-a2d9-b4907440336c-config-data\") pod \"nova-api-0\" (UID: \"ff1ef10b-9838-42f7-a2d9-b4907440336c\") " pod="openstack/nova-api-0" Jan 26 13:02:01 crc kubenswrapper[4881]: I0126 13:02:01.106811 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/24649036-906f-4ba5-a838-aa36ccee3760-dns-svc\") pod 
\"dnsmasq-dns-7777964479-66ccm\" (UID: \"24649036-906f-4ba5-a838-aa36ccee3760\") " pod="openstack/dnsmasq-dns-7777964479-66ccm" Jan 26 13:02:01 crc kubenswrapper[4881]: I0126 13:02:01.106828 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2xn7g\" (UniqueName: \"kubernetes.io/projected/24649036-906f-4ba5-a838-aa36ccee3760-kube-api-access-2xn7g\") pod \"dnsmasq-dns-7777964479-66ccm\" (UID: \"24649036-906f-4ba5-a838-aa36ccee3760\") " pod="openstack/dnsmasq-dns-7777964479-66ccm" Jan 26 13:02:01 crc kubenswrapper[4881]: I0126 13:02:01.107708 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/24649036-906f-4ba5-a838-aa36ccee3760-dns-svc\") pod \"dnsmasq-dns-7777964479-66ccm\" (UID: \"24649036-906f-4ba5-a838-aa36ccee3760\") " pod="openstack/dnsmasq-dns-7777964479-66ccm" Jan 26 13:02:01 crc kubenswrapper[4881]: I0126 13:02:01.107717 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/24649036-906f-4ba5-a838-aa36ccee3760-ovsdbserver-sb\") pod \"dnsmasq-dns-7777964479-66ccm\" (UID: \"24649036-906f-4ba5-a838-aa36ccee3760\") " pod="openstack/dnsmasq-dns-7777964479-66ccm" Jan 26 13:02:01 crc kubenswrapper[4881]: I0126 13:02:01.107827 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/24649036-906f-4ba5-a838-aa36ccee3760-ovsdbserver-nb\") pod \"dnsmasq-dns-7777964479-66ccm\" (UID: \"24649036-906f-4ba5-a838-aa36ccee3760\") " pod="openstack/dnsmasq-dns-7777964479-66ccm" Jan 26 13:02:01 crc kubenswrapper[4881]: I0126 13:02:01.107845 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/24649036-906f-4ba5-a838-aa36ccee3760-dns-swift-storage-0\") pod \"dnsmasq-dns-7777964479-66ccm\" (UID: \"24649036-906f-4ba5-a838-aa36ccee3760\") " pod="openstack/dnsmasq-dns-7777964479-66ccm" Jan 26 13:02:01 crc kubenswrapper[4881]: I0126 13:02:01.116047 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff1ef10b-9838-42f7-a2d9-b4907440336c-logs\") pod \"nova-api-0\" (UID: \"ff1ef10b-9838-42f7-a2d9-b4907440336c\") " pod="openstack/nova-api-0" Jan 26 13:02:01 crc kubenswrapper[4881]: I0126 13:02:01.116392 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/24649036-906f-4ba5-a838-aa36ccee3760-ovsdbserver-nb\") pod \"dnsmasq-dns-7777964479-66ccm\" (UID: \"24649036-906f-4ba5-a838-aa36ccee3760\") " pod="openstack/dnsmasq-dns-7777964479-66ccm" Jan 26 13:02:01 crc kubenswrapper[4881]: I0126 13:02:01.117084 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff1ef10b-9838-42f7-a2d9-b4907440336c-config-data\") pod \"nova-api-0\" (UID: \"ff1ef10b-9838-42f7-a2d9-b4907440336c\") " pod="openstack/nova-api-0" Jan 26 13:02:01 crc kubenswrapper[4881]: I0126 13:02:01.117610 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/24649036-906f-4ba5-a838-aa36ccee3760-dns-swift-storage-0\") pod \"dnsmasq-dns-7777964479-66ccm\" (UID: \"24649036-906f-4ba5-a838-aa36ccee3760\") " pod="openstack/dnsmasq-dns-7777964479-66ccm" Jan 26 13:02:01 crc kubenswrapper[4881]: I0126 
13:02:01.132805 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24649036-906f-4ba5-a838-aa36ccee3760-config\") pod \"dnsmasq-dns-7777964479-66ccm\" (UID: \"24649036-906f-4ba5-a838-aa36ccee3760\") " pod="openstack/dnsmasq-dns-7777964479-66ccm" Jan 26 13:02:01 crc kubenswrapper[4881]: I0126 13:02:01.141186 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff1ef10b-9838-42f7-a2d9-b4907440336c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ff1ef10b-9838-42f7-a2d9-b4907440336c\") " pod="openstack/nova-api-0" Jan 26 13:02:01 crc kubenswrapper[4881]: I0126 13:02:01.145685 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w87t7\" (UniqueName: \"kubernetes.io/projected/ff1ef10b-9838-42f7-a2d9-b4907440336c-kube-api-access-w87t7\") pod \"nova-api-0\" (UID: \"ff1ef10b-9838-42f7-a2d9-b4907440336c\") " pod="openstack/nova-api-0" Jan 26 13:02:01 crc kubenswrapper[4881]: I0126 13:02:01.146353 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xn7g\" (UniqueName: \"kubernetes.io/projected/24649036-906f-4ba5-a838-aa36ccee3760-kube-api-access-2xn7g\") pod \"dnsmasq-dns-7777964479-66ccm\" (UID: \"24649036-906f-4ba5-a838-aa36ccee3760\") " pod="openstack/dnsmasq-dns-7777964479-66ccm" Jan 26 13:02:01 crc kubenswrapper[4881]: I0126 13:02:01.174923 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 26 13:02:01 crc kubenswrapper[4881]: I0126 13:02:01.189847 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7777964479-66ccm" Jan 26 13:02:01 crc kubenswrapper[4881]: I0126 13:02:01.191812 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-26k9v" Jan 26 13:02:01 crc kubenswrapper[4881]: I0126 13:02:01.266887 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 26 13:02:01 crc kubenswrapper[4881]: I0126 13:02:01.300847 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-nzdjt"] Jan 26 13:02:01 crc kubenswrapper[4881]: I0126 13:02:01.411051 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-26k9v" Jan 26 13:02:01 crc kubenswrapper[4881]: I0126 13:02:01.495887 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-26k9v"] Jan 26 13:02:01 crc kubenswrapper[4881]: I0126 13:02:01.806923 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 13:02:01 crc kubenswrapper[4881]: I0126 13:02:01.899763 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 13:02:01 crc kubenswrapper[4881]: I0126 13:02:01.913274 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7777964479-66ccm"] Jan 26 13:02:01 crc kubenswrapper[4881]: I0126 13:02:01.922949 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 26 13:02:01 crc kubenswrapper[4881]: I0126 13:02:01.962710 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-bhqzs"] Jan 26 13:02:01 crc kubenswrapper[4881]: I0126 13:02:01.963976 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-bhqzs" Jan 26 13:02:01 crc kubenswrapper[4881]: I0126 13:02:01.966103 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 26 13:02:01 crc kubenswrapper[4881]: I0126 13:02:01.969804 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Jan 26 13:02:01 crc kubenswrapper[4881]: I0126 13:02:01.972953 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-bhqzs"] Jan 26 13:02:02 crc kubenswrapper[4881]: I0126 13:02:02.040109 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07b4140b-bd1d-4295-961e-99d18c4406c3-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-bhqzs\" (UID: \"07b4140b-bd1d-4295-961e-99d18c4406c3\") " pod="openstack/nova-cell1-conductor-db-sync-bhqzs" Jan 26 13:02:02 crc kubenswrapper[4881]: I0126 13:02:02.040672 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07b4140b-bd1d-4295-961e-99d18c4406c3-config-data\") pod \"nova-cell1-conductor-db-sync-bhqzs\" (UID: \"07b4140b-bd1d-4295-961e-99d18c4406c3\") " pod="openstack/nova-cell1-conductor-db-sync-bhqzs" Jan 26 13:02:02 crc kubenswrapper[4881]: I0126 13:02:02.040847 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qxw7\" (UniqueName: \"kubernetes.io/projected/07b4140b-bd1d-4295-961e-99d18c4406c3-kube-api-access-7qxw7\") pod \"nova-cell1-conductor-db-sync-bhqzs\" (UID: \"07b4140b-bd1d-4295-961e-99d18c4406c3\") " pod="openstack/nova-cell1-conductor-db-sync-bhqzs" Jan 26 13:02:02 crc kubenswrapper[4881]: I0126 13:02:02.040975 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07b4140b-bd1d-4295-961e-99d18c4406c3-scripts\") pod \"nova-cell1-conductor-db-sync-bhqzs\" (UID: \"07b4140b-bd1d-4295-961e-99d18c4406c3\") " pod="openstack/nova-cell1-conductor-db-sync-bhqzs" Jan 26 13:02:02 crc kubenswrapper[4881]: I0126 13:02:02.142492 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07b4140b-bd1d-4295-961e-99d18c4406c3-scripts\") pod \"nova-cell1-conductor-db-sync-bhqzs\" (UID: \"07b4140b-bd1d-4295-961e-99d18c4406c3\") " pod="openstack/nova-cell1-conductor-db-sync-bhqzs" Jan 26 13:02:02 crc kubenswrapper[4881]: I0126 13:02:02.142854 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07b4140b-bd1d-4295-961e-99d18c4406c3-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-bhqzs\" (UID: \"07b4140b-bd1d-4295-961e-99d18c4406c3\") " pod="openstack/nova-cell1-conductor-db-sync-bhqzs" Jan 26 13:02:02 crc kubenswrapper[4881]: I0126 13:02:02.143663 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07b4140b-bd1d-4295-961e-99d18c4406c3-config-data\") pod \"nova-cell1-conductor-db-sync-bhqzs\" (UID: \"07b4140b-bd1d-4295-961e-99d18c4406c3\") " pod="openstack/nova-cell1-conductor-db-sync-bhqzs" Jan 26 13:02:02 crc kubenswrapper[4881]: I0126 13:02:02.143988 4881 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-7qxw7\" (UniqueName: \"kubernetes.io/projected/07b4140b-bd1d-4295-961e-99d18c4406c3-kube-api-access-7qxw7\") pod \"nova-cell1-conductor-db-sync-bhqzs\" (UID: \"07b4140b-bd1d-4295-961e-99d18c4406c3\") " pod="openstack/nova-cell1-conductor-db-sync-bhqzs" Jan 26 13:02:02 crc kubenswrapper[4881]: I0126 13:02:02.148640 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07b4140b-bd1d-4295-961e-99d18c4406c3-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-bhqzs\" (UID: \"07b4140b-bd1d-4295-961e-99d18c4406c3\") " pod="openstack/nova-cell1-conductor-db-sync-bhqzs" Jan 26 13:02:02 crc kubenswrapper[4881]: I0126 13:02:02.149497 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07b4140b-bd1d-4295-961e-99d18c4406c3-config-data\") pod \"nova-cell1-conductor-db-sync-bhqzs\" (UID: \"07b4140b-bd1d-4295-961e-99d18c4406c3\") " pod="openstack/nova-cell1-conductor-db-sync-bhqzs" Jan 26 13:02:02 crc kubenswrapper[4881]: I0126 13:02:02.151853 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07b4140b-bd1d-4295-961e-99d18c4406c3-scripts\") pod \"nova-cell1-conductor-db-sync-bhqzs\" (UID: \"07b4140b-bd1d-4295-961e-99d18c4406c3\") " pod="openstack/nova-cell1-conductor-db-sync-bhqzs" Jan 26 13:02:02 crc kubenswrapper[4881]: I0126 13:02:02.162092 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7qxw7\" (UniqueName: \"kubernetes.io/projected/07b4140b-bd1d-4295-961e-99d18c4406c3-kube-api-access-7qxw7\") pod \"nova-cell1-conductor-db-sync-bhqzs\" (UID: \"07b4140b-bd1d-4295-961e-99d18c4406c3\") " pod="openstack/nova-cell1-conductor-db-sync-bhqzs" Jan 26 13:02:02 crc kubenswrapper[4881]: I0126 13:02:02.287563 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-bhqzs" Jan 26 13:02:02 crc kubenswrapper[4881]: I0126 13:02:02.370200 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-nzdjt" event={"ID":"36c2d906-18af-4025-ac9d-b142b34586f3","Type":"ContainerStarted","Data":"38f68b0a65c4307752767c55853fb1a2715f331edc183bd70b07536a1159e5b4"} Jan 26 13:02:02 crc kubenswrapper[4881]: I0126 13:02:02.370254 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-nzdjt" event={"ID":"36c2d906-18af-4025-ac9d-b142b34586f3","Type":"ContainerStarted","Data":"05ffddf5a46c5d733e2ba59dbf0c6446af10151ba8e2680195840c357e1c84c4"} Jan 26 13:02:02 crc kubenswrapper[4881]: I0126 13:02:02.373235 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ff1ef10b-9838-42f7-a2d9-b4907440336c","Type":"ContainerStarted","Data":"3dbe7cde860cee54d1a360c4b339fcf2486b6f01aeac0d3498690af5b3192c1e"} Jan 26 13:02:02 crc kubenswrapper[4881]: I0126 13:02:02.375195 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b5a078a6-15d0-4f45-a167-d5ec218210ef","Type":"ContainerStarted","Data":"93bedb424afef600b8f6572bf0e29d1135bf7949e595fcd2939a8a24d957a689"} Jan 26 13:02:02 crc kubenswrapper[4881]: I0126 13:02:02.388199 4881 generic.go:334] "Generic (PLEG): container finished" podID="24649036-906f-4ba5-a838-aa36ccee3760" containerID="ed3bd61a776ae3733b6bef4de3b7839268171c097cb295b181919f0a43c4def8" exitCode=0 Jan 26 13:02:02 crc kubenswrapper[4881]: I0126 13:02:02.388271 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7777964479-66ccm" event={"ID":"24649036-906f-4ba5-a838-aa36ccee3760","Type":"ContainerDied","Data":"ed3bd61a776ae3733b6bef4de3b7839268171c097cb295b181919f0a43c4def8"} Jan 26 13:02:02 crc kubenswrapper[4881]: I0126 13:02:02.388299 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7777964479-66ccm" event={"ID":"24649036-906f-4ba5-a838-aa36ccee3760","Type":"ContainerStarted","Data":"8c5d391689121d96680b651348f63a74b6069d2bf85f8d8db5693a05935736fd"} Jan 26 13:02:02 crc kubenswrapper[4881]: I0126 13:02:02.396816 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-nzdjt" podStartSLOduration=2.396799649 podStartE2EDuration="2.396799649s" podCreationTimestamp="2026-01-26 13:02:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 13:02:02.394018642 +0000 UTC m=+1594.873328668" watchObservedRunningTime="2026-01-26 13:02:02.396799649 +0000 UTC m=+1594.876109665" Jan 26 13:02:02 crc kubenswrapper[4881]: I0126 13:02:02.398017 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8b08f600-8f58-4e91-8f6f-46628a6030c9","Type":"ContainerStarted","Data":"08358dadf5ddabbcb93e6dfd259dcc66ea5c4bc142b9dda41ed57e3ba79c95a3"} Jan 26 13:02:02 crc kubenswrapper[4881]: I0126 13:02:02.431796 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"046a94f5-8350-4db1-9bc4-565b40b3c9bc","Type":"ContainerStarted","Data":"f0184e0df5bac0ae34ee0a69572218666674cc84c16c2d8a3b519aea5fa4b546"} Jan 26 13:02:02 crc kubenswrapper[4881]: I0126 13:02:02.951393 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-bhqzs"] Jan 26 
13:02:03 crc kubenswrapper[4881]: I0126 13:02:03.451030 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7777964479-66ccm" event={"ID":"24649036-906f-4ba5-a838-aa36ccee3760","Type":"ContainerStarted","Data":"507be02147fbf816d9a1b9df90f58821dfb79a7cbbfe328b574f08bb0deea846"} Jan 26 13:02:03 crc kubenswrapper[4881]: I0126 13:02:03.451571 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-26k9v" podUID="5a98f888-9933-47bb-9955-b5a7f32e9c82" containerName="registry-server" containerID="cri-o://15f1ee63ea9efd594d7b80d7c887ad927d4cf8000fe201aa6d7a6dfcce0730d7" gracePeriod=2 Jan 26 13:02:03 crc kubenswrapper[4881]: I0126 13:02:03.480160 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7777964479-66ccm" podStartSLOduration=3.480138678 podStartE2EDuration="3.480138678s" podCreationTimestamp="2026-01-26 13:02:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 13:02:03.468562659 +0000 UTC m=+1595.947872705" watchObservedRunningTime="2026-01-26 13:02:03.480138678 +0000 UTC m=+1595.959448704" Jan 26 13:02:04 crc kubenswrapper[4881]: I0126 13:02:04.000997 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 26 13:02:04 crc kubenswrapper[4881]: I0126 13:02:04.063062 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 13:02:04 crc kubenswrapper[4881]: I0126 13:02:04.464408 4881 generic.go:334] "Generic (PLEG): container finished" podID="5a98f888-9933-47bb-9955-b5a7f32e9c82" containerID="15f1ee63ea9efd594d7b80d7c887ad927d4cf8000fe201aa6d7a6dfcce0730d7" exitCode=0 Jan 26 13:02:04 crc kubenswrapper[4881]: I0126 13:02:04.464512 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-26k9v" event={"ID":"5a98f888-9933-47bb-9955-b5a7f32e9c82","Type":"ContainerDied","Data":"15f1ee63ea9efd594d7b80d7c887ad927d4cf8000fe201aa6d7a6dfcce0730d7"} Jan 26 13:02:04 crc kubenswrapper[4881]: I0126 13:02:04.464754 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7777964479-66ccm" Jan 26 13:02:05 crc kubenswrapper[4881]: I0126 13:02:05.413182 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-26k9v" Jan 26 13:02:05 crc kubenswrapper[4881]: I0126 13:02:05.474866 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-bhqzs" event={"ID":"07b4140b-bd1d-4295-961e-99d18c4406c3","Type":"ContainerStarted","Data":"e9fa003d7b129504c664a0a5e7b556a6c73866dfafca28815ecbe43032fbb9ed"} Jan 26 13:02:05 crc kubenswrapper[4881]: I0126 13:02:05.477175 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-26k9v" Jan 26 13:02:05 crc kubenswrapper[4881]: I0126 13:02:05.477215 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-26k9v" event={"ID":"5a98f888-9933-47bb-9955-b5a7f32e9c82","Type":"ContainerDied","Data":"bdab6c13fe099a9010f8348ba6774ba3aee86b80dac8e6aff2256897fe7034ee"} Jan 26 13:02:05 crc kubenswrapper[4881]: I0126 13:02:05.477249 4881 scope.go:117] "RemoveContainer" containerID="15f1ee63ea9efd594d7b80d7c887ad927d4cf8000fe201aa6d7a6dfcce0730d7" Jan 26 13:02:05 crc kubenswrapper[4881]: I0126 13:02:05.518246 4881 scope.go:117] "RemoveContainer" containerID="a1ae0b285eae7a18856feb8cf499a6c28eb0011e703b3121a965d368cd2dd321" Jan 26 13:02:05 crc kubenswrapper[4881]: I0126 13:02:05.529238 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a98f888-9933-47bb-9955-b5a7f32e9c82-utilities\") pod \"5a98f888-9933-47bb-9955-b5a7f32e9c82\" (UID: \"5a98f888-9933-47bb-9955-b5a7f32e9c82\") " Jan 26 13:02:05 crc kubenswrapper[4881]: I0126 13:02:05.529382 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qfxs5\" (UniqueName: \"kubernetes.io/projected/5a98f888-9933-47bb-9955-b5a7f32e9c82-kube-api-access-qfxs5\") pod \"5a98f888-9933-47bb-9955-b5a7f32e9c82\" (UID: \"5a98f888-9933-47bb-9955-b5a7f32e9c82\") " Jan 26 13:02:05 crc kubenswrapper[4881]: I0126 13:02:05.529568 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a98f888-9933-47bb-9955-b5a7f32e9c82-catalog-content\") pod \"5a98f888-9933-47bb-9955-b5a7f32e9c82\" (UID: \"5a98f888-9933-47bb-9955-b5a7f32e9c82\") " Jan 26 13:02:05 crc kubenswrapper[4881]: I0126 13:02:05.535834 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a98f888-9933-47bb-9955-b5a7f32e9c82-utilities" (OuterVolumeSpecName: "utilities") pod "5a98f888-9933-47bb-9955-b5a7f32e9c82" (UID: "5a98f888-9933-47bb-9955-b5a7f32e9c82"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:02:05 crc kubenswrapper[4881]: I0126 13:02:05.547672 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a98f888-9933-47bb-9955-b5a7f32e9c82-kube-api-access-qfxs5" (OuterVolumeSpecName: "kube-api-access-qfxs5") pod "5a98f888-9933-47bb-9955-b5a7f32e9c82" (UID: "5a98f888-9933-47bb-9955-b5a7f32e9c82"). InnerVolumeSpecName "kube-api-access-qfxs5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:02:05 crc kubenswrapper[4881]: I0126 13:02:05.552822 4881 scope.go:117] "RemoveContainer" containerID="ec69ac758526093c24556dac70a7278ee74852b7bd3b7d6f225d7c56e66c23ac" Jan 26 13:02:05 crc kubenswrapper[4881]: I0126 13:02:05.609863 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a98f888-9933-47bb-9955-b5a7f32e9c82-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5a98f888-9933-47bb-9955-b5a7f32e9c82" (UID: "5a98f888-9933-47bb-9955-b5a7f32e9c82"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:02:05 crc kubenswrapper[4881]: I0126 13:02:05.633746 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a98f888-9933-47bb-9955-b5a7f32e9c82-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 13:02:05 crc kubenswrapper[4881]: I0126 13:02:05.633786 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a98f888-9933-47bb-9955-b5a7f32e9c82-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 13:02:05 crc kubenswrapper[4881]: I0126 13:02:05.633796 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qfxs5\" (UniqueName: \"kubernetes.io/projected/5a98f888-9933-47bb-9955-b5a7f32e9c82-kube-api-access-qfxs5\") on node \"crc\" DevicePath \"\"" Jan 26 13:02:05 crc kubenswrapper[4881]: I0126 13:02:05.846112 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-26k9v"] Jan 26 13:02:05 crc kubenswrapper[4881]: I0126 13:02:05.854692 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-26k9v"] Jan 26 13:02:06 crc kubenswrapper[4881]: I0126 13:02:06.095389 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a98f888-9933-47bb-9955-b5a7f32e9c82" path="/var/lib/kubelet/pods/5a98f888-9933-47bb-9955-b5a7f32e9c82/volumes" Jan 26 13:02:06 crc kubenswrapper[4881]: I0126 13:02:06.487843 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ff1ef10b-9838-42f7-a2d9-b4907440336c","Type":"ContainerStarted","Data":"50c33526f9178c59e6b8194b5cb88a3b9b184c1bc2e90d48c23e7e7cf0c72dfc"} Jan 26 13:02:06 crc kubenswrapper[4881]: I0126 13:02:06.487882 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ff1ef10b-9838-42f7-a2d9-b4907440336c","Type":"ContainerStarted","Data":"5f739a30bd57a180db68f5f26895ba9084f0f371f4e467a5ec3870fb4c04b094"} Jan 26 13:02:06 crc kubenswrapper[4881]: I0126 13:02:06.490991 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b5a078a6-15d0-4f45-a167-d5ec218210ef","Type":"ContainerStarted","Data":"6e20e0735b9a019110b29ce9fa132c0a3f7c3f92e8258c25d98729163dcfaa47"} Jan 26 13:02:06 crc kubenswrapper[4881]: I0126 13:02:06.493380 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8b08f600-8f58-4e91-8f6f-46628a6030c9","Type":"ContainerStarted","Data":"91b2049dba50431129e7f76e09ad0169b075390d6d87eb88ef6f06af30e6d2ce"} Jan 26 13:02:06 crc kubenswrapper[4881]: I0126 13:02:06.493414 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8b08f600-8f58-4e91-8f6f-46628a6030c9","Type":"ContainerStarted","Data":"a941323ad9960980dbaec3e6cc91879a1d0ce1bb3b2bb5c46d4f01025aa9cdf9"} Jan 26 13:02:06 crc kubenswrapper[4881]: I0126 13:02:06.493469 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="8b08f600-8f58-4e91-8f6f-46628a6030c9" containerName="nova-metadata-log" containerID="cri-o://a941323ad9960980dbaec3e6cc91879a1d0ce1bb3b2bb5c46d4f01025aa9cdf9" gracePeriod=30 Jan 26 13:02:06 crc kubenswrapper[4881]: I0126 13:02:06.493481 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="8b08f600-8f58-4e91-8f6f-46628a6030c9" 
containerName="nova-metadata-metadata" containerID="cri-o://91b2049dba50431129e7f76e09ad0169b075390d6d87eb88ef6f06af30e6d2ce" gracePeriod=30 Jan 26 13:02:06 crc kubenswrapper[4881]: I0126 13:02:06.496982 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"046a94f5-8350-4db1-9bc4-565b40b3c9bc","Type":"ContainerStarted","Data":"fbba250b46d93fb4d5e34e9d06da663c89c14a443777f87d16224a9bf0161b3b"} Jan 26 13:02:06 crc kubenswrapper[4881]: I0126 13:02:06.497071 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="046a94f5-8350-4db1-9bc4-565b40b3c9bc" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://fbba250b46d93fb4d5e34e9d06da663c89c14a443777f87d16224a9bf0161b3b" gracePeriod=30 Jan 26 13:02:06 crc kubenswrapper[4881]: I0126 13:02:06.502757 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-bhqzs" event={"ID":"07b4140b-bd1d-4295-961e-99d18c4406c3","Type":"ContainerStarted","Data":"094fe90e94e3e35810d16d888a4028224036b85f76f85e1088a1f49e9407b9b4"} Jan 26 13:02:06 crc kubenswrapper[4881]: I0126 13:02:06.522841 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.176068261 podStartE2EDuration="6.522822186s" podCreationTimestamp="2026-01-26 13:02:00 +0000 UTC" firstStartedPulling="2026-01-26 13:02:01.953815604 +0000 UTC m=+1594.433125630" lastFinishedPulling="2026-01-26 13:02:05.300569529 +0000 UTC m=+1597.779879555" observedRunningTime="2026-01-26 13:02:06.503667522 +0000 UTC m=+1598.982977548" watchObservedRunningTime="2026-01-26 13:02:06.522822186 +0000 UTC m=+1599.002132222" Jan 26 13:02:06 crc kubenswrapper[4881]: I0126 13:02:06.528147 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.578155302 podStartE2EDuration="6.528130024s" podCreationTimestamp="2026-01-26 13:02:00 +0000 UTC" firstStartedPulling="2026-01-26 13:02:01.350485545 +0000 UTC m=+1593.829795571" lastFinishedPulling="2026-01-26 13:02:05.300460267 +0000 UTC m=+1597.779770293" observedRunningTime="2026-01-26 13:02:06.520768076 +0000 UTC m=+1599.000078102" watchObservedRunningTime="2026-01-26 13:02:06.528130024 +0000 UTC m=+1599.007440050" Jan 26 13:02:06 crc kubenswrapper[4881]: I0126 13:02:06.554138 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.09292102 podStartE2EDuration="6.554121002s" podCreationTimestamp="2026-01-26 13:02:00 +0000 UTC" firstStartedPulling="2026-01-26 13:02:01.839989033 +0000 UTC m=+1594.319299059" lastFinishedPulling="2026-01-26 13:02:05.301189015 +0000 UTC m=+1597.780499041" observedRunningTime="2026-01-26 13:02:06.541833915 +0000 UTC m=+1599.021143961" watchObservedRunningTime="2026-01-26 13:02:06.554121002 +0000 UTC m=+1599.033431028" Jan 26 13:02:06 crc kubenswrapper[4881]: I0126 13:02:06.584192 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-bhqzs" podStartSLOduration=5.584170578 podStartE2EDuration="5.584170578s" podCreationTimestamp="2026-01-26 13:02:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 13:02:06.57678425 +0000 UTC m=+1599.056094276" watchObservedRunningTime="2026-01-26 13:02:06.584170578 +0000 UTC 
m=+1599.063480604" Jan 26 13:02:06 crc kubenswrapper[4881]: I0126 13:02:06.589758 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.21949878 podStartE2EDuration="6.589724622s" podCreationTimestamp="2026-01-26 13:02:00 +0000 UTC" firstStartedPulling="2026-01-26 13:02:01.928894402 +0000 UTC m=+1594.408204428" lastFinishedPulling="2026-01-26 13:02:05.299120244 +0000 UTC m=+1597.778430270" observedRunningTime="2026-01-26 13:02:06.562802112 +0000 UTC m=+1599.042112138" watchObservedRunningTime="2026-01-26 13:02:06.589724622 +0000 UTC m=+1599.069034668" Jan 26 13:02:07 crc kubenswrapper[4881]: I0126 13:02:07.103045 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 13:02:07 crc kubenswrapper[4881]: I0126 13:02:07.106060 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 26 13:02:07 crc kubenswrapper[4881]: I0126 13:02:07.106223 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="95617a83-815e-4e5d-9b7e-4d3bec591ed8" containerName="kube-state-metrics" containerID="cri-o://05e1491045594badb50eafa8562c718ffce59354825d77889a238578d67e8b72" gracePeriod=30 Jan 26 13:02:07 crc kubenswrapper[4881]: I0126 13:02:07.165821 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xgl2q\" (UniqueName: \"kubernetes.io/projected/8b08f600-8f58-4e91-8f6f-46628a6030c9-kube-api-access-xgl2q\") pod \"8b08f600-8f58-4e91-8f6f-46628a6030c9\" (UID: \"8b08f600-8f58-4e91-8f6f-46628a6030c9\") " Jan 26 13:02:07 crc kubenswrapper[4881]: I0126 13:02:07.165971 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8b08f600-8f58-4e91-8f6f-46628a6030c9-logs\") pod \"8b08f600-8f58-4e91-8f6f-46628a6030c9\" (UID: \"8b08f600-8f58-4e91-8f6f-46628a6030c9\") " Jan 26 13:02:07 crc kubenswrapper[4881]: I0126 13:02:07.165989 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b08f600-8f58-4e91-8f6f-46628a6030c9-config-data\") pod \"8b08f600-8f58-4e91-8f6f-46628a6030c9\" (UID: \"8b08f600-8f58-4e91-8f6f-46628a6030c9\") " Jan 26 13:02:07 crc kubenswrapper[4881]: I0126 13:02:07.166128 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b08f600-8f58-4e91-8f6f-46628a6030c9-combined-ca-bundle\") pod \"8b08f600-8f58-4e91-8f6f-46628a6030c9\" (UID: \"8b08f600-8f58-4e91-8f6f-46628a6030c9\") " Jan 26 13:02:07 crc kubenswrapper[4881]: I0126 13:02:07.166354 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b08f600-8f58-4e91-8f6f-46628a6030c9-logs" (OuterVolumeSpecName: "logs") pod "8b08f600-8f58-4e91-8f6f-46628a6030c9" (UID: "8b08f600-8f58-4e91-8f6f-46628a6030c9"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:02:07 crc kubenswrapper[4881]: I0126 13:02:07.167455 4881 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8b08f600-8f58-4e91-8f6f-46628a6030c9-logs\") on node \"crc\" DevicePath \"\"" Jan 26 13:02:07 crc kubenswrapper[4881]: I0126 13:02:07.184142 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b08f600-8f58-4e91-8f6f-46628a6030c9-kube-api-access-xgl2q" (OuterVolumeSpecName: "kube-api-access-xgl2q") pod "8b08f600-8f58-4e91-8f6f-46628a6030c9" (UID: "8b08f600-8f58-4e91-8f6f-46628a6030c9"). InnerVolumeSpecName "kube-api-access-xgl2q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:02:07 crc kubenswrapper[4881]: I0126 13:02:07.200032 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b08f600-8f58-4e91-8f6f-46628a6030c9-config-data" (OuterVolumeSpecName: "config-data") pod "8b08f600-8f58-4e91-8f6f-46628a6030c9" (UID: "8b08f600-8f58-4e91-8f6f-46628a6030c9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:02:07 crc kubenswrapper[4881]: I0126 13:02:07.237159 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b08f600-8f58-4e91-8f6f-46628a6030c9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8b08f600-8f58-4e91-8f6f-46628a6030c9" (UID: "8b08f600-8f58-4e91-8f6f-46628a6030c9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:02:07 crc kubenswrapper[4881]: I0126 13:02:07.268813 4881 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b08f600-8f58-4e91-8f6f-46628a6030c9-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 13:02:07 crc kubenswrapper[4881]: I0126 13:02:07.268855 4881 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b08f600-8f58-4e91-8f6f-46628a6030c9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 13:02:07 crc kubenswrapper[4881]: I0126 13:02:07.268866 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xgl2q\" (UniqueName: \"kubernetes.io/projected/8b08f600-8f58-4e91-8f6f-46628a6030c9-kube-api-access-xgl2q\") on node \"crc\" DevicePath \"\"" Jan 26 13:02:07 crc kubenswrapper[4881]: I0126 13:02:07.526540 4881 generic.go:334] "Generic (PLEG): container finished" podID="8b08f600-8f58-4e91-8f6f-46628a6030c9" containerID="91b2049dba50431129e7f76e09ad0169b075390d6d87eb88ef6f06af30e6d2ce" exitCode=0 Jan 26 13:02:07 crc kubenswrapper[4881]: I0126 13:02:07.526570 4881 generic.go:334] "Generic (PLEG): container finished" podID="8b08f600-8f58-4e91-8f6f-46628a6030c9" containerID="a941323ad9960980dbaec3e6cc91879a1d0ce1bb3b2bb5c46d4f01025aa9cdf9" exitCode=143 Jan 26 13:02:07 crc kubenswrapper[4881]: I0126 13:02:07.526609 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 13:02:07 crc kubenswrapper[4881]: I0126 13:02:07.526650 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8b08f600-8f58-4e91-8f6f-46628a6030c9","Type":"ContainerDied","Data":"91b2049dba50431129e7f76e09ad0169b075390d6d87eb88ef6f06af30e6d2ce"} Jan 26 13:02:07 crc kubenswrapper[4881]: I0126 13:02:07.526677 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8b08f600-8f58-4e91-8f6f-46628a6030c9","Type":"ContainerDied","Data":"a941323ad9960980dbaec3e6cc91879a1d0ce1bb3b2bb5c46d4f01025aa9cdf9"} Jan 26 13:02:07 crc kubenswrapper[4881]: I0126 13:02:07.526688 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8b08f600-8f58-4e91-8f6f-46628a6030c9","Type":"ContainerDied","Data":"08358dadf5ddabbcb93e6dfd259dcc66ea5c4bc142b9dda41ed57e3ba79c95a3"} Jan 26 13:02:07 crc kubenswrapper[4881]: I0126 13:02:07.526705 4881 scope.go:117] "RemoveContainer" containerID="91b2049dba50431129e7f76e09ad0169b075390d6d87eb88ef6f06af30e6d2ce" Jan 26 13:02:07 crc kubenswrapper[4881]: I0126 13:02:07.577613 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 13:02:07 crc kubenswrapper[4881]: I0126 13:02:07.590990 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 13:02:07 crc kubenswrapper[4881]: I0126 13:02:07.607393 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 26 13:02:07 crc kubenswrapper[4881]: E0126 13:02:07.607811 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b08f600-8f58-4e91-8f6f-46628a6030c9" containerName="nova-metadata-log" Jan 26 13:02:07 crc kubenswrapper[4881]: I0126 13:02:07.607830 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b08f600-8f58-4e91-8f6f-46628a6030c9" containerName="nova-metadata-log" Jan 26 13:02:07 crc kubenswrapper[4881]: E0126 13:02:07.607845 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a98f888-9933-47bb-9955-b5a7f32e9c82" containerName="extract-content" Jan 26 13:02:07 crc kubenswrapper[4881]: I0126 13:02:07.607853 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a98f888-9933-47bb-9955-b5a7f32e9c82" containerName="extract-content" Jan 26 13:02:07 crc kubenswrapper[4881]: E0126 13:02:07.607875 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a98f888-9933-47bb-9955-b5a7f32e9c82" containerName="registry-server" Jan 26 13:02:07 crc kubenswrapper[4881]: I0126 13:02:07.607881 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a98f888-9933-47bb-9955-b5a7f32e9c82" containerName="registry-server" Jan 26 13:02:07 crc kubenswrapper[4881]: E0126 13:02:07.607892 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b08f600-8f58-4e91-8f6f-46628a6030c9" containerName="nova-metadata-metadata" Jan 26 13:02:07 crc kubenswrapper[4881]: I0126 13:02:07.607898 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b08f600-8f58-4e91-8f6f-46628a6030c9" containerName="nova-metadata-metadata" Jan 26 13:02:07 crc kubenswrapper[4881]: E0126 13:02:07.607906 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a98f888-9933-47bb-9955-b5a7f32e9c82" containerName="extract-utilities" Jan 26 13:02:07 crc kubenswrapper[4881]: I0126 13:02:07.607912 4881 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="5a98f888-9933-47bb-9955-b5a7f32e9c82" containerName="extract-utilities" Jan 26 13:02:07 crc kubenswrapper[4881]: I0126 13:02:07.608135 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a98f888-9933-47bb-9955-b5a7f32e9c82" containerName="registry-server" Jan 26 13:02:07 crc kubenswrapper[4881]: I0126 13:02:07.608149 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b08f600-8f58-4e91-8f6f-46628a6030c9" containerName="nova-metadata-log" Jan 26 13:02:07 crc kubenswrapper[4881]: I0126 13:02:07.608181 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b08f600-8f58-4e91-8f6f-46628a6030c9" containerName="nova-metadata-metadata" Jan 26 13:02:07 crc kubenswrapper[4881]: I0126 13:02:07.609152 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 13:02:07 crc kubenswrapper[4881]: I0126 13:02:07.609966 4881 scope.go:117] "RemoveContainer" containerID="a941323ad9960980dbaec3e6cc91879a1d0ce1bb3b2bb5c46d4f01025aa9cdf9" Jan 26 13:02:07 crc kubenswrapper[4881]: I0126 13:02:07.611776 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 26 13:02:07 crc kubenswrapper[4881]: I0126 13:02:07.612009 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 26 13:02:07 crc kubenswrapper[4881]: I0126 13:02:07.629379 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 13:02:07 crc kubenswrapper[4881]: I0126 13:02:07.676475 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0abb3e29-3399-43d4-86bc-f36f23b3f682-logs\") pod \"nova-metadata-0\" (UID: \"0abb3e29-3399-43d4-86bc-f36f23b3f682\") " pod="openstack/nova-metadata-0" Jan 26 13:02:07 crc kubenswrapper[4881]: I0126 13:02:07.676593 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfdg6\" (UniqueName: \"kubernetes.io/projected/0abb3e29-3399-43d4-86bc-f36f23b3f682-kube-api-access-rfdg6\") pod \"nova-metadata-0\" (UID: \"0abb3e29-3399-43d4-86bc-f36f23b3f682\") " pod="openstack/nova-metadata-0" Jan 26 13:02:07 crc kubenswrapper[4881]: I0126 13:02:07.676716 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0abb3e29-3399-43d4-86bc-f36f23b3f682-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"0abb3e29-3399-43d4-86bc-f36f23b3f682\") " pod="openstack/nova-metadata-0" Jan 26 13:02:07 crc kubenswrapper[4881]: I0126 13:02:07.676818 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0abb3e29-3399-43d4-86bc-f36f23b3f682-config-data\") pod \"nova-metadata-0\" (UID: \"0abb3e29-3399-43d4-86bc-f36f23b3f682\") " pod="openstack/nova-metadata-0" Jan 26 13:02:07 crc kubenswrapper[4881]: I0126 13:02:07.676851 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0abb3e29-3399-43d4-86bc-f36f23b3f682-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0abb3e29-3399-43d4-86bc-f36f23b3f682\") " pod="openstack/nova-metadata-0" Jan 26 13:02:07 crc kubenswrapper[4881]: I0126 13:02:07.748429 4881 scope.go:117] 
"RemoveContainer" containerID="91b2049dba50431129e7f76e09ad0169b075390d6d87eb88ef6f06af30e6d2ce" Jan 26 13:02:07 crc kubenswrapper[4881]: E0126 13:02:07.749374 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"91b2049dba50431129e7f76e09ad0169b075390d6d87eb88ef6f06af30e6d2ce\": container with ID starting with 91b2049dba50431129e7f76e09ad0169b075390d6d87eb88ef6f06af30e6d2ce not found: ID does not exist" containerID="91b2049dba50431129e7f76e09ad0169b075390d6d87eb88ef6f06af30e6d2ce" Jan 26 13:02:07 crc kubenswrapper[4881]: I0126 13:02:07.749401 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"91b2049dba50431129e7f76e09ad0169b075390d6d87eb88ef6f06af30e6d2ce"} err="failed to get container status \"91b2049dba50431129e7f76e09ad0169b075390d6d87eb88ef6f06af30e6d2ce\": rpc error: code = NotFound desc = could not find container \"91b2049dba50431129e7f76e09ad0169b075390d6d87eb88ef6f06af30e6d2ce\": container with ID starting with 91b2049dba50431129e7f76e09ad0169b075390d6d87eb88ef6f06af30e6d2ce not found: ID does not exist" Jan 26 13:02:07 crc kubenswrapper[4881]: I0126 13:02:07.749422 4881 scope.go:117] "RemoveContainer" containerID="a941323ad9960980dbaec3e6cc91879a1d0ce1bb3b2bb5c46d4f01025aa9cdf9" Jan 26 13:02:07 crc kubenswrapper[4881]: E0126 13:02:07.749719 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a941323ad9960980dbaec3e6cc91879a1d0ce1bb3b2bb5c46d4f01025aa9cdf9\": container with ID starting with a941323ad9960980dbaec3e6cc91879a1d0ce1bb3b2bb5c46d4f01025aa9cdf9 not found: ID does not exist" containerID="a941323ad9960980dbaec3e6cc91879a1d0ce1bb3b2bb5c46d4f01025aa9cdf9" Jan 26 13:02:07 crc kubenswrapper[4881]: I0126 13:02:07.749742 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a941323ad9960980dbaec3e6cc91879a1d0ce1bb3b2bb5c46d4f01025aa9cdf9"} err="failed to get container status \"a941323ad9960980dbaec3e6cc91879a1d0ce1bb3b2bb5c46d4f01025aa9cdf9\": rpc error: code = NotFound desc = could not find container \"a941323ad9960980dbaec3e6cc91879a1d0ce1bb3b2bb5c46d4f01025aa9cdf9\": container with ID starting with a941323ad9960980dbaec3e6cc91879a1d0ce1bb3b2bb5c46d4f01025aa9cdf9 not found: ID does not exist" Jan 26 13:02:07 crc kubenswrapper[4881]: I0126 13:02:07.749754 4881 scope.go:117] "RemoveContainer" containerID="91b2049dba50431129e7f76e09ad0169b075390d6d87eb88ef6f06af30e6d2ce" Jan 26 13:02:07 crc kubenswrapper[4881]: I0126 13:02:07.750013 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"91b2049dba50431129e7f76e09ad0169b075390d6d87eb88ef6f06af30e6d2ce"} err="failed to get container status \"91b2049dba50431129e7f76e09ad0169b075390d6d87eb88ef6f06af30e6d2ce\": rpc error: code = NotFound desc = could not find container \"91b2049dba50431129e7f76e09ad0169b075390d6d87eb88ef6f06af30e6d2ce\": container with ID starting with 91b2049dba50431129e7f76e09ad0169b075390d6d87eb88ef6f06af30e6d2ce not found: ID does not exist" Jan 26 13:02:07 crc kubenswrapper[4881]: I0126 13:02:07.750031 4881 scope.go:117] "RemoveContainer" containerID="a941323ad9960980dbaec3e6cc91879a1d0ce1bb3b2bb5c46d4f01025aa9cdf9" Jan 26 13:02:07 crc kubenswrapper[4881]: I0126 13:02:07.750222 4881 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"a941323ad9960980dbaec3e6cc91879a1d0ce1bb3b2bb5c46d4f01025aa9cdf9"} err="failed to get container status \"a941323ad9960980dbaec3e6cc91879a1d0ce1bb3b2bb5c46d4f01025aa9cdf9\": rpc error: code = NotFound desc = could not find container \"a941323ad9960980dbaec3e6cc91879a1d0ce1bb3b2bb5c46d4f01025aa9cdf9\": container with ID starting with a941323ad9960980dbaec3e6cc91879a1d0ce1bb3b2bb5c46d4f01025aa9cdf9 not found: ID does not exist" Jan 26 13:02:07 crc kubenswrapper[4881]: I0126 13:02:07.781000 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0abb3e29-3399-43d4-86bc-f36f23b3f682-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"0abb3e29-3399-43d4-86bc-f36f23b3f682\") " pod="openstack/nova-metadata-0" Jan 26 13:02:07 crc kubenswrapper[4881]: I0126 13:02:07.781122 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0abb3e29-3399-43d4-86bc-f36f23b3f682-config-data\") pod \"nova-metadata-0\" (UID: \"0abb3e29-3399-43d4-86bc-f36f23b3f682\") " pod="openstack/nova-metadata-0" Jan 26 13:02:07 crc kubenswrapper[4881]: I0126 13:02:07.781151 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0abb3e29-3399-43d4-86bc-f36f23b3f682-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0abb3e29-3399-43d4-86bc-f36f23b3f682\") " pod="openstack/nova-metadata-0" Jan 26 13:02:07 crc kubenswrapper[4881]: I0126 13:02:07.781211 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0abb3e29-3399-43d4-86bc-f36f23b3f682-logs\") pod \"nova-metadata-0\" (UID: \"0abb3e29-3399-43d4-86bc-f36f23b3f682\") " pod="openstack/nova-metadata-0" Jan 26 13:02:07 crc kubenswrapper[4881]: I0126 13:02:07.781256 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfdg6\" (UniqueName: \"kubernetes.io/projected/0abb3e29-3399-43d4-86bc-f36f23b3f682-kube-api-access-rfdg6\") pod \"nova-metadata-0\" (UID: \"0abb3e29-3399-43d4-86bc-f36f23b3f682\") " pod="openstack/nova-metadata-0" Jan 26 13:02:07 crc kubenswrapper[4881]: I0126 13:02:07.782454 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0abb3e29-3399-43d4-86bc-f36f23b3f682-logs\") pod \"nova-metadata-0\" (UID: \"0abb3e29-3399-43d4-86bc-f36f23b3f682\") " pod="openstack/nova-metadata-0" Jan 26 13:02:07 crc kubenswrapper[4881]: I0126 13:02:07.786766 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0abb3e29-3399-43d4-86bc-f36f23b3f682-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"0abb3e29-3399-43d4-86bc-f36f23b3f682\") " pod="openstack/nova-metadata-0" Jan 26 13:02:07 crc kubenswrapper[4881]: I0126 13:02:07.790149 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0abb3e29-3399-43d4-86bc-f36f23b3f682-config-data\") pod \"nova-metadata-0\" (UID: \"0abb3e29-3399-43d4-86bc-f36f23b3f682\") " pod="openstack/nova-metadata-0" Jan 26 13:02:07 crc kubenswrapper[4881]: I0126 13:02:07.799084 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfdg6\" (UniqueName: 
\"kubernetes.io/projected/0abb3e29-3399-43d4-86bc-f36f23b3f682-kube-api-access-rfdg6\") pod \"nova-metadata-0\" (UID: \"0abb3e29-3399-43d4-86bc-f36f23b3f682\") " pod="openstack/nova-metadata-0" Jan 26 13:02:07 crc kubenswrapper[4881]: I0126 13:02:07.799574 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0abb3e29-3399-43d4-86bc-f36f23b3f682-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0abb3e29-3399-43d4-86bc-f36f23b3f682\") " pod="openstack/nova-metadata-0" Jan 26 13:02:07 crc kubenswrapper[4881]: I0126 13:02:07.989804 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 13:02:08 crc kubenswrapper[4881]: I0126 13:02:08.111247 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b08f600-8f58-4e91-8f6f-46628a6030c9" path="/var/lib/kubelet/pods/8b08f600-8f58-4e91-8f6f-46628a6030c9/volumes" Jan 26 13:02:08 crc kubenswrapper[4881]: I0126 13:02:08.517413 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 13:02:08 crc kubenswrapper[4881]: W0126 13:02:08.519894 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0abb3e29_3399_43d4_86bc_f36f23b3f682.slice/crio-5cecc6fdaf063ffaa924c7e7c5ee76d60b6005ea4959995ceb589f42d7e60625 WatchSource:0}: Error finding container 5cecc6fdaf063ffaa924c7e7c5ee76d60b6005ea4959995ceb589f42d7e60625: Status 404 returned error can't find the container with id 5cecc6fdaf063ffaa924c7e7c5ee76d60b6005ea4959995ceb589f42d7e60625 Jan 26 13:02:08 crc kubenswrapper[4881]: I0126 13:02:08.544939 4881 generic.go:334] "Generic (PLEG): container finished" podID="95617a83-815e-4e5d-9b7e-4d3bec591ed8" containerID="05e1491045594badb50eafa8562c718ffce59354825d77889a238578d67e8b72" exitCode=2 Jan 26 13:02:08 crc kubenswrapper[4881]: I0126 13:02:08.545030 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"95617a83-815e-4e5d-9b7e-4d3bec591ed8","Type":"ContainerDied","Data":"05e1491045594badb50eafa8562c718ffce59354825d77889a238578d67e8b72"} Jan 26 13:02:08 crc kubenswrapper[4881]: I0126 13:02:08.553335 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0abb3e29-3399-43d4-86bc-f36f23b3f682","Type":"ContainerStarted","Data":"5cecc6fdaf063ffaa924c7e7c5ee76d60b6005ea4959995ceb589f42d7e60625"} Jan 26 13:02:08 crc kubenswrapper[4881]: I0126 13:02:08.627239 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 26 13:02:08 crc kubenswrapper[4881]: I0126 13:02:08.702592 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gzqq4\" (UniqueName: \"kubernetes.io/projected/95617a83-815e-4e5d-9b7e-4d3bec591ed8-kube-api-access-gzqq4\") pod \"95617a83-815e-4e5d-9b7e-4d3bec591ed8\" (UID: \"95617a83-815e-4e5d-9b7e-4d3bec591ed8\") " Jan 26 13:02:08 crc kubenswrapper[4881]: I0126 13:02:08.708016 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95617a83-815e-4e5d-9b7e-4d3bec591ed8-kube-api-access-gzqq4" (OuterVolumeSpecName: "kube-api-access-gzqq4") pod "95617a83-815e-4e5d-9b7e-4d3bec591ed8" (UID: "95617a83-815e-4e5d-9b7e-4d3bec591ed8"). InnerVolumeSpecName "kube-api-access-gzqq4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:02:08 crc kubenswrapper[4881]: I0126 13:02:08.807639 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gzqq4\" (UniqueName: \"kubernetes.io/projected/95617a83-815e-4e5d-9b7e-4d3bec591ed8-kube-api-access-gzqq4\") on node \"crc\" DevicePath \"\"" Jan 26 13:02:09 crc kubenswrapper[4881]: I0126 13:02:09.336017 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 13:02:09 crc kubenswrapper[4881]: I0126 13:02:09.336378 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="23de4f49-5164-41a9-a94c-ef0199801723" containerName="ceilometer-central-agent" containerID="cri-o://0952eb3c0672ad1a35c083736755f5b8e1080e6579cd5490612199d5f6c92cba" gracePeriod=30 Jan 26 13:02:09 crc kubenswrapper[4881]: I0126 13:02:09.336446 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="23de4f49-5164-41a9-a94c-ef0199801723" containerName="proxy-httpd" containerID="cri-o://67730dfc763196cefd269986c9f1291193f57a2bea98c3ee23af056f25d9db8c" gracePeriod=30 Jan 26 13:02:09 crc kubenswrapper[4881]: I0126 13:02:09.336443 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="23de4f49-5164-41a9-a94c-ef0199801723" containerName="sg-core" containerID="cri-o://64dd634f6e163e4b22c8f796e5145fac6385f5e81fb0b90f079fa8d1159330e5" gracePeriod=30 Jan 26 13:02:09 crc kubenswrapper[4881]: I0126 13:02:09.336445 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="23de4f49-5164-41a9-a94c-ef0199801723" containerName="ceilometer-notification-agent" containerID="cri-o://8bbcf32208c854f415247cab060da9b73166ce2a8e8638723fbc704f1897300c" gracePeriod=30 Jan 26 13:02:09 crc kubenswrapper[4881]: I0126 13:02:09.562579 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0abb3e29-3399-43d4-86bc-f36f23b3f682","Type":"ContainerStarted","Data":"f8918d6c0d25b8b8d63ec42af464fac7f5c8c544df0008090201bc71aa174e1d"} Jan 26 13:02:09 crc kubenswrapper[4881]: I0126 13:02:09.562828 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0abb3e29-3399-43d4-86bc-f36f23b3f682","Type":"ContainerStarted","Data":"a3d8dde2610bb63f2192431e98166e757a31e163828850d7f2451586db7724ce"} Jan 26 13:02:09 crc kubenswrapper[4881]: I0126 13:02:09.565330 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"95617a83-815e-4e5d-9b7e-4d3bec591ed8","Type":"ContainerDied","Data":"2a31754e82097d923b0f1f774b89d97006f210d9c80c1bd9ed829822497774af"} Jan 26 13:02:09 crc kubenswrapper[4881]: I0126 13:02:09.565481 4881 scope.go:117] "RemoveContainer" containerID="05e1491045594badb50eafa8562c718ffce59354825d77889a238578d67e8b72" Jan 26 13:02:09 crc kubenswrapper[4881]: I0126 13:02:09.565355 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 26 13:02:09 crc kubenswrapper[4881]: I0126 13:02:09.568817 4881 generic.go:334] "Generic (PLEG): container finished" podID="23de4f49-5164-41a9-a94c-ef0199801723" containerID="64dd634f6e163e4b22c8f796e5145fac6385f5e81fb0b90f079fa8d1159330e5" exitCode=2 Jan 26 13:02:09 crc kubenswrapper[4881]: I0126 13:02:09.568864 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"23de4f49-5164-41a9-a94c-ef0199801723","Type":"ContainerDied","Data":"64dd634f6e163e4b22c8f796e5145fac6385f5e81fb0b90f079fa8d1159330e5"} Jan 26 13:02:09 crc kubenswrapper[4881]: I0126 13:02:09.628923 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.628902375 podStartE2EDuration="2.628902375s" podCreationTimestamp="2026-01-26 13:02:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 13:02:09.613463992 +0000 UTC m=+1602.092774018" watchObservedRunningTime="2026-01-26 13:02:09.628902375 +0000 UTC m=+1602.108212391" Jan 26 13:02:09 crc kubenswrapper[4881]: I0126 13:02:09.647837 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 26 13:02:09 crc kubenswrapper[4881]: I0126 13:02:09.675409 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 26 13:02:09 crc kubenswrapper[4881]: I0126 13:02:09.689499 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 26 13:02:09 crc kubenswrapper[4881]: E0126 13:02:09.690101 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95617a83-815e-4e5d-9b7e-4d3bec591ed8" containerName="kube-state-metrics" Jan 26 13:02:09 crc kubenswrapper[4881]: I0126 13:02:09.690201 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="95617a83-815e-4e5d-9b7e-4d3bec591ed8" containerName="kube-state-metrics" Jan 26 13:02:09 crc kubenswrapper[4881]: I0126 13:02:09.690485 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="95617a83-815e-4e5d-9b7e-4d3bec591ed8" containerName="kube-state-metrics" Jan 26 13:02:09 crc kubenswrapper[4881]: I0126 13:02:09.692052 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 26 13:02:09 crc kubenswrapper[4881]: I0126 13:02:09.696807 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Jan 26 13:02:09 crc kubenswrapper[4881]: I0126 13:02:09.696989 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Jan 26 13:02:09 crc kubenswrapper[4881]: I0126 13:02:09.698144 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 26 13:02:09 crc kubenswrapper[4881]: I0126 13:02:09.725715 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/a8e06d36-6fd3-40af-8066-f1cbb8d46a16-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"a8e06d36-6fd3-40af-8066-f1cbb8d46a16\") " pod="openstack/kube-state-metrics-0" Jan 26 13:02:09 crc kubenswrapper[4881]: I0126 13:02:09.725819 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8e06d36-6fd3-40af-8066-f1cbb8d46a16-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"a8e06d36-6fd3-40af-8066-f1cbb8d46a16\") " pod="openstack/kube-state-metrics-0" Jan 26 13:02:09 crc kubenswrapper[4881]: I0126 13:02:09.725984 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/a8e06d36-6fd3-40af-8066-f1cbb8d46a16-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"a8e06d36-6fd3-40af-8066-f1cbb8d46a16\") " pod="openstack/kube-state-metrics-0" Jan 26 13:02:09 crc kubenswrapper[4881]: I0126 13:02:09.726038 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-222dw\" (UniqueName: \"kubernetes.io/projected/a8e06d36-6fd3-40af-8066-f1cbb8d46a16-kube-api-access-222dw\") pod \"kube-state-metrics-0\" (UID: \"a8e06d36-6fd3-40af-8066-f1cbb8d46a16\") " pod="openstack/kube-state-metrics-0" Jan 26 13:02:09 crc kubenswrapper[4881]: I0126 13:02:09.828690 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/a8e06d36-6fd3-40af-8066-f1cbb8d46a16-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"a8e06d36-6fd3-40af-8066-f1cbb8d46a16\") " pod="openstack/kube-state-metrics-0" Jan 26 13:02:09 crc kubenswrapper[4881]: I0126 13:02:09.828818 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8e06d36-6fd3-40af-8066-f1cbb8d46a16-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"a8e06d36-6fd3-40af-8066-f1cbb8d46a16\") " pod="openstack/kube-state-metrics-0" Jan 26 13:02:09 crc kubenswrapper[4881]: I0126 13:02:09.828862 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/a8e06d36-6fd3-40af-8066-f1cbb8d46a16-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"a8e06d36-6fd3-40af-8066-f1cbb8d46a16\") " pod="openstack/kube-state-metrics-0" Jan 26 13:02:09 crc kubenswrapper[4881]: I0126 13:02:09.828878 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-222dw\" 
(UniqueName: \"kubernetes.io/projected/a8e06d36-6fd3-40af-8066-f1cbb8d46a16-kube-api-access-222dw\") pod \"kube-state-metrics-0\" (UID: \"a8e06d36-6fd3-40af-8066-f1cbb8d46a16\") " pod="openstack/kube-state-metrics-0" Jan 26 13:02:09 crc kubenswrapper[4881]: I0126 13:02:09.833813 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/a8e06d36-6fd3-40af-8066-f1cbb8d46a16-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"a8e06d36-6fd3-40af-8066-f1cbb8d46a16\") " pod="openstack/kube-state-metrics-0" Jan 26 13:02:09 crc kubenswrapper[4881]: I0126 13:02:09.833921 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/a8e06d36-6fd3-40af-8066-f1cbb8d46a16-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"a8e06d36-6fd3-40af-8066-f1cbb8d46a16\") " pod="openstack/kube-state-metrics-0" Jan 26 13:02:09 crc kubenswrapper[4881]: I0126 13:02:09.834739 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8e06d36-6fd3-40af-8066-f1cbb8d46a16-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"a8e06d36-6fd3-40af-8066-f1cbb8d46a16\") " pod="openstack/kube-state-metrics-0" Jan 26 13:02:09 crc kubenswrapper[4881]: I0126 13:02:09.849902 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-222dw\" (UniqueName: \"kubernetes.io/projected/a8e06d36-6fd3-40af-8066-f1cbb8d46a16-kube-api-access-222dw\") pod \"kube-state-metrics-0\" (UID: \"a8e06d36-6fd3-40af-8066-f1cbb8d46a16\") " pod="openstack/kube-state-metrics-0" Jan 26 13:02:10 crc kubenswrapper[4881]: I0126 13:02:10.017406 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 26 13:02:10 crc kubenswrapper[4881]: I0126 13:02:10.100432 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95617a83-815e-4e5d-9b7e-4d3bec591ed8" path="/var/lib/kubelet/pods/95617a83-815e-4e5d-9b7e-4d3bec591ed8/volumes" Jan 26 13:02:10 crc kubenswrapper[4881]: W0126 13:02:10.534032 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda8e06d36_6fd3_40af_8066_f1cbb8d46a16.slice/crio-52c8f9a62c05c842617f967b4af9c9fede87714199b36c47ec18212594a347c3 WatchSource:0}: Error finding container 52c8f9a62c05c842617f967b4af9c9fede87714199b36c47ec18212594a347c3: Status 404 returned error can't find the container with id 52c8f9a62c05c842617f967b4af9c9fede87714199b36c47ec18212594a347c3 Jan 26 13:02:10 crc kubenswrapper[4881]: I0126 13:02:10.545873 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 26 13:02:10 crc kubenswrapper[4881]: I0126 13:02:10.585696 4881 generic.go:334] "Generic (PLEG): container finished" podID="23de4f49-5164-41a9-a94c-ef0199801723" containerID="67730dfc763196cefd269986c9f1291193f57a2bea98c3ee23af056f25d9db8c" exitCode=0 Jan 26 13:02:10 crc kubenswrapper[4881]: I0126 13:02:10.585748 4881 generic.go:334] "Generic (PLEG): container finished" podID="23de4f49-5164-41a9-a94c-ef0199801723" containerID="0952eb3c0672ad1a35c083736755f5b8e1080e6579cd5490612199d5f6c92cba" exitCode=0 Jan 26 13:02:10 crc kubenswrapper[4881]: I0126 13:02:10.585837 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"23de4f49-5164-41a9-a94c-ef0199801723","Type":"ContainerDied","Data":"67730dfc763196cefd269986c9f1291193f57a2bea98c3ee23af056f25d9db8c"} Jan 26 13:02:10 crc kubenswrapper[4881]: I0126 13:02:10.585867 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"23de4f49-5164-41a9-a94c-ef0199801723","Type":"ContainerDied","Data":"0952eb3c0672ad1a35c083736755f5b8e1080e6579cd5490612199d5f6c92cba"} Jan 26 13:02:10 crc kubenswrapper[4881]: I0126 13:02:10.588627 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"a8e06d36-6fd3-40af-8066-f1cbb8d46a16","Type":"ContainerStarted","Data":"52c8f9a62c05c842617f967b4af9c9fede87714199b36c47ec18212594a347c3"} Jan 26 13:02:10 crc kubenswrapper[4881]: I0126 13:02:10.833437 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 26 13:02:11 crc kubenswrapper[4881]: I0126 13:02:11.061384 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 26 13:02:11 crc kubenswrapper[4881]: I0126 13:02:11.061662 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 26 13:02:11 crc kubenswrapper[4881]: I0126 13:02:11.087401 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 26 13:02:11 crc kubenswrapper[4881]: I0126 13:02:11.175442 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 26 13:02:11 crc kubenswrapper[4881]: I0126 13:02:11.175478 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 26 13:02:11 crc kubenswrapper[4881]: I0126 13:02:11.191654 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack/dnsmasq-dns-7777964479-66ccm" Jan 26 13:02:11 crc kubenswrapper[4881]: I0126 13:02:11.252384 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5d675956bc-pd9zd"] Jan 26 13:02:11 crc kubenswrapper[4881]: I0126 13:02:11.252734 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5d675956bc-pd9zd" podUID="5790d0bd-1660-4c11-af86-e99d9a2aabf8" containerName="dnsmasq-dns" containerID="cri-o://b912e0916cd54d1360e533a0393cf94b47122f9712df5756b72a472f4e47910a" gracePeriod=10 Jan 26 13:02:11 crc kubenswrapper[4881]: I0126 13:02:11.329932 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5d675956bc-pd9zd" podUID="5790d0bd-1660-4c11-af86-e99d9a2aabf8" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.181:5353: connect: connection refused" Jan 26 13:02:11 crc kubenswrapper[4881]: I0126 13:02:11.599606 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"a8e06d36-6fd3-40af-8066-f1cbb8d46a16","Type":"ContainerStarted","Data":"82cf533e0bc90f7409c8b3787b2fb1716ddd4c3b386017fa1d66d00116a34153"} Jan 26 13:02:11 crc kubenswrapper[4881]: I0126 13:02:11.599875 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 26 13:02:11 crc kubenswrapper[4881]: I0126 13:02:11.601351 4881 generic.go:334] "Generic (PLEG): container finished" podID="36c2d906-18af-4025-ac9d-b142b34586f3" containerID="38f68b0a65c4307752767c55853fb1a2715f331edc183bd70b07536a1159e5b4" exitCode=0 Jan 26 13:02:11 crc kubenswrapper[4881]: I0126 13:02:11.601437 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-nzdjt" event={"ID":"36c2d906-18af-4025-ac9d-b142b34586f3","Type":"ContainerDied","Data":"38f68b0a65c4307752767c55853fb1a2715f331edc183bd70b07536a1159e5b4"} Jan 26 13:02:11 crc kubenswrapper[4881]: I0126 13:02:11.604050 4881 generic.go:334] "Generic (PLEG): container finished" podID="5790d0bd-1660-4c11-af86-e99d9a2aabf8" containerID="b912e0916cd54d1360e533a0393cf94b47122f9712df5756b72a472f4e47910a" exitCode=0 Jan 26 13:02:11 crc kubenswrapper[4881]: I0126 13:02:11.604158 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d675956bc-pd9zd" event={"ID":"5790d0bd-1660-4c11-af86-e99d9a2aabf8","Type":"ContainerDied","Data":"b912e0916cd54d1360e533a0393cf94b47122f9712df5756b72a472f4e47910a"} Jan 26 13:02:11 crc kubenswrapper[4881]: I0126 13:02:11.622569 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.003538623 podStartE2EDuration="2.622551102s" podCreationTimestamp="2026-01-26 13:02:09 +0000 UTC" firstStartedPulling="2026-01-26 13:02:10.541820736 +0000 UTC m=+1603.021130782" lastFinishedPulling="2026-01-26 13:02:11.160833235 +0000 UTC m=+1603.640143261" observedRunningTime="2026-01-26 13:02:11.613076124 +0000 UTC m=+1604.092386150" watchObservedRunningTime="2026-01-26 13:02:11.622551102 +0000 UTC m=+1604.101861128" Jan 26 13:02:11 crc kubenswrapper[4881]: I0126 13:02:11.642682 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 26 13:02:12 crc kubenswrapper[4881]: I0126 13:02:12.258715 4881 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="ff1ef10b-9838-42f7-a2d9-b4907440336c" containerName="nova-api-api" probeResult="failure" 
output="Get \"http://10.217.0.211:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 13:02:12 crc kubenswrapper[4881]: I0126 13:02:12.258845 4881 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="ff1ef10b-9838-42f7-a2d9-b4907440336c" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.211:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 13:02:12 crc kubenswrapper[4881]: I0126 13:02:12.833470 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d675956bc-pd9zd" Jan 26 13:02:12 crc kubenswrapper[4881]: I0126 13:02:12.899013 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dhzkh\" (UniqueName: \"kubernetes.io/projected/5790d0bd-1660-4c11-af86-e99d9a2aabf8-kube-api-access-dhzkh\") pod \"5790d0bd-1660-4c11-af86-e99d9a2aabf8\" (UID: \"5790d0bd-1660-4c11-af86-e99d9a2aabf8\") " Jan 26 13:02:12 crc kubenswrapper[4881]: I0126 13:02:12.899312 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5790d0bd-1660-4c11-af86-e99d9a2aabf8-ovsdbserver-sb\") pod \"5790d0bd-1660-4c11-af86-e99d9a2aabf8\" (UID: \"5790d0bd-1660-4c11-af86-e99d9a2aabf8\") " Jan 26 13:02:12 crc kubenswrapper[4881]: I0126 13:02:12.899361 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5790d0bd-1660-4c11-af86-e99d9a2aabf8-config\") pod \"5790d0bd-1660-4c11-af86-e99d9a2aabf8\" (UID: \"5790d0bd-1660-4c11-af86-e99d9a2aabf8\") " Jan 26 13:02:12 crc kubenswrapper[4881]: I0126 13:02:12.899405 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5790d0bd-1660-4c11-af86-e99d9a2aabf8-dns-swift-storage-0\") pod \"5790d0bd-1660-4c11-af86-e99d9a2aabf8\" (UID: \"5790d0bd-1660-4c11-af86-e99d9a2aabf8\") " Jan 26 13:02:12 crc kubenswrapper[4881]: I0126 13:02:12.899444 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5790d0bd-1660-4c11-af86-e99d9a2aabf8-dns-svc\") pod \"5790d0bd-1660-4c11-af86-e99d9a2aabf8\" (UID: \"5790d0bd-1660-4c11-af86-e99d9a2aabf8\") " Jan 26 13:02:12 crc kubenswrapper[4881]: I0126 13:02:12.899499 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5790d0bd-1660-4c11-af86-e99d9a2aabf8-ovsdbserver-nb\") pod \"5790d0bd-1660-4c11-af86-e99d9a2aabf8\" (UID: \"5790d0bd-1660-4c11-af86-e99d9a2aabf8\") " Jan 26 13:02:12 crc kubenswrapper[4881]: I0126 13:02:12.914811 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5790d0bd-1660-4c11-af86-e99d9a2aabf8-kube-api-access-dhzkh" (OuterVolumeSpecName: "kube-api-access-dhzkh") pod "5790d0bd-1660-4c11-af86-e99d9a2aabf8" (UID: "5790d0bd-1660-4c11-af86-e99d9a2aabf8"). InnerVolumeSpecName "kube-api-access-dhzkh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:02:12 crc kubenswrapper[4881]: I0126 13:02:12.974503 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5790d0bd-1660-4c11-af86-e99d9a2aabf8-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "5790d0bd-1660-4c11-af86-e99d9a2aabf8" (UID: "5790d0bd-1660-4c11-af86-e99d9a2aabf8"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:02:12 crc kubenswrapper[4881]: I0126 13:02:12.989679 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5790d0bd-1660-4c11-af86-e99d9a2aabf8-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5790d0bd-1660-4c11-af86-e99d9a2aabf8" (UID: "5790d0bd-1660-4c11-af86-e99d9a2aabf8"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:02:12 crc kubenswrapper[4881]: I0126 13:02:12.990412 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 26 13:02:12 crc kubenswrapper[4881]: I0126 13:02:12.990453 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 26 13:02:12 crc kubenswrapper[4881]: I0126 13:02:12.991433 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5790d0bd-1660-4c11-af86-e99d9a2aabf8-config" (OuterVolumeSpecName: "config") pod "5790d0bd-1660-4c11-af86-e99d9a2aabf8" (UID: "5790d0bd-1660-4c11-af86-e99d9a2aabf8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:02:12 crc kubenswrapper[4881]: I0126 13:02:12.993457 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5790d0bd-1660-4c11-af86-e99d9a2aabf8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5790d0bd-1660-4c11-af86-e99d9a2aabf8" (UID: "5790d0bd-1660-4c11-af86-e99d9a2aabf8"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:02:13 crc kubenswrapper[4881]: I0126 13:02:13.004969 4881 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5790d0bd-1660-4c11-af86-e99d9a2aabf8-config\") on node \"crc\" DevicePath \"\"" Jan 26 13:02:13 crc kubenswrapper[4881]: I0126 13:02:13.005000 4881 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5790d0bd-1660-4c11-af86-e99d9a2aabf8-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 26 13:02:13 crc kubenswrapper[4881]: I0126 13:02:13.005013 4881 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5790d0bd-1660-4c11-af86-e99d9a2aabf8-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 13:02:13 crc kubenswrapper[4881]: I0126 13:02:13.005022 4881 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5790d0bd-1660-4c11-af86-e99d9a2aabf8-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 13:02:13 crc kubenswrapper[4881]: I0126 13:02:13.005031 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dhzkh\" (UniqueName: \"kubernetes.io/projected/5790d0bd-1660-4c11-af86-e99d9a2aabf8-kube-api-access-dhzkh\") on node \"crc\" DevicePath \"\"" Jan 26 13:02:13 crc kubenswrapper[4881]: I0126 13:02:13.008319 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5790d0bd-1660-4c11-af86-e99d9a2aabf8-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5790d0bd-1660-4c11-af86-e99d9a2aabf8" (UID: "5790d0bd-1660-4c11-af86-e99d9a2aabf8"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:02:13 crc kubenswrapper[4881]: I0126 13:02:13.109031 4881 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5790d0bd-1660-4c11-af86-e99d9a2aabf8-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 13:02:13 crc kubenswrapper[4881]: I0126 13:02:13.130763 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-nzdjt" Jan 26 13:02:13 crc kubenswrapper[4881]: I0126 13:02:13.210041 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36c2d906-18af-4025-ac9d-b142b34586f3-combined-ca-bundle\") pod \"36c2d906-18af-4025-ac9d-b142b34586f3\" (UID: \"36c2d906-18af-4025-ac9d-b142b34586f3\") " Jan 26 13:02:13 crc kubenswrapper[4881]: I0126 13:02:13.210294 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36c2d906-18af-4025-ac9d-b142b34586f3-config-data\") pod \"36c2d906-18af-4025-ac9d-b142b34586f3\" (UID: \"36c2d906-18af-4025-ac9d-b142b34586f3\") " Jan 26 13:02:13 crc kubenswrapper[4881]: I0126 13:02:13.210382 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/36c2d906-18af-4025-ac9d-b142b34586f3-scripts\") pod \"36c2d906-18af-4025-ac9d-b142b34586f3\" (UID: \"36c2d906-18af-4025-ac9d-b142b34586f3\") " Jan 26 13:02:13 crc kubenswrapper[4881]: I0126 13:02:13.210408 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9vjgf\" (UniqueName: \"kubernetes.io/projected/36c2d906-18af-4025-ac9d-b142b34586f3-kube-api-access-9vjgf\") pod \"36c2d906-18af-4025-ac9d-b142b34586f3\" (UID: \"36c2d906-18af-4025-ac9d-b142b34586f3\") " Jan 26 13:02:13 crc kubenswrapper[4881]: I0126 13:02:13.216052 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36c2d906-18af-4025-ac9d-b142b34586f3-kube-api-access-9vjgf" (OuterVolumeSpecName: "kube-api-access-9vjgf") pod "36c2d906-18af-4025-ac9d-b142b34586f3" (UID: "36c2d906-18af-4025-ac9d-b142b34586f3"). InnerVolumeSpecName "kube-api-access-9vjgf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:02:13 crc kubenswrapper[4881]: I0126 13:02:13.217921 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36c2d906-18af-4025-ac9d-b142b34586f3-scripts" (OuterVolumeSpecName: "scripts") pod "36c2d906-18af-4025-ac9d-b142b34586f3" (UID: "36c2d906-18af-4025-ac9d-b142b34586f3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:02:13 crc kubenswrapper[4881]: I0126 13:02:13.247308 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36c2d906-18af-4025-ac9d-b142b34586f3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "36c2d906-18af-4025-ac9d-b142b34586f3" (UID: "36c2d906-18af-4025-ac9d-b142b34586f3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:02:13 crc kubenswrapper[4881]: I0126 13:02:13.258395 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36c2d906-18af-4025-ac9d-b142b34586f3-config-data" (OuterVolumeSpecName: "config-data") pod "36c2d906-18af-4025-ac9d-b142b34586f3" (UID: "36c2d906-18af-4025-ac9d-b142b34586f3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:02:13 crc kubenswrapper[4881]: I0126 13:02:13.313219 4881 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36c2d906-18af-4025-ac9d-b142b34586f3-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 13:02:13 crc kubenswrapper[4881]: I0126 13:02:13.313262 4881 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/36c2d906-18af-4025-ac9d-b142b34586f3-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 13:02:13 crc kubenswrapper[4881]: I0126 13:02:13.313276 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9vjgf\" (UniqueName: \"kubernetes.io/projected/36c2d906-18af-4025-ac9d-b142b34586f3-kube-api-access-9vjgf\") on node \"crc\" DevicePath \"\"" Jan 26 13:02:13 crc kubenswrapper[4881]: I0126 13:02:13.313288 4881 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36c2d906-18af-4025-ac9d-b142b34586f3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 13:02:13 crc kubenswrapper[4881]: I0126 13:02:13.666001 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-nzdjt" Jan 26 13:02:13 crc kubenswrapper[4881]: I0126 13:02:13.666037 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-nzdjt" event={"ID":"36c2d906-18af-4025-ac9d-b142b34586f3","Type":"ContainerDied","Data":"05ffddf5a46c5d733e2ba59dbf0c6446af10151ba8e2680195840c357e1c84c4"} Jan 26 13:02:13 crc kubenswrapper[4881]: I0126 13:02:13.666948 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="05ffddf5a46c5d733e2ba59dbf0c6446af10151ba8e2680195840c357e1c84c4" Jan 26 13:02:13 crc kubenswrapper[4881]: I0126 13:02:13.672690 4881 generic.go:334] "Generic (PLEG): container finished" podID="23de4f49-5164-41a9-a94c-ef0199801723" containerID="8bbcf32208c854f415247cab060da9b73166ce2a8e8638723fbc704f1897300c" exitCode=0 Jan 26 13:02:13 crc kubenswrapper[4881]: I0126 13:02:13.672765 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"23de4f49-5164-41a9-a94c-ef0199801723","Type":"ContainerDied","Data":"8bbcf32208c854f415247cab060da9b73166ce2a8e8638723fbc704f1897300c"} Jan 26 13:02:13 crc kubenswrapper[4881]: I0126 13:02:13.676498 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5d675956bc-pd9zd" Jan 26 13:02:13 crc kubenswrapper[4881]: I0126 13:02:13.677036 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d675956bc-pd9zd" event={"ID":"5790d0bd-1660-4c11-af86-e99d9a2aabf8","Type":"ContainerDied","Data":"a665acb91d5333c64b9645919b1cb800cd8f838a59ea8cd7c1250fb3b9b8a951"} Jan 26 13:02:13 crc kubenswrapper[4881]: I0126 13:02:13.677099 4881 scope.go:117] "RemoveContainer" containerID="b912e0916cd54d1360e533a0393cf94b47122f9712df5756b72a472f4e47910a" Jan 26 13:02:13 crc kubenswrapper[4881]: I0126 13:02:13.718644 4881 scope.go:117] "RemoveContainer" containerID="bcf1597dc02b74b38075905682f066a714f7ee4660bf6f4ceef8df975de06518" Jan 26 13:02:13 crc kubenswrapper[4881]: I0126 13:02:13.752628 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5d675956bc-pd9zd"] Jan 26 13:02:13 crc kubenswrapper[4881]: I0126 13:02:13.795641 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5d675956bc-pd9zd"] Jan 26 13:02:13 crc kubenswrapper[4881]: I0126 13:02:13.817656 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 26 13:02:13 crc kubenswrapper[4881]: I0126 13:02:13.817922 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="ff1ef10b-9838-42f7-a2d9-b4907440336c" containerName="nova-api-log" containerID="cri-o://5f739a30bd57a180db68f5f26895ba9084f0f371f4e467a5ec3870fb4c04b094" gracePeriod=30 Jan 26 13:02:13 crc kubenswrapper[4881]: I0126 13:02:13.818387 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="ff1ef10b-9838-42f7-a2d9-b4907440336c" containerName="nova-api-api" containerID="cri-o://50c33526f9178c59e6b8194b5cb88a3b9b184c1bc2e90d48c23e7e7cf0c72dfc" gracePeriod=30 Jan 26 13:02:13 crc kubenswrapper[4881]: I0126 13:02:13.831468 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 13:02:13 crc kubenswrapper[4881]: I0126 13:02:13.843551 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 13:02:13 crc kubenswrapper[4881]: I0126 13:02:13.843770 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="0abb3e29-3399-43d4-86bc-f36f23b3f682" containerName="nova-metadata-log" containerID="cri-o://a3d8dde2610bb63f2192431e98166e757a31e163828850d7f2451586db7724ce" gracePeriod=30 Jan 26 13:02:13 crc kubenswrapper[4881]: I0126 13:02:13.844228 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="0abb3e29-3399-43d4-86bc-f36f23b3f682" containerName="nova-metadata-metadata" containerID="cri-o://f8918d6c0d25b8b8d63ec42af464fac7f5c8c544df0008090201bc71aa174e1d" gracePeriod=30 Jan 26 13:02:13 crc kubenswrapper[4881]: I0126 13:02:13.884483 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 13:02:13 crc kubenswrapper[4881]: I0126 13:02:13.927189 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/23de4f49-5164-41a9-a94c-ef0199801723-log-httpd\") pod \"23de4f49-5164-41a9-a94c-ef0199801723\" (UID: \"23de4f49-5164-41a9-a94c-ef0199801723\") " Jan 26 13:02:13 crc kubenswrapper[4881]: I0126 13:02:13.927244 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/23de4f49-5164-41a9-a94c-ef0199801723-sg-core-conf-yaml\") pod \"23de4f49-5164-41a9-a94c-ef0199801723\" (UID: \"23de4f49-5164-41a9-a94c-ef0199801723\") " Jan 26 13:02:13 crc kubenswrapper[4881]: I0126 13:02:13.927294 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23de4f49-5164-41a9-a94c-ef0199801723-config-data\") pod \"23de4f49-5164-41a9-a94c-ef0199801723\" (UID: \"23de4f49-5164-41a9-a94c-ef0199801723\") " Jan 26 13:02:13 crc kubenswrapper[4881]: I0126 13:02:13.927334 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g758\" (UniqueName: \"kubernetes.io/projected/23de4f49-5164-41a9-a94c-ef0199801723-kube-api-access-6g758\") pod \"23de4f49-5164-41a9-a94c-ef0199801723\" (UID: \"23de4f49-5164-41a9-a94c-ef0199801723\") " Jan 26 13:02:13 crc kubenswrapper[4881]: I0126 13:02:13.927390 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/23de4f49-5164-41a9-a94c-ef0199801723-scripts\") pod \"23de4f49-5164-41a9-a94c-ef0199801723\" (UID: \"23de4f49-5164-41a9-a94c-ef0199801723\") " Jan 26 13:02:13 crc kubenswrapper[4881]: I0126 13:02:13.927469 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23de4f49-5164-41a9-a94c-ef0199801723-combined-ca-bundle\") pod \"23de4f49-5164-41a9-a94c-ef0199801723\" (UID: \"23de4f49-5164-41a9-a94c-ef0199801723\") " Jan 26 13:02:13 crc kubenswrapper[4881]: I0126 13:02:13.927608 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/23de4f49-5164-41a9-a94c-ef0199801723-run-httpd\") pod \"23de4f49-5164-41a9-a94c-ef0199801723\" (UID: \"23de4f49-5164-41a9-a94c-ef0199801723\") " Jan 26 13:02:13 crc kubenswrapper[4881]: I0126 13:02:13.928629 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/23de4f49-5164-41a9-a94c-ef0199801723-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "23de4f49-5164-41a9-a94c-ef0199801723" (UID: "23de4f49-5164-41a9-a94c-ef0199801723"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:02:13 crc kubenswrapper[4881]: I0126 13:02:13.929559 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/23de4f49-5164-41a9-a94c-ef0199801723-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "23de4f49-5164-41a9-a94c-ef0199801723" (UID: "23de4f49-5164-41a9-a94c-ef0199801723"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:02:13 crc kubenswrapper[4881]: I0126 13:02:13.935970 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23de4f49-5164-41a9-a94c-ef0199801723-kube-api-access-6g758" (OuterVolumeSpecName: "kube-api-access-6g758") pod "23de4f49-5164-41a9-a94c-ef0199801723" (UID: "23de4f49-5164-41a9-a94c-ef0199801723"). InnerVolumeSpecName "kube-api-access-6g758". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:02:13 crc kubenswrapper[4881]: I0126 13:02:13.939666 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23de4f49-5164-41a9-a94c-ef0199801723-scripts" (OuterVolumeSpecName: "scripts") pod "23de4f49-5164-41a9-a94c-ef0199801723" (UID: "23de4f49-5164-41a9-a94c-ef0199801723"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:02:14 crc kubenswrapper[4881]: I0126 13:02:14.021365 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23de4f49-5164-41a9-a94c-ef0199801723-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "23de4f49-5164-41a9-a94c-ef0199801723" (UID: "23de4f49-5164-41a9-a94c-ef0199801723"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:02:14 crc kubenswrapper[4881]: I0126 13:02:14.034016 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g758\" (UniqueName: \"kubernetes.io/projected/23de4f49-5164-41a9-a94c-ef0199801723-kube-api-access-6g758\") on node \"crc\" DevicePath \"\"" Jan 26 13:02:14 crc kubenswrapper[4881]: I0126 13:02:14.034057 4881 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/23de4f49-5164-41a9-a94c-ef0199801723-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 13:02:14 crc kubenswrapper[4881]: I0126 13:02:14.034067 4881 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/23de4f49-5164-41a9-a94c-ef0199801723-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 13:02:14 crc kubenswrapper[4881]: I0126 13:02:14.034078 4881 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/23de4f49-5164-41a9-a94c-ef0199801723-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 13:02:14 crc kubenswrapper[4881]: I0126 13:02:14.034087 4881 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/23de4f49-5164-41a9-a94c-ef0199801723-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 26 13:02:14 crc kubenswrapper[4881]: I0126 13:02:14.072706 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23de4f49-5164-41a9-a94c-ef0199801723-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "23de4f49-5164-41a9-a94c-ef0199801723" (UID: "23de4f49-5164-41a9-a94c-ef0199801723"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:02:14 crc kubenswrapper[4881]: I0126 13:02:14.093876 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5790d0bd-1660-4c11-af86-e99d9a2aabf8" path="/var/lib/kubelet/pods/5790d0bd-1660-4c11-af86-e99d9a2aabf8/volumes" Jan 26 13:02:14 crc kubenswrapper[4881]: I0126 13:02:14.101813 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23de4f49-5164-41a9-a94c-ef0199801723-config-data" (OuterVolumeSpecName: "config-data") pod "23de4f49-5164-41a9-a94c-ef0199801723" (UID: "23de4f49-5164-41a9-a94c-ef0199801723"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:02:14 crc kubenswrapper[4881]: I0126 13:02:14.135809 4881 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23de4f49-5164-41a9-a94c-ef0199801723-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 13:02:14 crc kubenswrapper[4881]: I0126 13:02:14.135850 4881 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23de4f49-5164-41a9-a94c-ef0199801723-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 13:02:14 crc kubenswrapper[4881]: I0126 13:02:14.383262 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 13:02:14 crc kubenswrapper[4881]: I0126 13:02:14.456120 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0abb3e29-3399-43d4-86bc-f36f23b3f682-combined-ca-bundle\") pod \"0abb3e29-3399-43d4-86bc-f36f23b3f682\" (UID: \"0abb3e29-3399-43d4-86bc-f36f23b3f682\") " Jan 26 13:02:14 crc kubenswrapper[4881]: I0126 13:02:14.456172 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rfdg6\" (UniqueName: \"kubernetes.io/projected/0abb3e29-3399-43d4-86bc-f36f23b3f682-kube-api-access-rfdg6\") pod \"0abb3e29-3399-43d4-86bc-f36f23b3f682\" (UID: \"0abb3e29-3399-43d4-86bc-f36f23b3f682\") " Jan 26 13:02:14 crc kubenswrapper[4881]: I0126 13:02:14.456335 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0abb3e29-3399-43d4-86bc-f36f23b3f682-nova-metadata-tls-certs\") pod \"0abb3e29-3399-43d4-86bc-f36f23b3f682\" (UID: \"0abb3e29-3399-43d4-86bc-f36f23b3f682\") " Jan 26 13:02:14 crc kubenswrapper[4881]: I0126 13:02:14.456371 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0abb3e29-3399-43d4-86bc-f36f23b3f682-logs\") pod \"0abb3e29-3399-43d4-86bc-f36f23b3f682\" (UID: \"0abb3e29-3399-43d4-86bc-f36f23b3f682\") " Jan 26 13:02:14 crc kubenswrapper[4881]: I0126 13:02:14.456395 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0abb3e29-3399-43d4-86bc-f36f23b3f682-config-data\") pod \"0abb3e29-3399-43d4-86bc-f36f23b3f682\" (UID: \"0abb3e29-3399-43d4-86bc-f36f23b3f682\") " Jan 26 13:02:14 crc kubenswrapper[4881]: I0126 13:02:14.457256 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0abb3e29-3399-43d4-86bc-f36f23b3f682-logs" (OuterVolumeSpecName: "logs") pod "0abb3e29-3399-43d4-86bc-f36f23b3f682" (UID: "0abb3e29-3399-43d4-86bc-f36f23b3f682"). 
InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:02:14 crc kubenswrapper[4881]: I0126 13:02:14.464976 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0abb3e29-3399-43d4-86bc-f36f23b3f682-kube-api-access-rfdg6" (OuterVolumeSpecName: "kube-api-access-rfdg6") pod "0abb3e29-3399-43d4-86bc-f36f23b3f682" (UID: "0abb3e29-3399-43d4-86bc-f36f23b3f682"). InnerVolumeSpecName "kube-api-access-rfdg6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:02:14 crc kubenswrapper[4881]: I0126 13:02:14.492950 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0abb3e29-3399-43d4-86bc-f36f23b3f682-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0abb3e29-3399-43d4-86bc-f36f23b3f682" (UID: "0abb3e29-3399-43d4-86bc-f36f23b3f682"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:02:14 crc kubenswrapper[4881]: I0126 13:02:14.494317 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0abb3e29-3399-43d4-86bc-f36f23b3f682-config-data" (OuterVolumeSpecName: "config-data") pod "0abb3e29-3399-43d4-86bc-f36f23b3f682" (UID: "0abb3e29-3399-43d4-86bc-f36f23b3f682"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:02:14 crc kubenswrapper[4881]: I0126 13:02:14.536632 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0abb3e29-3399-43d4-86bc-f36f23b3f682-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "0abb3e29-3399-43d4-86bc-f36f23b3f682" (UID: "0abb3e29-3399-43d4-86bc-f36f23b3f682"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:02:14 crc kubenswrapper[4881]: I0126 13:02:14.558452 4881 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0abb3e29-3399-43d4-86bc-f36f23b3f682-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 13:02:14 crc kubenswrapper[4881]: I0126 13:02:14.558485 4881 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0abb3e29-3399-43d4-86bc-f36f23b3f682-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 13:02:14 crc kubenswrapper[4881]: I0126 13:02:14.558496 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rfdg6\" (UniqueName: \"kubernetes.io/projected/0abb3e29-3399-43d4-86bc-f36f23b3f682-kube-api-access-rfdg6\") on node \"crc\" DevicePath \"\"" Jan 26 13:02:14 crc kubenswrapper[4881]: I0126 13:02:14.558505 4881 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0abb3e29-3399-43d4-86bc-f36f23b3f682-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 13:02:14 crc kubenswrapper[4881]: I0126 13:02:14.558554 4881 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0abb3e29-3399-43d4-86bc-f36f23b3f682-logs\") on node \"crc\" DevicePath \"\"" Jan 26 13:02:14 crc kubenswrapper[4881]: I0126 13:02:14.688052 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"23de4f49-5164-41a9-a94c-ef0199801723","Type":"ContainerDied","Data":"2dcdd67a7b4d835c9e7d4eebe95d7d63ecdf9ca2186c8780457ce3f4c87b631c"} Jan 26 13:02:14 crc kubenswrapper[4881]: I0126 13:02:14.688100 4881 scope.go:117] "RemoveContainer" containerID="67730dfc763196cefd269986c9f1291193f57a2bea98c3ee23af056f25d9db8c" Jan 26 13:02:14 crc kubenswrapper[4881]: I0126 13:02:14.688199 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 13:02:14 crc kubenswrapper[4881]: I0126 13:02:14.697636 4881 generic.go:334] "Generic (PLEG): container finished" podID="ff1ef10b-9838-42f7-a2d9-b4907440336c" containerID="5f739a30bd57a180db68f5f26895ba9084f0f371f4e467a5ec3870fb4c04b094" exitCode=143 Jan 26 13:02:14 crc kubenswrapper[4881]: I0126 13:02:14.697750 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ff1ef10b-9838-42f7-a2d9-b4907440336c","Type":"ContainerDied","Data":"5f739a30bd57a180db68f5f26895ba9084f0f371f4e467a5ec3870fb4c04b094"} Jan 26 13:02:14 crc kubenswrapper[4881]: I0126 13:02:14.702219 4881 generic.go:334] "Generic (PLEG): container finished" podID="0abb3e29-3399-43d4-86bc-f36f23b3f682" containerID="f8918d6c0d25b8b8d63ec42af464fac7f5c8c544df0008090201bc71aa174e1d" exitCode=0 Jan 26 13:02:14 crc kubenswrapper[4881]: I0126 13:02:14.702240 4881 generic.go:334] "Generic (PLEG): container finished" podID="0abb3e29-3399-43d4-86bc-f36f23b3f682" containerID="a3d8dde2610bb63f2192431e98166e757a31e163828850d7f2451586db7724ce" exitCode=143 Jan 26 13:02:14 crc kubenswrapper[4881]: I0126 13:02:14.702378 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="b5a078a6-15d0-4f45-a167-d5ec218210ef" containerName="nova-scheduler-scheduler" containerID="cri-o://6e20e0735b9a019110b29ce9fa132c0a3f7c3f92e8258c25d98729163dcfaa47" gracePeriod=30 Jan 26 13:02:14 crc kubenswrapper[4881]: I0126 13:02:14.702664 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 13:02:14 crc kubenswrapper[4881]: I0126 13:02:14.702800 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0abb3e29-3399-43d4-86bc-f36f23b3f682","Type":"ContainerDied","Data":"f8918d6c0d25b8b8d63ec42af464fac7f5c8c544df0008090201bc71aa174e1d"} Jan 26 13:02:14 crc kubenswrapper[4881]: I0126 13:02:14.702835 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0abb3e29-3399-43d4-86bc-f36f23b3f682","Type":"ContainerDied","Data":"a3d8dde2610bb63f2192431e98166e757a31e163828850d7f2451586db7724ce"} Jan 26 13:02:14 crc kubenswrapper[4881]: I0126 13:02:14.702847 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0abb3e29-3399-43d4-86bc-f36f23b3f682","Type":"ContainerDied","Data":"5cecc6fdaf063ffaa924c7e7c5ee76d60b6005ea4959995ceb589f42d7e60625"} Jan 26 13:02:14 crc kubenswrapper[4881]: I0126 13:02:14.726979 4881 scope.go:117] "RemoveContainer" containerID="64dd634f6e163e4b22c8f796e5145fac6385f5e81fb0b90f079fa8d1159330e5" Jan 26 13:02:14 crc kubenswrapper[4881]: I0126 13:02:14.738423 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 13:02:14 crc kubenswrapper[4881]: I0126 13:02:14.748552 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 26 13:02:14 crc kubenswrapper[4881]: I0126 13:02:14.772453 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 26 13:02:14 crc kubenswrapper[4881]: E0126 13:02:14.772947 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0abb3e29-3399-43d4-86bc-f36f23b3f682" containerName="nova-metadata-log" Jan 26 13:02:14 crc kubenswrapper[4881]: I0126 13:02:14.772972 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="0abb3e29-3399-43d4-86bc-f36f23b3f682" 
containerName="nova-metadata-log" Jan 26 13:02:14 crc kubenswrapper[4881]: E0126 13:02:14.772989 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23de4f49-5164-41a9-a94c-ef0199801723" containerName="sg-core" Jan 26 13:02:14 crc kubenswrapper[4881]: I0126 13:02:14.772999 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="23de4f49-5164-41a9-a94c-ef0199801723" containerName="sg-core" Jan 26 13:02:14 crc kubenswrapper[4881]: E0126 13:02:14.773018 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23de4f49-5164-41a9-a94c-ef0199801723" containerName="proxy-httpd" Jan 26 13:02:14 crc kubenswrapper[4881]: I0126 13:02:14.773029 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="23de4f49-5164-41a9-a94c-ef0199801723" containerName="proxy-httpd" Jan 26 13:02:14 crc kubenswrapper[4881]: E0126 13:02:14.773045 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23de4f49-5164-41a9-a94c-ef0199801723" containerName="ceilometer-notification-agent" Jan 26 13:02:14 crc kubenswrapper[4881]: I0126 13:02:14.773052 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="23de4f49-5164-41a9-a94c-ef0199801723" containerName="ceilometer-notification-agent" Jan 26 13:02:14 crc kubenswrapper[4881]: E0126 13:02:14.773064 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36c2d906-18af-4025-ac9d-b142b34586f3" containerName="nova-manage" Jan 26 13:02:14 crc kubenswrapper[4881]: I0126 13:02:14.773072 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="36c2d906-18af-4025-ac9d-b142b34586f3" containerName="nova-manage" Jan 26 13:02:14 crc kubenswrapper[4881]: E0126 13:02:14.773102 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5790d0bd-1660-4c11-af86-e99d9a2aabf8" containerName="dnsmasq-dns" Jan 26 13:02:14 crc kubenswrapper[4881]: I0126 13:02:14.773111 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="5790d0bd-1660-4c11-af86-e99d9a2aabf8" containerName="dnsmasq-dns" Jan 26 13:02:14 crc kubenswrapper[4881]: E0126 13:02:14.773120 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5790d0bd-1660-4c11-af86-e99d9a2aabf8" containerName="init" Jan 26 13:02:14 crc kubenswrapper[4881]: I0126 13:02:14.773127 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="5790d0bd-1660-4c11-af86-e99d9a2aabf8" containerName="init" Jan 26 13:02:14 crc kubenswrapper[4881]: E0126 13:02:14.773138 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23de4f49-5164-41a9-a94c-ef0199801723" containerName="ceilometer-central-agent" Jan 26 13:02:14 crc kubenswrapper[4881]: I0126 13:02:14.773147 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="23de4f49-5164-41a9-a94c-ef0199801723" containerName="ceilometer-central-agent" Jan 26 13:02:14 crc kubenswrapper[4881]: E0126 13:02:14.773174 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0abb3e29-3399-43d4-86bc-f36f23b3f682" containerName="nova-metadata-metadata" Jan 26 13:02:14 crc kubenswrapper[4881]: I0126 13:02:14.773182 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="0abb3e29-3399-43d4-86bc-f36f23b3f682" containerName="nova-metadata-metadata" Jan 26 13:02:14 crc kubenswrapper[4881]: I0126 13:02:14.773403 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="5790d0bd-1660-4c11-af86-e99d9a2aabf8" containerName="dnsmasq-dns" Jan 26 13:02:14 crc kubenswrapper[4881]: I0126 13:02:14.773418 4881 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="0abb3e29-3399-43d4-86bc-f36f23b3f682" containerName="nova-metadata-metadata" Jan 26 13:02:14 crc kubenswrapper[4881]: I0126 13:02:14.773441 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="23de4f49-5164-41a9-a94c-ef0199801723" containerName="ceilometer-central-agent" Jan 26 13:02:14 crc kubenswrapper[4881]: I0126 13:02:14.773452 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="0abb3e29-3399-43d4-86bc-f36f23b3f682" containerName="nova-metadata-log" Jan 26 13:02:14 crc kubenswrapper[4881]: I0126 13:02:14.773464 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="23de4f49-5164-41a9-a94c-ef0199801723" containerName="ceilometer-notification-agent" Jan 26 13:02:14 crc kubenswrapper[4881]: I0126 13:02:14.773475 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="36c2d906-18af-4025-ac9d-b142b34586f3" containerName="nova-manage" Jan 26 13:02:14 crc kubenswrapper[4881]: I0126 13:02:14.773494 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="23de4f49-5164-41a9-a94c-ef0199801723" containerName="sg-core" Jan 26 13:02:14 crc kubenswrapper[4881]: I0126 13:02:14.773507 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="23de4f49-5164-41a9-a94c-ef0199801723" containerName="proxy-httpd" Jan 26 13:02:14 crc kubenswrapper[4881]: I0126 13:02:14.779629 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 13:02:14 crc kubenswrapper[4881]: I0126 13:02:14.782195 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 26 13:02:14 crc kubenswrapper[4881]: I0126 13:02:14.782410 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 26 13:02:14 crc kubenswrapper[4881]: I0126 13:02:14.782467 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 26 13:02:14 crc kubenswrapper[4881]: I0126 13:02:14.798297 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 13:02:14 crc kubenswrapper[4881]: I0126 13:02:14.843010 4881 scope.go:117] "RemoveContainer" containerID="8bbcf32208c854f415247cab060da9b73166ce2a8e8638723fbc704f1897300c" Jan 26 13:02:14 crc kubenswrapper[4881]: I0126 13:02:14.853142 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 13:02:14 crc kubenswrapper[4881]: I0126 13:02:14.859509 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 13:02:14 crc kubenswrapper[4881]: I0126 13:02:14.871957 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 26 13:02:14 crc kubenswrapper[4881]: I0126 13:02:14.873939 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 13:02:14 crc kubenswrapper[4881]: I0126 13:02:14.876673 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 26 13:02:14 crc kubenswrapper[4881]: I0126 13:02:14.877018 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 26 13:02:14 crc kubenswrapper[4881]: I0126 13:02:14.878324 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d8cb1811-603c-47eb-bcf0-37d705b75e5b-log-httpd\") pod \"ceilometer-0\" (UID: \"d8cb1811-603c-47eb-bcf0-37d705b75e5b\") " pod="openstack/ceilometer-0" Jan 26 13:02:14 crc kubenswrapper[4881]: I0126 13:02:14.882199 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 13:02:14 crc kubenswrapper[4881]: I0126 13:02:14.882606 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8cb1811-603c-47eb-bcf0-37d705b75e5b-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"d8cb1811-603c-47eb-bcf0-37d705b75e5b\") " pod="openstack/ceilometer-0" Jan 26 13:02:14 crc kubenswrapper[4881]: I0126 13:02:14.882883 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8cb1811-603c-47eb-bcf0-37d705b75e5b-scripts\") pod \"ceilometer-0\" (UID: \"d8cb1811-603c-47eb-bcf0-37d705b75e5b\") " pod="openstack/ceilometer-0" Jan 26 13:02:14 crc kubenswrapper[4881]: I0126 13:02:14.883021 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d8cb1811-603c-47eb-bcf0-37d705b75e5b-run-httpd\") pod \"ceilometer-0\" (UID: \"d8cb1811-603c-47eb-bcf0-37d705b75e5b\") " pod="openstack/ceilometer-0" Jan 26 13:02:14 crc kubenswrapper[4881]: I0126 13:02:14.883173 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8cb1811-603c-47eb-bcf0-37d705b75e5b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d8cb1811-603c-47eb-bcf0-37d705b75e5b\") " pod="openstack/ceilometer-0" Jan 26 13:02:14 crc kubenswrapper[4881]: I0126 13:02:14.887059 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8cb1811-603c-47eb-bcf0-37d705b75e5b-config-data\") pod \"ceilometer-0\" (UID: \"d8cb1811-603c-47eb-bcf0-37d705b75e5b\") " pod="openstack/ceilometer-0" Jan 26 13:02:14 crc kubenswrapper[4881]: I0126 13:02:14.887196 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nckb\" (UniqueName: \"kubernetes.io/projected/d8cb1811-603c-47eb-bcf0-37d705b75e5b-kube-api-access-6nckb\") pod \"ceilometer-0\" (UID: \"d8cb1811-603c-47eb-bcf0-37d705b75e5b\") " pod="openstack/ceilometer-0" Jan 26 13:02:14 crc kubenswrapper[4881]: I0126 13:02:14.887368 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d8cb1811-603c-47eb-bcf0-37d705b75e5b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d8cb1811-603c-47eb-bcf0-37d705b75e5b\") " pod="openstack/ceilometer-0" Jan 26 
13:02:14 crc kubenswrapper[4881]: I0126 13:02:14.896753 4881 scope.go:117] "RemoveContainer" containerID="0952eb3c0672ad1a35c083736755f5b8e1080e6579cd5490612199d5f6c92cba" Jan 26 13:02:14 crc kubenswrapper[4881]: I0126 13:02:14.921295 4881 scope.go:117] "RemoveContainer" containerID="f8918d6c0d25b8b8d63ec42af464fac7f5c8c544df0008090201bc71aa174e1d" Jan 26 13:02:14 crc kubenswrapper[4881]: I0126 13:02:14.939955 4881 scope.go:117] "RemoveContainer" containerID="a3d8dde2610bb63f2192431e98166e757a31e163828850d7f2451586db7724ce" Jan 26 13:02:14 crc kubenswrapper[4881]: I0126 13:02:14.958420 4881 scope.go:117] "RemoveContainer" containerID="f8918d6c0d25b8b8d63ec42af464fac7f5c8c544df0008090201bc71aa174e1d" Jan 26 13:02:14 crc kubenswrapper[4881]: E0126 13:02:14.958831 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f8918d6c0d25b8b8d63ec42af464fac7f5c8c544df0008090201bc71aa174e1d\": container with ID starting with f8918d6c0d25b8b8d63ec42af464fac7f5c8c544df0008090201bc71aa174e1d not found: ID does not exist" containerID="f8918d6c0d25b8b8d63ec42af464fac7f5c8c544df0008090201bc71aa174e1d" Jan 26 13:02:14 crc kubenswrapper[4881]: I0126 13:02:14.958863 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8918d6c0d25b8b8d63ec42af464fac7f5c8c544df0008090201bc71aa174e1d"} err="failed to get container status \"f8918d6c0d25b8b8d63ec42af464fac7f5c8c544df0008090201bc71aa174e1d\": rpc error: code = NotFound desc = could not find container \"f8918d6c0d25b8b8d63ec42af464fac7f5c8c544df0008090201bc71aa174e1d\": container with ID starting with f8918d6c0d25b8b8d63ec42af464fac7f5c8c544df0008090201bc71aa174e1d not found: ID does not exist" Jan 26 13:02:14 crc kubenswrapper[4881]: I0126 13:02:14.958883 4881 scope.go:117] "RemoveContainer" containerID="a3d8dde2610bb63f2192431e98166e757a31e163828850d7f2451586db7724ce" Jan 26 13:02:14 crc kubenswrapper[4881]: E0126 13:02:14.959225 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a3d8dde2610bb63f2192431e98166e757a31e163828850d7f2451586db7724ce\": container with ID starting with a3d8dde2610bb63f2192431e98166e757a31e163828850d7f2451586db7724ce not found: ID does not exist" containerID="a3d8dde2610bb63f2192431e98166e757a31e163828850d7f2451586db7724ce" Jan 26 13:02:14 crc kubenswrapper[4881]: I0126 13:02:14.959247 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3d8dde2610bb63f2192431e98166e757a31e163828850d7f2451586db7724ce"} err="failed to get container status \"a3d8dde2610bb63f2192431e98166e757a31e163828850d7f2451586db7724ce\": rpc error: code = NotFound desc = could not find container \"a3d8dde2610bb63f2192431e98166e757a31e163828850d7f2451586db7724ce\": container with ID starting with a3d8dde2610bb63f2192431e98166e757a31e163828850d7f2451586db7724ce not found: ID does not exist" Jan 26 13:02:14 crc kubenswrapper[4881]: I0126 13:02:14.959354 4881 scope.go:117] "RemoveContainer" containerID="f8918d6c0d25b8b8d63ec42af464fac7f5c8c544df0008090201bc71aa174e1d" Jan 26 13:02:14 crc kubenswrapper[4881]: I0126 13:02:14.959888 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8918d6c0d25b8b8d63ec42af464fac7f5c8c544df0008090201bc71aa174e1d"} err="failed to get container status \"f8918d6c0d25b8b8d63ec42af464fac7f5c8c544df0008090201bc71aa174e1d\": rpc error: code = NotFound desc = 
could not find container \"f8918d6c0d25b8b8d63ec42af464fac7f5c8c544df0008090201bc71aa174e1d\": container with ID starting with f8918d6c0d25b8b8d63ec42af464fac7f5c8c544df0008090201bc71aa174e1d not found: ID does not exist" Jan 26 13:02:14 crc kubenswrapper[4881]: I0126 13:02:14.959927 4881 scope.go:117] "RemoveContainer" containerID="a3d8dde2610bb63f2192431e98166e757a31e163828850d7f2451586db7724ce" Jan 26 13:02:14 crc kubenswrapper[4881]: I0126 13:02:14.960212 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3d8dde2610bb63f2192431e98166e757a31e163828850d7f2451586db7724ce"} err="failed to get container status \"a3d8dde2610bb63f2192431e98166e757a31e163828850d7f2451586db7724ce\": rpc error: code = NotFound desc = could not find container \"a3d8dde2610bb63f2192431e98166e757a31e163828850d7f2451586db7724ce\": container with ID starting with a3d8dde2610bb63f2192431e98166e757a31e163828850d7f2451586db7724ce not found: ID does not exist" Jan 26 13:02:14 crc kubenswrapper[4881]: I0126 13:02:14.989771 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/563a0ca1-a6c3-4089-88a7-f23423418751-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"563a0ca1-a6c3-4089-88a7-f23423418751\") " pod="openstack/nova-metadata-0" Jan 26 13:02:14 crc kubenswrapper[4881]: I0126 13:02:14.989854 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d8cb1811-603c-47eb-bcf0-37d705b75e5b-run-httpd\") pod \"ceilometer-0\" (UID: \"d8cb1811-603c-47eb-bcf0-37d705b75e5b\") " pod="openstack/ceilometer-0" Jan 26 13:02:14 crc kubenswrapper[4881]: I0126 13:02:14.989957 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8cb1811-603c-47eb-bcf0-37d705b75e5b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d8cb1811-603c-47eb-bcf0-37d705b75e5b\") " pod="openstack/ceilometer-0" Jan 26 13:02:14 crc kubenswrapper[4881]: I0126 13:02:14.990034 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/563a0ca1-a6c3-4089-88a7-f23423418751-config-data\") pod \"nova-metadata-0\" (UID: \"563a0ca1-a6c3-4089-88a7-f23423418751\") " pod="openstack/nova-metadata-0" Jan 26 13:02:14 crc kubenswrapper[4881]: I0126 13:02:14.990076 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8cb1811-603c-47eb-bcf0-37d705b75e5b-config-data\") pod \"ceilometer-0\" (UID: \"d8cb1811-603c-47eb-bcf0-37d705b75e5b\") " pod="openstack/ceilometer-0" Jan 26 13:02:14 crc kubenswrapper[4881]: I0126 13:02:14.990125 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6nckb\" (UniqueName: \"kubernetes.io/projected/d8cb1811-603c-47eb-bcf0-37d705b75e5b-kube-api-access-6nckb\") pod \"ceilometer-0\" (UID: \"d8cb1811-603c-47eb-bcf0-37d705b75e5b\") " pod="openstack/ceilometer-0" Jan 26 13:02:14 crc kubenswrapper[4881]: I0126 13:02:14.990176 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/563a0ca1-a6c3-4089-88a7-f23423418751-logs\") pod \"nova-metadata-0\" (UID: \"563a0ca1-a6c3-4089-88a7-f23423418751\") " pod="openstack/nova-metadata-0" Jan 26 
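
The NotFound errors above are the benign side of container removal: a second status lookup races with a delete that has already succeeded, so the kubelet downgrades NotFound to an informational "DeleteContainer returned error" line instead of failing the cleanup. A sketch of that idempotent-delete pattern against a gRPC-style error; the remove function here is a hypothetical stand-in, not the CRI client:

    package main

    import (
        "fmt"

        "google.golang.org/grpc/codes"
        "google.golang.org/grpc/status"
    )

    // removeContainer deletes a container and treats "already gone" as
    // success, mirroring how the log treats NotFound after RemoveContainer.
    func removeContainer(remove func(id string) error, id string) error {
        err := remove(id)
        if err == nil {
            return nil
        }
        if s, ok := status.FromError(err); ok && s.Code() == codes.NotFound {
            fmt.Printf("container %s already removed; ignoring NotFound\n", id)
            return nil
        }
        return err // anything other than NotFound is a real failure
    }

    func main() {
        // Simulate a runtime that has already forgotten the container.
        gone := func(id string) error {
            return status.Error(codes.NotFound, "could not find container "+id)
        }
        if err := removeContainer(gone, "f8918d6c"); err != nil {
            fmt.Println("unexpected:", err)
        }
    }
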
Jan 26 13:02:14 crc kubenswrapper[4881]: I0126 13:02:14.990276 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d8cb1811-603c-47eb-bcf0-37d705b75e5b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d8cb1811-603c-47eb-bcf0-37d705b75e5b\") " pod="openstack/ceilometer-0"
Jan 26 13:02:14 crc kubenswrapper[4881]: I0126 13:02:14.990323 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d8cb1811-603c-47eb-bcf0-37d705b75e5b-run-httpd\") pod \"ceilometer-0\" (UID: \"d8cb1811-603c-47eb-bcf0-37d705b75e5b\") " pod="openstack/ceilometer-0"
Jan 26 13:02:14 crc kubenswrapper[4881]: I0126 13:02:14.990346 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgw4j\" (UniqueName: \"kubernetes.io/projected/563a0ca1-a6c3-4089-88a7-f23423418751-kube-api-access-fgw4j\") pod \"nova-metadata-0\" (UID: \"563a0ca1-a6c3-4089-88a7-f23423418751\") " pod="openstack/nova-metadata-0"
Jan 26 13:02:14 crc kubenswrapper[4881]: I0126 13:02:14.992194 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d8cb1811-603c-47eb-bcf0-37d705b75e5b-log-httpd\") pod \"ceilometer-0\" (UID: \"d8cb1811-603c-47eb-bcf0-37d705b75e5b\") " pod="openstack/ceilometer-0"
Jan 26 13:02:14 crc kubenswrapper[4881]: I0126 13:02:14.992410 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/563a0ca1-a6c3-4089-88a7-f23423418751-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"563a0ca1-a6c3-4089-88a7-f23423418751\") " pod="openstack/nova-metadata-0"
Jan 26 13:02:14 crc kubenswrapper[4881]: I0126 13:02:14.992496 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8cb1811-603c-47eb-bcf0-37d705b75e5b-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"d8cb1811-603c-47eb-bcf0-37d705b75e5b\") " pod="openstack/ceilometer-0"
Jan 26 13:02:14 crc kubenswrapper[4881]: I0126 13:02:14.992579 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8cb1811-603c-47eb-bcf0-37d705b75e5b-scripts\") pod \"ceilometer-0\" (UID: \"d8cb1811-603c-47eb-bcf0-37d705b75e5b\") " pod="openstack/ceilometer-0"
Jan 26 13:02:14 crc kubenswrapper[4881]: I0126 13:02:14.992997 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d8cb1811-603c-47eb-bcf0-37d705b75e5b-log-httpd\") pod \"ceilometer-0\" (UID: \"d8cb1811-603c-47eb-bcf0-37d705b75e5b\") " pod="openstack/ceilometer-0"
Jan 26 13:02:15 crc kubenswrapper[4881]: I0126 13:02:15.000013 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8cb1811-603c-47eb-bcf0-37d705b75e5b-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"d8cb1811-603c-47eb-bcf0-37d705b75e5b\") " pod="openstack/ceilometer-0"
Jan 26 13:02:15 crc kubenswrapper[4881]: I0126 13:02:15.006271 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8cb1811-603c-47eb-bcf0-37d705b75e5b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d8cb1811-603c-47eb-bcf0-37d705b75e5b\") " pod="openstack/ceilometer-0"
Jan 26 13:02:15 crc kubenswrapper[4881]: I0126 13:02:15.006460 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d8cb1811-603c-47eb-bcf0-37d705b75e5b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d8cb1811-603c-47eb-bcf0-37d705b75e5b\") " pod="openstack/ceilometer-0"
Jan 26 13:02:15 crc kubenswrapper[4881]: I0126 13:02:15.007955 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8cb1811-603c-47eb-bcf0-37d705b75e5b-scripts\") pod \"ceilometer-0\" (UID: \"d8cb1811-603c-47eb-bcf0-37d705b75e5b\") " pod="openstack/ceilometer-0"
Jan 26 13:02:15 crc kubenswrapper[4881]: I0126 13:02:15.008150 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6nckb\" (UniqueName: \"kubernetes.io/projected/d8cb1811-603c-47eb-bcf0-37d705b75e5b-kube-api-access-6nckb\") pod \"ceilometer-0\" (UID: \"d8cb1811-603c-47eb-bcf0-37d705b75e5b\") " pod="openstack/ceilometer-0"
Jan 26 13:02:15 crc kubenswrapper[4881]: I0126 13:02:15.011551 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8cb1811-603c-47eb-bcf0-37d705b75e5b-config-data\") pod \"ceilometer-0\" (UID: \"d8cb1811-603c-47eb-bcf0-37d705b75e5b\") " pod="openstack/ceilometer-0"
Jan 26 13:02:15 crc kubenswrapper[4881]: I0126 13:02:15.094687 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/563a0ca1-a6c3-4089-88a7-f23423418751-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"563a0ca1-a6c3-4089-88a7-f23423418751\") " pod="openstack/nova-metadata-0"
Jan 26 13:02:15 crc kubenswrapper[4881]: I0126 13:02:15.094759 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/563a0ca1-a6c3-4089-88a7-f23423418751-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"563a0ca1-a6c3-4089-88a7-f23423418751\") " pod="openstack/nova-metadata-0"
Jan 26 13:02:15 crc kubenswrapper[4881]: I0126 13:02:15.094846 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/563a0ca1-a6c3-4089-88a7-f23423418751-config-data\") pod \"nova-metadata-0\" (UID: \"563a0ca1-a6c3-4089-88a7-f23423418751\") " pod="openstack/nova-metadata-0"
Jan 26 13:02:15 crc kubenswrapper[4881]: I0126 13:02:15.094882 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/563a0ca1-a6c3-4089-88a7-f23423418751-logs\") pod \"nova-metadata-0\" (UID: \"563a0ca1-a6c3-4089-88a7-f23423418751\") " pod="openstack/nova-metadata-0"
Jan 26 13:02:15 crc kubenswrapper[4881]: I0126 13:02:15.094941 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fgw4j\" (UniqueName: \"kubernetes.io/projected/563a0ca1-a6c3-4089-88a7-f23423418751-kube-api-access-fgw4j\") pod \"nova-metadata-0\" (UID: \"563a0ca1-a6c3-4089-88a7-f23423418751\") " pod="openstack/nova-metadata-0"
Jan 26 13:02:15 crc kubenswrapper[4881]: I0126 13:02:15.095470 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/563a0ca1-a6c3-4089-88a7-f23423418751-logs\") pod \"nova-metadata-0\" (UID: \"563a0ca1-a6c3-4089-88a7-f23423418751\") " pod="openstack/nova-metadata-0"
Jan 26 13:02:15 crc kubenswrapper[4881]: I0126 13:02:15.099053 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/563a0ca1-a6c3-4089-88a7-f23423418751-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"563a0ca1-a6c3-4089-88a7-f23423418751\") " pod="openstack/nova-metadata-0"
Jan 26 13:02:15 crc kubenswrapper[4881]: I0126 13:02:15.099216 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/563a0ca1-a6c3-4089-88a7-f23423418751-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"563a0ca1-a6c3-4089-88a7-f23423418751\") " pod="openstack/nova-metadata-0"
Jan 26 13:02:15 crc kubenswrapper[4881]: I0126 13:02:15.099856 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/563a0ca1-a6c3-4089-88a7-f23423418751-config-data\") pod \"nova-metadata-0\" (UID: \"563a0ca1-a6c3-4089-88a7-f23423418751\") " pod="openstack/nova-metadata-0"
Jan 26 13:02:15 crc kubenswrapper[4881]: I0126 13:02:15.112877 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fgw4j\" (UniqueName: \"kubernetes.io/projected/563a0ca1-a6c3-4089-88a7-f23423418751-kube-api-access-fgw4j\") pod \"nova-metadata-0\" (UID: \"563a0ca1-a6c3-4089-88a7-f23423418751\") " pod="openstack/nova-metadata-0"
Jan 26 13:02:15 crc kubenswrapper[4881]: I0126 13:02:15.135039 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 13:02:15 crc kubenswrapper[4881]: I0126 13:02:15.634621 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 13:02:15 crc kubenswrapper[4881]: I0126 13:02:15.712274 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d8cb1811-603c-47eb-bcf0-37d705b75e5b","Type":"ContainerStarted","Data":"eb174cbdf5bc04b76c0cec2105b42550c064b6e575ffdf56ced4aeafffc7441a"} Jan 26 13:02:15 crc kubenswrapper[4881]: I0126 13:02:15.750838 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 13:02:15 crc kubenswrapper[4881]: W0126 13:02:15.761217 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod563a0ca1_a6c3_4089_88a7_f23423418751.slice/crio-b2f47d2ce0b4b2ae93bf305936b7af116e466e17637c413cade9835d444ff90d WatchSource:0}: Error finding container b2f47d2ce0b4b2ae93bf305936b7af116e466e17637c413cade9835d444ff90d: Status 404 returned error can't find the container with id b2f47d2ce0b4b2ae93bf305936b7af116e466e17637c413cade9835d444ff90d Jan 26 13:02:16 crc kubenswrapper[4881]: E0126 13:02:16.063171 4881 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6e20e0735b9a019110b29ce9fa132c0a3f7c3f92e8258c25d98729163dcfaa47" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 26 13:02:16 crc kubenswrapper[4881]: E0126 13:02:16.065573 4881 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6e20e0735b9a019110b29ce9fa132c0a3f7c3f92e8258c25d98729163dcfaa47" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 26 13:02:16 crc kubenswrapper[4881]: E0126 13:02:16.067248 4881 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6e20e0735b9a019110b29ce9fa132c0a3f7c3f92e8258c25d98729163dcfaa47" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 26 13:02:16 crc kubenswrapper[4881]: E0126 13:02:16.067307 4881 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="b5a078a6-15d0-4f45-a167-d5ec218210ef" containerName="nova-scheduler-scheduler" Jan 26 13:02:16 crc kubenswrapper[4881]: I0126 13:02:16.105665 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0abb3e29-3399-43d4-86bc-f36f23b3f682" path="/var/lib/kubelet/pods/0abb3e29-3399-43d4-86bc-f36f23b3f682/volumes" Jan 26 13:02:16 crc kubenswrapper[4881]: I0126 13:02:16.107085 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23de4f49-5164-41a9-a94c-ef0199801723" path="/var/lib/kubelet/pods/23de4f49-5164-41a9-a94c-ef0199801723/volumes" Jan 26 13:02:16 crc kubenswrapper[4881]: I0126 13:02:16.400035 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 26 13:02:16 crc kubenswrapper[4881]: I0126 13:02:16.522699 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff1ef10b-9838-42f7-a2d9-b4907440336c-combined-ca-bundle\") pod \"ff1ef10b-9838-42f7-a2d9-b4907440336c\" (UID: \"ff1ef10b-9838-42f7-a2d9-b4907440336c\") " Jan 26 13:02:16 crc kubenswrapper[4881]: I0126 13:02:16.522848 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff1ef10b-9838-42f7-a2d9-b4907440336c-logs\") pod \"ff1ef10b-9838-42f7-a2d9-b4907440336c\" (UID: \"ff1ef10b-9838-42f7-a2d9-b4907440336c\") " Jan 26 13:02:16 crc kubenswrapper[4881]: I0126 13:02:16.522989 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w87t7\" (UniqueName: \"kubernetes.io/projected/ff1ef10b-9838-42f7-a2d9-b4907440336c-kube-api-access-w87t7\") pod \"ff1ef10b-9838-42f7-a2d9-b4907440336c\" (UID: \"ff1ef10b-9838-42f7-a2d9-b4907440336c\") " Jan 26 13:02:16 crc kubenswrapper[4881]: I0126 13:02:16.523032 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff1ef10b-9838-42f7-a2d9-b4907440336c-config-data\") pod \"ff1ef10b-9838-42f7-a2d9-b4907440336c\" (UID: \"ff1ef10b-9838-42f7-a2d9-b4907440336c\") " Jan 26 13:02:16 crc kubenswrapper[4881]: I0126 13:02:16.524128 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff1ef10b-9838-42f7-a2d9-b4907440336c-logs" (OuterVolumeSpecName: "logs") pod "ff1ef10b-9838-42f7-a2d9-b4907440336c" (UID: "ff1ef10b-9838-42f7-a2d9-b4907440336c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:02:16 crc kubenswrapper[4881]: I0126 13:02:16.531807 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff1ef10b-9838-42f7-a2d9-b4907440336c-kube-api-access-w87t7" (OuterVolumeSpecName: "kube-api-access-w87t7") pod "ff1ef10b-9838-42f7-a2d9-b4907440336c" (UID: "ff1ef10b-9838-42f7-a2d9-b4907440336c"). InnerVolumeSpecName "kube-api-access-w87t7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:02:16 crc kubenswrapper[4881]: I0126 13:02:16.548371 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff1ef10b-9838-42f7-a2d9-b4907440336c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ff1ef10b-9838-42f7-a2d9-b4907440336c" (UID: "ff1ef10b-9838-42f7-a2d9-b4907440336c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:02:16 crc kubenswrapper[4881]: I0126 13:02:16.561466 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff1ef10b-9838-42f7-a2d9-b4907440336c-config-data" (OuterVolumeSpecName: "config-data") pod "ff1ef10b-9838-42f7-a2d9-b4907440336c" (UID: "ff1ef10b-9838-42f7-a2d9-b4907440336c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:02:16 crc kubenswrapper[4881]: I0126 13:02:16.624854 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w87t7\" (UniqueName: \"kubernetes.io/projected/ff1ef10b-9838-42f7-a2d9-b4907440336c-kube-api-access-w87t7\") on node \"crc\" DevicePath \"\"" Jan 26 13:02:16 crc kubenswrapper[4881]: I0126 13:02:16.625115 4881 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff1ef10b-9838-42f7-a2d9-b4907440336c-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 13:02:16 crc kubenswrapper[4881]: I0126 13:02:16.625125 4881 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff1ef10b-9838-42f7-a2d9-b4907440336c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 13:02:16 crc kubenswrapper[4881]: I0126 13:02:16.625132 4881 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff1ef10b-9838-42f7-a2d9-b4907440336c-logs\") on node \"crc\" DevicePath \"\"" Jan 26 13:02:16 crc kubenswrapper[4881]: I0126 13:02:16.742823 4881 generic.go:334] "Generic (PLEG): container finished" podID="ff1ef10b-9838-42f7-a2d9-b4907440336c" containerID="50c33526f9178c59e6b8194b5cb88a3b9b184c1bc2e90d48c23e7e7cf0c72dfc" exitCode=0 Jan 26 13:02:16 crc kubenswrapper[4881]: I0126 13:02:16.742918 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ff1ef10b-9838-42f7-a2d9-b4907440336c","Type":"ContainerDied","Data":"50c33526f9178c59e6b8194b5cb88a3b9b184c1bc2e90d48c23e7e7cf0c72dfc"} Jan 26 13:02:16 crc kubenswrapper[4881]: I0126 13:02:16.742956 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ff1ef10b-9838-42f7-a2d9-b4907440336c","Type":"ContainerDied","Data":"3dbe7cde860cee54d1a360c4b339fcf2486b6f01aeac0d3498690af5b3192c1e"} Jan 26 13:02:16 crc kubenswrapper[4881]: I0126 13:02:16.742981 4881 scope.go:117] "RemoveContainer" containerID="50c33526f9178c59e6b8194b5cb88a3b9b184c1bc2e90d48c23e7e7cf0c72dfc" Jan 26 13:02:16 crc kubenswrapper[4881]: I0126 13:02:16.743145 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 26 13:02:16 crc kubenswrapper[4881]: I0126 13:02:16.754450 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"563a0ca1-a6c3-4089-88a7-f23423418751","Type":"ContainerStarted","Data":"08379d1db631e6433d57b4018a96dfced7f0917434da82daad32ca9a42fa3c27"} Jan 26 13:02:16 crc kubenswrapper[4881]: I0126 13:02:16.754807 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"563a0ca1-a6c3-4089-88a7-f23423418751","Type":"ContainerStarted","Data":"e21f7e0ddebc87b2792f359de3eab8b2d9c4b0405c614c1f3c0335babfdb299d"} Jan 26 13:02:16 crc kubenswrapper[4881]: I0126 13:02:16.754842 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"563a0ca1-a6c3-4089-88a7-f23423418751","Type":"ContainerStarted","Data":"b2f47d2ce0b4b2ae93bf305936b7af116e466e17637c413cade9835d444ff90d"} Jan 26 13:02:16 crc kubenswrapper[4881]: I0126 13:02:16.757128 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d8cb1811-603c-47eb-bcf0-37d705b75e5b","Type":"ContainerStarted","Data":"5f300b39f0ff05defb93dd4f666be02d932ed7994db209e17f5a9ef7895e8441"} Jan 26 13:02:16 crc kubenswrapper[4881]: I0126 13:02:16.757214 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d8cb1811-603c-47eb-bcf0-37d705b75e5b","Type":"ContainerStarted","Data":"6bf08a9c6abffe06d3b998626a9317cc9217b39d6eb0ce59ecfa36eb1ac944fc"} Jan 26 13:02:16 crc kubenswrapper[4881]: I0126 13:02:16.799338 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.799307471 podStartE2EDuration="2.799307471s" podCreationTimestamp="2026-01-26 13:02:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 13:02:16.782595587 +0000 UTC m=+1609.261905633" watchObservedRunningTime="2026-01-26 13:02:16.799307471 +0000 UTC m=+1609.278617507" Jan 26 13:02:16 crc kubenswrapper[4881]: I0126 13:02:16.835773 4881 scope.go:117] "RemoveContainer" containerID="5f739a30bd57a180db68f5f26895ba9084f0f371f4e467a5ec3870fb4c04b094" Jan 26 13:02:16 crc kubenswrapper[4881]: I0126 13:02:16.859233 4881 scope.go:117] "RemoveContainer" containerID="50c33526f9178c59e6b8194b5cb88a3b9b184c1bc2e90d48c23e7e7cf0c72dfc" Jan 26 13:02:16 crc kubenswrapper[4881]: I0126 13:02:16.859330 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 26 13:02:16 crc kubenswrapper[4881]: E0126 13:02:16.862905 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"50c33526f9178c59e6b8194b5cb88a3b9b184c1bc2e90d48c23e7e7cf0c72dfc\": container with ID starting with 50c33526f9178c59e6b8194b5cb88a3b9b184c1bc2e90d48c23e7e7cf0c72dfc not found: ID does not exist" containerID="50c33526f9178c59e6b8194b5cb88a3b9b184c1bc2e90d48c23e7e7cf0c72dfc" Jan 26 13:02:16 crc kubenswrapper[4881]: I0126 13:02:16.862936 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50c33526f9178c59e6b8194b5cb88a3b9b184c1bc2e90d48c23e7e7cf0c72dfc"} err="failed to get container status \"50c33526f9178c59e6b8194b5cb88a3b9b184c1bc2e90d48c23e7e7cf0c72dfc\": rpc error: code = NotFound desc = could not find container \"50c33526f9178c59e6b8194b5cb88a3b9b184c1bc2e90d48c23e7e7cf0c72dfc\": 
container with ID starting with 50c33526f9178c59e6b8194b5cb88a3b9b184c1bc2e90d48c23e7e7cf0c72dfc not found: ID does not exist" Jan 26 13:02:16 crc kubenswrapper[4881]: I0126 13:02:16.862959 4881 scope.go:117] "RemoveContainer" containerID="5f739a30bd57a180db68f5f26895ba9084f0f371f4e467a5ec3870fb4c04b094" Jan 26 13:02:16 crc kubenswrapper[4881]: E0126 13:02:16.865330 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f739a30bd57a180db68f5f26895ba9084f0f371f4e467a5ec3870fb4c04b094\": container with ID starting with 5f739a30bd57a180db68f5f26895ba9084f0f371f4e467a5ec3870fb4c04b094 not found: ID does not exist" containerID="5f739a30bd57a180db68f5f26895ba9084f0f371f4e467a5ec3870fb4c04b094" Jan 26 13:02:16 crc kubenswrapper[4881]: I0126 13:02:16.865377 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f739a30bd57a180db68f5f26895ba9084f0f371f4e467a5ec3870fb4c04b094"} err="failed to get container status \"5f739a30bd57a180db68f5f26895ba9084f0f371f4e467a5ec3870fb4c04b094\": rpc error: code = NotFound desc = could not find container \"5f739a30bd57a180db68f5f26895ba9084f0f371f4e467a5ec3870fb4c04b094\": container with ID starting with 5f739a30bd57a180db68f5f26895ba9084f0f371f4e467a5ec3870fb4c04b094 not found: ID does not exist" Jan 26 13:02:16 crc kubenswrapper[4881]: I0126 13:02:16.871832 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 26 13:02:16 crc kubenswrapper[4881]: I0126 13:02:16.883927 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 26 13:02:16 crc kubenswrapper[4881]: E0126 13:02:16.884404 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff1ef10b-9838-42f7-a2d9-b4907440336c" containerName="nova-api-log" Jan 26 13:02:16 crc kubenswrapper[4881]: I0126 13:02:16.884424 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff1ef10b-9838-42f7-a2d9-b4907440336c" containerName="nova-api-log" Jan 26 13:02:16 crc kubenswrapper[4881]: E0126 13:02:16.884452 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff1ef10b-9838-42f7-a2d9-b4907440336c" containerName="nova-api-api" Jan 26 13:02:16 crc kubenswrapper[4881]: I0126 13:02:16.884462 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff1ef10b-9838-42f7-a2d9-b4907440336c" containerName="nova-api-api" Jan 26 13:02:16 crc kubenswrapper[4881]: I0126 13:02:16.884711 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff1ef10b-9838-42f7-a2d9-b4907440336c" containerName="nova-api-log" Jan 26 13:02:16 crc kubenswrapper[4881]: I0126 13:02:16.884731 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff1ef10b-9838-42f7-a2d9-b4907440336c" containerName="nova-api-api" Jan 26 13:02:16 crc kubenswrapper[4881]: I0126 13:02:16.885902 4881 util.go:30] "No sandbox for pod can be found. 
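
The pod_startup_latency_tracker arithmetic is visible in the logged values themselves: for nova-metadata-0 above, watchObservedRunningTime 13:02:16.799307471 minus podCreationTimestamp 13:02:14 gives the reported 2.799307471s, and when images are actually pulled the SLO duration excludes the pull window (ceilometer-0 later reports 2.68s SLO against a 5.90s end-to-end). A sketch of that computation as inferred from the logged fields, not taken from kubelet source:

    package main

    import (
        "fmt"
        "time"
    )

    // startupSLO reproduces the arithmetic the latency-tracker lines imply:
    // end-to-end startup is observedRunning - podCreation, and the SLO
    // duration additionally excludes time spent pulling images.
    func startupSLO(created, observedRunning, firstPull, lastPull time.Time) (slo, e2e time.Duration) {
        e2e = observedRunning.Sub(created)
        slo = e2e
        if !firstPull.IsZero() && !lastPull.IsZero() {
            slo -= lastPull.Sub(firstPull) // subtract the image-pull window
        }
        return slo, e2e
    }

    func main() {
        parse := func(s string) time.Time {
            t, _ := time.Parse(time.RFC3339Nano, s)
            return t
        }
        created := parse("2026-01-26T13:02:14Z")
        running := parse("2026-01-26T13:02:16.799307471Z")
        slo, e2e := startupSLO(created, running, time.Time{}, time.Time{})
        fmt.Println(slo, e2e) // 2.799307471s 2.799307471s, matching the log
    }
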
Need to start a new one" pod="openstack/nova-api-0" Jan 26 13:02:16 crc kubenswrapper[4881]: I0126 13:02:16.888529 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 26 13:02:16 crc kubenswrapper[4881]: I0126 13:02:16.894499 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 26 13:02:16 crc kubenswrapper[4881]: I0126 13:02:16.938355 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e5b3bfa7-449d-4e9c-b83a-592d38765699-logs\") pod \"nova-api-0\" (UID: \"e5b3bfa7-449d-4e9c-b83a-592d38765699\") " pod="openstack/nova-api-0" Jan 26 13:02:16 crc kubenswrapper[4881]: I0126 13:02:16.938430 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nd84w\" (UniqueName: \"kubernetes.io/projected/e5b3bfa7-449d-4e9c-b83a-592d38765699-kube-api-access-nd84w\") pod \"nova-api-0\" (UID: \"e5b3bfa7-449d-4e9c-b83a-592d38765699\") " pod="openstack/nova-api-0" Jan 26 13:02:16 crc kubenswrapper[4881]: I0126 13:02:16.938502 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5b3bfa7-449d-4e9c-b83a-592d38765699-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e5b3bfa7-449d-4e9c-b83a-592d38765699\") " pod="openstack/nova-api-0" Jan 26 13:02:16 crc kubenswrapper[4881]: I0126 13:02:16.938551 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5b3bfa7-449d-4e9c-b83a-592d38765699-config-data\") pod \"nova-api-0\" (UID: \"e5b3bfa7-449d-4e9c-b83a-592d38765699\") " pod="openstack/nova-api-0" Jan 26 13:02:17 crc kubenswrapper[4881]: I0126 13:02:17.040220 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5b3bfa7-449d-4e9c-b83a-592d38765699-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e5b3bfa7-449d-4e9c-b83a-592d38765699\") " pod="openstack/nova-api-0" Jan 26 13:02:17 crc kubenswrapper[4881]: I0126 13:02:17.040283 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5b3bfa7-449d-4e9c-b83a-592d38765699-config-data\") pod \"nova-api-0\" (UID: \"e5b3bfa7-449d-4e9c-b83a-592d38765699\") " pod="openstack/nova-api-0" Jan 26 13:02:17 crc kubenswrapper[4881]: I0126 13:02:17.040364 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e5b3bfa7-449d-4e9c-b83a-592d38765699-logs\") pod \"nova-api-0\" (UID: \"e5b3bfa7-449d-4e9c-b83a-592d38765699\") " pod="openstack/nova-api-0" Jan 26 13:02:17 crc kubenswrapper[4881]: I0126 13:02:17.040407 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nd84w\" (UniqueName: \"kubernetes.io/projected/e5b3bfa7-449d-4e9c-b83a-592d38765699-kube-api-access-nd84w\") pod \"nova-api-0\" (UID: \"e5b3bfa7-449d-4e9c-b83a-592d38765699\") " pod="openstack/nova-api-0" Jan 26 13:02:17 crc kubenswrapper[4881]: I0126 13:02:17.040997 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e5b3bfa7-449d-4e9c-b83a-592d38765699-logs\") pod \"nova-api-0\" (UID: \"e5b3bfa7-449d-4e9c-b83a-592d38765699\") " 
pod="openstack/nova-api-0" Jan 26 13:02:17 crc kubenswrapper[4881]: I0126 13:02:17.051115 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5b3bfa7-449d-4e9c-b83a-592d38765699-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e5b3bfa7-449d-4e9c-b83a-592d38765699\") " pod="openstack/nova-api-0" Jan 26 13:02:17 crc kubenswrapper[4881]: I0126 13:02:17.052967 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5b3bfa7-449d-4e9c-b83a-592d38765699-config-data\") pod \"nova-api-0\" (UID: \"e5b3bfa7-449d-4e9c-b83a-592d38765699\") " pod="openstack/nova-api-0" Jan 26 13:02:17 crc kubenswrapper[4881]: I0126 13:02:17.059206 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nd84w\" (UniqueName: \"kubernetes.io/projected/e5b3bfa7-449d-4e9c-b83a-592d38765699-kube-api-access-nd84w\") pod \"nova-api-0\" (UID: \"e5b3bfa7-449d-4e9c-b83a-592d38765699\") " pod="openstack/nova-api-0" Jan 26 13:02:17 crc kubenswrapper[4881]: I0126 13:02:17.257324 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 26 13:02:17 crc kubenswrapper[4881]: I0126 13:02:17.761820 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 26 13:02:17 crc kubenswrapper[4881]: I0126 13:02:17.777806 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d8cb1811-603c-47eb-bcf0-37d705b75e5b","Type":"ContainerStarted","Data":"5a80f56ae068359d72b3ec1beebd6617250d5afb3ec9b8f79e8ebd9972cf0856"} Jan 26 13:02:17 crc kubenswrapper[4881]: I0126 13:02:17.779847 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e5b3bfa7-449d-4e9c-b83a-592d38765699","Type":"ContainerStarted","Data":"81d18184fda84e1b9c92098a24ffde1adc1ea965c0c2da9575f0857cf626dc09"} Jan 26 13:02:18 crc kubenswrapper[4881]: I0126 13:02:18.110220 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff1ef10b-9838-42f7-a2d9-b4907440336c" path="/var/lib/kubelet/pods/ff1ef10b-9838-42f7-a2d9-b4907440336c/volumes" Jan 26 13:02:18 crc kubenswrapper[4881]: I0126 13:02:18.793218 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e5b3bfa7-449d-4e9c-b83a-592d38765699","Type":"ContainerStarted","Data":"d0132dfc94a6a42d1c87b2e769248a60ecf87cc575bb9edf3d60e8a031cac6a3"} Jan 26 13:02:18 crc kubenswrapper[4881]: I0126 13:02:18.793472 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e5b3bfa7-449d-4e9c-b83a-592d38765699","Type":"ContainerStarted","Data":"5c0b92e62f31e71a37ca293bbb15a384ca043238dabe8a38fef35c98d418b0d5"} Jan 26 13:02:18 crc kubenswrapper[4881]: I0126 13:02:18.798798 4881 generic.go:334] "Generic (PLEG): container finished" podID="b5a078a6-15d0-4f45-a167-d5ec218210ef" containerID="6e20e0735b9a019110b29ce9fa132c0a3f7c3f92e8258c25d98729163dcfaa47" exitCode=0 Jan 26 13:02:18 crc kubenswrapper[4881]: I0126 13:02:18.798847 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b5a078a6-15d0-4f45-a167-d5ec218210ef","Type":"ContainerDied","Data":"6e20e0735b9a019110b29ce9fa132c0a3f7c3f92e8258c25d98729163dcfaa47"} Jan 26 13:02:18 crc kubenswrapper[4881]: I0126 13:02:18.815117 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" 
Jan 26 13:02:18 crc kubenswrapper[4881]: I0126 13:02:18.815117 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.815093862 podStartE2EDuration="2.815093862s" podCreationTimestamp="2026-01-26 13:02:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 13:02:18.809706862 +0000 UTC m=+1611.289016888" watchObservedRunningTime="2026-01-26 13:02:18.815093862 +0000 UTC m=+1611.294403888"
Jan 26 13:02:19 crc kubenswrapper[4881]: I0126 13:02:19.223814 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 26 13:02:19 crc kubenswrapper[4881]: I0126 13:02:19.284277 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfkd8\" (UniqueName: \"kubernetes.io/projected/b5a078a6-15d0-4f45-a167-d5ec218210ef-kube-api-access-cfkd8\") pod \"b5a078a6-15d0-4f45-a167-d5ec218210ef\" (UID: \"b5a078a6-15d0-4f45-a167-d5ec218210ef\") "
Jan 26 13:02:19 crc kubenswrapper[4881]: I0126 13:02:19.284546 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5a078a6-15d0-4f45-a167-d5ec218210ef-config-data\") pod \"b5a078a6-15d0-4f45-a167-d5ec218210ef\" (UID: \"b5a078a6-15d0-4f45-a167-d5ec218210ef\") "
Jan 26 13:02:19 crc kubenswrapper[4881]: I0126 13:02:19.284644 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5a078a6-15d0-4f45-a167-d5ec218210ef-combined-ca-bundle\") pod \"b5a078a6-15d0-4f45-a167-d5ec218210ef\" (UID: \"b5a078a6-15d0-4f45-a167-d5ec218210ef\") "
Jan 26 13:02:19 crc kubenswrapper[4881]: I0126 13:02:19.291666 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5a078a6-15d0-4f45-a167-d5ec218210ef-kube-api-access-cfkd8" (OuterVolumeSpecName: "kube-api-access-cfkd8") pod "b5a078a6-15d0-4f45-a167-d5ec218210ef" (UID: "b5a078a6-15d0-4f45-a167-d5ec218210ef"). InnerVolumeSpecName "kube-api-access-cfkd8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 13:02:19 crc kubenswrapper[4881]: I0126 13:02:19.325667 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5a078a6-15d0-4f45-a167-d5ec218210ef-config-data" (OuterVolumeSpecName: "config-data") pod "b5a078a6-15d0-4f45-a167-d5ec218210ef" (UID: "b5a078a6-15d0-4f45-a167-d5ec218210ef"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 13:02:19 crc kubenswrapper[4881]: I0126 13:02:19.331006 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5a078a6-15d0-4f45-a167-d5ec218210ef-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b5a078a6-15d0-4f45-a167-d5ec218210ef" (UID: "b5a078a6-15d0-4f45-a167-d5ec218210ef"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 13:02:19 crc kubenswrapper[4881]: I0126 13:02:19.386416 4881 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5a078a6-15d0-4f45-a167-d5ec218210ef-config-data\") on node \"crc\" DevicePath \"\""
Jan 26 13:02:19 crc kubenswrapper[4881]: I0126 13:02:19.386446 4881 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5a078a6-15d0-4f45-a167-d5ec218210ef-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 13:02:19 crc kubenswrapper[4881]: I0126 13:02:19.386457 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfkd8\" (UniqueName: \"kubernetes.io/projected/b5a078a6-15d0-4f45-a167-d5ec218210ef-kube-api-access-cfkd8\") on node \"crc\" DevicePath \"\""
Jan 26 13:02:19 crc kubenswrapper[4881]: I0126 13:02:19.811913 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b5a078a6-15d0-4f45-a167-d5ec218210ef","Type":"ContainerDied","Data":"93bedb424afef600b8f6572bf0e29d1135bf7949e595fcd2939a8a24d957a689"}
Jan 26 13:02:19 crc kubenswrapper[4881]: I0126 13:02:19.811971 4881 scope.go:117] "RemoveContainer" containerID="6e20e0735b9a019110b29ce9fa132c0a3f7c3f92e8258c25d98729163dcfaa47"
Jan 26 13:02:19 crc kubenswrapper[4881]: I0126 13:02:19.812077 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 26 13:02:19 crc kubenswrapper[4881]: I0126 13:02:19.824978 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d8cb1811-603c-47eb-bcf0-37d705b75e5b","Type":"ContainerStarted","Data":"428cfabcde9e727b3993e21dd89605f63467b70b0fa9084baedb3d1f8c3fedba"}
Jan 26 13:02:19 crc kubenswrapper[4881]: I0126 13:02:19.825183 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Jan 26 13:02:19 crc kubenswrapper[4881]: I0126 13:02:19.853204 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 26 13:02:19 crc kubenswrapper[4881]: I0126 13:02:19.868537 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 26 13:02:19 crc kubenswrapper[4881]: I0126 13:02:19.882309 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Jan 26 13:02:19 crc kubenswrapper[4881]: E0126 13:02:19.882876 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5a078a6-15d0-4f45-a167-d5ec218210ef" containerName="nova-scheduler-scheduler"
Jan 26 13:02:19 crc kubenswrapper[4881]: I0126 13:02:19.882898 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5a078a6-15d0-4f45-a167-d5ec218210ef" containerName="nova-scheduler-scheduler"
Jan 26 13:02:19 crc kubenswrapper[4881]: I0126 13:02:19.883168 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5a078a6-15d0-4f45-a167-d5ec218210ef" containerName="nova-scheduler-scheduler"
Jan 26 13:02:19 crc kubenswrapper[4881]: I0126 13:02:19.884304 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 26 13:02:19 crc kubenswrapper[4881]: I0126 13:02:19.890762 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Jan 26 13:02:19 crc kubenswrapper[4881]: I0126 13:02:19.898041 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.68137317 podStartE2EDuration="5.898018552s" podCreationTimestamp="2026-01-26 13:02:14 +0000 UTC" firstStartedPulling="2026-01-26 13:02:15.645255852 +0000 UTC m=+1608.124565878" lastFinishedPulling="2026-01-26 13:02:18.861901224 +0000 UTC m=+1611.341211260" observedRunningTime="2026-01-26 13:02:19.874160466 +0000 UTC m=+1612.353470492" watchObservedRunningTime="2026-01-26 13:02:19.898018552 +0000 UTC m=+1612.377328578"
Jan 26 13:02:19 crc kubenswrapper[4881]: I0126 13:02:19.914927 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 26 13:02:20 crc kubenswrapper[4881]: I0126 13:02:20.005097 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4163a60f-62c2-4edb-b675-b25e408ca3bd-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"4163a60f-62c2-4edb-b675-b25e408ca3bd\") " pod="openstack/nova-scheduler-0"
Jan 26 13:02:20 crc kubenswrapper[4881]: I0126 13:02:20.005642 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4163a60f-62c2-4edb-b675-b25e408ca3bd-config-data\") pod \"nova-scheduler-0\" (UID: \"4163a60f-62c2-4edb-b675-b25e408ca3bd\") " pod="openstack/nova-scheduler-0"
Jan 26 13:02:20 crc kubenswrapper[4881]: I0126 13:02:20.005675 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snt5j\" (UniqueName: \"kubernetes.io/projected/4163a60f-62c2-4edb-b675-b25e408ca3bd-kube-api-access-snt5j\") pod \"nova-scheduler-0\" (UID: \"4163a60f-62c2-4edb-b675-b25e408ca3bd\") " pod="openstack/nova-scheduler-0"
Jan 26 13:02:20 crc kubenswrapper[4881]: I0126 13:02:20.103236 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5a078a6-15d0-4f45-a167-d5ec218210ef" path="/var/lib/kubelet/pods/b5a078a6-15d0-4f45-a167-d5ec218210ef/volumes"
Jan 26 13:02:20 crc kubenswrapper[4881]: I0126 13:02:20.106663 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4163a60f-62c2-4edb-b675-b25e408ca3bd-config-data\") pod \"nova-scheduler-0\" (UID: \"4163a60f-62c2-4edb-b675-b25e408ca3bd\") " pod="openstack/nova-scheduler-0"
Jan 26 13:02:20 crc kubenswrapper[4881]: I0126 13:02:20.106701 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-snt5j\" (UniqueName: \"kubernetes.io/projected/4163a60f-62c2-4edb-b675-b25e408ca3bd-kube-api-access-snt5j\") pod \"nova-scheduler-0\" (UID: \"4163a60f-62c2-4edb-b675-b25e408ca3bd\") " pod="openstack/nova-scheduler-0"
Jan 26 13:02:20 crc kubenswrapper[4881]: I0126 13:02:20.106730 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4163a60f-62c2-4edb-b675-b25e408ca3bd-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"4163a60f-62c2-4edb-b675-b25e408ca3bd\") " pod="openstack/nova-scheduler-0"
Jan 26 13:02:20 crc kubenswrapper[4881]: I0126
13:02:20.110237 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4163a60f-62c2-4edb-b675-b25e408ca3bd-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"4163a60f-62c2-4edb-b675-b25e408ca3bd\") " pod="openstack/nova-scheduler-0" Jan 26 13:02:20 crc kubenswrapper[4881]: I0126 13:02:20.113204 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4163a60f-62c2-4edb-b675-b25e408ca3bd-config-data\") pod \"nova-scheduler-0\" (UID: \"4163a60f-62c2-4edb-b675-b25e408ca3bd\") " pod="openstack/nova-scheduler-0" Jan 26 13:02:20 crc kubenswrapper[4881]: I0126 13:02:20.124904 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-snt5j\" (UniqueName: \"kubernetes.io/projected/4163a60f-62c2-4edb-b675-b25e408ca3bd-kube-api-access-snt5j\") pod \"nova-scheduler-0\" (UID: \"4163a60f-62c2-4edb-b675-b25e408ca3bd\") " pod="openstack/nova-scheduler-0" Jan 26 13:02:20 crc kubenswrapper[4881]: I0126 13:02:20.199679 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 26 13:02:20 crc kubenswrapper[4881]: I0126 13:02:20.199985 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 26 13:02:20 crc kubenswrapper[4881]: I0126 13:02:20.206134 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 26 13:02:20 crc kubenswrapper[4881]: I0126 13:02:20.312889 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 26 13:02:20 crc kubenswrapper[4881]: W0126 13:02:20.732438 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4163a60f_62c2_4edb_b675_b25e408ca3bd.slice/crio-d23ae864343c0a922a0240ca6be9285777b75cd73d0e31ec0ab55e36e285e6f5 WatchSource:0}: Error finding container d23ae864343c0a922a0240ca6be9285777b75cd73d0e31ec0ab55e36e285e6f5: Status 404 returned error can't find the container with id d23ae864343c0a922a0240ca6be9285777b75cd73d0e31ec0ab55e36e285e6f5 Jan 26 13:02:20 crc kubenswrapper[4881]: I0126 13:02:20.736438 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 13:02:20 crc kubenswrapper[4881]: I0126 13:02:20.834059 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4163a60f-62c2-4edb-b675-b25e408ca3bd","Type":"ContainerStarted","Data":"d23ae864343c0a922a0240ca6be9285777b75cd73d0e31ec0ab55e36e285e6f5"} Jan 26 13:02:21 crc kubenswrapper[4881]: I0126 13:02:21.847259 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4163a60f-62c2-4edb-b675-b25e408ca3bd","Type":"ContainerStarted","Data":"db1f97f43c16a4f205bb068a41dd646ef20ad228b0934c96df786baa0a8514a9"} Jan 26 13:02:21 crc kubenswrapper[4881]: I0126 13:02:21.871063 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.87101959 podStartE2EDuration="2.87101959s" podCreationTimestamp="2026-01-26 13:02:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 13:02:21.86645967 +0000 UTC m=+1614.345769706" watchObservedRunningTime="2026-01-26 13:02:21.87101959 +0000 UTC 
m=+1614.350329616" Jan 26 13:02:25 crc kubenswrapper[4881]: I0126 13:02:25.199097 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 26 13:02:25 crc kubenswrapper[4881]: I0126 13:02:25.199548 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 26 13:02:25 crc kubenswrapper[4881]: I0126 13:02:25.207067 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 26 13:02:26 crc kubenswrapper[4881]: I0126 13:02:26.213746 4881 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="563a0ca1-a6c3-4089-88a7-f23423418751" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.217:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 26 13:02:26 crc kubenswrapper[4881]: I0126 13:02:26.213769 4881 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="563a0ca1-a6c3-4089-88a7-f23423418751" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.217:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 26 13:02:27 crc kubenswrapper[4881]: I0126 13:02:27.258175 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 26 13:02:27 crc kubenswrapper[4881]: I0126 13:02:27.258233 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 26 13:02:28 crc kubenswrapper[4881]: I0126 13:02:28.342086 4881 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="e5b3bfa7-449d-4e9c-b83a-592d38765699" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.218:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 13:02:28 crc kubenswrapper[4881]: I0126 13:02:28.342826 4881 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="e5b3bfa7-449d-4e9c-b83a-592d38765699" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.218:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 13:02:30 crc kubenswrapper[4881]: I0126 13:02:30.206640 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 26 13:02:30 crc kubenswrapper[4881]: I0126 13:02:30.258626 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 26 13:02:30 crc kubenswrapper[4881]: I0126 13:02:30.980119 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 26 13:02:35 crc kubenswrapper[4881]: I0126 13:02:34.999711 4881 generic.go:334] "Generic (PLEG): container finished" podID="07b4140b-bd1d-4295-961e-99d18c4406c3" containerID="094fe90e94e3e35810d16d888a4028224036b85f76f85e1088a1f49e9407b9b4" exitCode=0 Jan 26 13:02:35 crc kubenswrapper[4881]: I0126 13:02:34.999845 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-bhqzs" event={"ID":"07b4140b-bd1d-4295-961e-99d18c4406c3","Type":"ContainerDied","Data":"094fe90e94e3e35810d16d888a4028224036b85f76f85e1088a1f49e9407b9b4"} Jan 26 13:02:35 crc kubenswrapper[4881]: I0126 13:02:35.211553 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack/nova-metadata-0" Jan 26 13:02:35 crc kubenswrapper[4881]: I0126 13:02:35.212886 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 26 13:02:35 crc kubenswrapper[4881]: I0126 13:02:35.217112 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 26 13:02:36 crc kubenswrapper[4881]: I0126 13:02:36.028148 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 26 13:02:36 crc kubenswrapper[4881]: I0126 13:02:36.455590 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-bhqzs" Jan 26 13:02:36 crc kubenswrapper[4881]: I0126 13:02:36.580812 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07b4140b-bd1d-4295-961e-99d18c4406c3-config-data\") pod \"07b4140b-bd1d-4295-961e-99d18c4406c3\" (UID: \"07b4140b-bd1d-4295-961e-99d18c4406c3\") " Jan 26 13:02:36 crc kubenswrapper[4881]: I0126 13:02:36.581061 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07b4140b-bd1d-4295-961e-99d18c4406c3-scripts\") pod \"07b4140b-bd1d-4295-961e-99d18c4406c3\" (UID: \"07b4140b-bd1d-4295-961e-99d18c4406c3\") " Jan 26 13:02:36 crc kubenswrapper[4881]: I0126 13:02:36.581113 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7qxw7\" (UniqueName: \"kubernetes.io/projected/07b4140b-bd1d-4295-961e-99d18c4406c3-kube-api-access-7qxw7\") pod \"07b4140b-bd1d-4295-961e-99d18c4406c3\" (UID: \"07b4140b-bd1d-4295-961e-99d18c4406c3\") " Jan 26 13:02:36 crc kubenswrapper[4881]: I0126 13:02:36.581163 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07b4140b-bd1d-4295-961e-99d18c4406c3-combined-ca-bundle\") pod \"07b4140b-bd1d-4295-961e-99d18c4406c3\" (UID: \"07b4140b-bd1d-4295-961e-99d18c4406c3\") " Jan 26 13:02:36 crc kubenswrapper[4881]: I0126 13:02:36.588075 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07b4140b-bd1d-4295-961e-99d18c4406c3-scripts" (OuterVolumeSpecName: "scripts") pod "07b4140b-bd1d-4295-961e-99d18c4406c3" (UID: "07b4140b-bd1d-4295-961e-99d18c4406c3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:02:36 crc kubenswrapper[4881]: I0126 13:02:36.588189 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07b4140b-bd1d-4295-961e-99d18c4406c3-kube-api-access-7qxw7" (OuterVolumeSpecName: "kube-api-access-7qxw7") pod "07b4140b-bd1d-4295-961e-99d18c4406c3" (UID: "07b4140b-bd1d-4295-961e-99d18c4406c3"). InnerVolumeSpecName "kube-api-access-7qxw7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:02:36 crc kubenswrapper[4881]: I0126 13:02:36.620227 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07b4140b-bd1d-4295-961e-99d18c4406c3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "07b4140b-bd1d-4295-961e-99d18c4406c3" (UID: "07b4140b-bd1d-4295-961e-99d18c4406c3"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:02:36 crc kubenswrapper[4881]: I0126 13:02:36.637596 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07b4140b-bd1d-4295-961e-99d18c4406c3-config-data" (OuterVolumeSpecName: "config-data") pod "07b4140b-bd1d-4295-961e-99d18c4406c3" (UID: "07b4140b-bd1d-4295-961e-99d18c4406c3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:02:36 crc kubenswrapper[4881]: I0126 13:02:36.685276 4881 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07b4140b-bd1d-4295-961e-99d18c4406c3-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 13:02:36 crc kubenswrapper[4881]: I0126 13:02:36.685306 4881 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07b4140b-bd1d-4295-961e-99d18c4406c3-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 13:02:36 crc kubenswrapper[4881]: I0126 13:02:36.685320 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7qxw7\" (UniqueName: \"kubernetes.io/projected/07b4140b-bd1d-4295-961e-99d18c4406c3-kube-api-access-7qxw7\") on node \"crc\" DevicePath \"\"" Jan 26 13:02:36 crc kubenswrapper[4881]: I0126 13:02:36.685332 4881 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07b4140b-bd1d-4295-961e-99d18c4406c3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 13:02:36 crc kubenswrapper[4881]: I0126 13:02:36.914881 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 26 13:02:37 crc kubenswrapper[4881]: I0126 13:02:37.033849 4881 generic.go:334] "Generic (PLEG): container finished" podID="046a94f5-8350-4db1-9bc4-565b40b3c9bc" containerID="fbba250b46d93fb4d5e34e9d06da663c89c14a443777f87d16224a9bf0161b3b" exitCode=137 Jan 26 13:02:37 crc kubenswrapper[4881]: I0126 13:02:37.034305 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"046a94f5-8350-4db1-9bc4-565b40b3c9bc","Type":"ContainerDied","Data":"fbba250b46d93fb4d5e34e9d06da663c89c14a443777f87d16224a9bf0161b3b"} Jan 26 13:02:37 crc kubenswrapper[4881]: I0126 13:02:37.034348 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"046a94f5-8350-4db1-9bc4-565b40b3c9bc","Type":"ContainerDied","Data":"f0184e0df5bac0ae34ee0a69572218666674cc84c16c2d8a3b519aea5fa4b546"} Jan 26 13:02:37 crc kubenswrapper[4881]: I0126 13:02:37.034375 4881 scope.go:117] "RemoveContainer" containerID="fbba250b46d93fb4d5e34e9d06da663c89c14a443777f87d16224a9bf0161b3b" Jan 26 13:02:37 crc kubenswrapper[4881]: I0126 13:02:37.034590 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 26 13:02:37 crc kubenswrapper[4881]: I0126 13:02:37.041798 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-bhqzs" Jan 26 13:02:37 crc kubenswrapper[4881]: I0126 13:02:37.042606 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-bhqzs" event={"ID":"07b4140b-bd1d-4295-961e-99d18c4406c3","Type":"ContainerDied","Data":"e9fa003d7b129504c664a0a5e7b556a6c73866dfafca28815ecbe43032fbb9ed"} Jan 26 13:02:37 crc kubenswrapper[4881]: I0126 13:02:37.042802 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e9fa003d7b129504c664a0a5e7b556a6c73866dfafca28815ecbe43032fbb9ed" Jan 26 13:02:37 crc kubenswrapper[4881]: I0126 13:02:37.081732 4881 scope.go:117] "RemoveContainer" containerID="fbba250b46d93fb4d5e34e9d06da663c89c14a443777f87d16224a9bf0161b3b" Jan 26 13:02:37 crc kubenswrapper[4881]: E0126 13:02:37.083903 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fbba250b46d93fb4d5e34e9d06da663c89c14a443777f87d16224a9bf0161b3b\": container with ID starting with fbba250b46d93fb4d5e34e9d06da663c89c14a443777f87d16224a9bf0161b3b not found: ID does not exist" containerID="fbba250b46d93fb4d5e34e9d06da663c89c14a443777f87d16224a9bf0161b3b" Jan 26 13:02:37 crc kubenswrapper[4881]: I0126 13:02:37.083971 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fbba250b46d93fb4d5e34e9d06da663c89c14a443777f87d16224a9bf0161b3b"} err="failed to get container status \"fbba250b46d93fb4d5e34e9d06da663c89c14a443777f87d16224a9bf0161b3b\": rpc error: code = NotFound desc = could not find container \"fbba250b46d93fb4d5e34e9d06da663c89c14a443777f87d16224a9bf0161b3b\": container with ID starting with fbba250b46d93fb4d5e34e9d06da663c89c14a443777f87d16224a9bf0161b3b not found: ID does not exist" Jan 26 13:02:37 crc kubenswrapper[4881]: I0126 13:02:37.091499 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/046a94f5-8350-4db1-9bc4-565b40b3c9bc-config-data\") pod \"046a94f5-8350-4db1-9bc4-565b40b3c9bc\" (UID: \"046a94f5-8350-4db1-9bc4-565b40b3c9bc\") " Jan 26 13:02:37 crc kubenswrapper[4881]: I0126 13:02:37.091710 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jx2s7\" (UniqueName: \"kubernetes.io/projected/046a94f5-8350-4db1-9bc4-565b40b3c9bc-kube-api-access-jx2s7\") pod \"046a94f5-8350-4db1-9bc4-565b40b3c9bc\" (UID: \"046a94f5-8350-4db1-9bc4-565b40b3c9bc\") " Jan 26 13:02:37 crc kubenswrapper[4881]: I0126 13:02:37.092035 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/046a94f5-8350-4db1-9bc4-565b40b3c9bc-combined-ca-bundle\") pod \"046a94f5-8350-4db1-9bc4-565b40b3c9bc\" (UID: \"046a94f5-8350-4db1-9bc4-565b40b3c9bc\") " Jan 26 13:02:37 crc kubenswrapper[4881]: I0126 13:02:37.109473 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/046a94f5-8350-4db1-9bc4-565b40b3c9bc-kube-api-access-jx2s7" (OuterVolumeSpecName: "kube-api-access-jx2s7") pod "046a94f5-8350-4db1-9bc4-565b40b3c9bc" (UID: "046a94f5-8350-4db1-9bc4-565b40b3c9bc"). InnerVolumeSpecName "kube-api-access-jx2s7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:02:37 crc kubenswrapper[4881]: I0126 13:02:37.166807 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 26 13:02:37 crc kubenswrapper[4881]: E0126 13:02:37.168041 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="046a94f5-8350-4db1-9bc4-565b40b3c9bc" containerName="nova-cell1-novncproxy-novncproxy" Jan 26 13:02:37 crc kubenswrapper[4881]: I0126 13:02:37.168067 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="046a94f5-8350-4db1-9bc4-565b40b3c9bc" containerName="nova-cell1-novncproxy-novncproxy" Jan 26 13:02:37 crc kubenswrapper[4881]: E0126 13:02:37.168115 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07b4140b-bd1d-4295-961e-99d18c4406c3" containerName="nova-cell1-conductor-db-sync" Jan 26 13:02:37 crc kubenswrapper[4881]: I0126 13:02:37.168128 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="07b4140b-bd1d-4295-961e-99d18c4406c3" containerName="nova-cell1-conductor-db-sync" Jan 26 13:02:37 crc kubenswrapper[4881]: I0126 13:02:37.168858 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="07b4140b-bd1d-4295-961e-99d18c4406c3" containerName="nova-cell1-conductor-db-sync" Jan 26 13:02:37 crc kubenswrapper[4881]: I0126 13:02:37.168919 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="046a94f5-8350-4db1-9bc4-565b40b3c9bc" containerName="nova-cell1-novncproxy-novncproxy" Jan 26 13:02:37 crc kubenswrapper[4881]: I0126 13:02:37.171189 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 26 13:02:37 crc kubenswrapper[4881]: I0126 13:02:37.176020 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 26 13:02:37 crc kubenswrapper[4881]: I0126 13:02:37.188271 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/046a94f5-8350-4db1-9bc4-565b40b3c9bc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "046a94f5-8350-4db1-9bc4-565b40b3c9bc" (UID: "046a94f5-8350-4db1-9bc4-565b40b3c9bc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:02:37 crc kubenswrapper[4881]: I0126 13:02:37.192060 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/046a94f5-8350-4db1-9bc4-565b40b3c9bc-config-data" (OuterVolumeSpecName: "config-data") pod "046a94f5-8350-4db1-9bc4-565b40b3c9bc" (UID: "046a94f5-8350-4db1-9bc4-565b40b3c9bc"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:02:37 crc kubenswrapper[4881]: I0126 13:02:37.198720 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c039ccd0-c1d4-438f-ba49-d44103884a26-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"c039ccd0-c1d4-438f-ba49-d44103884a26\") " pod="openstack/nova-cell1-conductor-0" Jan 26 13:02:37 crc kubenswrapper[4881]: I0126 13:02:37.198834 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c039ccd0-c1d4-438f-ba49-d44103884a26-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"c039ccd0-c1d4-438f-ba49-d44103884a26\") " pod="openstack/nova-cell1-conductor-0" Jan 26 13:02:37 crc kubenswrapper[4881]: I0126 13:02:37.199259 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gn95p\" (UniqueName: \"kubernetes.io/projected/c039ccd0-c1d4-438f-ba49-d44103884a26-kube-api-access-gn95p\") pod \"nova-cell1-conductor-0\" (UID: \"c039ccd0-c1d4-438f-ba49-d44103884a26\") " pod="openstack/nova-cell1-conductor-0" Jan 26 13:02:37 crc kubenswrapper[4881]: I0126 13:02:37.199327 4881 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/046a94f5-8350-4db1-9bc4-565b40b3c9bc-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 13:02:37 crc kubenswrapper[4881]: I0126 13:02:37.199343 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jx2s7\" (UniqueName: \"kubernetes.io/projected/046a94f5-8350-4db1-9bc4-565b40b3c9bc-kube-api-access-jx2s7\") on node \"crc\" DevicePath \"\"" Jan 26 13:02:37 crc kubenswrapper[4881]: I0126 13:02:37.199354 4881 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/046a94f5-8350-4db1-9bc4-565b40b3c9bc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 13:02:37 crc kubenswrapper[4881]: I0126 13:02:37.202813 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 26 13:02:37 crc kubenswrapper[4881]: I0126 13:02:37.266900 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 26 13:02:37 crc kubenswrapper[4881]: I0126 13:02:37.267303 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 26 13:02:37 crc kubenswrapper[4881]: I0126 13:02:37.268676 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 26 13:02:37 crc kubenswrapper[4881]: I0126 13:02:37.272092 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 26 13:02:37 crc kubenswrapper[4881]: I0126 13:02:37.300995 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gn95p\" (UniqueName: \"kubernetes.io/projected/c039ccd0-c1d4-438f-ba49-d44103884a26-kube-api-access-gn95p\") pod \"nova-cell1-conductor-0\" (UID: \"c039ccd0-c1d4-438f-ba49-d44103884a26\") " pod="openstack/nova-cell1-conductor-0" Jan 26 13:02:37 crc kubenswrapper[4881]: I0126 13:02:37.301078 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c039ccd0-c1d4-438f-ba49-d44103884a26-config-data\") pod \"nova-cell1-conductor-0\" (UID: 
\"c039ccd0-c1d4-438f-ba49-d44103884a26\") " pod="openstack/nova-cell1-conductor-0" Jan 26 13:02:37 crc kubenswrapper[4881]: I0126 13:02:37.301166 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c039ccd0-c1d4-438f-ba49-d44103884a26-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"c039ccd0-c1d4-438f-ba49-d44103884a26\") " pod="openstack/nova-cell1-conductor-0" Jan 26 13:02:37 crc kubenswrapper[4881]: I0126 13:02:37.304290 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c039ccd0-c1d4-438f-ba49-d44103884a26-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"c039ccd0-c1d4-438f-ba49-d44103884a26\") " pod="openstack/nova-cell1-conductor-0" Jan 26 13:02:37 crc kubenswrapper[4881]: I0126 13:02:37.305491 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c039ccd0-c1d4-438f-ba49-d44103884a26-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"c039ccd0-c1d4-438f-ba49-d44103884a26\") " pod="openstack/nova-cell1-conductor-0" Jan 26 13:02:37 crc kubenswrapper[4881]: I0126 13:02:37.321192 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gn95p\" (UniqueName: \"kubernetes.io/projected/c039ccd0-c1d4-438f-ba49-d44103884a26-kube-api-access-gn95p\") pod \"nova-cell1-conductor-0\" (UID: \"c039ccd0-c1d4-438f-ba49-d44103884a26\") " pod="openstack/nova-cell1-conductor-0" Jan 26 13:02:37 crc kubenswrapper[4881]: I0126 13:02:37.454687 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 26 13:02:37 crc kubenswrapper[4881]: I0126 13:02:37.466005 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 26 13:02:37 crc kubenswrapper[4881]: I0126 13:02:37.488512 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 26 13:02:37 crc kubenswrapper[4881]: I0126 13:02:37.490687 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 26 13:02:37 crc kubenswrapper[4881]: I0126 13:02:37.494661 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Jan 26 13:02:37 crc kubenswrapper[4881]: I0126 13:02:37.494970 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 26 13:02:37 crc kubenswrapper[4881]: I0126 13:02:37.495510 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Jan 26 13:02:37 crc kubenswrapper[4881]: I0126 13:02:37.498919 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 26 13:02:37 crc kubenswrapper[4881]: I0126 13:02:37.504978 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/04587c7a-8d6d-4587-9ca5-52d8a9e57a38-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"04587c7a-8d6d-4587-9ca5-52d8a9e57a38\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 13:02:37 crc kubenswrapper[4881]: I0126 13:02:37.505110 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/04587c7a-8d6d-4587-9ca5-52d8a9e57a38-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"04587c7a-8d6d-4587-9ca5-52d8a9e57a38\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 13:02:37 crc kubenswrapper[4881]: I0126 13:02:37.505152 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tv8ph\" (UniqueName: \"kubernetes.io/projected/04587c7a-8d6d-4587-9ca5-52d8a9e57a38-kube-api-access-tv8ph\") pod \"nova-cell1-novncproxy-0\" (UID: \"04587c7a-8d6d-4587-9ca5-52d8a9e57a38\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 13:02:37 crc kubenswrapper[4881]: I0126 13:02:37.505236 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04587c7a-8d6d-4587-9ca5-52d8a9e57a38-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"04587c7a-8d6d-4587-9ca5-52d8a9e57a38\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 13:02:37 crc kubenswrapper[4881]: I0126 13:02:37.505271 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04587c7a-8d6d-4587-9ca5-52d8a9e57a38-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"04587c7a-8d6d-4587-9ca5-52d8a9e57a38\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 13:02:37 crc kubenswrapper[4881]: I0126 13:02:37.508096 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 26 13:02:37 crc kubenswrapper[4881]: I0126 13:02:37.607275 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04587c7a-8d6d-4587-9ca5-52d8a9e57a38-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"04587c7a-8d6d-4587-9ca5-52d8a9e57a38\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 13:02:37 crc kubenswrapper[4881]: I0126 13:02:37.607676 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04587c7a-8d6d-4587-9ca5-52d8a9e57a38-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"04587c7a-8d6d-4587-9ca5-52d8a9e57a38\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 13:02:37 crc kubenswrapper[4881]: I0126 13:02:37.607761 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/04587c7a-8d6d-4587-9ca5-52d8a9e57a38-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"04587c7a-8d6d-4587-9ca5-52d8a9e57a38\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 13:02:37 crc kubenswrapper[4881]: I0126 13:02:37.607989 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/04587c7a-8d6d-4587-9ca5-52d8a9e57a38-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"04587c7a-8d6d-4587-9ca5-52d8a9e57a38\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 13:02:37 crc kubenswrapper[4881]: I0126 13:02:37.608048 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tv8ph\" (UniqueName: \"kubernetes.io/projected/04587c7a-8d6d-4587-9ca5-52d8a9e57a38-kube-api-access-tv8ph\") pod \"nova-cell1-novncproxy-0\" (UID: \"04587c7a-8d6d-4587-9ca5-52d8a9e57a38\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 13:02:37 crc kubenswrapper[4881]: I0126 13:02:37.615540 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/04587c7a-8d6d-4587-9ca5-52d8a9e57a38-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"04587c7a-8d6d-4587-9ca5-52d8a9e57a38\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 13:02:37 crc kubenswrapper[4881]: I0126 13:02:37.615812 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04587c7a-8d6d-4587-9ca5-52d8a9e57a38-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"04587c7a-8d6d-4587-9ca5-52d8a9e57a38\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 13:02:37 crc kubenswrapper[4881]: I0126 13:02:37.616392 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/04587c7a-8d6d-4587-9ca5-52d8a9e57a38-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"04587c7a-8d6d-4587-9ca5-52d8a9e57a38\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 13:02:37 crc kubenswrapper[4881]: I0126 13:02:37.618288 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04587c7a-8d6d-4587-9ca5-52d8a9e57a38-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"04587c7a-8d6d-4587-9ca5-52d8a9e57a38\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 13:02:37 crc kubenswrapper[4881]: I0126 
13:02:37.626933 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tv8ph\" (UniqueName: \"kubernetes.io/projected/04587c7a-8d6d-4587-9ca5-52d8a9e57a38-kube-api-access-tv8ph\") pod \"nova-cell1-novncproxy-0\" (UID: \"04587c7a-8d6d-4587-9ca5-52d8a9e57a38\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 13:02:37 crc kubenswrapper[4881]: I0126 13:02:37.812613 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 26 13:02:37 crc kubenswrapper[4881]: I0126 13:02:37.937653 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 26 13:02:38 crc kubenswrapper[4881]: I0126 13:02:38.055611 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"c039ccd0-c1d4-438f-ba49-d44103884a26","Type":"ContainerStarted","Data":"1cbc3d74f55a87d7b2195cdaf3743936b82f5d58e904a8f8da0d0ceb8da55c70"} Jan 26 13:02:38 crc kubenswrapper[4881]: I0126 13:02:38.056001 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 26 13:02:38 crc kubenswrapper[4881]: I0126 13:02:38.061133 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 26 13:02:38 crc kubenswrapper[4881]: I0126 13:02:38.097720 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="046a94f5-8350-4db1-9bc4-565b40b3c9bc" path="/var/lib/kubelet/pods/046a94f5-8350-4db1-9bc4-565b40b3c9bc/volumes" Jan 26 13:02:38 crc kubenswrapper[4881]: I0126 13:02:38.218426 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6999845677-q7hsz"] Jan 26 13:02:38 crc kubenswrapper[4881]: I0126 13:02:38.220071 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6999845677-q7hsz" Jan 26 13:02:38 crc kubenswrapper[4881]: I0126 13:02:38.227050 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrqft\" (UniqueName: \"kubernetes.io/projected/79368344-12b4-4647-bce3-f74ede4f953a-kube-api-access-lrqft\") pod \"dnsmasq-dns-6999845677-q7hsz\" (UID: \"79368344-12b4-4647-bce3-f74ede4f953a\") " pod="openstack/dnsmasq-dns-6999845677-q7hsz" Jan 26 13:02:38 crc kubenswrapper[4881]: I0126 13:02:38.227116 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/79368344-12b4-4647-bce3-f74ede4f953a-ovsdbserver-sb\") pod \"dnsmasq-dns-6999845677-q7hsz\" (UID: \"79368344-12b4-4647-bce3-f74ede4f953a\") " pod="openstack/dnsmasq-dns-6999845677-q7hsz" Jan 26 13:02:38 crc kubenswrapper[4881]: I0126 13:02:38.227141 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79368344-12b4-4647-bce3-f74ede4f953a-config\") pod \"dnsmasq-dns-6999845677-q7hsz\" (UID: \"79368344-12b4-4647-bce3-f74ede4f953a\") " pod="openstack/dnsmasq-dns-6999845677-q7hsz" Jan 26 13:02:38 crc kubenswrapper[4881]: I0126 13:02:38.227159 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/79368344-12b4-4647-bce3-f74ede4f953a-ovsdbserver-nb\") pod \"dnsmasq-dns-6999845677-q7hsz\" (UID: \"79368344-12b4-4647-bce3-f74ede4f953a\") " pod="openstack/dnsmasq-dns-6999845677-q7hsz" Jan 26 13:02:38 crc kubenswrapper[4881]: I0126 13:02:38.227293 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/79368344-12b4-4647-bce3-f74ede4f953a-dns-swift-storage-0\") pod \"dnsmasq-dns-6999845677-q7hsz\" (UID: \"79368344-12b4-4647-bce3-f74ede4f953a\") " pod="openstack/dnsmasq-dns-6999845677-q7hsz" Jan 26 13:02:38 crc kubenswrapper[4881]: I0126 13:02:38.227355 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/79368344-12b4-4647-bce3-f74ede4f953a-dns-svc\") pod \"dnsmasq-dns-6999845677-q7hsz\" (UID: \"79368344-12b4-4647-bce3-f74ede4f953a\") " pod="openstack/dnsmasq-dns-6999845677-q7hsz" Jan 26 13:02:38 crc kubenswrapper[4881]: I0126 13:02:38.238875 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6999845677-q7hsz"] Jan 26 13:02:38 crc kubenswrapper[4881]: I0126 13:02:38.280971 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 26 13:02:38 crc kubenswrapper[4881]: I0126 13:02:38.329488 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/79368344-12b4-4647-bce3-f74ede4f953a-dns-swift-storage-0\") pod \"dnsmasq-dns-6999845677-q7hsz\" (UID: \"79368344-12b4-4647-bce3-f74ede4f953a\") " pod="openstack/dnsmasq-dns-6999845677-q7hsz" Jan 26 13:02:38 crc kubenswrapper[4881]: I0126 13:02:38.329911 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/79368344-12b4-4647-bce3-f74ede4f953a-dns-svc\") pod \"dnsmasq-dns-6999845677-q7hsz\" (UID: 
\"79368344-12b4-4647-bce3-f74ede4f953a\") " pod="openstack/dnsmasq-dns-6999845677-q7hsz" Jan 26 13:02:38 crc kubenswrapper[4881]: I0126 13:02:38.330009 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrqft\" (UniqueName: \"kubernetes.io/projected/79368344-12b4-4647-bce3-f74ede4f953a-kube-api-access-lrqft\") pod \"dnsmasq-dns-6999845677-q7hsz\" (UID: \"79368344-12b4-4647-bce3-f74ede4f953a\") " pod="openstack/dnsmasq-dns-6999845677-q7hsz" Jan 26 13:02:38 crc kubenswrapper[4881]: I0126 13:02:38.330100 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/79368344-12b4-4647-bce3-f74ede4f953a-ovsdbserver-sb\") pod \"dnsmasq-dns-6999845677-q7hsz\" (UID: \"79368344-12b4-4647-bce3-f74ede4f953a\") " pod="openstack/dnsmasq-dns-6999845677-q7hsz" Jan 26 13:02:38 crc kubenswrapper[4881]: I0126 13:02:38.330131 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79368344-12b4-4647-bce3-f74ede4f953a-config\") pod \"dnsmasq-dns-6999845677-q7hsz\" (UID: \"79368344-12b4-4647-bce3-f74ede4f953a\") " pod="openstack/dnsmasq-dns-6999845677-q7hsz" Jan 26 13:02:38 crc kubenswrapper[4881]: I0126 13:02:38.330150 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/79368344-12b4-4647-bce3-f74ede4f953a-ovsdbserver-nb\") pod \"dnsmasq-dns-6999845677-q7hsz\" (UID: \"79368344-12b4-4647-bce3-f74ede4f953a\") " pod="openstack/dnsmasq-dns-6999845677-q7hsz" Jan 26 13:02:38 crc kubenswrapper[4881]: I0126 13:02:38.331109 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/79368344-12b4-4647-bce3-f74ede4f953a-ovsdbserver-nb\") pod \"dnsmasq-dns-6999845677-q7hsz\" (UID: \"79368344-12b4-4647-bce3-f74ede4f953a\") " pod="openstack/dnsmasq-dns-6999845677-q7hsz" Jan 26 13:02:38 crc kubenswrapper[4881]: I0126 13:02:38.331494 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/79368344-12b4-4647-bce3-f74ede4f953a-ovsdbserver-sb\") pod \"dnsmasq-dns-6999845677-q7hsz\" (UID: \"79368344-12b4-4647-bce3-f74ede4f953a\") " pod="openstack/dnsmasq-dns-6999845677-q7hsz" Jan 26 13:02:38 crc kubenswrapper[4881]: I0126 13:02:38.331505 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/79368344-12b4-4647-bce3-f74ede4f953a-dns-svc\") pod \"dnsmasq-dns-6999845677-q7hsz\" (UID: \"79368344-12b4-4647-bce3-f74ede4f953a\") " pod="openstack/dnsmasq-dns-6999845677-q7hsz" Jan 26 13:02:38 crc kubenswrapper[4881]: I0126 13:02:38.331534 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79368344-12b4-4647-bce3-f74ede4f953a-config\") pod \"dnsmasq-dns-6999845677-q7hsz\" (UID: \"79368344-12b4-4647-bce3-f74ede4f953a\") " pod="openstack/dnsmasq-dns-6999845677-q7hsz" Jan 26 13:02:38 crc kubenswrapper[4881]: I0126 13:02:38.332378 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/79368344-12b4-4647-bce3-f74ede4f953a-dns-swift-storage-0\") pod \"dnsmasq-dns-6999845677-q7hsz\" (UID: \"79368344-12b4-4647-bce3-f74ede4f953a\") " pod="openstack/dnsmasq-dns-6999845677-q7hsz" Jan 26 13:02:38 crc 
kubenswrapper[4881]: I0126 13:02:38.346382 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrqft\" (UniqueName: \"kubernetes.io/projected/79368344-12b4-4647-bce3-f74ede4f953a-kube-api-access-lrqft\") pod \"dnsmasq-dns-6999845677-q7hsz\" (UID: \"79368344-12b4-4647-bce3-f74ede4f953a\") " pod="openstack/dnsmasq-dns-6999845677-q7hsz" Jan 26 13:02:38 crc kubenswrapper[4881]: I0126 13:02:38.540266 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6999845677-q7hsz" Jan 26 13:02:39 crc kubenswrapper[4881]: I0126 13:02:39.018588 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6999845677-q7hsz"] Jan 26 13:02:39 crc kubenswrapper[4881]: I0126 13:02:39.070407 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"04587c7a-8d6d-4587-9ca5-52d8a9e57a38","Type":"ContainerStarted","Data":"9353a8b5340df17a5bceebb83a0a3a47dde421e6d6aef72a6cd0adc6f417e99f"} Jan 26 13:02:39 crc kubenswrapper[4881]: I0126 13:02:39.070780 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"04587c7a-8d6d-4587-9ca5-52d8a9e57a38","Type":"ContainerStarted","Data":"4cf50c39e5e42729dab853486dc320abd984b172e73d9e43c94b39ac207f13c4"} Jan 26 13:02:39 crc kubenswrapper[4881]: I0126 13:02:39.079372 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"c039ccd0-c1d4-438f-ba49-d44103884a26","Type":"ContainerStarted","Data":"163fb2d74455bd522686eb5628fdfee8424c32dfae2ae79a08c02af8119c911a"} Jan 26 13:02:39 crc kubenswrapper[4881]: I0126 13:02:39.080196 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Jan 26 13:02:39 crc kubenswrapper[4881]: I0126 13:02:39.083717 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6999845677-q7hsz" event={"ID":"79368344-12b4-4647-bce3-f74ede4f953a","Type":"ContainerStarted","Data":"e433ef8c68cbdaf41c49a8c35fd84c4c9014dd5f7ba03db7800d752b6e350524"} Jan 26 13:02:39 crc kubenswrapper[4881]: I0126 13:02:39.095922 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.095900395 podStartE2EDuration="2.095900395s" podCreationTimestamp="2026-01-26 13:02:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 13:02:39.092216646 +0000 UTC m=+1631.571526682" watchObservedRunningTime="2026-01-26 13:02:39.095900395 +0000 UTC m=+1631.575210421" Jan 26 13:02:39 crc kubenswrapper[4881]: I0126 13:02:39.125302 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.125278785 podStartE2EDuration="2.125278785s" podCreationTimestamp="2026-01-26 13:02:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 13:02:39.108896169 +0000 UTC m=+1631.588206195" watchObservedRunningTime="2026-01-26 13:02:39.125278785 +0000 UTC m=+1631.604588811" Jan 26 13:02:40 crc kubenswrapper[4881]: I0126 13:02:40.092263 4881 generic.go:334] "Generic (PLEG): container finished" podID="79368344-12b4-4647-bce3-f74ede4f953a" containerID="7c9e8b3e90a16ac3a415ec320a421d9d197bcb7d41fddfcf5d746b10f5be414d" exitCode=0 Jan 26 13:02:40 crc 
kubenswrapper[4881]: I0126 13:02:40.096315 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6999845677-q7hsz" event={"ID":"79368344-12b4-4647-bce3-f74ede4f953a","Type":"ContainerDied","Data":"7c9e8b3e90a16ac3a415ec320a421d9d197bcb7d41fddfcf5d746b10f5be414d"} Jan 26 13:02:40 crc kubenswrapper[4881]: I0126 13:02:40.736490 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 13:02:40 crc kubenswrapper[4881]: I0126 13:02:40.737015 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d8cb1811-603c-47eb-bcf0-37d705b75e5b" containerName="ceilometer-central-agent" containerID="cri-o://6bf08a9c6abffe06d3b998626a9317cc9217b39d6eb0ce59ecfa36eb1ac944fc" gracePeriod=30 Jan 26 13:02:40 crc kubenswrapper[4881]: I0126 13:02:40.737104 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d8cb1811-603c-47eb-bcf0-37d705b75e5b" containerName="proxy-httpd" containerID="cri-o://428cfabcde9e727b3993e21dd89605f63467b70b0fa9084baedb3d1f8c3fedba" gracePeriod=30 Jan 26 13:02:40 crc kubenswrapper[4881]: I0126 13:02:40.737158 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d8cb1811-603c-47eb-bcf0-37d705b75e5b" containerName="ceilometer-notification-agent" containerID="cri-o://5f300b39f0ff05defb93dd4f666be02d932ed7994db209e17f5a9ef7895e8441" gracePeriod=30 Jan 26 13:02:40 crc kubenswrapper[4881]: I0126 13:02:40.737212 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d8cb1811-603c-47eb-bcf0-37d705b75e5b" containerName="sg-core" containerID="cri-o://5a80f56ae068359d72b3ec1beebd6617250d5afb3ec9b8f79e8ebd9972cf0856" gracePeriod=30 Jan 26 13:02:40 crc kubenswrapper[4881]: I0126 13:02:40.853848 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 26 13:02:40 crc kubenswrapper[4881]: I0126 13:02:40.856833 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="d8cb1811-603c-47eb-bcf0-37d705b75e5b" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.216:3000/\": read tcp 10.217.0.2:56062->10.217.0.216:3000: read: connection reset by peer" Jan 26 13:02:41 crc kubenswrapper[4881]: I0126 13:02:41.112667 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6999845677-q7hsz" event={"ID":"79368344-12b4-4647-bce3-f74ede4f953a","Type":"ContainerStarted","Data":"915bed7a2d6c4043946625c843b68c119fd64cd9e10e25880aca9a696dc5993a"} Jan 26 13:02:41 crc kubenswrapper[4881]: I0126 13:02:41.114092 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6999845677-q7hsz" Jan 26 13:02:41 crc kubenswrapper[4881]: I0126 13:02:41.120782 4881 generic.go:334] "Generic (PLEG): container finished" podID="d8cb1811-603c-47eb-bcf0-37d705b75e5b" containerID="428cfabcde9e727b3993e21dd89605f63467b70b0fa9084baedb3d1f8c3fedba" exitCode=0 Jan 26 13:02:41 crc kubenswrapper[4881]: I0126 13:02:41.120803 4881 generic.go:334] "Generic (PLEG): container finished" podID="d8cb1811-603c-47eb-bcf0-37d705b75e5b" containerID="5a80f56ae068359d72b3ec1beebd6617250d5afb3ec9b8f79e8ebd9972cf0856" exitCode=2 Jan 26 13:02:41 crc kubenswrapper[4881]: I0126 13:02:41.120960 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" 
podUID="e5b3bfa7-449d-4e9c-b83a-592d38765699" containerName="nova-api-log" containerID="cri-o://5c0b92e62f31e71a37ca293bbb15a384ca043238dabe8a38fef35c98d418b0d5" gracePeriod=30 Jan 26 13:02:41 crc kubenswrapper[4881]: I0126 13:02:41.121176 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d8cb1811-603c-47eb-bcf0-37d705b75e5b","Type":"ContainerDied","Data":"428cfabcde9e727b3993e21dd89605f63467b70b0fa9084baedb3d1f8c3fedba"} Jan 26 13:02:41 crc kubenswrapper[4881]: I0126 13:02:41.121198 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d8cb1811-603c-47eb-bcf0-37d705b75e5b","Type":"ContainerDied","Data":"5a80f56ae068359d72b3ec1beebd6617250d5afb3ec9b8f79e8ebd9972cf0856"} Jan 26 13:02:41 crc kubenswrapper[4881]: I0126 13:02:41.121245 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="e5b3bfa7-449d-4e9c-b83a-592d38765699" containerName="nova-api-api" containerID="cri-o://d0132dfc94a6a42d1c87b2e769248a60ecf87cc575bb9edf3d60e8a031cac6a3" gracePeriod=30 Jan 26 13:02:41 crc kubenswrapper[4881]: I0126 13:02:41.145208 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6999845677-q7hsz" podStartSLOduration=3.145186147 podStartE2EDuration="3.145186147s" podCreationTimestamp="2026-01-26 13:02:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 13:02:41.136723182 +0000 UTC m=+1633.616033218" watchObservedRunningTime="2026-01-26 13:02:41.145186147 +0000 UTC m=+1633.624496173" Jan 26 13:02:42 crc kubenswrapper[4881]: I0126 13:02:42.139533 4881 generic.go:334] "Generic (PLEG): container finished" podID="e5b3bfa7-449d-4e9c-b83a-592d38765699" containerID="d0132dfc94a6a42d1c87b2e769248a60ecf87cc575bb9edf3d60e8a031cac6a3" exitCode=0 Jan 26 13:02:42 crc kubenswrapper[4881]: I0126 13:02:42.139865 4881 generic.go:334] "Generic (PLEG): container finished" podID="e5b3bfa7-449d-4e9c-b83a-592d38765699" containerID="5c0b92e62f31e71a37ca293bbb15a384ca043238dabe8a38fef35c98d418b0d5" exitCode=143 Jan 26 13:02:42 crc kubenswrapper[4881]: I0126 13:02:42.139919 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e5b3bfa7-449d-4e9c-b83a-592d38765699","Type":"ContainerDied","Data":"d0132dfc94a6a42d1c87b2e769248a60ecf87cc575bb9edf3d60e8a031cac6a3"} Jan 26 13:02:42 crc kubenswrapper[4881]: I0126 13:02:42.139947 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e5b3bfa7-449d-4e9c-b83a-592d38765699","Type":"ContainerDied","Data":"5c0b92e62f31e71a37ca293bbb15a384ca043238dabe8a38fef35c98d418b0d5"} Jan 26 13:02:42 crc kubenswrapper[4881]: I0126 13:02:42.150361 4881 generic.go:334] "Generic (PLEG): container finished" podID="d8cb1811-603c-47eb-bcf0-37d705b75e5b" containerID="6bf08a9c6abffe06d3b998626a9317cc9217b39d6eb0ce59ecfa36eb1ac944fc" exitCode=0 Jan 26 13:02:42 crc kubenswrapper[4881]: I0126 13:02:42.151472 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d8cb1811-603c-47eb-bcf0-37d705b75e5b","Type":"ContainerDied","Data":"6bf08a9c6abffe06d3b998626a9317cc9217b39d6eb0ce59ecfa36eb1ac944fc"} Jan 26 13:02:42 crc kubenswrapper[4881]: I0126 13:02:42.575620 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 26 13:02:42 crc kubenswrapper[4881]: I0126 13:02:42.718150 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e5b3bfa7-449d-4e9c-b83a-592d38765699-logs\") pod \"e5b3bfa7-449d-4e9c-b83a-592d38765699\" (UID: \"e5b3bfa7-449d-4e9c-b83a-592d38765699\") " Jan 26 13:02:42 crc kubenswrapper[4881]: I0126 13:02:42.718219 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5b3bfa7-449d-4e9c-b83a-592d38765699-combined-ca-bundle\") pod \"e5b3bfa7-449d-4e9c-b83a-592d38765699\" (UID: \"e5b3bfa7-449d-4e9c-b83a-592d38765699\") " Jan 26 13:02:42 crc kubenswrapper[4881]: I0126 13:02:42.718385 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5b3bfa7-449d-4e9c-b83a-592d38765699-config-data\") pod \"e5b3bfa7-449d-4e9c-b83a-592d38765699\" (UID: \"e5b3bfa7-449d-4e9c-b83a-592d38765699\") " Jan 26 13:02:42 crc kubenswrapper[4881]: I0126 13:02:42.718496 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nd84w\" (UniqueName: \"kubernetes.io/projected/e5b3bfa7-449d-4e9c-b83a-592d38765699-kube-api-access-nd84w\") pod \"e5b3bfa7-449d-4e9c-b83a-592d38765699\" (UID: \"e5b3bfa7-449d-4e9c-b83a-592d38765699\") " Jan 26 13:02:42 crc kubenswrapper[4881]: I0126 13:02:42.718693 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e5b3bfa7-449d-4e9c-b83a-592d38765699-logs" (OuterVolumeSpecName: "logs") pod "e5b3bfa7-449d-4e9c-b83a-592d38765699" (UID: "e5b3bfa7-449d-4e9c-b83a-592d38765699"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:02:42 crc kubenswrapper[4881]: I0126 13:02:42.719037 4881 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e5b3bfa7-449d-4e9c-b83a-592d38765699-logs\") on node \"crc\" DevicePath \"\"" Jan 26 13:02:42 crc kubenswrapper[4881]: I0126 13:02:42.727749 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5b3bfa7-449d-4e9c-b83a-592d38765699-kube-api-access-nd84w" (OuterVolumeSpecName: "kube-api-access-nd84w") pod "e5b3bfa7-449d-4e9c-b83a-592d38765699" (UID: "e5b3bfa7-449d-4e9c-b83a-592d38765699"). InnerVolumeSpecName "kube-api-access-nd84w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:02:42 crc kubenswrapper[4881]: I0126 13:02:42.754500 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5b3bfa7-449d-4e9c-b83a-592d38765699-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e5b3bfa7-449d-4e9c-b83a-592d38765699" (UID: "e5b3bfa7-449d-4e9c-b83a-592d38765699"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:02:42 crc kubenswrapper[4881]: I0126 13:02:42.757023 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5b3bfa7-449d-4e9c-b83a-592d38765699-config-data" (OuterVolumeSpecName: "config-data") pod "e5b3bfa7-449d-4e9c-b83a-592d38765699" (UID: "e5b3bfa7-449d-4e9c-b83a-592d38765699"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:02:42 crc kubenswrapper[4881]: I0126 13:02:42.797500 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 13:02:42 crc kubenswrapper[4881]: I0126 13:02:42.813681 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 26 13:02:42 crc kubenswrapper[4881]: I0126 13:02:42.820377 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nd84w\" (UniqueName: \"kubernetes.io/projected/e5b3bfa7-449d-4e9c-b83a-592d38765699-kube-api-access-nd84w\") on node \"crc\" DevicePath \"\"" Jan 26 13:02:42 crc kubenswrapper[4881]: I0126 13:02:42.820421 4881 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5b3bfa7-449d-4e9c-b83a-592d38765699-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 13:02:42 crc kubenswrapper[4881]: I0126 13:02:42.820435 4881 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5b3bfa7-449d-4e9c-b83a-592d38765699-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 13:02:42 crc kubenswrapper[4881]: I0126 13:02:42.921389 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d8cb1811-603c-47eb-bcf0-37d705b75e5b-sg-core-conf-yaml\") pod \"d8cb1811-603c-47eb-bcf0-37d705b75e5b\" (UID: \"d8cb1811-603c-47eb-bcf0-37d705b75e5b\") " Jan 26 13:02:42 crc kubenswrapper[4881]: I0126 13:02:42.921448 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d8cb1811-603c-47eb-bcf0-37d705b75e5b-log-httpd\") pod \"d8cb1811-603c-47eb-bcf0-37d705b75e5b\" (UID: \"d8cb1811-603c-47eb-bcf0-37d705b75e5b\") " Jan 26 13:02:42 crc kubenswrapper[4881]: I0126 13:02:42.921495 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6nckb\" (UniqueName: \"kubernetes.io/projected/d8cb1811-603c-47eb-bcf0-37d705b75e5b-kube-api-access-6nckb\") pod \"d8cb1811-603c-47eb-bcf0-37d705b75e5b\" (UID: \"d8cb1811-603c-47eb-bcf0-37d705b75e5b\") " Jan 26 13:02:42 crc kubenswrapper[4881]: I0126 13:02:42.921658 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8cb1811-603c-47eb-bcf0-37d705b75e5b-combined-ca-bundle\") pod \"d8cb1811-603c-47eb-bcf0-37d705b75e5b\" (UID: \"d8cb1811-603c-47eb-bcf0-37d705b75e5b\") " Jan 26 13:02:42 crc kubenswrapper[4881]: I0126 13:02:42.921678 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8cb1811-603c-47eb-bcf0-37d705b75e5b-ceilometer-tls-certs\") pod \"d8cb1811-603c-47eb-bcf0-37d705b75e5b\" (UID: \"d8cb1811-603c-47eb-bcf0-37d705b75e5b\") " Jan 26 13:02:42 crc kubenswrapper[4881]: I0126 13:02:42.921701 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8cb1811-603c-47eb-bcf0-37d705b75e5b-scripts\") pod \"d8cb1811-603c-47eb-bcf0-37d705b75e5b\" (UID: \"d8cb1811-603c-47eb-bcf0-37d705b75e5b\") " Jan 26 13:02:42 crc kubenswrapper[4881]: I0126 13:02:42.921762 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/d8cb1811-603c-47eb-bcf0-37d705b75e5b-run-httpd\") pod \"d8cb1811-603c-47eb-bcf0-37d705b75e5b\" (UID: \"d8cb1811-603c-47eb-bcf0-37d705b75e5b\") " Jan 26 13:02:42 crc kubenswrapper[4881]: I0126 13:02:42.921792 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8cb1811-603c-47eb-bcf0-37d705b75e5b-config-data\") pod \"d8cb1811-603c-47eb-bcf0-37d705b75e5b\" (UID: \"d8cb1811-603c-47eb-bcf0-37d705b75e5b\") " Jan 26 13:02:42 crc kubenswrapper[4881]: I0126 13:02:42.921820 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d8cb1811-603c-47eb-bcf0-37d705b75e5b-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "d8cb1811-603c-47eb-bcf0-37d705b75e5b" (UID: "d8cb1811-603c-47eb-bcf0-37d705b75e5b"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:02:42 crc kubenswrapper[4881]: I0126 13:02:42.922197 4881 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d8cb1811-603c-47eb-bcf0-37d705b75e5b-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 13:02:42 crc kubenswrapper[4881]: I0126 13:02:42.922833 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d8cb1811-603c-47eb-bcf0-37d705b75e5b-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "d8cb1811-603c-47eb-bcf0-37d705b75e5b" (UID: "d8cb1811-603c-47eb-bcf0-37d705b75e5b"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:02:42 crc kubenswrapper[4881]: I0126 13:02:42.926203 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8cb1811-603c-47eb-bcf0-37d705b75e5b-scripts" (OuterVolumeSpecName: "scripts") pod "d8cb1811-603c-47eb-bcf0-37d705b75e5b" (UID: "d8cb1811-603c-47eb-bcf0-37d705b75e5b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:02:42 crc kubenswrapper[4881]: I0126 13:02:42.926294 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8cb1811-603c-47eb-bcf0-37d705b75e5b-kube-api-access-6nckb" (OuterVolumeSpecName: "kube-api-access-6nckb") pod "d8cb1811-603c-47eb-bcf0-37d705b75e5b" (UID: "d8cb1811-603c-47eb-bcf0-37d705b75e5b"). InnerVolumeSpecName "kube-api-access-6nckb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:02:42 crc kubenswrapper[4881]: I0126 13:02:42.947451 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8cb1811-603c-47eb-bcf0-37d705b75e5b-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "d8cb1811-603c-47eb-bcf0-37d705b75e5b" (UID: "d8cb1811-603c-47eb-bcf0-37d705b75e5b"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:02:42 crc kubenswrapper[4881]: I0126 13:02:42.992549 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8cb1811-603c-47eb-bcf0-37d705b75e5b-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "d8cb1811-603c-47eb-bcf0-37d705b75e5b" (UID: "d8cb1811-603c-47eb-bcf0-37d705b75e5b"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:02:43 crc kubenswrapper[4881]: I0126 13:02:43.010719 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8cb1811-603c-47eb-bcf0-37d705b75e5b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d8cb1811-603c-47eb-bcf0-37d705b75e5b" (UID: "d8cb1811-603c-47eb-bcf0-37d705b75e5b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:02:43 crc kubenswrapper[4881]: I0126 13:02:43.024115 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6nckb\" (UniqueName: \"kubernetes.io/projected/d8cb1811-603c-47eb-bcf0-37d705b75e5b-kube-api-access-6nckb\") on node \"crc\" DevicePath \"\"" Jan 26 13:02:43 crc kubenswrapper[4881]: I0126 13:02:43.024149 4881 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8cb1811-603c-47eb-bcf0-37d705b75e5b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 13:02:43 crc kubenswrapper[4881]: I0126 13:02:43.024158 4881 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8cb1811-603c-47eb-bcf0-37d705b75e5b-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 13:02:43 crc kubenswrapper[4881]: I0126 13:02:43.024167 4881 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8cb1811-603c-47eb-bcf0-37d705b75e5b-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 13:02:43 crc kubenswrapper[4881]: I0126 13:02:43.024175 4881 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d8cb1811-603c-47eb-bcf0-37d705b75e5b-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 13:02:43 crc kubenswrapper[4881]: I0126 13:02:43.024184 4881 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d8cb1811-603c-47eb-bcf0-37d705b75e5b-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 26 13:02:43 crc kubenswrapper[4881]: I0126 13:02:43.036777 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8cb1811-603c-47eb-bcf0-37d705b75e5b-config-data" (OuterVolumeSpecName: "config-data") pod "d8cb1811-603c-47eb-bcf0-37d705b75e5b" (UID: "d8cb1811-603c-47eb-bcf0-37d705b75e5b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:02:43 crc kubenswrapper[4881]: I0126 13:02:43.126554 4881 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8cb1811-603c-47eb-bcf0-37d705b75e5b-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 13:02:43 crc kubenswrapper[4881]: I0126 13:02:43.177079 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e5b3bfa7-449d-4e9c-b83a-592d38765699","Type":"ContainerDied","Data":"81d18184fda84e1b9c92098a24ffde1adc1ea965c0c2da9575f0857cf626dc09"} Jan 26 13:02:43 crc kubenswrapper[4881]: I0126 13:02:43.177087 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 26 13:02:43 crc kubenswrapper[4881]: I0126 13:02:43.177138 4881 scope.go:117] "RemoveContainer" containerID="d0132dfc94a6a42d1c87b2e769248a60ecf87cc575bb9edf3d60e8a031cac6a3" Jan 26 13:02:43 crc kubenswrapper[4881]: I0126 13:02:43.181860 4881 generic.go:334] "Generic (PLEG): container finished" podID="d8cb1811-603c-47eb-bcf0-37d705b75e5b" containerID="5f300b39f0ff05defb93dd4f666be02d932ed7994db209e17f5a9ef7895e8441" exitCode=0 Jan 26 13:02:43 crc kubenswrapper[4881]: I0126 13:02:43.182441 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 13:02:43 crc kubenswrapper[4881]: I0126 13:02:43.184667 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d8cb1811-603c-47eb-bcf0-37d705b75e5b","Type":"ContainerDied","Data":"5f300b39f0ff05defb93dd4f666be02d932ed7994db209e17f5a9ef7895e8441"} Jan 26 13:02:43 crc kubenswrapper[4881]: I0126 13:02:43.184754 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d8cb1811-603c-47eb-bcf0-37d705b75e5b","Type":"ContainerDied","Data":"eb174cbdf5bc04b76c0cec2105b42550c064b6e575ffdf56ced4aeafffc7441a"} Jan 26 13:02:43 crc kubenswrapper[4881]: I0126 13:02:43.230238 4881 scope.go:117] "RemoveContainer" containerID="5c0b92e62f31e71a37ca293bbb15a384ca043238dabe8a38fef35c98d418b0d5" Jan 26 13:02:43 crc kubenswrapper[4881]: I0126 13:02:43.268400 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 13:02:43 crc kubenswrapper[4881]: I0126 13:02:43.274258 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 26 13:02:43 crc kubenswrapper[4881]: I0126 13:02:43.274283 4881 scope.go:117] "RemoveContainer" containerID="428cfabcde9e727b3993e21dd89605f63467b70b0fa9084baedb3d1f8c3fedba" Jan 26 13:02:43 crc kubenswrapper[4881]: I0126 13:02:43.283939 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 26 13:02:43 crc kubenswrapper[4881]: I0126 13:02:43.309306 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 26 13:02:43 crc kubenswrapper[4881]: I0126 13:02:43.318019 4881 scope.go:117] "RemoveContainer" containerID="5a80f56ae068359d72b3ec1beebd6617250d5afb3ec9b8f79e8ebd9972cf0856" Jan 26 13:02:43 crc kubenswrapper[4881]: I0126 13:02:43.319177 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 26 13:02:43 crc kubenswrapper[4881]: E0126 13:02:43.319780 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5b3bfa7-449d-4e9c-b83a-592d38765699" containerName="nova-api-log" Jan 26 13:02:43 crc kubenswrapper[4881]: I0126 13:02:43.319803 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5b3bfa7-449d-4e9c-b83a-592d38765699" containerName="nova-api-log" Jan 26 13:02:43 crc kubenswrapper[4881]: E0126 13:02:43.319827 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5b3bfa7-449d-4e9c-b83a-592d38765699" containerName="nova-api-api" Jan 26 13:02:43 crc kubenswrapper[4881]: I0126 13:02:43.319836 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5b3bfa7-449d-4e9c-b83a-592d38765699" containerName="nova-api-api" Jan 26 13:02:43 crc kubenswrapper[4881]: E0126 13:02:43.319856 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8cb1811-603c-47eb-bcf0-37d705b75e5b" containerName="sg-core" Jan 26 13:02:43 crc kubenswrapper[4881]: I0126 
13:02:43.319869 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8cb1811-603c-47eb-bcf0-37d705b75e5b" containerName="sg-core" Jan 26 13:02:43 crc kubenswrapper[4881]: E0126 13:02:43.319885 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8cb1811-603c-47eb-bcf0-37d705b75e5b" containerName="ceilometer-notification-agent" Jan 26 13:02:43 crc kubenswrapper[4881]: I0126 13:02:43.319891 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8cb1811-603c-47eb-bcf0-37d705b75e5b" containerName="ceilometer-notification-agent" Jan 26 13:02:43 crc kubenswrapper[4881]: E0126 13:02:43.319912 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8cb1811-603c-47eb-bcf0-37d705b75e5b" containerName="ceilometer-central-agent" Jan 26 13:02:43 crc kubenswrapper[4881]: I0126 13:02:43.319918 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8cb1811-603c-47eb-bcf0-37d705b75e5b" containerName="ceilometer-central-agent" Jan 26 13:02:43 crc kubenswrapper[4881]: E0126 13:02:43.319931 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8cb1811-603c-47eb-bcf0-37d705b75e5b" containerName="proxy-httpd" Jan 26 13:02:43 crc kubenswrapper[4881]: I0126 13:02:43.319937 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8cb1811-603c-47eb-bcf0-37d705b75e5b" containerName="proxy-httpd" Jan 26 13:02:43 crc kubenswrapper[4881]: I0126 13:02:43.320102 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8cb1811-603c-47eb-bcf0-37d705b75e5b" containerName="proxy-httpd" Jan 26 13:02:43 crc kubenswrapper[4881]: I0126 13:02:43.320114 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5b3bfa7-449d-4e9c-b83a-592d38765699" containerName="nova-api-log" Jan 26 13:02:43 crc kubenswrapper[4881]: I0126 13:02:43.320124 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8cb1811-603c-47eb-bcf0-37d705b75e5b" containerName="ceilometer-central-agent" Jan 26 13:02:43 crc kubenswrapper[4881]: I0126 13:02:43.320138 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8cb1811-603c-47eb-bcf0-37d705b75e5b" containerName="ceilometer-notification-agent" Jan 26 13:02:43 crc kubenswrapper[4881]: I0126 13:02:43.320150 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5b3bfa7-449d-4e9c-b83a-592d38765699" containerName="nova-api-api" Jan 26 13:02:43 crc kubenswrapper[4881]: I0126 13:02:43.320157 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8cb1811-603c-47eb-bcf0-37d705b75e5b" containerName="sg-core" Jan 26 13:02:43 crc kubenswrapper[4881]: I0126 13:02:43.323284 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 13:02:43 crc kubenswrapper[4881]: I0126 13:02:43.326629 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 26 13:02:43 crc kubenswrapper[4881]: I0126 13:02:43.326850 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 26 13:02:43 crc kubenswrapper[4881]: I0126 13:02:43.327683 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 26 13:02:43 crc kubenswrapper[4881]: I0126 13:02:43.329324 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 26 13:02:43 crc kubenswrapper[4881]: I0126 13:02:43.331317 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 26 13:02:43 crc kubenswrapper[4881]: I0126 13:02:43.332726 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 26 13:02:43 crc kubenswrapper[4881]: I0126 13:02:43.334659 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 26 13:02:43 crc kubenswrapper[4881]: I0126 13:02:43.335909 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 26 13:02:43 crc kubenswrapper[4881]: I0126 13:02:43.339137 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 13:02:43 crc kubenswrapper[4881]: I0126 13:02:43.347376 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 26 13:02:43 crc kubenswrapper[4881]: I0126 13:02:43.360920 4881 scope.go:117] "RemoveContainer" containerID="5f300b39f0ff05defb93dd4f666be02d932ed7994db209e17f5a9ef7895e8441" Jan 26 13:02:43 crc kubenswrapper[4881]: I0126 13:02:43.383970 4881 scope.go:117] "RemoveContainer" containerID="6bf08a9c6abffe06d3b998626a9317cc9217b39d6eb0ce59ecfa36eb1ac944fc" Jan 26 13:02:43 crc kubenswrapper[4881]: I0126 13:02:43.404165 4881 scope.go:117] "RemoveContainer" containerID="428cfabcde9e727b3993e21dd89605f63467b70b0fa9084baedb3d1f8c3fedba" Jan 26 13:02:43 crc kubenswrapper[4881]: E0126 13:02:43.404551 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"428cfabcde9e727b3993e21dd89605f63467b70b0fa9084baedb3d1f8c3fedba\": container with ID starting with 428cfabcde9e727b3993e21dd89605f63467b70b0fa9084baedb3d1f8c3fedba not found: ID does not exist" containerID="428cfabcde9e727b3993e21dd89605f63467b70b0fa9084baedb3d1f8c3fedba" Jan 26 13:02:43 crc kubenswrapper[4881]: I0126 13:02:43.404582 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"428cfabcde9e727b3993e21dd89605f63467b70b0fa9084baedb3d1f8c3fedba"} err="failed to get container status \"428cfabcde9e727b3993e21dd89605f63467b70b0fa9084baedb3d1f8c3fedba\": rpc error: code = NotFound desc = could not find container \"428cfabcde9e727b3993e21dd89605f63467b70b0fa9084baedb3d1f8c3fedba\": container with ID starting with 428cfabcde9e727b3993e21dd89605f63467b70b0fa9084baedb3d1f8c3fedba not found: ID does not exist" Jan 26 13:02:43 crc kubenswrapper[4881]: I0126 13:02:43.404604 4881 scope.go:117] "RemoveContainer" containerID="5a80f56ae068359d72b3ec1beebd6617250d5afb3ec9b8f79e8ebd9972cf0856" Jan 26 13:02:43 crc kubenswrapper[4881]: E0126 13:02:43.404954 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a80f56ae068359d72b3ec1beebd6617250d5afb3ec9b8f79e8ebd9972cf0856\": container with ID starting with 5a80f56ae068359d72b3ec1beebd6617250d5afb3ec9b8f79e8ebd9972cf0856 not found: ID does not exist" containerID="5a80f56ae068359d72b3ec1beebd6617250d5afb3ec9b8f79e8ebd9972cf0856" Jan 26 13:02:43 crc kubenswrapper[4881]: I0126 13:02:43.404975 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a80f56ae068359d72b3ec1beebd6617250d5afb3ec9b8f79e8ebd9972cf0856"} err="failed to get container status \"5a80f56ae068359d72b3ec1beebd6617250d5afb3ec9b8f79e8ebd9972cf0856\": rpc error: code = NotFound desc = could not find container \"5a80f56ae068359d72b3ec1beebd6617250d5afb3ec9b8f79e8ebd9972cf0856\": 
container with ID starting with 5a80f56ae068359d72b3ec1beebd6617250d5afb3ec9b8f79e8ebd9972cf0856 not found: ID does not exist" Jan 26 13:02:43 crc kubenswrapper[4881]: I0126 13:02:43.404986 4881 scope.go:117] "RemoveContainer" containerID="5f300b39f0ff05defb93dd4f666be02d932ed7994db209e17f5a9ef7895e8441" Jan 26 13:02:43 crc kubenswrapper[4881]: E0126 13:02:43.405337 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f300b39f0ff05defb93dd4f666be02d932ed7994db209e17f5a9ef7895e8441\": container with ID starting with 5f300b39f0ff05defb93dd4f666be02d932ed7994db209e17f5a9ef7895e8441 not found: ID does not exist" containerID="5f300b39f0ff05defb93dd4f666be02d932ed7994db209e17f5a9ef7895e8441" Jan 26 13:02:43 crc kubenswrapper[4881]: I0126 13:02:43.405358 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f300b39f0ff05defb93dd4f666be02d932ed7994db209e17f5a9ef7895e8441"} err="failed to get container status \"5f300b39f0ff05defb93dd4f666be02d932ed7994db209e17f5a9ef7895e8441\": rpc error: code = NotFound desc = could not find container \"5f300b39f0ff05defb93dd4f666be02d932ed7994db209e17f5a9ef7895e8441\": container with ID starting with 5f300b39f0ff05defb93dd4f666be02d932ed7994db209e17f5a9ef7895e8441 not found: ID does not exist" Jan 26 13:02:43 crc kubenswrapper[4881]: I0126 13:02:43.405373 4881 scope.go:117] "RemoveContainer" containerID="6bf08a9c6abffe06d3b998626a9317cc9217b39d6eb0ce59ecfa36eb1ac944fc" Jan 26 13:02:43 crc kubenswrapper[4881]: E0126 13:02:43.405639 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6bf08a9c6abffe06d3b998626a9317cc9217b39d6eb0ce59ecfa36eb1ac944fc\": container with ID starting with 6bf08a9c6abffe06d3b998626a9317cc9217b39d6eb0ce59ecfa36eb1ac944fc not found: ID does not exist" containerID="6bf08a9c6abffe06d3b998626a9317cc9217b39d6eb0ce59ecfa36eb1ac944fc" Jan 26 13:02:43 crc kubenswrapper[4881]: I0126 13:02:43.405682 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6bf08a9c6abffe06d3b998626a9317cc9217b39d6eb0ce59ecfa36eb1ac944fc"} err="failed to get container status \"6bf08a9c6abffe06d3b998626a9317cc9217b39d6eb0ce59ecfa36eb1ac944fc\": rpc error: code = NotFound desc = could not find container \"6bf08a9c6abffe06d3b998626a9317cc9217b39d6eb0ce59ecfa36eb1ac944fc\": container with ID starting with 6bf08a9c6abffe06d3b998626a9317cc9217b39d6eb0ce59ecfa36eb1ac944fc not found: ID does not exist" Jan 26 13:02:43 crc kubenswrapper[4881]: I0126 13:02:43.438299 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8sf2\" (UniqueName: \"kubernetes.io/projected/843eec84-8a03-49b4-beda-773a76cecdbb-kube-api-access-t8sf2\") pod \"nova-api-0\" (UID: \"843eec84-8a03-49b4-beda-773a76cecdbb\") " pod="openstack/nova-api-0" Jan 26 13:02:43 crc kubenswrapper[4881]: I0126 13:02:43.438972 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/843eec84-8a03-49b4-beda-773a76cecdbb-logs\") pod \"nova-api-0\" (UID: \"843eec84-8a03-49b4-beda-773a76cecdbb\") " pod="openstack/nova-api-0" Jan 26 13:02:43 crc kubenswrapper[4881]: I0126 13:02:43.439012 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzgx6\" (UniqueName: 
\"kubernetes.io/projected/a3c188b0-972c-46b1-bb59-edce2c6b4f54-kube-api-access-nzgx6\") pod \"ceilometer-0\" (UID: \"a3c188b0-972c-46b1-bb59-edce2c6b4f54\") " pod="openstack/ceilometer-0" Jan 26 13:02:43 crc kubenswrapper[4881]: I0126 13:02:43.439039 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a3c188b0-972c-46b1-bb59-edce2c6b4f54-scripts\") pod \"ceilometer-0\" (UID: \"a3c188b0-972c-46b1-bb59-edce2c6b4f54\") " pod="openstack/ceilometer-0" Jan 26 13:02:43 crc kubenswrapper[4881]: I0126 13:02:43.439966 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/843eec84-8a03-49b4-beda-773a76cecdbb-public-tls-certs\") pod \"nova-api-0\" (UID: \"843eec84-8a03-49b4-beda-773a76cecdbb\") " pod="openstack/nova-api-0" Jan 26 13:02:43 crc kubenswrapper[4881]: I0126 13:02:43.440052 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/843eec84-8a03-49b4-beda-773a76cecdbb-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"843eec84-8a03-49b4-beda-773a76cecdbb\") " pod="openstack/nova-api-0" Jan 26 13:02:43 crc kubenswrapper[4881]: I0126 13:02:43.440114 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3c188b0-972c-46b1-bb59-edce2c6b4f54-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a3c188b0-972c-46b1-bb59-edce2c6b4f54\") " pod="openstack/ceilometer-0" Jan 26 13:02:43 crc kubenswrapper[4881]: I0126 13:02:43.440195 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a3c188b0-972c-46b1-bb59-edce2c6b4f54-run-httpd\") pod \"ceilometer-0\" (UID: \"a3c188b0-972c-46b1-bb59-edce2c6b4f54\") " pod="openstack/ceilometer-0" Jan 26 13:02:43 crc kubenswrapper[4881]: I0126 13:02:43.440226 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3c188b0-972c-46b1-bb59-edce2c6b4f54-config-data\") pod \"ceilometer-0\" (UID: \"a3c188b0-972c-46b1-bb59-edce2c6b4f54\") " pod="openstack/ceilometer-0" Jan 26 13:02:43 crc kubenswrapper[4881]: I0126 13:02:43.440245 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/843eec84-8a03-49b4-beda-773a76cecdbb-config-data\") pod \"nova-api-0\" (UID: \"843eec84-8a03-49b4-beda-773a76cecdbb\") " pod="openstack/nova-api-0" Jan 26 13:02:43 crc kubenswrapper[4881]: I0126 13:02:43.440324 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a3c188b0-972c-46b1-bb59-edce2c6b4f54-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"a3c188b0-972c-46b1-bb59-edce2c6b4f54\") " pod="openstack/ceilometer-0" Jan 26 13:02:43 crc kubenswrapper[4881]: I0126 13:02:43.440354 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a3c188b0-972c-46b1-bb59-edce2c6b4f54-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a3c188b0-972c-46b1-bb59-edce2c6b4f54\") " pod="openstack/ceilometer-0" Jan 26 13:02:43 
crc kubenswrapper[4881]: I0126 13:02:43.440378 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/843eec84-8a03-49b4-beda-773a76cecdbb-internal-tls-certs\") pod \"nova-api-0\" (UID: \"843eec84-8a03-49b4-beda-773a76cecdbb\") " pod="openstack/nova-api-0" Jan 26 13:02:43 crc kubenswrapper[4881]: I0126 13:02:43.440400 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a3c188b0-972c-46b1-bb59-edce2c6b4f54-log-httpd\") pod \"ceilometer-0\" (UID: \"a3c188b0-972c-46b1-bb59-edce2c6b4f54\") " pod="openstack/ceilometer-0" Jan 26 13:02:43 crc kubenswrapper[4881]: I0126 13:02:43.544471 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/843eec84-8a03-49b4-beda-773a76cecdbb-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"843eec84-8a03-49b4-beda-773a76cecdbb\") " pod="openstack/nova-api-0" Jan 26 13:02:43 crc kubenswrapper[4881]: I0126 13:02:43.544587 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3c188b0-972c-46b1-bb59-edce2c6b4f54-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a3c188b0-972c-46b1-bb59-edce2c6b4f54\") " pod="openstack/ceilometer-0" Jan 26 13:02:43 crc kubenswrapper[4881]: I0126 13:02:43.544641 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a3c188b0-972c-46b1-bb59-edce2c6b4f54-run-httpd\") pod \"ceilometer-0\" (UID: \"a3c188b0-972c-46b1-bb59-edce2c6b4f54\") " pod="openstack/ceilometer-0" Jan 26 13:02:43 crc kubenswrapper[4881]: I0126 13:02:43.544669 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3c188b0-972c-46b1-bb59-edce2c6b4f54-config-data\") pod \"ceilometer-0\" (UID: \"a3c188b0-972c-46b1-bb59-edce2c6b4f54\") " pod="openstack/ceilometer-0" Jan 26 13:02:43 crc kubenswrapper[4881]: I0126 13:02:43.544688 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/843eec84-8a03-49b4-beda-773a76cecdbb-config-data\") pod \"nova-api-0\" (UID: \"843eec84-8a03-49b4-beda-773a76cecdbb\") " pod="openstack/nova-api-0" Jan 26 13:02:43 crc kubenswrapper[4881]: I0126 13:02:43.544740 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a3c188b0-972c-46b1-bb59-edce2c6b4f54-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"a3c188b0-972c-46b1-bb59-edce2c6b4f54\") " pod="openstack/ceilometer-0" Jan 26 13:02:43 crc kubenswrapper[4881]: I0126 13:02:43.544765 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a3c188b0-972c-46b1-bb59-edce2c6b4f54-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a3c188b0-972c-46b1-bb59-edce2c6b4f54\") " pod="openstack/ceilometer-0" Jan 26 13:02:43 crc kubenswrapper[4881]: I0126 13:02:43.544785 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/843eec84-8a03-49b4-beda-773a76cecdbb-internal-tls-certs\") pod \"nova-api-0\" (UID: \"843eec84-8a03-49b4-beda-773a76cecdbb\") " 
pod="openstack/nova-api-0" Jan 26 13:02:43 crc kubenswrapper[4881]: I0126 13:02:43.544805 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a3c188b0-972c-46b1-bb59-edce2c6b4f54-log-httpd\") pod \"ceilometer-0\" (UID: \"a3c188b0-972c-46b1-bb59-edce2c6b4f54\") " pod="openstack/ceilometer-0" Jan 26 13:02:43 crc kubenswrapper[4881]: I0126 13:02:43.544832 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8sf2\" (UniqueName: \"kubernetes.io/projected/843eec84-8a03-49b4-beda-773a76cecdbb-kube-api-access-t8sf2\") pod \"nova-api-0\" (UID: \"843eec84-8a03-49b4-beda-773a76cecdbb\") " pod="openstack/nova-api-0" Jan 26 13:02:43 crc kubenswrapper[4881]: I0126 13:02:43.544874 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/843eec84-8a03-49b4-beda-773a76cecdbb-logs\") pod \"nova-api-0\" (UID: \"843eec84-8a03-49b4-beda-773a76cecdbb\") " pod="openstack/nova-api-0" Jan 26 13:02:43 crc kubenswrapper[4881]: I0126 13:02:43.544920 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nzgx6\" (UniqueName: \"kubernetes.io/projected/a3c188b0-972c-46b1-bb59-edce2c6b4f54-kube-api-access-nzgx6\") pod \"ceilometer-0\" (UID: \"a3c188b0-972c-46b1-bb59-edce2c6b4f54\") " pod="openstack/ceilometer-0" Jan 26 13:02:43 crc kubenswrapper[4881]: I0126 13:02:43.544939 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a3c188b0-972c-46b1-bb59-edce2c6b4f54-scripts\") pod \"ceilometer-0\" (UID: \"a3c188b0-972c-46b1-bb59-edce2c6b4f54\") " pod="openstack/ceilometer-0" Jan 26 13:02:43 crc kubenswrapper[4881]: I0126 13:02:43.545036 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/843eec84-8a03-49b4-beda-773a76cecdbb-public-tls-certs\") pod \"nova-api-0\" (UID: \"843eec84-8a03-49b4-beda-773a76cecdbb\") " pod="openstack/nova-api-0" Jan 26 13:02:43 crc kubenswrapper[4881]: I0126 13:02:43.545585 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a3c188b0-972c-46b1-bb59-edce2c6b4f54-run-httpd\") pod \"ceilometer-0\" (UID: \"a3c188b0-972c-46b1-bb59-edce2c6b4f54\") " pod="openstack/ceilometer-0" Jan 26 13:02:43 crc kubenswrapper[4881]: I0126 13:02:43.546006 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/843eec84-8a03-49b4-beda-773a76cecdbb-logs\") pod \"nova-api-0\" (UID: \"843eec84-8a03-49b4-beda-773a76cecdbb\") " pod="openstack/nova-api-0" Jan 26 13:02:43 crc kubenswrapper[4881]: I0126 13:02:43.546971 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a3c188b0-972c-46b1-bb59-edce2c6b4f54-log-httpd\") pod \"ceilometer-0\" (UID: \"a3c188b0-972c-46b1-bb59-edce2c6b4f54\") " pod="openstack/ceilometer-0" Jan 26 13:02:43 crc kubenswrapper[4881]: I0126 13:02:43.549958 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a3c188b0-972c-46b1-bb59-edce2c6b4f54-scripts\") pod \"ceilometer-0\" (UID: \"a3c188b0-972c-46b1-bb59-edce2c6b4f54\") " pod="openstack/ceilometer-0" Jan 26 13:02:43 crc kubenswrapper[4881]: I0126 13:02:43.550461 4881 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/843eec84-8a03-49b4-beda-773a76cecdbb-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"843eec84-8a03-49b4-beda-773a76cecdbb\") " pod="openstack/nova-api-0" Jan 26 13:02:43 crc kubenswrapper[4881]: I0126 13:02:43.551177 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a3c188b0-972c-46b1-bb59-edce2c6b4f54-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a3c188b0-972c-46b1-bb59-edce2c6b4f54\") " pod="openstack/ceilometer-0" Jan 26 13:02:43 crc kubenswrapper[4881]: I0126 13:02:43.552087 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/843eec84-8a03-49b4-beda-773a76cecdbb-internal-tls-certs\") pod \"nova-api-0\" (UID: \"843eec84-8a03-49b4-beda-773a76cecdbb\") " pod="openstack/nova-api-0" Jan 26 13:02:43 crc kubenswrapper[4881]: I0126 13:02:43.552181 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/843eec84-8a03-49b4-beda-773a76cecdbb-config-data\") pod \"nova-api-0\" (UID: \"843eec84-8a03-49b4-beda-773a76cecdbb\") " pod="openstack/nova-api-0" Jan 26 13:02:43 crc kubenswrapper[4881]: I0126 13:02:43.552598 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a3c188b0-972c-46b1-bb59-edce2c6b4f54-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"a3c188b0-972c-46b1-bb59-edce2c6b4f54\") " pod="openstack/ceilometer-0" Jan 26 13:02:43 crc kubenswrapper[4881]: I0126 13:02:43.557390 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/843eec84-8a03-49b4-beda-773a76cecdbb-public-tls-certs\") pod \"nova-api-0\" (UID: \"843eec84-8a03-49b4-beda-773a76cecdbb\") " pod="openstack/nova-api-0" Jan 26 13:02:43 crc kubenswrapper[4881]: I0126 13:02:43.558689 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3c188b0-972c-46b1-bb59-edce2c6b4f54-config-data\") pod \"ceilometer-0\" (UID: \"a3c188b0-972c-46b1-bb59-edce2c6b4f54\") " pod="openstack/ceilometer-0" Jan 26 13:02:43 crc kubenswrapper[4881]: I0126 13:02:43.570759 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t8sf2\" (UniqueName: \"kubernetes.io/projected/843eec84-8a03-49b4-beda-773a76cecdbb-kube-api-access-t8sf2\") pod \"nova-api-0\" (UID: \"843eec84-8a03-49b4-beda-773a76cecdbb\") " pod="openstack/nova-api-0" Jan 26 13:02:43 crc kubenswrapper[4881]: I0126 13:02:43.571094 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3c188b0-972c-46b1-bb59-edce2c6b4f54-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a3c188b0-972c-46b1-bb59-edce2c6b4f54\") " pod="openstack/ceilometer-0" Jan 26 13:02:43 crc kubenswrapper[4881]: I0126 13:02:43.572286 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nzgx6\" (UniqueName: \"kubernetes.io/projected/a3c188b0-972c-46b1-bb59-edce2c6b4f54-kube-api-access-nzgx6\") pod \"ceilometer-0\" (UID: \"a3c188b0-972c-46b1-bb59-edce2c6b4f54\") " pod="openstack/ceilometer-0" Jan 26 13:02:43 crc kubenswrapper[4881]: I0126 13:02:43.652569 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 13:02:43 crc kubenswrapper[4881]: I0126 13:02:43.663247 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 26 13:02:45 crc kubenswrapper[4881]: I0126 13:02:44.096083 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8cb1811-603c-47eb-bcf0-37d705b75e5b" path="/var/lib/kubelet/pods/d8cb1811-603c-47eb-bcf0-37d705b75e5b/volumes" Jan 26 13:02:45 crc kubenswrapper[4881]: I0126 13:02:44.097030 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5b3bfa7-449d-4e9c-b83a-592d38765699" path="/var/lib/kubelet/pods/e5b3bfa7-449d-4e9c-b83a-592d38765699/volumes" Jan 26 13:02:45 crc kubenswrapper[4881]: I0126 13:02:44.173477 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 13:02:45 crc kubenswrapper[4881]: W0126 13:02:44.182374 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda3c188b0_972c_46b1_bb59_edce2c6b4f54.slice/crio-29872594f8067ad812c5a8854cb07833b5b14d4553379dbee6d7a1443bb51749 WatchSource:0}: Error finding container 29872594f8067ad812c5a8854cb07833b5b14d4553379dbee6d7a1443bb51749: Status 404 returned error can't find the container with id 29872594f8067ad812c5a8854cb07833b5b14d4553379dbee6d7a1443bb51749 Jan 26 13:02:45 crc kubenswrapper[4881]: I0126 13:02:44.200464 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a3c188b0-972c-46b1-bb59-edce2c6b4f54","Type":"ContainerStarted","Data":"29872594f8067ad812c5a8854cb07833b5b14d4553379dbee6d7a1443bb51749"} Jan 26 13:02:45 crc kubenswrapper[4881]: I0126 13:02:44.272207 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 26 13:02:45 crc kubenswrapper[4881]: W0126 13:02:44.272999 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod843eec84_8a03_49b4_beda_773a76cecdbb.slice/crio-962c07c2db49f9789e1f6e722fc4f308e383c0679d1c35060ab9fe2e416766b0 WatchSource:0}: Error finding container 962c07c2db49f9789e1f6e722fc4f308e383c0679d1c35060ab9fe2e416766b0: Status 404 returned error can't find the container with id 962c07c2db49f9789e1f6e722fc4f308e383c0679d1c35060ab9fe2e416766b0 Jan 26 13:02:45 crc kubenswrapper[4881]: I0126 13:02:45.215714 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"843eec84-8a03-49b4-beda-773a76cecdbb","Type":"ContainerStarted","Data":"7e9039fc9ead27df963be9cf51e3499754654dd3480af70a00a4b326270d8d1d"} Jan 26 13:02:45 crc kubenswrapper[4881]: I0126 13:02:45.215956 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"843eec84-8a03-49b4-beda-773a76cecdbb","Type":"ContainerStarted","Data":"0c743d52268a9984f5f5877c141d19fb81c09fd844112944723ca69fda2b26d6"} Jan 26 13:02:45 crc kubenswrapper[4881]: I0126 13:02:45.215970 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"843eec84-8a03-49b4-beda-773a76cecdbb","Type":"ContainerStarted","Data":"962c07c2db49f9789e1f6e722fc4f308e383c0679d1c35060ab9fe2e416766b0"} Jan 26 13:02:45 crc kubenswrapper[4881]: I0126 13:02:45.231035 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"a3c188b0-972c-46b1-bb59-edce2c6b4f54","Type":"ContainerStarted","Data":"9d4deabfed6e510589f9edb920a31ab3fe58d8d3a6a0c82eb0f0dd171599a47f"} Jan 26 13:02:45 crc kubenswrapper[4881]: I0126 13:02:45.231070 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a3c188b0-972c-46b1-bb59-edce2c6b4f54","Type":"ContainerStarted","Data":"80548db204956ccede623fb5537fbfde72ed2102acc1134cc9169e5815411168"} Jan 26 13:02:45 crc kubenswrapper[4881]: I0126 13:02:45.248930 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.248913015 podStartE2EDuration="2.248913015s" podCreationTimestamp="2026-01-26 13:02:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 13:02:45.244730624 +0000 UTC m=+1637.724040650" watchObservedRunningTime="2026-01-26 13:02:45.248913015 +0000 UTC m=+1637.728223041" Jan 26 13:02:46 crc kubenswrapper[4881]: I0126 13:02:46.247810 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a3c188b0-972c-46b1-bb59-edce2c6b4f54","Type":"ContainerStarted","Data":"622cc4a982ee48da45fdaf84e31325e00ad2ae9e7083b50ba6b630016a658dea"} Jan 26 13:02:47 crc kubenswrapper[4881]: I0126 13:02:47.553426 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Jan 26 13:02:47 crc kubenswrapper[4881]: I0126 13:02:47.813715 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Jan 26 13:02:47 crc kubenswrapper[4881]: I0126 13:02:47.843077 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Jan 26 13:02:48 crc kubenswrapper[4881]: I0126 13:02:48.283963 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a3c188b0-972c-46b1-bb59-edce2c6b4f54","Type":"ContainerStarted","Data":"7d641b55a13a6b1a65d85820aacd00a5a97d876b8e49bfcd7f91971d420e5e44"} Jan 26 13:02:48 crc kubenswrapper[4881]: I0126 13:02:48.284813 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 26 13:02:48 crc kubenswrapper[4881]: I0126 13:02:48.306429 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.472337562 podStartE2EDuration="5.306409729s" podCreationTimestamp="2026-01-26 13:02:43 +0000 UTC" firstStartedPulling="2026-01-26 13:02:44.185830884 +0000 UTC m=+1636.665140910" lastFinishedPulling="2026-01-26 13:02:47.019903051 +0000 UTC m=+1639.499213077" observedRunningTime="2026-01-26 13:02:48.304914193 +0000 UTC m=+1640.784224259" watchObservedRunningTime="2026-01-26 13:02:48.306409729 +0000 UTC m=+1640.785719775" Jan 26 13:02:48 crc kubenswrapper[4881]: I0126 13:02:48.314481 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Jan 26 13:02:48 crc kubenswrapper[4881]: I0126 13:02:48.474318 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-m65pm"] Jan 26 13:02:48 crc kubenswrapper[4881]: I0126 13:02:48.475867 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-m65pm" Jan 26 13:02:48 crc kubenswrapper[4881]: I0126 13:02:48.478309 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Jan 26 13:02:48 crc kubenswrapper[4881]: I0126 13:02:48.482617 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Jan 26 13:02:48 crc kubenswrapper[4881]: I0126 13:02:48.494926 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-m65pm"] Jan 26 13:02:48 crc kubenswrapper[4881]: I0126 13:02:48.542490 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6999845677-q7hsz" Jan 26 13:02:48 crc kubenswrapper[4881]: I0126 13:02:48.559500 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmt9n\" (UniqueName: \"kubernetes.io/projected/e7b77227-f137-4a0e-84bc-383c0facf6b9-kube-api-access-zmt9n\") pod \"nova-cell1-cell-mapping-m65pm\" (UID: \"e7b77227-f137-4a0e-84bc-383c0facf6b9\") " pod="openstack/nova-cell1-cell-mapping-m65pm" Jan 26 13:02:48 crc kubenswrapper[4881]: I0126 13:02:48.559566 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7b77227-f137-4a0e-84bc-383c0facf6b9-config-data\") pod \"nova-cell1-cell-mapping-m65pm\" (UID: \"e7b77227-f137-4a0e-84bc-383c0facf6b9\") " pod="openstack/nova-cell1-cell-mapping-m65pm" Jan 26 13:02:48 crc kubenswrapper[4881]: I0126 13:02:48.559657 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7b77227-f137-4a0e-84bc-383c0facf6b9-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-m65pm\" (UID: \"e7b77227-f137-4a0e-84bc-383c0facf6b9\") " pod="openstack/nova-cell1-cell-mapping-m65pm" Jan 26 13:02:48 crc kubenswrapper[4881]: I0126 13:02:48.559700 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e7b77227-f137-4a0e-84bc-383c0facf6b9-scripts\") pod \"nova-cell1-cell-mapping-m65pm\" (UID: \"e7b77227-f137-4a0e-84bc-383c0facf6b9\") " pod="openstack/nova-cell1-cell-mapping-m65pm" Jan 26 13:02:48 crc kubenswrapper[4881]: I0126 13:02:48.613501 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7777964479-66ccm"] Jan 26 13:02:48 crc kubenswrapper[4881]: I0126 13:02:48.615653 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7777964479-66ccm" podUID="24649036-906f-4ba5-a838-aa36ccee3760" containerName="dnsmasq-dns" containerID="cri-o://507be02147fbf816d9a1b9df90f58821dfb79a7cbbfe328b574f08bb0deea846" gracePeriod=10 Jan 26 13:02:48 crc kubenswrapper[4881]: I0126 13:02:48.662082 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7b77227-f137-4a0e-84bc-383c0facf6b9-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-m65pm\" (UID: \"e7b77227-f137-4a0e-84bc-383c0facf6b9\") " pod="openstack/nova-cell1-cell-mapping-m65pm" Jan 26 13:02:48 crc kubenswrapper[4881]: I0126 13:02:48.662216 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e7b77227-f137-4a0e-84bc-383c0facf6b9-scripts\") 
pod \"nova-cell1-cell-mapping-m65pm\" (UID: \"e7b77227-f137-4a0e-84bc-383c0facf6b9\") " pod="openstack/nova-cell1-cell-mapping-m65pm" Jan 26 13:02:48 crc kubenswrapper[4881]: I0126 13:02:48.662331 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zmt9n\" (UniqueName: \"kubernetes.io/projected/e7b77227-f137-4a0e-84bc-383c0facf6b9-kube-api-access-zmt9n\") pod \"nova-cell1-cell-mapping-m65pm\" (UID: \"e7b77227-f137-4a0e-84bc-383c0facf6b9\") " pod="openstack/nova-cell1-cell-mapping-m65pm" Jan 26 13:02:48 crc kubenswrapper[4881]: I0126 13:02:48.662367 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7b77227-f137-4a0e-84bc-383c0facf6b9-config-data\") pod \"nova-cell1-cell-mapping-m65pm\" (UID: \"e7b77227-f137-4a0e-84bc-383c0facf6b9\") " pod="openstack/nova-cell1-cell-mapping-m65pm" Jan 26 13:02:48 crc kubenswrapper[4881]: I0126 13:02:48.673171 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7b77227-f137-4a0e-84bc-383c0facf6b9-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-m65pm\" (UID: \"e7b77227-f137-4a0e-84bc-383c0facf6b9\") " pod="openstack/nova-cell1-cell-mapping-m65pm" Jan 26 13:02:48 crc kubenswrapper[4881]: I0126 13:02:48.682258 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7b77227-f137-4a0e-84bc-383c0facf6b9-config-data\") pod \"nova-cell1-cell-mapping-m65pm\" (UID: \"e7b77227-f137-4a0e-84bc-383c0facf6b9\") " pod="openstack/nova-cell1-cell-mapping-m65pm" Jan 26 13:02:48 crc kubenswrapper[4881]: I0126 13:02:48.682688 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zmt9n\" (UniqueName: \"kubernetes.io/projected/e7b77227-f137-4a0e-84bc-383c0facf6b9-kube-api-access-zmt9n\") pod \"nova-cell1-cell-mapping-m65pm\" (UID: \"e7b77227-f137-4a0e-84bc-383c0facf6b9\") " pod="openstack/nova-cell1-cell-mapping-m65pm" Jan 26 13:02:48 crc kubenswrapper[4881]: I0126 13:02:48.683429 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e7b77227-f137-4a0e-84bc-383c0facf6b9-scripts\") pod \"nova-cell1-cell-mapping-m65pm\" (UID: \"e7b77227-f137-4a0e-84bc-383c0facf6b9\") " pod="openstack/nova-cell1-cell-mapping-m65pm" Jan 26 13:02:48 crc kubenswrapper[4881]: I0126 13:02:48.796820 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-m65pm"
Jan 26 13:02:49 crc kubenswrapper[4881]: W0126 13:02:49.318425 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode7b77227_f137_4a0e_84bc_383c0facf6b9.slice/crio-08b69bef506356a064428f143931fda7602e9699414246645cc7942a94d0d8c6 WatchSource:0}: Error finding container 08b69bef506356a064428f143931fda7602e9699414246645cc7942a94d0d8c6: Status 404 returned error can't find the container with id 08b69bef506356a064428f143931fda7602e9699414246645cc7942a94d0d8c6
Jan 26 13:02:49 crc kubenswrapper[4881]: I0126 13:02:49.332804 4881 generic.go:334] "Generic (PLEG): container finished" podID="24649036-906f-4ba5-a838-aa36ccee3760" containerID="507be02147fbf816d9a1b9df90f58821dfb79a7cbbfe328b574f08bb0deea846" exitCode=0
Jan 26 13:02:49 crc kubenswrapper[4881]: I0126 13:02:49.333965 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7777964479-66ccm" event={"ID":"24649036-906f-4ba5-a838-aa36ccee3760","Type":"ContainerDied","Data":"507be02147fbf816d9a1b9df90f58821dfb79a7cbbfe328b574f08bb0deea846"}
Jan 26 13:02:49 crc kubenswrapper[4881]: I0126 13:02:49.339236 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-m65pm"]
Jan 26 13:02:49 crc kubenswrapper[4881]: I0126 13:02:49.425216 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7777964479-66ccm"
Jan 26 13:02:49 crc kubenswrapper[4881]: I0126 13:02:49.591784 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/24649036-906f-4ba5-a838-aa36ccee3760-dns-svc\") pod \"24649036-906f-4ba5-a838-aa36ccee3760\" (UID: \"24649036-906f-4ba5-a838-aa36ccee3760\") "
Jan 26 13:02:49 crc kubenswrapper[4881]: I0126 13:02:49.591850 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/24649036-906f-4ba5-a838-aa36ccee3760-ovsdbserver-nb\") pod \"24649036-906f-4ba5-a838-aa36ccee3760\" (UID: \"24649036-906f-4ba5-a838-aa36ccee3760\") "
Jan 26 13:02:49 crc kubenswrapper[4881]: I0126 13:02:49.591886 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/24649036-906f-4ba5-a838-aa36ccee3760-ovsdbserver-sb\") pod \"24649036-906f-4ba5-a838-aa36ccee3760\" (UID: \"24649036-906f-4ba5-a838-aa36ccee3760\") "
Jan 26 13:02:49 crc kubenswrapper[4881]: I0126 13:02:49.591902 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/24649036-906f-4ba5-a838-aa36ccee3760-dns-swift-storage-0\") pod \"24649036-906f-4ba5-a838-aa36ccee3760\" (UID: \"24649036-906f-4ba5-a838-aa36ccee3760\") "
Jan 26 13:02:49 crc kubenswrapper[4881]: I0126 13:02:49.591961 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24649036-906f-4ba5-a838-aa36ccee3760-config\") pod \"24649036-906f-4ba5-a838-aa36ccee3760\" (UID: \"24649036-906f-4ba5-a838-aa36ccee3760\") "
Jan 26 13:02:49 crc kubenswrapper[4881]: I0126 13:02:49.592086 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2xn7g\" (UniqueName: \"kubernetes.io/projected/24649036-906f-4ba5-a838-aa36ccee3760-kube-api-access-2xn7g\") pod \"24649036-906f-4ba5-a838-aa36ccee3760\" (UID: \"24649036-906f-4ba5-a838-aa36ccee3760\") "
Jan 26 13:02:49 crc kubenswrapper[4881]: I0126 13:02:49.598500 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24649036-906f-4ba5-a838-aa36ccee3760-kube-api-access-2xn7g" (OuterVolumeSpecName: "kube-api-access-2xn7g") pod "24649036-906f-4ba5-a838-aa36ccee3760" (UID: "24649036-906f-4ba5-a838-aa36ccee3760"). InnerVolumeSpecName "kube-api-access-2xn7g". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 13:02:49 crc kubenswrapper[4881]: I0126 13:02:49.654369 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24649036-906f-4ba5-a838-aa36ccee3760-config" (OuterVolumeSpecName: "config") pod "24649036-906f-4ba5-a838-aa36ccee3760" (UID: "24649036-906f-4ba5-a838-aa36ccee3760"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 13:02:49 crc kubenswrapper[4881]: I0126 13:02:49.655853 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24649036-906f-4ba5-a838-aa36ccee3760-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "24649036-906f-4ba5-a838-aa36ccee3760" (UID: "24649036-906f-4ba5-a838-aa36ccee3760"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 13:02:49 crc kubenswrapper[4881]: I0126 13:02:49.655883 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24649036-906f-4ba5-a838-aa36ccee3760-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "24649036-906f-4ba5-a838-aa36ccee3760" (UID: "24649036-906f-4ba5-a838-aa36ccee3760"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 13:02:49 crc kubenswrapper[4881]: I0126 13:02:49.657392 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24649036-906f-4ba5-a838-aa36ccee3760-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "24649036-906f-4ba5-a838-aa36ccee3760" (UID: "24649036-906f-4ba5-a838-aa36ccee3760"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 13:02:49 crc kubenswrapper[4881]: I0126 13:02:49.664874 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24649036-906f-4ba5-a838-aa36ccee3760-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "24649036-906f-4ba5-a838-aa36ccee3760" (UID: "24649036-906f-4ba5-a838-aa36ccee3760"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 13:02:49 crc kubenswrapper[4881]: I0126 13:02:49.694487 4881 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/24649036-906f-4ba5-a838-aa36ccee3760-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 26 13:02:49 crc kubenswrapper[4881]: I0126 13:02:49.694547 4881 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/24649036-906f-4ba5-a838-aa36ccee3760-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 26 13:02:49 crc kubenswrapper[4881]: I0126 13:02:49.694562 4881 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/24649036-906f-4ba5-a838-aa36ccee3760-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 26 13:02:49 crc kubenswrapper[4881]: I0126 13:02:49.694575 4881 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/24649036-906f-4ba5-a838-aa36ccee3760-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Jan 26 13:02:49 crc kubenswrapper[4881]: I0126 13:02:49.694587 4881 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24649036-906f-4ba5-a838-aa36ccee3760-config\") on node \"crc\" DevicePath \"\""
Jan 26 13:02:49 crc kubenswrapper[4881]: I0126 13:02:49.694596 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2xn7g\" (UniqueName: \"kubernetes.io/projected/24649036-906f-4ba5-a838-aa36ccee3760-kube-api-access-2xn7g\") on node \"crc\" DevicePath \"\""
Jan 26 13:02:50 crc kubenswrapper[4881]: I0126 13:02:50.184020 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-v8wls"]
Jan 26 13:02:50 crc kubenswrapper[4881]: E0126 13:02:50.185761 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24649036-906f-4ba5-a838-aa36ccee3760" containerName="dnsmasq-dns"
Jan 26 13:02:50 crc kubenswrapper[4881]: I0126 13:02:50.185869 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="24649036-906f-4ba5-a838-aa36ccee3760" containerName="dnsmasq-dns"
Jan 26 13:02:50 crc kubenswrapper[4881]: E0126 13:02:50.185949 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24649036-906f-4ba5-a838-aa36ccee3760" containerName="init"
Jan 26 13:02:50 crc kubenswrapper[4881]: I0126 13:02:50.186011 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="24649036-906f-4ba5-a838-aa36ccee3760" containerName="init"
Jan 26 13:02:50 crc kubenswrapper[4881]: I0126 13:02:50.186322 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="24649036-906f-4ba5-a838-aa36ccee3760" containerName="dnsmasq-dns"
Jan 26 13:02:50 crc kubenswrapper[4881]: I0126 13:02:50.193824 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-v8wls"
Jan 26 13:02:50 crc kubenswrapper[4881]: I0126 13:02:50.194893 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-v8wls"]
Jan 26 13:02:50 crc kubenswrapper[4881]: I0126 13:02:50.306181 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b52903d-117a-4c80-a478-bfb576c26c00-utilities\") pod \"redhat-operators-v8wls\" (UID: \"0b52903d-117a-4c80-a478-bfb576c26c00\") " pod="openshift-marketplace/redhat-operators-v8wls"
Jan 26 13:02:50 crc kubenswrapper[4881]: I0126 13:02:50.306281 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rx24v\" (UniqueName: \"kubernetes.io/projected/0b52903d-117a-4c80-a478-bfb576c26c00-kube-api-access-rx24v\") pod \"redhat-operators-v8wls\" (UID: \"0b52903d-117a-4c80-a478-bfb576c26c00\") " pod="openshift-marketplace/redhat-operators-v8wls"
Jan 26 13:02:50 crc kubenswrapper[4881]: I0126 13:02:50.306458 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b52903d-117a-4c80-a478-bfb576c26c00-catalog-content\") pod \"redhat-operators-v8wls\" (UID: \"0b52903d-117a-4c80-a478-bfb576c26c00\") " pod="openshift-marketplace/redhat-operators-v8wls"
Jan 26 13:02:50 crc kubenswrapper[4881]: I0126 13:02:50.344913 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7777964479-66ccm"
Jan 26 13:02:50 crc kubenswrapper[4881]: I0126 13:02:50.344925 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7777964479-66ccm" event={"ID":"24649036-906f-4ba5-a838-aa36ccee3760","Type":"ContainerDied","Data":"8c5d391689121d96680b651348f63a74b6069d2bf85f8d8db5693a05935736fd"}
Jan 26 13:02:50 crc kubenswrapper[4881]: I0126 13:02:50.344992 4881 scope.go:117] "RemoveContainer" containerID="507be02147fbf816d9a1b9df90f58821dfb79a7cbbfe328b574f08bb0deea846"
Jan 26 13:02:50 crc kubenswrapper[4881]: I0126 13:02:50.347793 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-m65pm" event={"ID":"e7b77227-f137-4a0e-84bc-383c0facf6b9","Type":"ContainerStarted","Data":"62d6b06cdc7085c02cfb8f7661419cca91af57b207f114cdb79d2ad156a5ed39"}
Jan 26 13:02:50 crc kubenswrapper[4881]: I0126 13:02:50.347827 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-m65pm" event={"ID":"e7b77227-f137-4a0e-84bc-383c0facf6b9","Type":"ContainerStarted","Data":"08b69bef506356a064428f143931fda7602e9699414246645cc7942a94d0d8c6"}
Jan 26 13:02:50 crc kubenswrapper[4881]: I0126 13:02:50.377845 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7777964479-66ccm"]
Jan 26 13:02:50 crc kubenswrapper[4881]: I0126 13:02:50.388624 4881 scope.go:117] "RemoveContainer" containerID="ed3bd61a776ae3733b6bef4de3b7839268171c097cb295b181919f0a43c4def8"
Jan 26 13:02:50 crc kubenswrapper[4881]: I0126 13:02:50.392382 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7777964479-66ccm"]
Jan 26 13:02:50 crc kubenswrapper[4881]: I0126 13:02:50.400305 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-m65pm" podStartSLOduration=2.400282327 podStartE2EDuration="2.400282327s" podCreationTimestamp="2026-01-26 13:02:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 13:02:50.385507279 +0000 UTC m=+1642.864817335" watchObservedRunningTime="2026-01-26 13:02:50.400282327 +0000 UTC m=+1642.879592363"
Jan 26 13:02:50 crc kubenswrapper[4881]: I0126 13:02:50.409740 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b52903d-117a-4c80-a478-bfb576c26c00-utilities\") pod \"redhat-operators-v8wls\" (UID: \"0b52903d-117a-4c80-a478-bfb576c26c00\") " pod="openshift-marketplace/redhat-operators-v8wls"
Jan 26 13:02:50 crc kubenswrapper[4881]: I0126 13:02:50.409806 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rx24v\" (UniqueName: \"kubernetes.io/projected/0b52903d-117a-4c80-a478-bfb576c26c00-kube-api-access-rx24v\") pod \"redhat-operators-v8wls\" (UID: \"0b52903d-117a-4c80-a478-bfb576c26c00\") " pod="openshift-marketplace/redhat-operators-v8wls"
Jan 26 13:02:50 crc kubenswrapper[4881]: I0126 13:02:50.409910 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b52903d-117a-4c80-a478-bfb576c26c00-catalog-content\") pod \"redhat-operators-v8wls\" (UID: \"0b52903d-117a-4c80-a478-bfb576c26c00\") " pod="openshift-marketplace/redhat-operators-v8wls"
Jan 26 13:02:50 crc kubenswrapper[4881]: I0126 13:02:50.410549 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b52903d-117a-4c80-a478-bfb576c26c00-catalog-content\") pod \"redhat-operators-v8wls\" (UID: \"0b52903d-117a-4c80-a478-bfb576c26c00\") " pod="openshift-marketplace/redhat-operators-v8wls"
Jan 26 13:02:50 crc kubenswrapper[4881]: I0126 13:02:50.410793 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b52903d-117a-4c80-a478-bfb576c26c00-utilities\") pod \"redhat-operators-v8wls\" (UID: \"0b52903d-117a-4c80-a478-bfb576c26c00\") " pod="openshift-marketplace/redhat-operators-v8wls"
Jan 26 13:02:50 crc kubenswrapper[4881]: I0126 13:02:50.439973 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rx24v\" (UniqueName: \"kubernetes.io/projected/0b52903d-117a-4c80-a478-bfb576c26c00-kube-api-access-rx24v\") pod \"redhat-operators-v8wls\" (UID: \"0b52903d-117a-4c80-a478-bfb576c26c00\") " pod="openshift-marketplace/redhat-operators-v8wls"
Jan 26 13:02:50 crc kubenswrapper[4881]: I0126 13:02:50.520467 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-v8wls"
Jan 26 13:02:50 crc kubenswrapper[4881]: I0126 13:02:50.986997 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-v8wls"]
Jan 26 13:02:51 crc kubenswrapper[4881]: W0126 13:02:51.004986 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0b52903d_117a_4c80_a478_bfb576c26c00.slice/crio-6af7701417693b31a0d01f15f75bab6d933d83c3e2a1d504419da6d0dfd6b26a WatchSource:0}: Error finding container 6af7701417693b31a0d01f15f75bab6d933d83c3e2a1d504419da6d0dfd6b26a: Status 404 returned error can't find the container with id 6af7701417693b31a0d01f15f75bab6d933d83c3e2a1d504419da6d0dfd6b26a
Jan 26 13:02:51 crc kubenswrapper[4881]: I0126 13:02:51.361038 4881 generic.go:334] "Generic (PLEG): container finished" podID="0b52903d-117a-4c80-a478-bfb576c26c00" containerID="2c48dd3159dceb94ec0e4d265285e6d43607929b41a33db75eaaeeae00c67a1b" exitCode=0
Jan 26 13:02:51 crc kubenswrapper[4881]: I0126 13:02:51.361158 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v8wls" event={"ID":"0b52903d-117a-4c80-a478-bfb576c26c00","Type":"ContainerDied","Data":"2c48dd3159dceb94ec0e4d265285e6d43607929b41a33db75eaaeeae00c67a1b"}
Jan 26 13:02:51 crc kubenswrapper[4881]: I0126 13:02:51.361190 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v8wls" event={"ID":"0b52903d-117a-4c80-a478-bfb576c26c00","Type":"ContainerStarted","Data":"6af7701417693b31a0d01f15f75bab6d933d83c3e2a1d504419da6d0dfd6b26a"}
Jan 26 13:02:52 crc kubenswrapper[4881]: I0126 13:02:52.095219 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24649036-906f-4ba5-a838-aa36ccee3760" path="/var/lib/kubelet/pods/24649036-906f-4ba5-a838-aa36ccee3760/volumes"
Jan 26 13:02:53 crc kubenswrapper[4881]: I0126 13:02:53.361641 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-lj92n"]
Jan 26 13:02:53 crc kubenswrapper[4881]: I0126 13:02:53.363809 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lj92n"
Jan 26 13:02:53 crc kubenswrapper[4881]: I0126 13:02:53.381210 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lj92n"]
Jan 26 13:02:53 crc kubenswrapper[4881]: I0126 13:02:53.391575 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v8wls" event={"ID":"0b52903d-117a-4c80-a478-bfb576c26c00","Type":"ContainerStarted","Data":"a5327bdb3cc6ac48ca11695531bd584b83ae18e0aca8f1fc8d0319de898ab293"}
Jan 26 13:02:53 crc kubenswrapper[4881]: I0126 13:02:53.466988 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b57v5\" (UniqueName: \"kubernetes.io/projected/f890218f-7c5f-4227-9f47-68ebd7d7f7ae-kube-api-access-b57v5\") pod \"redhat-marketplace-lj92n\" (UID: \"f890218f-7c5f-4227-9f47-68ebd7d7f7ae\") " pod="openshift-marketplace/redhat-marketplace-lj92n"
Jan 26 13:02:53 crc kubenswrapper[4881]: I0126 13:02:53.467064 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f890218f-7c5f-4227-9f47-68ebd7d7f7ae-catalog-content\") pod \"redhat-marketplace-lj92n\" (UID: \"f890218f-7c5f-4227-9f47-68ebd7d7f7ae\") " pod="openshift-marketplace/redhat-marketplace-lj92n"
Jan 26 13:02:53 crc kubenswrapper[4881]: I0126 13:02:53.467107 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f890218f-7c5f-4227-9f47-68ebd7d7f7ae-utilities\") pod \"redhat-marketplace-lj92n\" (UID: \"f890218f-7c5f-4227-9f47-68ebd7d7f7ae\") " pod="openshift-marketplace/redhat-marketplace-lj92n"
Jan 26 13:02:53 crc kubenswrapper[4881]: I0126 13:02:53.569308 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b57v5\" (UniqueName: \"kubernetes.io/projected/f890218f-7c5f-4227-9f47-68ebd7d7f7ae-kube-api-access-b57v5\") pod \"redhat-marketplace-lj92n\" (UID: \"f890218f-7c5f-4227-9f47-68ebd7d7f7ae\") " pod="openshift-marketplace/redhat-marketplace-lj92n"
Jan 26 13:02:53 crc kubenswrapper[4881]: I0126 13:02:53.569760 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f890218f-7c5f-4227-9f47-68ebd7d7f7ae-catalog-content\") pod \"redhat-marketplace-lj92n\" (UID: \"f890218f-7c5f-4227-9f47-68ebd7d7f7ae\") " pod="openshift-marketplace/redhat-marketplace-lj92n"
Jan 26 13:02:53 crc kubenswrapper[4881]: I0126 13:02:53.570026 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f890218f-7c5f-4227-9f47-68ebd7d7f7ae-utilities\") pod \"redhat-marketplace-lj92n\" (UID: \"f890218f-7c5f-4227-9f47-68ebd7d7f7ae\") " pod="openshift-marketplace/redhat-marketplace-lj92n"
Jan 26 13:02:53 crc kubenswrapper[4881]: I0126 13:02:53.570183 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f890218f-7c5f-4227-9f47-68ebd7d7f7ae-catalog-content\") pod \"redhat-marketplace-lj92n\" (UID: \"f890218f-7c5f-4227-9f47-68ebd7d7f7ae\") " pod="openshift-marketplace/redhat-marketplace-lj92n"
Jan 26 13:02:53 crc kubenswrapper[4881]: I0126 13:02:53.570555 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f890218f-7c5f-4227-9f47-68ebd7d7f7ae-utilities\") pod \"redhat-marketplace-lj92n\" (UID: \"f890218f-7c5f-4227-9f47-68ebd7d7f7ae\") " pod="openshift-marketplace/redhat-marketplace-lj92n"
Jan 26 13:02:53 crc kubenswrapper[4881]: I0126 13:02:53.609678 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b57v5\" (UniqueName: \"kubernetes.io/projected/f890218f-7c5f-4227-9f47-68ebd7d7f7ae-kube-api-access-b57v5\") pod \"redhat-marketplace-lj92n\" (UID: \"f890218f-7c5f-4227-9f47-68ebd7d7f7ae\") " pod="openshift-marketplace/redhat-marketplace-lj92n"
Jan 26 13:02:53 crc kubenswrapper[4881]: I0126 13:02:53.664206 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 26 13:02:53 crc kubenswrapper[4881]: I0126 13:02:53.664966 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 26 13:02:53 crc kubenswrapper[4881]: I0126 13:02:53.689303 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lj92n"
Jan 26 13:02:54 crc kubenswrapper[4881]: I0126 13:02:54.210325 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lj92n"]
Jan 26 13:02:54 crc kubenswrapper[4881]: W0126 13:02:54.214761 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf890218f_7c5f_4227_9f47_68ebd7d7f7ae.slice/crio-8adc55f0f8884eb24f41ccbb0c0009402e1b988ff1b375b5773cc64a08d3d288 WatchSource:0}: Error finding container 8adc55f0f8884eb24f41ccbb0c0009402e1b988ff1b375b5773cc64a08d3d288: Status 404 returned error can't find the container with id 8adc55f0f8884eb24f41ccbb0c0009402e1b988ff1b375b5773cc64a08d3d288
Jan 26 13:02:54 crc kubenswrapper[4881]: I0126 13:02:54.402732 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lj92n" event={"ID":"f890218f-7c5f-4227-9f47-68ebd7d7f7ae","Type":"ContainerStarted","Data":"8adc55f0f8884eb24f41ccbb0c0009402e1b988ff1b375b5773cc64a08d3d288"}
Jan 26 13:02:54 crc kubenswrapper[4881]: I0126 13:02:54.678817 4881 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="843eec84-8a03-49b4-beda-773a76cecdbb" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.224:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 26 13:02:54 crc kubenswrapper[4881]: I0126 13:02:54.678771 4881 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="843eec84-8a03-49b4-beda-773a76cecdbb" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.224:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 26 13:02:54 crc kubenswrapper[4881]: I0126 13:02:54.790060 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fwlbz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 13:02:54 crc kubenswrapper[4881]: I0126 13:02:54.790162 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 13:02:55 crc kubenswrapper[4881]: I0126 13:02:55.413163 4881 generic.go:334] "Generic (PLEG): container finished" podID="0b52903d-117a-4c80-a478-bfb576c26c00" containerID="a5327bdb3cc6ac48ca11695531bd584b83ae18e0aca8f1fc8d0319de898ab293" exitCode=0
Jan 26 13:02:55 crc kubenswrapper[4881]: I0126 13:02:55.413243 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v8wls" event={"ID":"0b52903d-117a-4c80-a478-bfb576c26c00","Type":"ContainerDied","Data":"a5327bdb3cc6ac48ca11695531bd584b83ae18e0aca8f1fc8d0319de898ab293"}
Jan 26 13:02:55 crc kubenswrapper[4881]: I0126 13:02:55.416001 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lj92n" event={"ID":"f890218f-7c5f-4227-9f47-68ebd7d7f7ae","Type":"ContainerStarted","Data":"f43805e33f6ee32c55c4a1034c810e030001f4479319537c411ac735a0ff6b20"}
Jan 26 13:02:56 crc kubenswrapper[4881]: I0126 13:02:56.428827 4881 generic.go:334] "Generic (PLEG): container finished" podID="f890218f-7c5f-4227-9f47-68ebd7d7f7ae" containerID="f43805e33f6ee32c55c4a1034c810e030001f4479319537c411ac735a0ff6b20" exitCode=0
Jan 26 13:02:56 crc kubenswrapper[4881]: I0126 13:02:56.428982 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lj92n" event={"ID":"f890218f-7c5f-4227-9f47-68ebd7d7f7ae","Type":"ContainerDied","Data":"f43805e33f6ee32c55c4a1034c810e030001f4479319537c411ac735a0ff6b20"}
Jan 26 13:02:57 crc kubenswrapper[4881]: I0126 13:02:57.445068 4881 generic.go:334] "Generic (PLEG): container finished" podID="e7b77227-f137-4a0e-84bc-383c0facf6b9" containerID="62d6b06cdc7085c02cfb8f7661419cca91af57b207f114cdb79d2ad156a5ed39" exitCode=0
Jan 26 13:02:57 crc kubenswrapper[4881]: I0126 13:02:57.445435 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-m65pm" event={"ID":"e7b77227-f137-4a0e-84bc-383c0facf6b9","Type":"ContainerDied","Data":"62d6b06cdc7085c02cfb8f7661419cca91af57b207f114cdb79d2ad156a5ed39"}
Jan 26 13:02:57 crc kubenswrapper[4881]: I0126 13:02:57.454983 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v8wls" event={"ID":"0b52903d-117a-4c80-a478-bfb576c26c00","Type":"ContainerStarted","Data":"9045366d1d6040d82e4ed31d952d9fe4ea2fb78b8e8299c43f5ee51ebb48dcfb"}
Jan 26 13:02:57 crc kubenswrapper[4881]: I0126 13:02:57.467481 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lj92n" event={"ID":"f890218f-7c5f-4227-9f47-68ebd7d7f7ae","Type":"ContainerStarted","Data":"2f14eb2b2593ea2f627fabce1e3449d4615af7b0c77b33a53b0b6624eb9ec9e7"}
Jan 26 13:02:57 crc kubenswrapper[4881]: I0126 13:02:57.484624 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-v8wls" podStartSLOduration=2.350765132 podStartE2EDuration="7.484608552s" podCreationTimestamp="2026-01-26 13:02:50 +0000 UTC" firstStartedPulling="2026-01-26 13:02:51.367207303 +0000 UTC m=+1643.846517339" lastFinishedPulling="2026-01-26 13:02:56.501050723 +0000 UTC m=+1648.980360759" observedRunningTime="2026-01-26 13:02:57.482220514 +0000 UTC m=+1649.961530620" watchObservedRunningTime="2026-01-26 13:02:57.484608552 +0000 UTC m=+1649.963918568"
Jan 26 13:02:58 crc kubenswrapper[4881]: I0126 13:02:58.485641 4881 generic.go:334] "Generic (PLEG): container finished" podID="f890218f-7c5f-4227-9f47-68ebd7d7f7ae" containerID="2f14eb2b2593ea2f627fabce1e3449d4615af7b0c77b33a53b0b6624eb9ec9e7" exitCode=0
Jan 26 13:02:58 crc kubenswrapper[4881]: I0126 13:02:58.485719 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lj92n" event={"ID":"f890218f-7c5f-4227-9f47-68ebd7d7f7ae","Type":"ContainerDied","Data":"2f14eb2b2593ea2f627fabce1e3449d4615af7b0c77b33a53b0b6624eb9ec9e7"}
Jan 26 13:02:58 crc kubenswrapper[4881]: I0126 13:02:58.960202 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-m65pm"
Jan 26 13:02:59 crc kubenswrapper[4881]: I0126 13:02:59.080146 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zmt9n\" (UniqueName: \"kubernetes.io/projected/e7b77227-f137-4a0e-84bc-383c0facf6b9-kube-api-access-zmt9n\") pod \"e7b77227-f137-4a0e-84bc-383c0facf6b9\" (UID: \"e7b77227-f137-4a0e-84bc-383c0facf6b9\") "
Jan 26 13:02:59 crc kubenswrapper[4881]: I0126 13:02:59.080233 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7b77227-f137-4a0e-84bc-383c0facf6b9-combined-ca-bundle\") pod \"e7b77227-f137-4a0e-84bc-383c0facf6b9\" (UID: \"e7b77227-f137-4a0e-84bc-383c0facf6b9\") "
Jan 26 13:02:59 crc kubenswrapper[4881]: I0126 13:02:59.080338 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7b77227-f137-4a0e-84bc-383c0facf6b9-config-data\") pod \"e7b77227-f137-4a0e-84bc-383c0facf6b9\" (UID: \"e7b77227-f137-4a0e-84bc-383c0facf6b9\") "
Jan 26 13:02:59 crc kubenswrapper[4881]: I0126 13:02:59.080416 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e7b77227-f137-4a0e-84bc-383c0facf6b9-scripts\") pod \"e7b77227-f137-4a0e-84bc-383c0facf6b9\" (UID: \"e7b77227-f137-4a0e-84bc-383c0facf6b9\") "
Jan 26 13:02:59 crc kubenswrapper[4881]: I0126 13:02:59.099969 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7b77227-f137-4a0e-84bc-383c0facf6b9-scripts" (OuterVolumeSpecName: "scripts") pod "e7b77227-f137-4a0e-84bc-383c0facf6b9" (UID: "e7b77227-f137-4a0e-84bc-383c0facf6b9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 13:02:59 crc kubenswrapper[4881]: I0126 13:02:59.100016 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7b77227-f137-4a0e-84bc-383c0facf6b9-kube-api-access-zmt9n" (OuterVolumeSpecName: "kube-api-access-zmt9n") pod "e7b77227-f137-4a0e-84bc-383c0facf6b9" (UID: "e7b77227-f137-4a0e-84bc-383c0facf6b9"). InnerVolumeSpecName "kube-api-access-zmt9n". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 13:02:59 crc kubenswrapper[4881]: I0126 13:02:59.112884 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7b77227-f137-4a0e-84bc-383c0facf6b9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e7b77227-f137-4a0e-84bc-383c0facf6b9" (UID: "e7b77227-f137-4a0e-84bc-383c0facf6b9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 13:02:59 crc kubenswrapper[4881]: I0126 13:02:59.118444 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7b77227-f137-4a0e-84bc-383c0facf6b9-config-data" (OuterVolumeSpecName: "config-data") pod "e7b77227-f137-4a0e-84bc-383c0facf6b9" (UID: "e7b77227-f137-4a0e-84bc-383c0facf6b9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 13:02:59 crc kubenswrapper[4881]: I0126 13:02:59.183295 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zmt9n\" (UniqueName: \"kubernetes.io/projected/e7b77227-f137-4a0e-84bc-383c0facf6b9-kube-api-access-zmt9n\") on node \"crc\" DevicePath \"\""
Jan 26 13:02:59 crc kubenswrapper[4881]: I0126 13:02:59.183325 4881 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7b77227-f137-4a0e-84bc-383c0facf6b9-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 13:02:59 crc kubenswrapper[4881]: I0126 13:02:59.183335 4881 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7b77227-f137-4a0e-84bc-383c0facf6b9-config-data\") on node \"crc\" DevicePath \"\""
Jan 26 13:02:59 crc kubenswrapper[4881]: I0126 13:02:59.183343 4881 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e7b77227-f137-4a0e-84bc-383c0facf6b9-scripts\") on node \"crc\" DevicePath \"\""
Jan 26 13:02:59 crc kubenswrapper[4881]: I0126 13:02:59.502964 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-m65pm" event={"ID":"e7b77227-f137-4a0e-84bc-383c0facf6b9","Type":"ContainerDied","Data":"08b69bef506356a064428f143931fda7602e9699414246645cc7942a94d0d8c6"}
Jan 26 13:02:59 crc kubenswrapper[4881]: I0126 13:02:59.503014 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="08b69bef506356a064428f143931fda7602e9699414246645cc7942a94d0d8c6"
Jan 26 13:02:59 crc kubenswrapper[4881]: I0126 13:02:59.503040 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-m65pm"
Jan 26 13:02:59 crc kubenswrapper[4881]: I0126 13:02:59.521984 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lj92n" event={"ID":"f890218f-7c5f-4227-9f47-68ebd7d7f7ae","Type":"ContainerStarted","Data":"ef3bd19bf4411137a07e726b65bdea505d372edfe6752376cdd9b67876f14d1a"}
Jan 26 13:02:59 crc kubenswrapper[4881]: I0126 13:02:59.564662 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-lj92n" podStartSLOduration=4.40841502 podStartE2EDuration="6.564629665s" podCreationTimestamp="2026-01-26 13:02:53 +0000 UTC" firstStartedPulling="2026-01-26 13:02:56.497971679 +0000 UTC m=+1648.977281745" lastFinishedPulling="2026-01-26 13:02:58.654186324 +0000 UTC m=+1651.133496390" observedRunningTime="2026-01-26 13:02:59.557982114 +0000 UTC m=+1652.037292210" watchObservedRunningTime="2026-01-26 13:02:59.564629665 +0000 UTC m=+1652.043939721"
Jan 26 13:02:59 crc kubenswrapper[4881]: I0126 13:02:59.673592 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Jan 26 13:02:59 crc kubenswrapper[4881]: I0126 13:02:59.673818 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="843eec84-8a03-49b4-beda-773a76cecdbb" containerName="nova-api-log" containerID="cri-o://0c743d52268a9984f5f5877c141d19fb81c09fd844112944723ca69fda2b26d6" gracePeriod=30
Jan 26 13:02:59 crc kubenswrapper[4881]: I0126 13:02:59.674827 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="843eec84-8a03-49b4-beda-773a76cecdbb" containerName="nova-api-api" containerID="cri-o://7e9039fc9ead27df963be9cf51e3499754654dd3480af70a00a4b326270d8d1d" gracePeriod=30
Jan 26 13:02:59 crc kubenswrapper[4881]: I0126 13:02:59.723844 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 26 13:02:59 crc kubenswrapper[4881]: I0126 13:02:59.724048 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="4163a60f-62c2-4edb-b675-b25e408ca3bd" containerName="nova-scheduler-scheduler" containerID="cri-o://db1f97f43c16a4f205bb068a41dd646ef20ad228b0934c96df786baa0a8514a9" gracePeriod=30
Jan 26 13:02:59 crc kubenswrapper[4881]: I0126 13:02:59.750906 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Jan 26 13:02:59 crc kubenswrapper[4881]: I0126 13:02:59.751127 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="563a0ca1-a6c3-4089-88a7-f23423418751" containerName="nova-metadata-log" containerID="cri-o://e21f7e0ddebc87b2792f359de3eab8b2d9c4b0405c614c1f3c0335babfdb299d" gracePeriod=30
Jan 26 13:02:59 crc kubenswrapper[4881]: I0126 13:02:59.751257 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="563a0ca1-a6c3-4089-88a7-f23423418751" containerName="nova-metadata-metadata" containerID="cri-o://08379d1db631e6433d57b4018a96dfced7f0917434da82daad32ca9a42fa3c27" gracePeriod=30
Jan 26 13:03:00 crc kubenswrapper[4881]: E0126 13:03:00.211385 4881 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="db1f97f43c16a4f205bb068a41dd646ef20ad228b0934c96df786baa0a8514a9" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Jan 26 13:03:00 crc kubenswrapper[4881]: E0126 13:03:00.213137 4881 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="db1f97f43c16a4f205bb068a41dd646ef20ad228b0934c96df786baa0a8514a9" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Jan 26 13:03:00 crc kubenswrapper[4881]: E0126 13:03:00.226643 4881 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="db1f97f43c16a4f205bb068a41dd646ef20ad228b0934c96df786baa0a8514a9" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Jan 26 13:03:00 crc kubenswrapper[4881]: E0126 13:03:00.226708 4881 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="4163a60f-62c2-4edb-b675-b25e408ca3bd" containerName="nova-scheduler-scheduler"
Jan 26 13:03:00 crc kubenswrapper[4881]: I0126 13:03:00.521866 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-v8wls"
Jan 26 13:03:00 crc kubenswrapper[4881]: I0126 13:03:00.523174 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-v8wls"
Jan 26 13:03:00 crc kubenswrapper[4881]: I0126 13:03:00.535483 4881 generic.go:334] "Generic (PLEG): container finished" podID="563a0ca1-a6c3-4089-88a7-f23423418751" containerID="e21f7e0ddebc87b2792f359de3eab8b2d9c4b0405c614c1f3c0335babfdb299d" exitCode=143
Jan 26 13:03:00 crc kubenswrapper[4881]: I0126 13:03:00.535669 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"563a0ca1-a6c3-4089-88a7-f23423418751","Type":"ContainerDied","Data":"e21f7e0ddebc87b2792f359de3eab8b2d9c4b0405c614c1f3c0335babfdb299d"}
Jan 26 13:03:00 crc kubenswrapper[4881]: I0126 13:03:00.539233 4881 generic.go:334] "Generic (PLEG): container finished" podID="843eec84-8a03-49b4-beda-773a76cecdbb" containerID="0c743d52268a9984f5f5877c141d19fb81c09fd844112944723ca69fda2b26d6" exitCode=143
Jan 26 13:03:00 crc kubenswrapper[4881]: I0126 13:03:00.539345 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"843eec84-8a03-49b4-beda-773a76cecdbb","Type":"ContainerDied","Data":"0c743d52268a9984f5f5877c141d19fb81c09fd844112944723ca69fda2b26d6"}
Jan 26 13:03:00 crc kubenswrapper[4881]: I0126 13:03:00.630272 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="563a0ca1-a6c3-4089-88a7-f23423418751" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.217:8775/\": read tcp 10.217.0.2:50264->10.217.0.217:8775: read: connection reset by peer"
Jan 26 13:03:00 crc kubenswrapper[4881]: I0126 13:03:00.630304 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="563a0ca1-a6c3-4089-88a7-f23423418751" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.217:8775/\": read tcp 10.217.0.2:50266->10.217.0.217:8775: read: connection reset by peer"
Jan 26 13:03:01 crc kubenswrapper[4881]: I0126 13:03:01.067420 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 26 13:03:01 crc kubenswrapper[4881]: I0126 13:03:01.073189 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 26 13:03:01 crc kubenswrapper[4881]: I0126 13:03:01.132497 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fgw4j\" (UniqueName: \"kubernetes.io/projected/563a0ca1-a6c3-4089-88a7-f23423418751-kube-api-access-fgw4j\") pod \"563a0ca1-a6c3-4089-88a7-f23423418751\" (UID: \"563a0ca1-a6c3-4089-88a7-f23423418751\") "
Jan 26 13:03:01 crc kubenswrapper[4881]: I0126 13:03:01.132574 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/563a0ca1-a6c3-4089-88a7-f23423418751-config-data\") pod \"563a0ca1-a6c3-4089-88a7-f23423418751\" (UID: \"563a0ca1-a6c3-4089-88a7-f23423418751\") "
Jan 26 13:03:01 crc kubenswrapper[4881]: I0126 13:03:01.132593 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/843eec84-8a03-49b4-beda-773a76cecdbb-combined-ca-bundle\") pod \"843eec84-8a03-49b4-beda-773a76cecdbb\" (UID: \"843eec84-8a03-49b4-beda-773a76cecdbb\") "
Jan 26 13:03:01 crc kubenswrapper[4881]: I0126 13:03:01.132665 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/843eec84-8a03-49b4-beda-773a76cecdbb-logs\") pod \"843eec84-8a03-49b4-beda-773a76cecdbb\" (UID: \"843eec84-8a03-49b4-beda-773a76cecdbb\") "
Jan 26 13:03:01 crc kubenswrapper[4881]: I0126 13:03:01.132725 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/843eec84-8a03-49b4-beda-773a76cecdbb-public-tls-certs\") pod \"843eec84-8a03-49b4-beda-773a76cecdbb\" (UID: \"843eec84-8a03-49b4-beda-773a76cecdbb\") "
Jan 26 13:03:01 crc kubenswrapper[4881]: I0126 13:03:01.132753 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t8sf2\" (UniqueName: \"kubernetes.io/projected/843eec84-8a03-49b4-beda-773a76cecdbb-kube-api-access-t8sf2\") pod \"843eec84-8a03-49b4-beda-773a76cecdbb\" (UID: \"843eec84-8a03-49b4-beda-773a76cecdbb\") "
Jan 26 13:03:01 crc kubenswrapper[4881]: I0126 13:03:01.132850 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/563a0ca1-a6c3-4089-88a7-f23423418751-logs\") pod \"563a0ca1-a6c3-4089-88a7-f23423418751\" (UID: \"563a0ca1-a6c3-4089-88a7-f23423418751\") "
Jan 26 13:03:01 crc kubenswrapper[4881]: I0126 13:03:01.132876 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/843eec84-8a03-49b4-beda-773a76cecdbb-config-data\") pod \"843eec84-8a03-49b4-beda-773a76cecdbb\" (UID: \"843eec84-8a03-49b4-beda-773a76cecdbb\") "
Jan 26 13:03:01 crc kubenswrapper[4881]: I0126 13:03:01.132941 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/563a0ca1-a6c3-4089-88a7-f23423418751-nova-metadata-tls-certs\") pod \"563a0ca1-a6c3-4089-88a7-f23423418751\" (UID: \"563a0ca1-a6c3-4089-88a7-f23423418751\") "
Jan 26 13:03:01 crc kubenswrapper[4881]: I0126 13:03:01.132991 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/563a0ca1-a6c3-4089-88a7-f23423418751-combined-ca-bundle\") pod \"563a0ca1-a6c3-4089-88a7-f23423418751\" (UID: \"563a0ca1-a6c3-4089-88a7-f23423418751\") "
Jan 26 13:03:01 crc kubenswrapper[4881]: I0126 13:03:01.133041 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/843eec84-8a03-49b4-beda-773a76cecdbb-internal-tls-certs\") pod \"843eec84-8a03-49b4-beda-773a76cecdbb\" (UID: \"843eec84-8a03-49b4-beda-773a76cecdbb\") "
Jan 26 13:03:01 crc kubenswrapper[4881]: I0126 13:03:01.133197 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/843eec84-8a03-49b4-beda-773a76cecdbb-logs" (OuterVolumeSpecName: "logs") pod "843eec84-8a03-49b4-beda-773a76cecdbb" (UID: "843eec84-8a03-49b4-beda-773a76cecdbb"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 13:03:01 crc kubenswrapper[4881]: I0126 13:03:01.133544 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/563a0ca1-a6c3-4089-88a7-f23423418751-logs" (OuterVolumeSpecName: "logs") pod "563a0ca1-a6c3-4089-88a7-f23423418751" (UID: "563a0ca1-a6c3-4089-88a7-f23423418751"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 13:03:01 crc kubenswrapper[4881]: I0126 13:03:01.133560 4881 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/843eec84-8a03-49b4-beda-773a76cecdbb-logs\") on node \"crc\" DevicePath \"\""
Jan 26 13:03:01 crc kubenswrapper[4881]: I0126 13:03:01.140403 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/843eec84-8a03-49b4-beda-773a76cecdbb-kube-api-access-t8sf2" (OuterVolumeSpecName: "kube-api-access-t8sf2") pod "843eec84-8a03-49b4-beda-773a76cecdbb" (UID: "843eec84-8a03-49b4-beda-773a76cecdbb"). InnerVolumeSpecName "kube-api-access-t8sf2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 13:03:01 crc kubenswrapper[4881]: I0126 13:03:01.168605 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/563a0ca1-a6c3-4089-88a7-f23423418751-kube-api-access-fgw4j" (OuterVolumeSpecName: "kube-api-access-fgw4j") pod "563a0ca1-a6c3-4089-88a7-f23423418751" (UID: "563a0ca1-a6c3-4089-88a7-f23423418751"). InnerVolumeSpecName "kube-api-access-fgw4j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 13:03:01 crc kubenswrapper[4881]: I0126 13:03:01.212747 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/563a0ca1-a6c3-4089-88a7-f23423418751-config-data" (OuterVolumeSpecName: "config-data") pod "563a0ca1-a6c3-4089-88a7-f23423418751" (UID: "563a0ca1-a6c3-4089-88a7-f23423418751"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 13:03:01 crc kubenswrapper[4881]: I0126 13:03:01.217800 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/843eec84-8a03-49b4-beda-773a76cecdbb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "843eec84-8a03-49b4-beda-773a76cecdbb" (UID: "843eec84-8a03-49b4-beda-773a76cecdbb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 13:03:01 crc kubenswrapper[4881]: I0126 13:03:01.229220 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/563a0ca1-a6c3-4089-88a7-f23423418751-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "563a0ca1-a6c3-4089-88a7-f23423418751" (UID: "563a0ca1-a6c3-4089-88a7-f23423418751"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 13:03:01 crc kubenswrapper[4881]: I0126 13:03:01.242007 4881 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/563a0ca1-a6c3-4089-88a7-f23423418751-logs\") on node \"crc\" DevicePath \"\""
Jan 26 13:03:01 crc kubenswrapper[4881]: I0126 13:03:01.242052 4881 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/563a0ca1-a6c3-4089-88a7-f23423418751-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 13:03:01 crc kubenswrapper[4881]: I0126 13:03:01.242066 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fgw4j\" (UniqueName: \"kubernetes.io/projected/563a0ca1-a6c3-4089-88a7-f23423418751-kube-api-access-fgw4j\") on node \"crc\" DevicePath \"\""
Jan 26 13:03:01 crc kubenswrapper[4881]: I0126 13:03:01.242078 4881 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/563a0ca1-a6c3-4089-88a7-f23423418751-config-data\") on node \"crc\" DevicePath \"\""
Jan 26 13:03:01 crc kubenswrapper[4881]: I0126 13:03:01.242090 4881 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/843eec84-8a03-49b4-beda-773a76cecdbb-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 13:03:01 crc kubenswrapper[4881]: I0126 13:03:01.242100 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t8sf2\" (UniqueName: \"kubernetes.io/projected/843eec84-8a03-49b4-beda-773a76cecdbb-kube-api-access-t8sf2\") on node \"crc\" DevicePath \"\""
Jan 26 13:03:01 crc kubenswrapper[4881]: I0126 13:03:01.252701 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/563a0ca1-a6c3-4089-88a7-f23423418751-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "563a0ca1-a6c3-4089-88a7-f23423418751" (UID: "563a0ca1-a6c3-4089-88a7-f23423418751"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 13:03:01 crc kubenswrapper[4881]: I0126 13:03:01.254643 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/843eec84-8a03-49b4-beda-773a76cecdbb-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "843eec84-8a03-49b4-beda-773a76cecdbb" (UID: "843eec84-8a03-49b4-beda-773a76cecdbb"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 13:03:01 crc kubenswrapper[4881]: I0126 13:03:01.273113 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/843eec84-8a03-49b4-beda-773a76cecdbb-config-data" (OuterVolumeSpecName: "config-data") pod "843eec84-8a03-49b4-beda-773a76cecdbb" (UID: "843eec84-8a03-49b4-beda-773a76cecdbb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 13:03:01 crc kubenswrapper[4881]: I0126 13:03:01.276568 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/843eec84-8a03-49b4-beda-773a76cecdbb-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "843eec84-8a03-49b4-beda-773a76cecdbb" (UID: "843eec84-8a03-49b4-beda-773a76cecdbb"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 13:03:01 crc kubenswrapper[4881]: I0126 13:03:01.344403 4881 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/843eec84-8a03-49b4-beda-773a76cecdbb-config-data\") on node \"crc\" DevicePath \"\""
Jan 26 13:03:01 crc kubenswrapper[4881]: I0126 13:03:01.344437 4881 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/563a0ca1-a6c3-4089-88a7-f23423418751-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 26 13:03:01 crc kubenswrapper[4881]: I0126 13:03:01.344449 4881 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/843eec84-8a03-49b4-beda-773a76cecdbb-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 26 13:03:01 crc kubenswrapper[4881]: I0126 13:03:01.344459 4881 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/843eec84-8a03-49b4-beda-773a76cecdbb-public-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 26 13:03:01 crc kubenswrapper[4881]: I0126 13:03:01.567060 4881 generic.go:334] "Generic (PLEG): container finished" podID="563a0ca1-a6c3-4089-88a7-f23423418751" containerID="08379d1db631e6433d57b4018a96dfced7f0917434da82daad32ca9a42fa3c27" exitCode=0
Jan 26 13:03:01 crc kubenswrapper[4881]: I0126 13:03:01.567181 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 26 13:03:01 crc kubenswrapper[4881]: I0126 13:03:01.567193 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"563a0ca1-a6c3-4089-88a7-f23423418751","Type":"ContainerDied","Data":"08379d1db631e6433d57b4018a96dfced7f0917434da82daad32ca9a42fa3c27"}
Jan 26 13:03:01 crc kubenswrapper[4881]: I0126 13:03:01.567270 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"563a0ca1-a6c3-4089-88a7-f23423418751","Type":"ContainerDied","Data":"b2f47d2ce0b4b2ae93bf305936b7af116e466e17637c413cade9835d444ff90d"}
Jan 26 13:03:01 crc kubenswrapper[4881]: I0126 13:03:01.567322 4881 scope.go:117] "RemoveContainer" containerID="08379d1db631e6433d57b4018a96dfced7f0917434da82daad32ca9a42fa3c27"
Jan 26 13:03:01 crc kubenswrapper[4881]: I0126 13:03:01.572597 4881 generic.go:334] "Generic (PLEG): container finished" podID="843eec84-8a03-49b4-beda-773a76cecdbb" containerID="7e9039fc9ead27df963be9cf51e3499754654dd3480af70a00a4b326270d8d1d" exitCode=0
Jan 26 13:03:01 crc kubenswrapper[4881]: I0126 13:03:01.573140 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 26 13:03:01 crc kubenswrapper[4881]: I0126 13:03:01.574025 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"843eec84-8a03-49b4-beda-773a76cecdbb","Type":"ContainerDied","Data":"7e9039fc9ead27df963be9cf51e3499754654dd3480af70a00a4b326270d8d1d"}
Jan 26 13:03:01 crc kubenswrapper[4881]: I0126 13:03:01.574076 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"843eec84-8a03-49b4-beda-773a76cecdbb","Type":"ContainerDied","Data":"962c07c2db49f9789e1f6e722fc4f308e383c0679d1c35060ab9fe2e416766b0"}
Jan 26 13:03:01 crc kubenswrapper[4881]: I0126 13:03:01.581294 4881 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-v8wls" podUID="0b52903d-117a-4c80-a478-bfb576c26c00" containerName="registry-server" probeResult="failure" output=<
Jan 26 13:03:01 crc kubenswrapper[4881]: timeout: failed to connect service ":50051" within 1s
Jan 26 13:03:01 crc kubenswrapper[4881]: >
Jan 26 13:03:01 crc kubenswrapper[4881]: I0126 13:03:01.621095 4881 scope.go:117] "RemoveContainer" containerID="e21f7e0ddebc87b2792f359de3eab8b2d9c4b0405c614c1f3c0335babfdb299d"
Jan 26 13:03:01 crc kubenswrapper[4881]: I0126 13:03:01.652066 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Jan 26 13:03:01 crc kubenswrapper[4881]: I0126 13:03:01.652803 4881 scope.go:117] "RemoveContainer" containerID="08379d1db631e6433d57b4018a96dfced7f0917434da82daad32ca9a42fa3c27"
Jan 26 13:03:01 crc kubenswrapper[4881]: E0126 13:03:01.654690 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"08379d1db631e6433d57b4018a96dfced7f0917434da82daad32ca9a42fa3c27\": container with ID starting with 08379d1db631e6433d57b4018a96dfced7f0917434da82daad32ca9a42fa3c27 not found: ID does not exist" containerID="08379d1db631e6433d57b4018a96dfced7f0917434da82daad32ca9a42fa3c27"
Jan 26 13:03:01 crc kubenswrapper[4881]: I0126 13:03:01.654728 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08379d1db631e6433d57b4018a96dfced7f0917434da82daad32ca9a42fa3c27"} err="failed to get container status \"08379d1db631e6433d57b4018a96dfced7f0917434da82daad32ca9a42fa3c27\": rpc error: code = NotFound desc = could not find container \"08379d1db631e6433d57b4018a96dfced7f0917434da82daad32ca9a42fa3c27\": container with ID starting with 08379d1db631e6433d57b4018a96dfced7f0917434da82daad32ca9a42fa3c27 not found: ID does not exist"
Jan 26 13:03:01 crc kubenswrapper[4881]: I0126 13:03:01.654753 4881 scope.go:117] "RemoveContainer" containerID="e21f7e0ddebc87b2792f359de3eab8b2d9c4b0405c614c1f3c0335babfdb299d"
Jan 26 13:03:01 crc kubenswrapper[4881]: E0126 13:03:01.658112 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e21f7e0ddebc87b2792f359de3eab8b2d9c4b0405c614c1f3c0335babfdb299d\": container with ID starting with e21f7e0ddebc87b2792f359de3eab8b2d9c4b0405c614c1f3c0335babfdb299d not found: ID does not exist" containerID="e21f7e0ddebc87b2792f359de3eab8b2d9c4b0405c614c1f3c0335babfdb299d"
Jan 26 13:03:01 crc kubenswrapper[4881]: I0126 13:03:01.658143 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e21f7e0ddebc87b2792f359de3eab8b2d9c4b0405c614c1f3c0335babfdb299d"} err="failed to get container status \"e21f7e0ddebc87b2792f359de3eab8b2d9c4b0405c614c1f3c0335babfdb299d\": rpc error: code = NotFound desc = could not find container \"e21f7e0ddebc87b2792f359de3eab8b2d9c4b0405c614c1f3c0335babfdb299d\": container with ID starting with e21f7e0ddebc87b2792f359de3eab8b2d9c4b0405c614c1f3c0335babfdb299d not found: ID does not exist"
Jan 26 13:03:01 crc kubenswrapper[4881]: I0126 13:03:01.658161 4881 scope.go:117] "RemoveContainer" containerID="7e9039fc9ead27df963be9cf51e3499754654dd3480af70a00a4b326270d8d1d"
Jan 26 13:03:01 crc kubenswrapper[4881]: I0126 13:03:01.680784 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Jan 26 13:03:01 crc kubenswrapper[4881]: I0126 13:03:01.706819 4881 scope.go:117] "RemoveContainer" containerID="0c743d52268a9984f5f5877c141d19fb81c09fd844112944723ca69fda2b26d6"
Jan 26 13:03:01 crc kubenswrapper[4881]: I0126 13:03:01.721899 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Jan 26 13:03:01 crc kubenswrapper[4881]: I0126 13:03:01.740009 4881 scope.go:117] "RemoveContainer" containerID="7e9039fc9ead27df963be9cf51e3499754654dd3480af70a00a4b326270d8d1d"
Jan 26 13:03:01 crc kubenswrapper[4881]: E0126 13:03:01.741502 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e9039fc9ead27df963be9cf51e3499754654dd3480af70a00a4b326270d8d1d\": container with ID starting with 7e9039fc9ead27df963be9cf51e3499754654dd3480af70a00a4b326270d8d1d not found: ID does not exist" containerID="7e9039fc9ead27df963be9cf51e3499754654dd3480af70a00a4b326270d8d1d"
Jan 26 13:03:01 crc kubenswrapper[4881]: I0126 13:03:01.741561 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e9039fc9ead27df963be9cf51e3499754654dd3480af70a00a4b326270d8d1d"} err="failed to get container status \"7e9039fc9ead27df963be9cf51e3499754654dd3480af70a00a4b326270d8d1d\": rpc error: code = NotFound desc = could not find container \"7e9039fc9ead27df963be9cf51e3499754654dd3480af70a00a4b326270d8d1d\": container with ID starting with 7e9039fc9ead27df963be9cf51e3499754654dd3480af70a00a4b326270d8d1d not found: ID does not exist"
Jan 26 13:03:01 crc kubenswrapper[4881]: I0126 13:03:01.741582 4881 scope.go:117] "RemoveContainer" containerID="0c743d52268a9984f5f5877c141d19fb81c09fd844112944723ca69fda2b26d6"
Jan 26 13:03:01 crc kubenswrapper[4881]: E0126 13:03:01.741997 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0c743d52268a9984f5f5877c141d19fb81c09fd844112944723ca69fda2b26d6\": container with ID starting with 0c743d52268a9984f5f5877c141d19fb81c09fd844112944723ca69fda2b26d6 not found: ID does not exist" containerID="0c743d52268a9984f5f5877c141d19fb81c09fd844112944723ca69fda2b26d6"
Jan 26 13:03:01 crc kubenswrapper[4881]: I0126 13:03:01.742067 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c743d52268a9984f5f5877c141d19fb81c09fd844112944723ca69fda2b26d6"} err="failed to get container status \"0c743d52268a9984f5f5877c141d19fb81c09fd844112944723ca69fda2b26d6\": rpc error: code = NotFound desc = could not find container \"0c743d52268a9984f5f5877c141d19fb81c09fd844112944723ca69fda2b26d6\": container with ID starting with 0c743d52268a9984f5f5877c141d19fb81c09fd844112944723ca69fda2b26d6 not found: ID does not exist"
Jan 26 13:03:01 crc kubenswrapper[4881]: I0126 13:03:01.747341 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Jan 26 13:03:01 crc kubenswrapper[4881]: I0126 13:03:01.778830 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Jan 26 13:03:01 crc kubenswrapper[4881]: E0126 13:03:01.779424 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7b77227-f137-4a0e-84bc-383c0facf6b9" containerName="nova-manage"
Jan 26 13:03:01 crc kubenswrapper[4881]: I0126 13:03:01.782102 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7b77227-f137-4a0e-84bc-383c0facf6b9" containerName="nova-manage"
Jan 26 13:03:01 crc kubenswrapper[4881]: E0126 13:03:01.782179 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="843eec84-8a03-49b4-beda-773a76cecdbb" containerName="nova-api-api"
Jan 26 13:03:01 crc kubenswrapper[4881]: I0126 13:03:01.782226 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="843eec84-8a03-49b4-beda-773a76cecdbb" containerName="nova-api-api"
Jan 26 13:03:01 crc kubenswrapper[4881]: E0126 13:03:01.782294 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="563a0ca1-a6c3-4089-88a7-f23423418751" containerName="nova-metadata-metadata"
Jan 26 13:03:01 crc kubenswrapper[4881]: I0126 13:03:01.782338 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="563a0ca1-a6c3-4089-88a7-f23423418751" containerName="nova-metadata-metadata"
Jan 26 13:03:01 crc kubenswrapper[4881]: E0126 13:03:01.782438 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="843eec84-8a03-49b4-beda-773a76cecdbb" containerName="nova-api-log"
Jan 26 13:03:01 crc kubenswrapper[4881]: I0126 13:03:01.782496 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="843eec84-8a03-49b4-beda-773a76cecdbb" containerName="nova-api-log"
Jan 26 13:03:01 crc kubenswrapper[4881]: E0126 13:03:01.782624 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="563a0ca1-a6c3-4089-88a7-f23423418751" containerName="nova-metadata-log"
Jan 26 13:03:01 crc kubenswrapper[4881]: I0126 13:03:01.782688 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="563a0ca1-a6c3-4089-88a7-f23423418751" containerName="nova-metadata-log"
Jan 26 13:03:01 crc kubenswrapper[4881]: I0126 13:03:01.782969 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="843eec84-8a03-49b4-beda-773a76cecdbb" containerName="nova-api-api"
Jan 26 13:03:01 crc kubenswrapper[4881]: I0126 13:03:01.783051 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7b77227-f137-4a0e-84bc-383c0facf6b9" containerName="nova-manage"
Jan 26 13:03:01 crc kubenswrapper[4881]: I0126 13:03:01.783133 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="563a0ca1-a6c3-4089-88a7-f23423418751" containerName="nova-metadata-log"
Jan 26 13:03:01 crc kubenswrapper[4881]: I0126 13:03:01.783206 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="843eec84-8a03-49b4-beda-773a76cecdbb" containerName="nova-api-log"
Jan 26 13:03:01 crc kubenswrapper[4881]: I0126 13:03:01.783305 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="563a0ca1-a6c3-4089-88a7-f23423418751" containerName="nova-metadata-metadata"
Jan 26 13:03:01 crc kubenswrapper[4881]: I0126 13:03:01.785077 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 26 13:03:01 crc kubenswrapper[4881]: I0126 13:03:01.787654 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Jan 26 13:03:01 crc kubenswrapper[4881]: I0126 13:03:01.787996 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Jan 26 13:03:01 crc kubenswrapper[4881]: I0126 13:03:01.789484 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Jan 26 13:03:01 crc kubenswrapper[4881]: I0126 13:03:01.791267 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 26 13:03:01 crc kubenswrapper[4881]: I0126 13:03:01.792608 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Jan 26 13:03:01 crc kubenswrapper[4881]: I0126 13:03:01.792737 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc"
Jan 26 13:03:01 crc kubenswrapper[4881]: I0126 13:03:01.793880 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc"
Jan 26 13:03:01 crc kubenswrapper[4881]: I0126 13:03:01.805079 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 26 13:03:01 crc kubenswrapper[4881]: I0126 13:03:01.820685 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 26 13:03:01 crc kubenswrapper[4881]: I0126 13:03:01.861309 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9aa3762a-f6f1-4f99-aa95-c22f2aaf51ae-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"9aa3762a-f6f1-4f99-aa95-c22f2aaf51ae\") " pod="openstack/nova-metadata-0"
Jan 26 13:03:01 crc kubenswrapper[4881]: I0126 13:03:01.861389 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee44b824-a50b-4355-ab08-09d831323258-public-tls-certs\") pod \"nova-api-0\" (UID: \"ee44b824-a50b-4355-ab08-09d831323258\") " pod="openstack/nova-api-0"
Jan 26 13:03:01 crc kubenswrapper[4881]: I0126 13:03:01.861415 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9aa3762a-f6f1-4f99-aa95-c22f2aaf51ae-logs\") pod \"nova-metadata-0\" (UID: \"9aa3762a-f6f1-4f99-aa95-c22f2aaf51ae\") " pod="openstack/nova-metadata-0"
Jan 26 13:03:01 crc kubenswrapper[4881]: I0126 13:03:01.861441 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxpsm\" (UniqueName: \"kubernetes.io/projected/ee44b824-a50b-4355-ab08-09d831323258-kube-api-access-nxpsm\") pod \"nova-api-0\" (UID: \"ee44b824-a50b-4355-ab08-09d831323258\") " pod="openstack/nova-api-0"
Jan 26 13:03:01 crc kubenswrapper[4881]: I0126 13:03:01.861476 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cx4dc\" (UniqueName: \"kubernetes.io/projected/9aa3762a-f6f1-4f99-aa95-c22f2aaf51ae-kube-api-access-cx4dc\") pod \"nova-metadata-0\" (UID: \"9aa3762a-f6f1-4f99-aa95-c22f2aaf51ae\") " pod="openstack/nova-metadata-0"
Jan 26 13:03:01 crc kubenswrapper[4881]: I0126 13:03:01.861503 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee44b824-a50b-4355-ab08-09d831323258-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ee44b824-a50b-4355-ab08-09d831323258\") " pod="openstack/nova-api-0" Jan 26 13:03:01 crc kubenswrapper[4881]: I0126 13:03:01.861542 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee44b824-a50b-4355-ab08-09d831323258-config-data\") pod \"nova-api-0\" (UID: \"ee44b824-a50b-4355-ab08-09d831323258\") " pod="openstack/nova-api-0" Jan 26 13:03:01 crc kubenswrapper[4881]: I0126 13:03:01.861666 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ee44b824-a50b-4355-ab08-09d831323258-logs\") pod \"nova-api-0\" (UID: \"ee44b824-a50b-4355-ab08-09d831323258\") " pod="openstack/nova-api-0" Jan 26 13:03:01 crc kubenswrapper[4881]: I0126 13:03:01.861708 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/9aa3762a-f6f1-4f99-aa95-c22f2aaf51ae-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"9aa3762a-f6f1-4f99-aa95-c22f2aaf51ae\") " pod="openstack/nova-metadata-0" Jan 26 13:03:01 crc kubenswrapper[4881]: I0126 13:03:01.861736 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee44b824-a50b-4355-ab08-09d831323258-internal-tls-certs\") pod \"nova-api-0\" (UID: \"ee44b824-a50b-4355-ab08-09d831323258\") " pod="openstack/nova-api-0" Jan 26 13:03:01 crc kubenswrapper[4881]: I0126 13:03:01.861781 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9aa3762a-f6f1-4f99-aa95-c22f2aaf51ae-config-data\") pod \"nova-metadata-0\" (UID: \"9aa3762a-f6f1-4f99-aa95-c22f2aaf51ae\") " pod="openstack/nova-metadata-0" Jan 26 13:03:01 crc kubenswrapper[4881]: I0126 13:03:01.963735 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ee44b824-a50b-4355-ab08-09d831323258-logs\") pod \"nova-api-0\" (UID: \"ee44b824-a50b-4355-ab08-09d831323258\") " pod="openstack/nova-api-0" Jan 26 13:03:01 crc kubenswrapper[4881]: I0126 13:03:01.963793 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/9aa3762a-f6f1-4f99-aa95-c22f2aaf51ae-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"9aa3762a-f6f1-4f99-aa95-c22f2aaf51ae\") " pod="openstack/nova-metadata-0" Jan 26 13:03:01 crc kubenswrapper[4881]: I0126 13:03:01.963997 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee44b824-a50b-4355-ab08-09d831323258-internal-tls-certs\") pod \"nova-api-0\" (UID: \"ee44b824-a50b-4355-ab08-09d831323258\") " pod="openstack/nova-api-0" Jan 26 13:03:01 crc kubenswrapper[4881]: I0126 13:03:01.964027 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9aa3762a-f6f1-4f99-aa95-c22f2aaf51ae-config-data\") pod \"nova-metadata-0\" (UID: \"9aa3762a-f6f1-4f99-aa95-c22f2aaf51ae\") " pod="openstack/nova-metadata-0" Jan 26 13:03:01 crc 
kubenswrapper[4881]: I0126 13:03:01.964060 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9aa3762a-f6f1-4f99-aa95-c22f2aaf51ae-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"9aa3762a-f6f1-4f99-aa95-c22f2aaf51ae\") " pod="openstack/nova-metadata-0" Jan 26 13:03:01 crc kubenswrapper[4881]: I0126 13:03:01.964091 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee44b824-a50b-4355-ab08-09d831323258-public-tls-certs\") pod \"nova-api-0\" (UID: \"ee44b824-a50b-4355-ab08-09d831323258\") " pod="openstack/nova-api-0" Jan 26 13:03:01 crc kubenswrapper[4881]: I0126 13:03:01.964108 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9aa3762a-f6f1-4f99-aa95-c22f2aaf51ae-logs\") pod \"nova-metadata-0\" (UID: \"9aa3762a-f6f1-4f99-aa95-c22f2aaf51ae\") " pod="openstack/nova-metadata-0" Jan 26 13:03:01 crc kubenswrapper[4881]: I0126 13:03:01.964126 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxpsm\" (UniqueName: \"kubernetes.io/projected/ee44b824-a50b-4355-ab08-09d831323258-kube-api-access-nxpsm\") pod \"nova-api-0\" (UID: \"ee44b824-a50b-4355-ab08-09d831323258\") " pod="openstack/nova-api-0" Jan 26 13:03:01 crc kubenswrapper[4881]: I0126 13:03:01.964151 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cx4dc\" (UniqueName: \"kubernetes.io/projected/9aa3762a-f6f1-4f99-aa95-c22f2aaf51ae-kube-api-access-cx4dc\") pod \"nova-metadata-0\" (UID: \"9aa3762a-f6f1-4f99-aa95-c22f2aaf51ae\") " pod="openstack/nova-metadata-0" Jan 26 13:03:01 crc kubenswrapper[4881]: I0126 13:03:01.964167 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee44b824-a50b-4355-ab08-09d831323258-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ee44b824-a50b-4355-ab08-09d831323258\") " pod="openstack/nova-api-0" Jan 26 13:03:01 crc kubenswrapper[4881]: I0126 13:03:01.964183 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee44b824-a50b-4355-ab08-09d831323258-config-data\") pod \"nova-api-0\" (UID: \"ee44b824-a50b-4355-ab08-09d831323258\") " pod="openstack/nova-api-0" Jan 26 13:03:01 crc kubenswrapper[4881]: I0126 13:03:01.966123 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ee44b824-a50b-4355-ab08-09d831323258-logs\") pod \"nova-api-0\" (UID: \"ee44b824-a50b-4355-ab08-09d831323258\") " pod="openstack/nova-api-0" Jan 26 13:03:01 crc kubenswrapper[4881]: I0126 13:03:01.967859 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9aa3762a-f6f1-4f99-aa95-c22f2aaf51ae-logs\") pod \"nova-metadata-0\" (UID: \"9aa3762a-f6f1-4f99-aa95-c22f2aaf51ae\") " pod="openstack/nova-metadata-0" Jan 26 13:03:01 crc kubenswrapper[4881]: I0126 13:03:01.969015 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee44b824-a50b-4355-ab08-09d831323258-config-data\") pod \"nova-api-0\" (UID: \"ee44b824-a50b-4355-ab08-09d831323258\") " pod="openstack/nova-api-0" Jan 26 13:03:01 crc kubenswrapper[4881]: I0126 13:03:01.969192 4881 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9aa3762a-f6f1-4f99-aa95-c22f2aaf51ae-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"9aa3762a-f6f1-4f99-aa95-c22f2aaf51ae\") " pod="openstack/nova-metadata-0" Jan 26 13:03:01 crc kubenswrapper[4881]: I0126 13:03:01.970147 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/9aa3762a-f6f1-4f99-aa95-c22f2aaf51ae-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"9aa3762a-f6f1-4f99-aa95-c22f2aaf51ae\") " pod="openstack/nova-metadata-0" Jan 26 13:03:01 crc kubenswrapper[4881]: I0126 13:03:01.971414 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee44b824-a50b-4355-ab08-09d831323258-public-tls-certs\") pod \"nova-api-0\" (UID: \"ee44b824-a50b-4355-ab08-09d831323258\") " pod="openstack/nova-api-0" Jan 26 13:03:01 crc kubenswrapper[4881]: I0126 13:03:01.971577 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9aa3762a-f6f1-4f99-aa95-c22f2aaf51ae-config-data\") pod \"nova-metadata-0\" (UID: \"9aa3762a-f6f1-4f99-aa95-c22f2aaf51ae\") " pod="openstack/nova-metadata-0" Jan 26 13:03:01 crc kubenswrapper[4881]: I0126 13:03:01.972500 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee44b824-a50b-4355-ab08-09d831323258-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ee44b824-a50b-4355-ab08-09d831323258\") " pod="openstack/nova-api-0" Jan 26 13:03:01 crc kubenswrapper[4881]: I0126 13:03:01.981045 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee44b824-a50b-4355-ab08-09d831323258-internal-tls-certs\") pod \"nova-api-0\" (UID: \"ee44b824-a50b-4355-ab08-09d831323258\") " pod="openstack/nova-api-0" Jan 26 13:03:01 crc kubenswrapper[4881]: I0126 13:03:01.982448 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nxpsm\" (UniqueName: \"kubernetes.io/projected/ee44b824-a50b-4355-ab08-09d831323258-kube-api-access-nxpsm\") pod \"nova-api-0\" (UID: \"ee44b824-a50b-4355-ab08-09d831323258\") " pod="openstack/nova-api-0" Jan 26 13:03:01 crc kubenswrapper[4881]: I0126 13:03:01.983188 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cx4dc\" (UniqueName: \"kubernetes.io/projected/9aa3762a-f6f1-4f99-aa95-c22f2aaf51ae-kube-api-access-cx4dc\") pod \"nova-metadata-0\" (UID: \"9aa3762a-f6f1-4f99-aa95-c22f2aaf51ae\") " pod="openstack/nova-metadata-0" Jan 26 13:03:02 crc kubenswrapper[4881]: I0126 13:03:02.094583 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="563a0ca1-a6c3-4089-88a7-f23423418751" path="/var/lib/kubelet/pods/563a0ca1-a6c3-4089-88a7-f23423418751/volumes" Jan 26 13:03:02 crc kubenswrapper[4881]: I0126 13:03:02.095575 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="843eec84-8a03-49b4-beda-773a76cecdbb" path="/var/lib/kubelet/pods/843eec84-8a03-49b4-beda-773a76cecdbb/volumes" Jan 26 13:03:02 crc kubenswrapper[4881]: I0126 13:03:02.114454 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 13:03:02 crc kubenswrapper[4881]: I0126 13:03:02.126813 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 26 13:03:02 crc kubenswrapper[4881]: W0126 13:03:02.418601 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9aa3762a_f6f1_4f99_aa95_c22f2aaf51ae.slice/crio-18515f131399d949a333bed0fa520a2238ac6194765d1dbe3ddcb63a4176ea28 WatchSource:0}: Error finding container 18515f131399d949a333bed0fa520a2238ac6194765d1dbe3ddcb63a4176ea28: Status 404 returned error can't find the container with id 18515f131399d949a333bed0fa520a2238ac6194765d1dbe3ddcb63a4176ea28 Jan 26 13:03:02 crc kubenswrapper[4881]: I0126 13:03:02.426633 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 13:03:02 crc kubenswrapper[4881]: I0126 13:03:02.587051 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9aa3762a-f6f1-4f99-aa95-c22f2aaf51ae","Type":"ContainerStarted","Data":"18515f131399d949a333bed0fa520a2238ac6194765d1dbe3ddcb63a4176ea28"} Jan 26 13:03:02 crc kubenswrapper[4881]: I0126 13:03:02.691722 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 26 13:03:02 crc kubenswrapper[4881]: W0126 13:03:02.696314 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podee44b824_a50b_4355_ab08_09d831323258.slice/crio-0c77f475088b2bd97851b722145244886c798093924a6bb115c294d78bdba728 WatchSource:0}: Error finding container 0c77f475088b2bd97851b722145244886c798093924a6bb115c294d78bdba728: Status 404 returned error can't find the container with id 0c77f475088b2bd97851b722145244886c798093924a6bb115c294d78bdba728 Jan 26 13:03:03 crc kubenswrapper[4881]: I0126 13:03:03.601426 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ee44b824-a50b-4355-ab08-09d831323258","Type":"ContainerStarted","Data":"1394fba675750fab5048d9ab3c37b73c9fb8d71d09dcf0703458825828f79f25"} Jan 26 13:03:03 crc kubenswrapper[4881]: I0126 13:03:03.601778 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ee44b824-a50b-4355-ab08-09d831323258","Type":"ContainerStarted","Data":"ea1e2850ae3f5fecf79f762b1ad55d0d3a073c1193fe59050772f67d5257bac9"} Jan 26 13:03:03 crc kubenswrapper[4881]: I0126 13:03:03.601796 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ee44b824-a50b-4355-ab08-09d831323258","Type":"ContainerStarted","Data":"0c77f475088b2bd97851b722145244886c798093924a6bb115c294d78bdba728"} Jan 26 13:03:03 crc kubenswrapper[4881]: I0126 13:03:03.603705 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9aa3762a-f6f1-4f99-aa95-c22f2aaf51ae","Type":"ContainerStarted","Data":"3fe7d73d34a34f92604b49354d8d3505c114ca8a3fbe891e7b4399a3732b63e8"} Jan 26 13:03:03 crc kubenswrapper[4881]: I0126 13:03:03.603773 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9aa3762a-f6f1-4f99-aa95-c22f2aaf51ae","Type":"ContainerStarted","Data":"db6b6216a5027caa11002e5007b2d677524de8dafbf3b9bc39de0cc75c4b61e9"} Jan 26 13:03:03 crc kubenswrapper[4881]: I0126 13:03:03.630814 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/nova-api-0" podStartSLOduration=2.630795155 podStartE2EDuration="2.630795155s" podCreationTimestamp="2026-01-26 13:03:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 13:03:03.626322128 +0000 UTC m=+1656.105632204" watchObservedRunningTime="2026-01-26 13:03:03.630795155 +0000 UTC m=+1656.110105181" Jan 26 13:03:03 crc kubenswrapper[4881]: I0126 13:03:03.659013 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.658996037 podStartE2EDuration="2.658996037s" podCreationTimestamp="2026-01-26 13:03:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 13:03:03.650165114 +0000 UTC m=+1656.129475150" watchObservedRunningTime="2026-01-26 13:03:03.658996037 +0000 UTC m=+1656.138306063" Jan 26 13:03:03 crc kubenswrapper[4881]: I0126 13:03:03.690778 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-lj92n" Jan 26 13:03:03 crc kubenswrapper[4881]: I0126 13:03:03.690867 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-lj92n" Jan 26 13:03:03 crc kubenswrapper[4881]: I0126 13:03:03.749924 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-lj92n" Jan 26 13:03:04 crc kubenswrapper[4881]: I0126 13:03:04.535768 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 26 13:03:04 crc kubenswrapper[4881]: I0126 13:03:04.621705 4881 generic.go:334] "Generic (PLEG): container finished" podID="4163a60f-62c2-4edb-b675-b25e408ca3bd" containerID="db1f97f43c16a4f205bb068a41dd646ef20ad228b0934c96df786baa0a8514a9" exitCode=0 Jan 26 13:03:04 crc kubenswrapper[4881]: I0126 13:03:04.623894 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 26 13:03:04 crc kubenswrapper[4881]: I0126 13:03:04.624697 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4163a60f-62c2-4edb-b675-b25e408ca3bd","Type":"ContainerDied","Data":"db1f97f43c16a4f205bb068a41dd646ef20ad228b0934c96df786baa0a8514a9"} Jan 26 13:03:04 crc kubenswrapper[4881]: I0126 13:03:04.624788 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4163a60f-62c2-4edb-b675-b25e408ca3bd","Type":"ContainerDied","Data":"d23ae864343c0a922a0240ca6be9285777b75cd73d0e31ec0ab55e36e285e6f5"} Jan 26 13:03:04 crc kubenswrapper[4881]: I0126 13:03:04.624810 4881 scope.go:117] "RemoveContainer" containerID="db1f97f43c16a4f205bb068a41dd646ef20ad228b0934c96df786baa0a8514a9" Jan 26 13:03:04 crc kubenswrapper[4881]: I0126 13:03:04.627668 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4163a60f-62c2-4edb-b675-b25e408ca3bd-combined-ca-bundle\") pod \"4163a60f-62c2-4edb-b675-b25e408ca3bd\" (UID: \"4163a60f-62c2-4edb-b675-b25e408ca3bd\") " Jan 26 13:03:04 crc kubenswrapper[4881]: I0126 13:03:04.628050 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-snt5j\" (UniqueName: \"kubernetes.io/projected/4163a60f-62c2-4edb-b675-b25e408ca3bd-kube-api-access-snt5j\") pod \"4163a60f-62c2-4edb-b675-b25e408ca3bd\" (UID: \"4163a60f-62c2-4edb-b675-b25e408ca3bd\") " Jan 26 13:03:04 crc kubenswrapper[4881]: I0126 13:03:04.628304 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4163a60f-62c2-4edb-b675-b25e408ca3bd-config-data\") pod \"4163a60f-62c2-4edb-b675-b25e408ca3bd\" (UID: \"4163a60f-62c2-4edb-b675-b25e408ca3bd\") " Jan 26 13:03:04 crc kubenswrapper[4881]: I0126 13:03:04.663155 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4163a60f-62c2-4edb-b675-b25e408ca3bd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4163a60f-62c2-4edb-b675-b25e408ca3bd" (UID: "4163a60f-62c2-4edb-b675-b25e408ca3bd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:03:04 crc kubenswrapper[4881]: I0126 13:03:04.665209 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4163a60f-62c2-4edb-b675-b25e408ca3bd-kube-api-access-snt5j" (OuterVolumeSpecName: "kube-api-access-snt5j") pod "4163a60f-62c2-4edb-b675-b25e408ca3bd" (UID: "4163a60f-62c2-4edb-b675-b25e408ca3bd"). InnerVolumeSpecName "kube-api-access-snt5j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:03:04 crc kubenswrapper[4881]: I0126 13:03:04.678568 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4163a60f-62c2-4edb-b675-b25e408ca3bd-config-data" (OuterVolumeSpecName: "config-data") pod "4163a60f-62c2-4edb-b675-b25e408ca3bd" (UID: "4163a60f-62c2-4edb-b675-b25e408ca3bd"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:03:04 crc kubenswrapper[4881]: I0126 13:03:04.681988 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-lj92n" Jan 26 13:03:04 crc kubenswrapper[4881]: I0126 13:03:04.735616 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lj92n"] Jan 26 13:03:04 crc kubenswrapper[4881]: I0126 13:03:04.737503 4881 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4163a60f-62c2-4edb-b675-b25e408ca3bd-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 13:03:04 crc kubenswrapper[4881]: I0126 13:03:04.737602 4881 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4163a60f-62c2-4edb-b675-b25e408ca3bd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 13:03:04 crc kubenswrapper[4881]: I0126 13:03:04.737655 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-snt5j\" (UniqueName: \"kubernetes.io/projected/4163a60f-62c2-4edb-b675-b25e408ca3bd-kube-api-access-snt5j\") on node \"crc\" DevicePath \"\"" Jan 26 13:03:04 crc kubenswrapper[4881]: I0126 13:03:04.755894 4881 scope.go:117] "RemoveContainer" containerID="db1f97f43c16a4f205bb068a41dd646ef20ad228b0934c96df786baa0a8514a9" Jan 26 13:03:04 crc kubenswrapper[4881]: E0126 13:03:04.756394 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db1f97f43c16a4f205bb068a41dd646ef20ad228b0934c96df786baa0a8514a9\": container with ID starting with db1f97f43c16a4f205bb068a41dd646ef20ad228b0934c96df786baa0a8514a9 not found: ID does not exist" containerID="db1f97f43c16a4f205bb068a41dd646ef20ad228b0934c96df786baa0a8514a9" Jan 26 13:03:04 crc kubenswrapper[4881]: I0126 13:03:04.756452 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db1f97f43c16a4f205bb068a41dd646ef20ad228b0934c96df786baa0a8514a9"} err="failed to get container status \"db1f97f43c16a4f205bb068a41dd646ef20ad228b0934c96df786baa0a8514a9\": rpc error: code = NotFound desc = could not find container \"db1f97f43c16a4f205bb068a41dd646ef20ad228b0934c96df786baa0a8514a9\": container with ID starting with db1f97f43c16a4f205bb068a41dd646ef20ad228b0934c96df786baa0a8514a9 not found: ID does not exist" Jan 26 13:03:04 crc kubenswrapper[4881]: I0126 13:03:04.959510 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 13:03:04 crc kubenswrapper[4881]: I0126 13:03:04.969908 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 13:03:04 crc kubenswrapper[4881]: I0126 13:03:04.988321 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 13:03:04 crc kubenswrapper[4881]: E0126 13:03:04.988803 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4163a60f-62c2-4edb-b675-b25e408ca3bd" containerName="nova-scheduler-scheduler" Jan 26 13:03:04 crc kubenswrapper[4881]: I0126 13:03:04.988826 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="4163a60f-62c2-4edb-b675-b25e408ca3bd" containerName="nova-scheduler-scheduler" Jan 26 13:03:04 crc kubenswrapper[4881]: I0126 13:03:04.989488 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="4163a60f-62c2-4edb-b675-b25e408ca3bd" containerName="nova-scheduler-scheduler" Jan 26 
13:03:04 crc kubenswrapper[4881]: I0126 13:03:04.990339 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 26 13:03:04 crc kubenswrapper[4881]: I0126 13:03:04.993119 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 26 13:03:05 crc kubenswrapper[4881]: I0126 13:03:05.008170 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 13:03:05 crc kubenswrapper[4881]: I0126 13:03:05.043250 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xq4sw\" (UniqueName: \"kubernetes.io/projected/01bf74e7-115e-4392-93c4-f6c5c578c5dc-kube-api-access-xq4sw\") pod \"nova-scheduler-0\" (UID: \"01bf74e7-115e-4392-93c4-f6c5c578c5dc\") " pod="openstack/nova-scheduler-0" Jan 26 13:03:05 crc kubenswrapper[4881]: I0126 13:03:05.043320 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01bf74e7-115e-4392-93c4-f6c5c578c5dc-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"01bf74e7-115e-4392-93c4-f6c5c578c5dc\") " pod="openstack/nova-scheduler-0" Jan 26 13:03:05 crc kubenswrapper[4881]: I0126 13:03:05.043394 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01bf74e7-115e-4392-93c4-f6c5c578c5dc-config-data\") pod \"nova-scheduler-0\" (UID: \"01bf74e7-115e-4392-93c4-f6c5c578c5dc\") " pod="openstack/nova-scheduler-0" Jan 26 13:03:05 crc kubenswrapper[4881]: I0126 13:03:05.145365 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01bf74e7-115e-4392-93c4-f6c5c578c5dc-config-data\") pod \"nova-scheduler-0\" (UID: \"01bf74e7-115e-4392-93c4-f6c5c578c5dc\") " pod="openstack/nova-scheduler-0" Jan 26 13:03:05 crc kubenswrapper[4881]: I0126 13:03:05.145589 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xq4sw\" (UniqueName: \"kubernetes.io/projected/01bf74e7-115e-4392-93c4-f6c5c578c5dc-kube-api-access-xq4sw\") pod \"nova-scheduler-0\" (UID: \"01bf74e7-115e-4392-93c4-f6c5c578c5dc\") " pod="openstack/nova-scheduler-0" Jan 26 13:03:05 crc kubenswrapper[4881]: I0126 13:03:05.145632 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01bf74e7-115e-4392-93c4-f6c5c578c5dc-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"01bf74e7-115e-4392-93c4-f6c5c578c5dc\") " pod="openstack/nova-scheduler-0" Jan 26 13:03:05 crc kubenswrapper[4881]: I0126 13:03:05.150194 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01bf74e7-115e-4392-93c4-f6c5c578c5dc-config-data\") pod \"nova-scheduler-0\" (UID: \"01bf74e7-115e-4392-93c4-f6c5c578c5dc\") " pod="openstack/nova-scheduler-0" Jan 26 13:03:05 crc kubenswrapper[4881]: I0126 13:03:05.150370 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01bf74e7-115e-4392-93c4-f6c5c578c5dc-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"01bf74e7-115e-4392-93c4-f6c5c578c5dc\") " pod="openstack/nova-scheduler-0" Jan 26 13:03:05 crc kubenswrapper[4881]: I0126 13:03:05.170910 4881 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xq4sw\" (UniqueName: \"kubernetes.io/projected/01bf74e7-115e-4392-93c4-f6c5c578c5dc-kube-api-access-xq4sw\") pod \"nova-scheduler-0\" (UID: \"01bf74e7-115e-4392-93c4-f6c5c578c5dc\") " pod="openstack/nova-scheduler-0" Jan 26 13:03:05 crc kubenswrapper[4881]: I0126 13:03:05.348005 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 26 13:03:05 crc kubenswrapper[4881]: I0126 13:03:05.854769 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 13:03:06 crc kubenswrapper[4881]: I0126 13:03:06.098276 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4163a60f-62c2-4edb-b675-b25e408ca3bd" path="/var/lib/kubelet/pods/4163a60f-62c2-4edb-b675-b25e408ca3bd/volumes" Jan 26 13:03:06 crc kubenswrapper[4881]: I0126 13:03:06.647766 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"01bf74e7-115e-4392-93c4-f6c5c578c5dc","Type":"ContainerStarted","Data":"193186375efdfc8abc1a4bc76192a147049f756791b87bcd28034307d0afccc9"} Jan 26 13:03:06 crc kubenswrapper[4881]: I0126 13:03:06.648158 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"01bf74e7-115e-4392-93c4-f6c5c578c5dc","Type":"ContainerStarted","Data":"1c13bfa359a63f34f54388d7bab296a32e831b35bda4b6a1adc5f9e14b62c955"} Jan 26 13:03:06 crc kubenswrapper[4881]: I0126 13:03:06.647882 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-lj92n" podUID="f890218f-7c5f-4227-9f47-68ebd7d7f7ae" containerName="registry-server" containerID="cri-o://ef3bd19bf4411137a07e726b65bdea505d372edfe6752376cdd9b67876f14d1a" gracePeriod=2 Jan 26 13:03:06 crc kubenswrapper[4881]: I0126 13:03:06.689566 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.689541532 podStartE2EDuration="2.689541532s" podCreationTimestamp="2026-01-26 13:03:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 13:03:06.679286634 +0000 UTC m=+1659.158596740" watchObservedRunningTime="2026-01-26 13:03:06.689541532 +0000 UTC m=+1659.168851588" Jan 26 13:03:07 crc kubenswrapper[4881]: I0126 13:03:07.114972 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 26 13:03:07 crc kubenswrapper[4881]: I0126 13:03:07.115240 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 26 13:03:07 crc kubenswrapper[4881]: I0126 13:03:07.168216 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lj92n" Jan 26 13:03:07 crc kubenswrapper[4881]: I0126 13:03:07.189398 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f890218f-7c5f-4227-9f47-68ebd7d7f7ae-catalog-content\") pod \"f890218f-7c5f-4227-9f47-68ebd7d7f7ae\" (UID: \"f890218f-7c5f-4227-9f47-68ebd7d7f7ae\") " Jan 26 13:03:07 crc kubenswrapper[4881]: I0126 13:03:07.189566 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b57v5\" (UniqueName: \"kubernetes.io/projected/f890218f-7c5f-4227-9f47-68ebd7d7f7ae-kube-api-access-b57v5\") pod \"f890218f-7c5f-4227-9f47-68ebd7d7f7ae\" (UID: \"f890218f-7c5f-4227-9f47-68ebd7d7f7ae\") " Jan 26 13:03:07 crc kubenswrapper[4881]: I0126 13:03:07.189775 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f890218f-7c5f-4227-9f47-68ebd7d7f7ae-utilities\") pod \"f890218f-7c5f-4227-9f47-68ebd7d7f7ae\" (UID: \"f890218f-7c5f-4227-9f47-68ebd7d7f7ae\") " Jan 26 13:03:07 crc kubenswrapper[4881]: I0126 13:03:07.192395 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f890218f-7c5f-4227-9f47-68ebd7d7f7ae-utilities" (OuterVolumeSpecName: "utilities") pod "f890218f-7c5f-4227-9f47-68ebd7d7f7ae" (UID: "f890218f-7c5f-4227-9f47-68ebd7d7f7ae"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:03:07 crc kubenswrapper[4881]: I0126 13:03:07.226248 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f890218f-7c5f-4227-9f47-68ebd7d7f7ae-kube-api-access-b57v5" (OuterVolumeSpecName: "kube-api-access-b57v5") pod "f890218f-7c5f-4227-9f47-68ebd7d7f7ae" (UID: "f890218f-7c5f-4227-9f47-68ebd7d7f7ae"). InnerVolumeSpecName "kube-api-access-b57v5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:03:07 crc kubenswrapper[4881]: I0126 13:03:07.275130 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f890218f-7c5f-4227-9f47-68ebd7d7f7ae-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f890218f-7c5f-4227-9f47-68ebd7d7f7ae" (UID: "f890218f-7c5f-4227-9f47-68ebd7d7f7ae"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:03:07 crc kubenswrapper[4881]: I0126 13:03:07.292600 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b57v5\" (UniqueName: \"kubernetes.io/projected/f890218f-7c5f-4227-9f47-68ebd7d7f7ae-kube-api-access-b57v5\") on node \"crc\" DevicePath \"\"" Jan 26 13:03:07 crc kubenswrapper[4881]: I0126 13:03:07.292636 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f890218f-7c5f-4227-9f47-68ebd7d7f7ae-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 13:03:07 crc kubenswrapper[4881]: I0126 13:03:07.292651 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f890218f-7c5f-4227-9f47-68ebd7d7f7ae-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 13:03:07 crc kubenswrapper[4881]: I0126 13:03:07.663285 4881 generic.go:334] "Generic (PLEG): container finished" podID="f890218f-7c5f-4227-9f47-68ebd7d7f7ae" containerID="ef3bd19bf4411137a07e726b65bdea505d372edfe6752376cdd9b67876f14d1a" exitCode=0 Jan 26 13:03:07 crc kubenswrapper[4881]: I0126 13:03:07.663383 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lj92n" event={"ID":"f890218f-7c5f-4227-9f47-68ebd7d7f7ae","Type":"ContainerDied","Data":"ef3bd19bf4411137a07e726b65bdea505d372edfe6752376cdd9b67876f14d1a"} Jan 26 13:03:07 crc kubenswrapper[4881]: I0126 13:03:07.663839 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lj92n" event={"ID":"f890218f-7c5f-4227-9f47-68ebd7d7f7ae","Type":"ContainerDied","Data":"8adc55f0f8884eb24f41ccbb0c0009402e1b988ff1b375b5773cc64a08d3d288"} Jan 26 13:03:07 crc kubenswrapper[4881]: I0126 13:03:07.663874 4881 scope.go:117] "RemoveContainer" containerID="ef3bd19bf4411137a07e726b65bdea505d372edfe6752376cdd9b67876f14d1a" Jan 26 13:03:07 crc kubenswrapper[4881]: I0126 13:03:07.663464 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lj92n" Jan 26 13:03:07 crc kubenswrapper[4881]: I0126 13:03:07.693687 4881 scope.go:117] "RemoveContainer" containerID="2f14eb2b2593ea2f627fabce1e3449d4615af7b0c77b33a53b0b6624eb9ec9e7" Jan 26 13:03:07 crc kubenswrapper[4881]: I0126 13:03:07.711667 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lj92n"] Jan 26 13:03:07 crc kubenswrapper[4881]: I0126 13:03:07.723265 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-lj92n"] Jan 26 13:03:07 crc kubenswrapper[4881]: I0126 13:03:07.744357 4881 scope.go:117] "RemoveContainer" containerID="f43805e33f6ee32c55c4a1034c810e030001f4479319537c411ac735a0ff6b20" Jan 26 13:03:07 crc kubenswrapper[4881]: I0126 13:03:07.773182 4881 scope.go:117] "RemoveContainer" containerID="ef3bd19bf4411137a07e726b65bdea505d372edfe6752376cdd9b67876f14d1a" Jan 26 13:03:07 crc kubenswrapper[4881]: E0126 13:03:07.773836 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef3bd19bf4411137a07e726b65bdea505d372edfe6752376cdd9b67876f14d1a\": container with ID starting with ef3bd19bf4411137a07e726b65bdea505d372edfe6752376cdd9b67876f14d1a not found: ID does not exist" containerID="ef3bd19bf4411137a07e726b65bdea505d372edfe6752376cdd9b67876f14d1a" Jan 26 13:03:07 crc kubenswrapper[4881]: I0126 13:03:07.773870 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef3bd19bf4411137a07e726b65bdea505d372edfe6752376cdd9b67876f14d1a"} err="failed to get container status \"ef3bd19bf4411137a07e726b65bdea505d372edfe6752376cdd9b67876f14d1a\": rpc error: code = NotFound desc = could not find container \"ef3bd19bf4411137a07e726b65bdea505d372edfe6752376cdd9b67876f14d1a\": container with ID starting with ef3bd19bf4411137a07e726b65bdea505d372edfe6752376cdd9b67876f14d1a not found: ID does not exist" Jan 26 13:03:07 crc kubenswrapper[4881]: I0126 13:03:07.773891 4881 scope.go:117] "RemoveContainer" containerID="2f14eb2b2593ea2f627fabce1e3449d4615af7b0c77b33a53b0b6624eb9ec9e7" Jan 26 13:03:07 crc kubenswrapper[4881]: E0126 13:03:07.774219 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f14eb2b2593ea2f627fabce1e3449d4615af7b0c77b33a53b0b6624eb9ec9e7\": container with ID starting with 2f14eb2b2593ea2f627fabce1e3449d4615af7b0c77b33a53b0b6624eb9ec9e7 not found: ID does not exist" containerID="2f14eb2b2593ea2f627fabce1e3449d4615af7b0c77b33a53b0b6624eb9ec9e7" Jan 26 13:03:07 crc kubenswrapper[4881]: I0126 13:03:07.774240 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f14eb2b2593ea2f627fabce1e3449d4615af7b0c77b33a53b0b6624eb9ec9e7"} err="failed to get container status \"2f14eb2b2593ea2f627fabce1e3449d4615af7b0c77b33a53b0b6624eb9ec9e7\": rpc error: code = NotFound desc = could not find container \"2f14eb2b2593ea2f627fabce1e3449d4615af7b0c77b33a53b0b6624eb9ec9e7\": container with ID starting with 2f14eb2b2593ea2f627fabce1e3449d4615af7b0c77b33a53b0b6624eb9ec9e7 not found: ID does not exist" Jan 26 13:03:07 crc kubenswrapper[4881]: I0126 13:03:07.774254 4881 scope.go:117] "RemoveContainer" containerID="f43805e33f6ee32c55c4a1034c810e030001f4479319537c411ac735a0ff6b20" Jan 26 13:03:07 crc kubenswrapper[4881]: E0126 13:03:07.774770 4881 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"f43805e33f6ee32c55c4a1034c810e030001f4479319537c411ac735a0ff6b20\": container with ID starting with f43805e33f6ee32c55c4a1034c810e030001f4479319537c411ac735a0ff6b20 not found: ID does not exist" containerID="f43805e33f6ee32c55c4a1034c810e030001f4479319537c411ac735a0ff6b20" Jan 26 13:03:07 crc kubenswrapper[4881]: I0126 13:03:07.774793 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f43805e33f6ee32c55c4a1034c810e030001f4479319537c411ac735a0ff6b20"} err="failed to get container status \"f43805e33f6ee32c55c4a1034c810e030001f4479319537c411ac735a0ff6b20\": rpc error: code = NotFound desc = could not find container \"f43805e33f6ee32c55c4a1034c810e030001f4479319537c411ac735a0ff6b20\": container with ID starting with f43805e33f6ee32c55c4a1034c810e030001f4479319537c411ac735a0ff6b20 not found: ID does not exist" Jan 26 13:03:08 crc kubenswrapper[4881]: I0126 13:03:08.103627 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f890218f-7c5f-4227-9f47-68ebd7d7f7ae" path="/var/lib/kubelet/pods/f890218f-7c5f-4227-9f47-68ebd7d7f7ae/volumes" Jan 26 13:03:10 crc kubenswrapper[4881]: I0126 13:03:10.348442 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 26 13:03:10 crc kubenswrapper[4881]: I0126 13:03:10.603245 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-v8wls" Jan 26 13:03:10 crc kubenswrapper[4881]: I0126 13:03:10.664327 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-v8wls" Jan 26 13:03:10 crc kubenswrapper[4881]: I0126 13:03:10.846376 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-v8wls"] Jan 26 13:03:11 crc kubenswrapper[4881]: I0126 13:03:11.721173 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-v8wls" podUID="0b52903d-117a-4c80-a478-bfb576c26c00" containerName="registry-server" containerID="cri-o://9045366d1d6040d82e4ed31d952d9fe4ea2fb78b8e8299c43f5ee51ebb48dcfb" gracePeriod=2 Jan 26 13:03:12 crc kubenswrapper[4881]: I0126 13:03:12.115193 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 26 13:03:12 crc kubenswrapper[4881]: I0126 13:03:12.115604 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 26 13:03:12 crc kubenswrapper[4881]: I0126 13:03:12.127211 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 26 13:03:12 crc kubenswrapper[4881]: I0126 13:03:12.127250 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 26 13:03:12 crc kubenswrapper[4881]: I0126 13:03:12.287225 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-v8wls" Jan 26 13:03:12 crc kubenswrapper[4881]: I0126 13:03:12.426325 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rx24v\" (UniqueName: \"kubernetes.io/projected/0b52903d-117a-4c80-a478-bfb576c26c00-kube-api-access-rx24v\") pod \"0b52903d-117a-4c80-a478-bfb576c26c00\" (UID: \"0b52903d-117a-4c80-a478-bfb576c26c00\") " Jan 26 13:03:12 crc kubenswrapper[4881]: I0126 13:03:12.426712 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b52903d-117a-4c80-a478-bfb576c26c00-catalog-content\") pod \"0b52903d-117a-4c80-a478-bfb576c26c00\" (UID: \"0b52903d-117a-4c80-a478-bfb576c26c00\") " Jan 26 13:03:12 crc kubenswrapper[4881]: I0126 13:03:12.427076 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b52903d-117a-4c80-a478-bfb576c26c00-utilities\") pod \"0b52903d-117a-4c80-a478-bfb576c26c00\" (UID: \"0b52903d-117a-4c80-a478-bfb576c26c00\") " Jan 26 13:03:12 crc kubenswrapper[4881]: I0126 13:03:12.428690 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b52903d-117a-4c80-a478-bfb576c26c00-utilities" (OuterVolumeSpecName: "utilities") pod "0b52903d-117a-4c80-a478-bfb576c26c00" (UID: "0b52903d-117a-4c80-a478-bfb576c26c00"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:03:12 crc kubenswrapper[4881]: I0126 13:03:12.529128 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b52903d-117a-4c80-a478-bfb576c26c00-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 13:03:12 crc kubenswrapper[4881]: I0126 13:03:12.550360 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b52903d-117a-4c80-a478-bfb576c26c00-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0b52903d-117a-4c80-a478-bfb576c26c00" (UID: "0b52903d-117a-4c80-a478-bfb576c26c00"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:03:12 crc kubenswrapper[4881]: I0126 13:03:12.630952 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b52903d-117a-4c80-a478-bfb576c26c00-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 13:03:12 crc kubenswrapper[4881]: I0126 13:03:12.732922 4881 generic.go:334] "Generic (PLEG): container finished" podID="0b52903d-117a-4c80-a478-bfb576c26c00" containerID="9045366d1d6040d82e4ed31d952d9fe4ea2fb78b8e8299c43f5ee51ebb48dcfb" exitCode=0 Jan 26 13:03:12 crc kubenswrapper[4881]: I0126 13:03:12.732970 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-v8wls" Jan 26 13:03:12 crc kubenswrapper[4881]: I0126 13:03:12.732966 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v8wls" event={"ID":"0b52903d-117a-4c80-a478-bfb576c26c00","Type":"ContainerDied","Data":"9045366d1d6040d82e4ed31d952d9fe4ea2fb78b8e8299c43f5ee51ebb48dcfb"} Jan 26 13:03:12 crc kubenswrapper[4881]: I0126 13:03:12.733043 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v8wls" event={"ID":"0b52903d-117a-4c80-a478-bfb576c26c00","Type":"ContainerDied","Data":"6af7701417693b31a0d01f15f75bab6d933d83c3e2a1d504419da6d0dfd6b26a"} Jan 26 13:03:12 crc kubenswrapper[4881]: I0126 13:03:12.733074 4881 scope.go:117] "RemoveContainer" containerID="9045366d1d6040d82e4ed31d952d9fe4ea2fb78b8e8299c43f5ee51ebb48dcfb" Jan 26 13:03:12 crc kubenswrapper[4881]: I0126 13:03:12.733392 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b52903d-117a-4c80-a478-bfb576c26c00-kube-api-access-rx24v" (OuterVolumeSpecName: "kube-api-access-rx24v") pod "0b52903d-117a-4c80-a478-bfb576c26c00" (UID: "0b52903d-117a-4c80-a478-bfb576c26c00"). InnerVolumeSpecName "kube-api-access-rx24v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:03:12 crc kubenswrapper[4881]: I0126 13:03:12.757230 4881 scope.go:117] "RemoveContainer" containerID="a5327bdb3cc6ac48ca11695531bd584b83ae18e0aca8f1fc8d0319de898ab293" Jan 26 13:03:12 crc kubenswrapper[4881]: I0126 13:03:12.796786 4881 scope.go:117] "RemoveContainer" containerID="2c48dd3159dceb94ec0e4d265285e6d43607929b41a33db75eaaeeae00c67a1b" Jan 26 13:03:12 crc kubenswrapper[4881]: I0126 13:03:12.833971 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rx24v\" (UniqueName: \"kubernetes.io/projected/0b52903d-117a-4c80-a478-bfb576c26c00-kube-api-access-rx24v\") on node \"crc\" DevicePath \"\"" Jan 26 13:03:12 crc kubenswrapper[4881]: I0126 13:03:12.848704 4881 scope.go:117] "RemoveContainer" containerID="9045366d1d6040d82e4ed31d952d9fe4ea2fb78b8e8299c43f5ee51ebb48dcfb" Jan 26 13:03:12 crc kubenswrapper[4881]: E0126 13:03:12.849187 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9045366d1d6040d82e4ed31d952d9fe4ea2fb78b8e8299c43f5ee51ebb48dcfb\": container with ID starting with 9045366d1d6040d82e4ed31d952d9fe4ea2fb78b8e8299c43f5ee51ebb48dcfb not found: ID does not exist" containerID="9045366d1d6040d82e4ed31d952d9fe4ea2fb78b8e8299c43f5ee51ebb48dcfb" Jan 26 13:03:12 crc kubenswrapper[4881]: I0126 13:03:12.849224 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9045366d1d6040d82e4ed31d952d9fe4ea2fb78b8e8299c43f5ee51ebb48dcfb"} err="failed to get container status \"9045366d1d6040d82e4ed31d952d9fe4ea2fb78b8e8299c43f5ee51ebb48dcfb\": rpc error: code = NotFound desc = could not find container \"9045366d1d6040d82e4ed31d952d9fe4ea2fb78b8e8299c43f5ee51ebb48dcfb\": container with ID starting with 9045366d1d6040d82e4ed31d952d9fe4ea2fb78b8e8299c43f5ee51ebb48dcfb not found: ID does not exist" Jan 26 13:03:12 crc kubenswrapper[4881]: I0126 13:03:12.849249 4881 scope.go:117] "RemoveContainer" containerID="a5327bdb3cc6ac48ca11695531bd584b83ae18e0aca8f1fc8d0319de898ab293" Jan 26 13:03:12 crc kubenswrapper[4881]: E0126 13:03:12.849557 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: 
Jan 26 13:03:12 crc kubenswrapper[4881]: I0126 13:03:12.849575 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a5327bdb3cc6ac48ca11695531bd584b83ae18e0aca8f1fc8d0319de898ab293"} err="failed to get container status \"a5327bdb3cc6ac48ca11695531bd584b83ae18e0aca8f1fc8d0319de898ab293\": rpc error: code = NotFound desc = could not find container \"a5327bdb3cc6ac48ca11695531bd584b83ae18e0aca8f1fc8d0319de898ab293\": container with ID starting with a5327bdb3cc6ac48ca11695531bd584b83ae18e0aca8f1fc8d0319de898ab293 not found: ID does not exist"
Jan 26 13:03:12 crc kubenswrapper[4881]: I0126 13:03:12.849588 4881 scope.go:117] "RemoveContainer" containerID="2c48dd3159dceb94ec0e4d265285e6d43607929b41a33db75eaaeeae00c67a1b"
Jan 26 13:03:12 crc kubenswrapper[4881]: E0126 13:03:12.849879 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c48dd3159dceb94ec0e4d265285e6d43607929b41a33db75eaaeeae00c67a1b\": container with ID starting with 2c48dd3159dceb94ec0e4d265285e6d43607929b41a33db75eaaeeae00c67a1b not found: ID does not exist" containerID="2c48dd3159dceb94ec0e4d265285e6d43607929b41a33db75eaaeeae00c67a1b"
Jan 26 13:03:12 crc kubenswrapper[4881]: I0126 13:03:12.849903 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c48dd3159dceb94ec0e4d265285e6d43607929b41a33db75eaaeeae00c67a1b"} err="failed to get container status \"2c48dd3159dceb94ec0e4d265285e6d43607929b41a33db75eaaeeae00c67a1b\": rpc error: code = NotFound desc = could not find container \"2c48dd3159dceb94ec0e4d265285e6d43607929b41a33db75eaaeeae00c67a1b\": container with ID starting with 2c48dd3159dceb94ec0e4d265285e6d43607929b41a33db75eaaeeae00c67a1b not found: ID does not exist"
Jan 26 13:03:13 crc kubenswrapper[4881]: I0126 13:03:13.085130 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-v8wls"]
Jan 26 13:03:13 crc kubenswrapper[4881]: I0126 13:03:13.092898 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-v8wls"]
Jan 26 13:03:13 crc kubenswrapper[4881]: I0126 13:03:13.133707 4881 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="9aa3762a-f6f1-4f99-aa95-c22f2aaf51ae" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.228:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 26 13:03:13 crc kubenswrapper[4881]: I0126 13:03:13.133707 4881 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="9aa3762a-f6f1-4f99-aa95-c22f2aaf51ae" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.228:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 26 13:03:13 crc kubenswrapper[4881]: I0126 13:03:13.146754 4881 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="ee44b824-a50b-4355-ab08-09d831323258" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.229:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 26 13:03:13 crc kubenswrapper[4881]: I0126 13:03:13.147719 4881 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="ee44b824-a50b-4355-ab08-09d831323258" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.229:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 26 13:03:13 crc kubenswrapper[4881]: I0126 13:03:13.667502 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0"
Jan 26 13:03:14 crc kubenswrapper[4881]: I0126 13:03:14.099289 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b52903d-117a-4c80-a478-bfb576c26c00" path="/var/lib/kubelet/pods/0b52903d-117a-4c80-a478-bfb576c26c00/volumes"
Jan 26 13:03:15 crc kubenswrapper[4881]: I0126 13:03:15.348226 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Jan 26 13:03:15 crc kubenswrapper[4881]: I0126 13:03:15.392068 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Jan 26 13:03:15 crc kubenswrapper[4881]: I0126 13:03:15.832146 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Jan 26 13:03:22 crc kubenswrapper[4881]: I0126 13:03:22.126072 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Jan 26 13:03:22 crc kubenswrapper[4881]: I0126 13:03:22.128999 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Jan 26 13:03:22 crc kubenswrapper[4881]: I0126 13:03:22.137280 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Jan 26 13:03:22 crc kubenswrapper[4881]: I0126 13:03:22.146241 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Jan 26 13:03:22 crc kubenswrapper[4881]: I0126 13:03:22.146989 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Jan 26 13:03:22 crc kubenswrapper[4881]: I0126 13:03:22.150691 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Jan 26 13:03:22 crc kubenswrapper[4881]: I0126 13:03:22.169336 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Jan 26 13:03:22 crc kubenswrapper[4881]: I0126 13:03:22.864225 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Jan 26 13:03:22 crc kubenswrapper[4881]: I0126 13:03:22.868882 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Jan 26 13:03:22 crc kubenswrapper[4881]: I0126 13:03:22.873714 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Jan 26 13:03:24 crc kubenswrapper[4881]: I0126 13:03:24.789860 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fwlbz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 13:03:24 crc kubenswrapper[4881]: I0126 13:03:24.790316 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 13:03:31 crc kubenswrapper[4881]: I0126 13:03:31.056500 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 26 13:03:32 crc kubenswrapper[4881]: I0126 13:03:32.056951 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 26 13:03:34 crc kubenswrapper[4881]: I0126 13:03:34.361879 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="a455dd78-e351-449c-903a-5c0e0c50faf5" containerName="rabbitmq" containerID="cri-o://cdabe088bd4208f1a5d6c3e42d8d7bcf2e85b11d78f2b153632a66b1c157468d" gracePeriod=604797
Jan 26 13:03:35 crc kubenswrapper[4881]: I0126 13:03:35.287014 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="fb687a6e-7e1f-4697-8ab1-88ad03dd2951" containerName="rabbitmq" containerID="cri-o://8f52705d21d02026b990bfeff9344bb0298572378db854544d7a243fac416eb9" gracePeriod=604797
Jan 26 13:03:35 crc kubenswrapper[4881]: I0126 13:03:35.990314 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Jan 26 13:03:36 crc kubenswrapper[4881]: I0126 13:03:36.010360 4881 generic.go:334] "Generic (PLEG): container finished" podID="a455dd78-e351-449c-903a-5c0e0c50faf5" containerID="cdabe088bd4208f1a5d6c3e42d8d7bcf2e85b11d78f2b153632a66b1c157468d" exitCode=0
Jan 26 13:03:36 crc kubenswrapper[4881]: I0126 13:03:36.010406 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"a455dd78-e351-449c-903a-5c0e0c50faf5","Type":"ContainerDied","Data":"cdabe088bd4208f1a5d6c3e42d8d7bcf2e85b11d78f2b153632a66b1c157468d"}
Jan 26 13:03:36 crc kubenswrapper[4881]: I0126 13:03:36.010434 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"a455dd78-e351-449c-903a-5c0e0c50faf5","Type":"ContainerDied","Data":"2b1c76e4511c2958052ff67ec9c58e80aa17b78b3af3d374cfe24022bd108fcf"}
Jan 26 13:03:36 crc kubenswrapper[4881]: I0126 13:03:36.010454 4881 scope.go:117] "RemoveContainer" containerID="cdabe088bd4208f1a5d6c3e42d8d7bcf2e85b11d78f2b153632a66b1c157468d"
Jan 26 13:03:36 crc kubenswrapper[4881]: I0126 13:03:36.010500 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Jan 26 13:03:36 crc kubenswrapper[4881]: I0126 13:03:36.030196 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"a455dd78-e351-449c-903a-5c0e0c50faf5\" (UID: \"a455dd78-e351-449c-903a-5c0e0c50faf5\") "
Jan 26 13:03:36 crc kubenswrapper[4881]: I0126 13:03:36.030684 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a455dd78-e351-449c-903a-5c0e0c50faf5-pod-info\") pod \"a455dd78-e351-449c-903a-5c0e0c50faf5\" (UID: \"a455dd78-e351-449c-903a-5c0e0c50faf5\") "
Jan 26 13:03:36 crc kubenswrapper[4881]: I0126 13:03:36.030729 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a455dd78-e351-449c-903a-5c0e0c50faf5-rabbitmq-plugins\") pod \"a455dd78-e351-449c-903a-5c0e0c50faf5\" (UID: \"a455dd78-e351-449c-903a-5c0e0c50faf5\") "
Jan 26 13:03:36 crc kubenswrapper[4881]: I0126 13:03:36.030840 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a455dd78-e351-449c-903a-5c0e0c50faf5-server-conf\") pod \"a455dd78-e351-449c-903a-5c0e0c50faf5\" (UID: \"a455dd78-e351-449c-903a-5c0e0c50faf5\") "
Jan 26 13:03:36 crc kubenswrapper[4881]: I0126 13:03:36.030929 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a455dd78-e351-449c-903a-5c0e0c50faf5-rabbitmq-confd\") pod \"a455dd78-e351-449c-903a-5c0e0c50faf5\" (UID: \"a455dd78-e351-449c-903a-5c0e0c50faf5\") "
Jan 26 13:03:36 crc kubenswrapper[4881]: I0126 13:03:36.030990 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a455dd78-e351-449c-903a-5c0e0c50faf5-erlang-cookie-secret\") pod \"a455dd78-e351-449c-903a-5c0e0c50faf5\" (UID: \"a455dd78-e351-449c-903a-5c0e0c50faf5\") "
Jan 26 13:03:36 crc kubenswrapper[4881]: I0126 13:03:36.031046 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a455dd78-e351-449c-903a-5c0e0c50faf5-rabbitmq-erlang-cookie\") pod \"a455dd78-e351-449c-903a-5c0e0c50faf5\" (UID: \"a455dd78-e351-449c-903a-5c0e0c50faf5\") "
Jan 26 13:03:36 crc kubenswrapper[4881]: I0126 13:03:36.031097 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jglfb\" (UniqueName: \"kubernetes.io/projected/a455dd78-e351-449c-903a-5c0e0c50faf5-kube-api-access-jglfb\") pod \"a455dd78-e351-449c-903a-5c0e0c50faf5\" (UID: \"a455dd78-e351-449c-903a-5c0e0c50faf5\") "
Jan 26 13:03:36 crc kubenswrapper[4881]: I0126 13:03:36.031137 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a455dd78-e351-449c-903a-5c0e0c50faf5-rabbitmq-tls\") pod \"a455dd78-e351-449c-903a-5c0e0c50faf5\" (UID: \"a455dd78-e351-449c-903a-5c0e0c50faf5\") "
Jan 26 13:03:36 crc kubenswrapper[4881]: I0126 13:03:36.031247 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a455dd78-e351-449c-903a-5c0e0c50faf5-config-data\") pod \"a455dd78-e351-449c-903a-5c0e0c50faf5\" (UID:
\"a455dd78-e351-449c-903a-5c0e0c50faf5\") " Jan 26 13:03:36 crc kubenswrapper[4881]: I0126 13:03:36.031319 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a455dd78-e351-449c-903a-5c0e0c50faf5-plugins-conf\") pod \"a455dd78-e351-449c-903a-5c0e0c50faf5\" (UID: \"a455dd78-e351-449c-903a-5c0e0c50faf5\") " Jan 26 13:03:36 crc kubenswrapper[4881]: I0126 13:03:36.033098 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a455dd78-e351-449c-903a-5c0e0c50faf5-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "a455dd78-e351-449c-903a-5c0e0c50faf5" (UID: "a455dd78-e351-449c-903a-5c0e0c50faf5"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:03:36 crc kubenswrapper[4881]: I0126 13:03:36.044072 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a455dd78-e351-449c-903a-5c0e0c50faf5-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "a455dd78-e351-449c-903a-5c0e0c50faf5" (UID: "a455dd78-e351-449c-903a-5c0e0c50faf5"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:03:36 crc kubenswrapper[4881]: I0126 13:03:36.046493 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a455dd78-e351-449c-903a-5c0e0c50faf5-kube-api-access-jglfb" (OuterVolumeSpecName: "kube-api-access-jglfb") pod "a455dd78-e351-449c-903a-5c0e0c50faf5" (UID: "a455dd78-e351-449c-903a-5c0e0c50faf5"). InnerVolumeSpecName "kube-api-access-jglfb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:03:36 crc kubenswrapper[4881]: I0126 13:03:36.047049 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a455dd78-e351-449c-903a-5c0e0c50faf5-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "a455dd78-e351-449c-903a-5c0e0c50faf5" (UID: "a455dd78-e351-449c-903a-5c0e0c50faf5"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:03:36 crc kubenswrapper[4881]: I0126 13:03:36.049659 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/a455dd78-e351-449c-903a-5c0e0c50faf5-pod-info" (OuterVolumeSpecName: "pod-info") pod "a455dd78-e351-449c-903a-5c0e0c50faf5" (UID: "a455dd78-e351-449c-903a-5c0e0c50faf5"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 26 13:03:36 crc kubenswrapper[4881]: I0126 13:03:36.066277 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a455dd78-e351-449c-903a-5c0e0c50faf5-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "a455dd78-e351-449c-903a-5c0e0c50faf5" (UID: "a455dd78-e351-449c-903a-5c0e0c50faf5"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:03:36 crc kubenswrapper[4881]: I0126 13:03:36.071980 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a455dd78-e351-449c-903a-5c0e0c50faf5-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "a455dd78-e351-449c-903a-5c0e0c50faf5" (UID: "a455dd78-e351-449c-903a-5c0e0c50faf5"). InnerVolumeSpecName "rabbitmq-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:03:36 crc kubenswrapper[4881]: I0126 13:03:36.087143 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "persistence") pod "a455dd78-e351-449c-903a-5c0e0c50faf5" (UID: "a455dd78-e351-449c-903a-5c0e0c50faf5"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 26 13:03:36 crc kubenswrapper[4881]: I0126 13:03:36.121009 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a455dd78-e351-449c-903a-5c0e0c50faf5-config-data" (OuterVolumeSpecName: "config-data") pod "a455dd78-e351-449c-903a-5c0e0c50faf5" (UID: "a455dd78-e351-449c-903a-5c0e0c50faf5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:03:36 crc kubenswrapper[4881]: I0126 13:03:36.134296 4881 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a455dd78-e351-449c-903a-5c0e0c50faf5-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 26 13:03:36 crc kubenswrapper[4881]: I0126 13:03:36.134343 4881 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Jan 26 13:03:36 crc kubenswrapper[4881]: I0126 13:03:36.134353 4881 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a455dd78-e351-449c-903a-5c0e0c50faf5-pod-info\") on node \"crc\" DevicePath \"\"" Jan 26 13:03:36 crc kubenswrapper[4881]: I0126 13:03:36.134362 4881 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a455dd78-e351-449c-903a-5c0e0c50faf5-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 26 13:03:36 crc kubenswrapper[4881]: I0126 13:03:36.134372 4881 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a455dd78-e351-449c-903a-5c0e0c50faf5-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 26 13:03:36 crc kubenswrapper[4881]: I0126 13:03:36.134381 4881 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a455dd78-e351-449c-903a-5c0e0c50faf5-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 26 13:03:36 crc kubenswrapper[4881]: I0126 13:03:36.134390 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jglfb\" (UniqueName: \"kubernetes.io/projected/a455dd78-e351-449c-903a-5c0e0c50faf5-kube-api-access-jglfb\") on node \"crc\" DevicePath \"\"" Jan 26 13:03:36 crc kubenswrapper[4881]: I0126 13:03:36.134398 4881 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a455dd78-e351-449c-903a-5c0e0c50faf5-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 26 13:03:36 crc kubenswrapper[4881]: I0126 13:03:36.134406 4881 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a455dd78-e351-449c-903a-5c0e0c50faf5-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 13:03:36 crc kubenswrapper[4881]: I0126 13:03:36.198124 4881 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" 
Jan 26 13:03:36 crc kubenswrapper[4881]: I0126 13:03:36.205638 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a455dd78-e351-449c-903a-5c0e0c50faf5-server-conf" (OuterVolumeSpecName: "server-conf") pod "a455dd78-e351-449c-903a-5c0e0c50faf5" (UID: "a455dd78-e351-449c-903a-5c0e0c50faf5"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 13:03:36 crc kubenswrapper[4881]: I0126 13:03:36.206579 4881 scope.go:117] "RemoveContainer" containerID="7250f2723c74e8d207341ab5049c947cdf5eb7b5dafa1797f04a53d0bbac63e5"
Jan 26 13:03:36 crc kubenswrapper[4881]: I0126 13:03:36.235724 4881 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\""
Jan 26 13:03:36 crc kubenswrapper[4881]: I0126 13:03:36.235752 4881 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a455dd78-e351-449c-903a-5c0e0c50faf5-server-conf\") on node \"crc\" DevicePath \"\""
Jan 26 13:03:36 crc kubenswrapper[4881]: I0126 13:03:36.269406 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a455dd78-e351-449c-903a-5c0e0c50faf5-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "a455dd78-e351-449c-903a-5c0e0c50faf5" (UID: "a455dd78-e351-449c-903a-5c0e0c50faf5"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 13:03:36 crc kubenswrapper[4881]: I0126 13:03:36.314690 4881 scope.go:117] "RemoveContainer" containerID="cdabe088bd4208f1a5d6c3e42d8d7bcf2e85b11d78f2b153632a66b1c157468d"
Jan 26 13:03:36 crc kubenswrapper[4881]: E0126 13:03:36.321240 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cdabe088bd4208f1a5d6c3e42d8d7bcf2e85b11d78f2b153632a66b1c157468d\": container with ID starting with cdabe088bd4208f1a5d6c3e42d8d7bcf2e85b11d78f2b153632a66b1c157468d not found: ID does not exist" containerID="cdabe088bd4208f1a5d6c3e42d8d7bcf2e85b11d78f2b153632a66b1c157468d"
Jan 26 13:03:36 crc kubenswrapper[4881]: I0126 13:03:36.321282 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cdabe088bd4208f1a5d6c3e42d8d7bcf2e85b11d78f2b153632a66b1c157468d"} err="failed to get container status \"cdabe088bd4208f1a5d6c3e42d8d7bcf2e85b11d78f2b153632a66b1c157468d\": rpc error: code = NotFound desc = could not find container \"cdabe088bd4208f1a5d6c3e42d8d7bcf2e85b11d78f2b153632a66b1c157468d\": container with ID starting with cdabe088bd4208f1a5d6c3e42d8d7bcf2e85b11d78f2b153632a66b1c157468d not found: ID does not exist"
Jan 26 13:03:36 crc kubenswrapper[4881]: I0126 13:03:36.321306 4881 scope.go:117] "RemoveContainer" containerID="7250f2723c74e8d207341ab5049c947cdf5eb7b5dafa1797f04a53d0bbac63e5"
Jan 26 13:03:36 crc kubenswrapper[4881]: E0126 13:03:36.324631 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7250f2723c74e8d207341ab5049c947cdf5eb7b5dafa1797f04a53d0bbac63e5\": container with ID starting with 7250f2723c74e8d207341ab5049c947cdf5eb7b5dafa1797f04a53d0bbac63e5 not found: ID does not exist" containerID="7250f2723c74e8d207341ab5049c947cdf5eb7b5dafa1797f04a53d0bbac63e5"
Jan 26 13:03:36 crc kubenswrapper[4881]: I0126 13:03:36.324669 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7250f2723c74e8d207341ab5049c947cdf5eb7b5dafa1797f04a53d0bbac63e5"} err="failed to get container status \"7250f2723c74e8d207341ab5049c947cdf5eb7b5dafa1797f04a53d0bbac63e5\": rpc error: code = NotFound desc = could not find container \"7250f2723c74e8d207341ab5049c947cdf5eb7b5dafa1797f04a53d0bbac63e5\": container with ID starting with 7250f2723c74e8d207341ab5049c947cdf5eb7b5dafa1797f04a53d0bbac63e5 not found: ID does not exist"
Jan 26 13:03:36 crc kubenswrapper[4881]: I0126 13:03:36.338197 4881 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a455dd78-e351-449c-903a-5c0e0c50faf5-rabbitmq-confd\") on node \"crc\" DevicePath \"\""
Jan 26 13:03:36 crc kubenswrapper[4881]: I0126 13:03:36.389179 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 26 13:03:36 crc kubenswrapper[4881]: I0126 13:03:36.410992 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 26 13:03:36 crc kubenswrapper[4881]: I0126 13:03:36.426127 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 26 13:03:36 crc kubenswrapper[4881]: E0126 13:03:36.426500 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a455dd78-e351-449c-903a-5c0e0c50faf5" containerName="rabbitmq"
Jan 26 13:03:36 crc kubenswrapper[4881]: I0126 13:03:36.426518 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="a455dd78-e351-449c-903a-5c0e0c50faf5" containerName="rabbitmq"
Jan 26 13:03:36 crc kubenswrapper[4881]: E0126 13:03:36.426539 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a455dd78-e351-449c-903a-5c0e0c50faf5" containerName="setup-container"
Jan 26 13:03:36 crc kubenswrapper[4881]: I0126 13:03:36.426545 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="a455dd78-e351-449c-903a-5c0e0c50faf5" containerName="setup-container"
Jan 26 13:03:36 crc kubenswrapper[4881]: E0126 13:03:36.426557 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f890218f-7c5f-4227-9f47-68ebd7d7f7ae" containerName="extract-utilities"
Jan 26 13:03:36 crc kubenswrapper[4881]: I0126 13:03:36.426563 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="f890218f-7c5f-4227-9f47-68ebd7d7f7ae" containerName="extract-utilities"
Jan 26 13:03:36 crc kubenswrapper[4881]: E0126 13:03:36.426576 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b52903d-117a-4c80-a478-bfb576c26c00" containerName="extract-utilities"
Jan 26 13:03:36 crc kubenswrapper[4881]: I0126 13:03:36.426583 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b52903d-117a-4c80-a478-bfb576c26c00" containerName="extract-utilities"
Jan 26 13:03:36 crc kubenswrapper[4881]: E0126 13:03:36.426593 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f890218f-7c5f-4227-9f47-68ebd7d7f7ae" containerName="registry-server"
Jan 26 13:03:36 crc kubenswrapper[4881]: I0126 13:03:36.426601 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="f890218f-7c5f-4227-9f47-68ebd7d7f7ae" containerName="registry-server"
Jan 26 13:03:36 crc kubenswrapper[4881]: E0126 13:03:36.426609 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b52903d-117a-4c80-a478-bfb576c26c00" containerName="extract-content"
Jan 26 13:03:36 crc kubenswrapper[4881]: I0126 13:03:36.426615 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b52903d-117a-4c80-a478-bfb576c26c00" containerName="extract-content"
Jan 26 13:03:36 crc kubenswrapper[4881]: E0126 13:03:36.426624 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f890218f-7c5f-4227-9f47-68ebd7d7f7ae" containerName="extract-content"
Jan 26 13:03:36 crc kubenswrapper[4881]: I0126 13:03:36.426630 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="f890218f-7c5f-4227-9f47-68ebd7d7f7ae" containerName="extract-content"
Jan 26 13:03:36 crc kubenswrapper[4881]: E0126 13:03:36.426647 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b52903d-117a-4c80-a478-bfb576c26c00" containerName="registry-server"
Jan 26 13:03:36 crc kubenswrapper[4881]: I0126 13:03:36.426653 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b52903d-117a-4c80-a478-bfb576c26c00" containerName="registry-server"
Jan 26 13:03:36 crc kubenswrapper[4881]: I0126 13:03:36.426825 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b52903d-117a-4c80-a478-bfb576c26c00" containerName="registry-server"
Jan 26 13:03:36 crc kubenswrapper[4881]: I0126 13:03:36.426836 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="f890218f-7c5f-4227-9f47-68ebd7d7f7ae" containerName="registry-server"
Jan 26 13:03:36 crc kubenswrapper[4881]: I0126 13:03:36.426847 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="a455dd78-e351-449c-903a-5c0e0c50faf5" containerName="rabbitmq"
Jan 26 13:03:36 crc kubenswrapper[4881]: I0126 13:03:36.428591 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Jan 26 13:03:36 crc kubenswrapper[4881]: I0126 13:03:36.434425 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data"
Jan 26 13:03:36 crc kubenswrapper[4881]: I0126 13:03:36.434629 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf"
Jan 26 13:03:36 crc kubenswrapper[4881]: I0126 13:03:36.435038 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf"
Jan 26 13:03:36 crc kubenswrapper[4881]: I0126 13:03:36.435221 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-q7rk9"
Jan 26 13:03:36 crc kubenswrapper[4881]: I0126 13:03:36.435350 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie"
Jan 26 13:03:36 crc kubenswrapper[4881]: I0126 13:03:36.435485 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user"
Jan 26 13:03:36 crc kubenswrapper[4881]: I0126 13:03:36.444724 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 26 13:03:36 crc kubenswrapper[4881]: I0126 13:03:36.444870 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc"
Jan 26 13:03:36 crc kubenswrapper[4881]: I0126 13:03:36.546595 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a258482a-e394-4833-9bef-1fc3abc0c6a7-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"a258482a-e394-4833-9bef-1fc3abc0c6a7\") " pod="openstack/rabbitmq-server-0"
Jan 26 13:03:36 crc kubenswrapper[4881]: I0126 13:03:36.547100 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a258482a-e394-4833-9bef-1fc3abc0c6a7-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"a258482a-e394-4833-9bef-1fc3abc0c6a7\") " pod="openstack/rabbitmq-server-0"
Jan 26 13:03:36 crc kubenswrapper[4881]: I0126 13:03:36.547229 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a258482a-e394-4833-9bef-1fc3abc0c6a7-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"a258482a-e394-4833-9bef-1fc3abc0c6a7\") " pod="openstack/rabbitmq-server-0"
Jan 26 13:03:36 crc kubenswrapper[4881]: I0126 13:03:36.547336 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a258482a-e394-4833-9bef-1fc3abc0c6a7-server-conf\") pod \"rabbitmq-server-0\" (UID: \"a258482a-e394-4833-9bef-1fc3abc0c6a7\") " pod="openstack/rabbitmq-server-0"
Jan 26 13:03:36 crc kubenswrapper[4881]: I0126 13:03:36.547441 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pg5gt\" (UniqueName: \"kubernetes.io/projected/a258482a-e394-4833-9bef-1fc3abc0c6a7-kube-api-access-pg5gt\") pod \"rabbitmq-server-0\" (UID: \"a258482a-e394-4833-9bef-1fc3abc0c6a7\") " pod="openstack/rabbitmq-server-0"
Jan 26 13:03:36 crc kubenswrapper[4881]: I0126 13:03:36.547546 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a258482a-e394-4833-9bef-1fc3abc0c6a7-pod-info\") pod \"rabbitmq-server-0\" (UID: \"a258482a-e394-4833-9bef-1fc3abc0c6a7\") " pod="openstack/rabbitmq-server-0"
Jan 26 13:03:36 crc kubenswrapper[4881]: I0126 13:03:36.547643 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a258482a-e394-4833-9bef-1fc3abc0c6a7-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"a258482a-e394-4833-9bef-1fc3abc0c6a7\") " pod="openstack/rabbitmq-server-0"
Jan 26 13:03:36 crc kubenswrapper[4881]: I0126 13:03:36.547768 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a258482a-e394-4833-9bef-1fc3abc0c6a7-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"a258482a-e394-4833-9bef-1fc3abc0c6a7\") " pod="openstack/rabbitmq-server-0"
Jan 26 13:03:36 crc kubenswrapper[4881]: I0126 13:03:36.547886 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a258482a-e394-4833-9bef-1fc3abc0c6a7-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"a258482a-e394-4833-9bef-1fc3abc0c6a7\") " pod="openstack/rabbitmq-server-0"
Jan 26 13:03:36 crc kubenswrapper[4881]: I0126 13:03:36.547972 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a258482a-e394-4833-9bef-1fc3abc0c6a7-config-data\") pod \"rabbitmq-server-0\" (UID: \"a258482a-e394-4833-9bef-1fc3abc0c6a7\") " pod="openstack/rabbitmq-server-0"
Jan 26 13:03:36 crc kubenswrapper[4881]: I0126 13:03:36.548081 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"a258482a-e394-4833-9bef-1fc3abc0c6a7\") " pod="openstack/rabbitmq-server-0"
Jan 26 13:03:36 crc kubenswrapper[4881]: I0126 13:03:36.651388 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a258482a-e394-4833-9bef-1fc3abc0c6a7-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"a258482a-e394-4833-9bef-1fc3abc0c6a7\") " pod="openstack/rabbitmq-server-0"
Jan 26 13:03:36 crc kubenswrapper[4881]: I0126 13:03:36.651723 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a258482a-e394-4833-9bef-1fc3abc0c6a7-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"a258482a-e394-4833-9bef-1fc3abc0c6a7\") " pod="openstack/rabbitmq-server-0"
Jan 26 13:03:36 crc kubenswrapper[4881]: I0126 13:03:36.651855 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a258482a-e394-4833-9bef-1fc3abc0c6a7-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"a258482a-e394-4833-9bef-1fc3abc0c6a7\") " pod="openstack/rabbitmq-server-0"
Jan 26 13:03:36 crc kubenswrapper[4881]: I0126 13:03:36.651963 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a258482a-e394-4833-9bef-1fc3abc0c6a7-server-conf\") pod \"rabbitmq-server-0\" (UID: \"a258482a-e394-4833-9bef-1fc3abc0c6a7\") " pod="openstack/rabbitmq-server-0"
Jan 26 13:03:36 crc kubenswrapper[4881]: I0126 13:03:36.652073 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pg5gt\" (UniqueName: \"kubernetes.io/projected/a258482a-e394-4833-9bef-1fc3abc0c6a7-kube-api-access-pg5gt\") pod \"rabbitmq-server-0\" (UID: \"a258482a-e394-4833-9bef-1fc3abc0c6a7\") " pod="openstack/rabbitmq-server-0"
Jan 26 13:03:36 crc kubenswrapper[4881]: I0126 13:03:36.652166 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a258482a-e394-4833-9bef-1fc3abc0c6a7-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"a258482a-e394-4833-9bef-1fc3abc0c6a7\") " pod="openstack/rabbitmq-server-0"
Jan 26 13:03:36 crc kubenswrapper[4881]: I0126 13:03:36.652243 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a258482a-e394-4833-9bef-1fc3abc0c6a7-pod-info\") pod \"rabbitmq-server-0\" (UID: \"a258482a-e394-4833-9bef-1fc3abc0c6a7\") " pod="openstack/rabbitmq-server-0"
Jan 26 13:03:36 crc kubenswrapper[4881]: I0126 13:03:36.652372 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a258482a-e394-4833-9bef-1fc3abc0c6a7-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"a258482a-e394-4833-9bef-1fc3abc0c6a7\") " pod="openstack/rabbitmq-server-0"
Jan 26 13:03:36 crc kubenswrapper[4881]: I0126 13:03:36.652472 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a258482a-e394-4833-9bef-1fc3abc0c6a7-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"a258482a-e394-4833-9bef-1fc3abc0c6a7\") " pod="openstack/rabbitmq-server-0"
Jan 26 13:03:36 crc kubenswrapper[4881]: I0126 13:03:36.652584 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a258482a-e394-4833-9bef-1fc3abc0c6a7-config-data\") pod \"rabbitmq-server-0\" (UID: \"a258482a-e394-4833-9bef-1fc3abc0c6a7\") " pod="openstack/rabbitmq-server-0"
Jan 26 13:03:36 crc kubenswrapper[4881]: I0126 13:03:36.652692 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"a258482a-e394-4833-9bef-1fc3abc0c6a7\") " pod="openstack/rabbitmq-server-0"
Jan 26 13:03:36 crc kubenswrapper[4881]: I0126 13:03:36.653096 4881 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"a258482a-e394-4833-9bef-1fc3abc0c6a7\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/rabbitmq-server-0"
Jan 26 13:03:36 crc kubenswrapper[4881]: I0126 13:03:36.654510 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a258482a-e394-4833-9bef-1fc3abc0c6a7-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"a258482a-e394-4833-9bef-1fc3abc0c6a7\") " pod="openstack/rabbitmq-server-0"
Jan 26 13:03:36 crc kubenswrapper[4881]: I0126 13:03:36.679408 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a258482a-e394-4833-9bef-1fc3abc0c6a7-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"a258482a-e394-4833-9bef-1fc3abc0c6a7\") " pod="openstack/rabbitmq-server-0"
Jan 26 13:03:36 crc kubenswrapper[4881]: I0126 13:03:36.683138 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a258482a-e394-4833-9bef-1fc3abc0c6a7-server-conf\") pod \"rabbitmq-server-0\" (UID: \"a258482a-e394-4833-9bef-1fc3abc0c6a7\") " pod="openstack/rabbitmq-server-0"
Jan 26 13:03:36 crc kubenswrapper[4881]: I0126 13:03:36.690199 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a258482a-e394-4833-9bef-1fc3abc0c6a7-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"a258482a-e394-4833-9bef-1fc3abc0c6a7\") " pod="openstack/rabbitmq-server-0"
Jan 26 13:03:36 crc kubenswrapper[4881]: I0126 13:03:36.692302 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a258482a-e394-4833-9bef-1fc3abc0c6a7-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"a258482a-e394-4833-9bef-1fc3abc0c6a7\") " pod="openstack/rabbitmq-server-0"
Jan 26 13:03:36 crc kubenswrapper[4881]: I0126 13:03:36.694117 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a258482a-e394-4833-9bef-1fc3abc0c6a7-config-data\") pod \"rabbitmq-server-0\" (UID: \"a258482a-e394-4833-9bef-1fc3abc0c6a7\") " pod="openstack/rabbitmq-server-0"
Jan 26 13:03:36 crc kubenswrapper[4881]: I0126 13:03:36.695078 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a258482a-e394-4833-9bef-1fc3abc0c6a7-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"a258482a-e394-4833-9bef-1fc3abc0c6a7\") " pod="openstack/rabbitmq-server-0"
Jan 26 13:03:36 crc kubenswrapper[4881]: I0126 13:03:36.705242 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a258482a-e394-4833-9bef-1fc3abc0c6a7-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"a258482a-e394-4833-9bef-1fc3abc0c6a7\") " pod="openstack/rabbitmq-server-0"
Jan 26 13:03:36 crc kubenswrapper[4881]: I0126 13:03:36.706196 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a258482a-e394-4833-9bef-1fc3abc0c6a7-pod-info\") pod \"rabbitmq-server-0\" (UID: \"a258482a-e394-4833-9bef-1fc3abc0c6a7\") " pod="openstack/rabbitmq-server-0"
Jan 26 13:03:36 crc kubenswrapper[4881]: I0126 13:03:36.721348 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pg5gt\" (UniqueName: \"kubernetes.io/projected/a258482a-e394-4833-9bef-1fc3abc0c6a7-kube-api-access-pg5gt\") pod \"rabbitmq-server-0\" (UID: \"a258482a-e394-4833-9bef-1fc3abc0c6a7\") " pod="openstack/rabbitmq-server-0"
Jan 26 13:03:36 crc kubenswrapper[4881]: I0126 13:03:36.745495 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"a258482a-e394-4833-9bef-1fc3abc0c6a7\") " pod="openstack/rabbitmq-server-0"
Jan 26 13:03:36 crc kubenswrapper[4881]: I0126 13:03:36.766860 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Jan 26 13:03:36 crc kubenswrapper[4881]: I0126 13:03:36.861796 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Jan 26 13:03:36 crc kubenswrapper[4881]: I0126 13:03:36.969843 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/fb687a6e-7e1f-4697-8ab1-88ad03dd2951-server-conf\") pod \"fb687a6e-7e1f-4697-8ab1-88ad03dd2951\" (UID: \"fb687a6e-7e1f-4697-8ab1-88ad03dd2951\") "
Jan 26 13:03:36 crc kubenswrapper[4881]: I0126 13:03:36.969916 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/fb687a6e-7e1f-4697-8ab1-88ad03dd2951-erlang-cookie-secret\") pod \"fb687a6e-7e1f-4697-8ab1-88ad03dd2951\" (UID: \"fb687a6e-7e1f-4697-8ab1-88ad03dd2951\") "
Jan 26 13:03:36 crc kubenswrapper[4881]: I0126 13:03:36.969948 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/fb687a6e-7e1f-4697-8ab1-88ad03dd2951-rabbitmq-plugins\") pod \"fb687a6e-7e1f-4697-8ab1-88ad03dd2951\" (UID: \"fb687a6e-7e1f-4697-8ab1-88ad03dd2951\") "
Jan 26 13:03:36 crc kubenswrapper[4881]: I0126 13:03:36.969968 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/fb687a6e-7e1f-4697-8ab1-88ad03dd2951-plugins-conf\") pod \"fb687a6e-7e1f-4697-8ab1-88ad03dd2951\" (UID: \"fb687a6e-7e1f-4697-8ab1-88ad03dd2951\") "
Jan 26 13:03:36 crc kubenswrapper[4881]: I0126 13:03:36.970002 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/fb687a6e-7e1f-4697-8ab1-88ad03dd2951-pod-info\") pod \"fb687a6e-7e1f-4697-8ab1-88ad03dd2951\" (UID: \"fb687a6e-7e1f-4697-8ab1-88ad03dd2951\") "
Jan 26 13:03:36 crc kubenswrapper[4881]: I0126 13:03:36.970039 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/fb687a6e-7e1f-4697-8ab1-88ad03dd2951-rabbitmq-erlang-cookie\") pod \"fb687a6e-7e1f-4697-8ab1-88ad03dd2951\" (UID: \"fb687a6e-7e1f-4697-8ab1-88ad03dd2951\") "
Jan 26 13:03:36 crc kubenswrapper[4881]: I0126 13:03:36.970090 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/fb687a6e-7e1f-4697-8ab1-88ad03dd2951-rabbitmq-tls\") pod \"fb687a6e-7e1f-4697-8ab1-88ad03dd2951\" (UID: \"fb687a6e-7e1f-4697-8ab1-88ad03dd2951\") "
Jan 26 13:03:36 crc kubenswrapper[4881]: I0126 13:03:36.970117 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b22mj\" (UniqueName: \"kubernetes.io/projected/fb687a6e-7e1f-4697-8ab1-88ad03dd2951-kube-api-access-b22mj\") pod \"fb687a6e-7e1f-4697-8ab1-88ad03dd2951\" (UID: \"fb687a6e-7e1f-4697-8ab1-88ad03dd2951\") "
Jan 26 13:03:36 crc kubenswrapper[4881]: I0126 13:03:36.970139 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"fb687a6e-7e1f-4697-8ab1-88ad03dd2951\" (UID: \"fb687a6e-7e1f-4697-8ab1-88ad03dd2951\") "
Jan 26 13:03:36 crc kubenswrapper[4881]: I0126 13:03:36.970209 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fb687a6e-7e1f-4697-8ab1-88ad03dd2951-config-data\") pod \"fb687a6e-7e1f-4697-8ab1-88ad03dd2951\" (UID: \"fb687a6e-7e1f-4697-8ab1-88ad03dd2951\") "
Jan 26 13:03:36 crc kubenswrapper[4881]: I0126 13:03:36.970235 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/fb687a6e-7e1f-4697-8ab1-88ad03dd2951-rabbitmq-confd\") pod \"fb687a6e-7e1f-4697-8ab1-88ad03dd2951\" (UID: \"fb687a6e-7e1f-4697-8ab1-88ad03dd2951\") "
Jan 26 13:03:36 crc kubenswrapper[4881]: I0126 13:03:36.975035 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fb687a6e-7e1f-4697-8ab1-88ad03dd2951-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "fb687a6e-7e1f-4697-8ab1-88ad03dd2951" (UID: "fb687a6e-7e1f-4697-8ab1-88ad03dd2951"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 13:03:36 crc kubenswrapper[4881]: I0126 13:03:36.975595 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fb687a6e-7e1f-4697-8ab1-88ad03dd2951-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "fb687a6e-7e1f-4697-8ab1-88ad03dd2951" (UID: "fb687a6e-7e1f-4697-8ab1-88ad03dd2951"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 13:03:36 crc kubenswrapper[4881]: I0126 13:03:36.976173 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb687a6e-7e1f-4697-8ab1-88ad03dd2951-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "fb687a6e-7e1f-4697-8ab1-88ad03dd2951" (UID: "fb687a6e-7e1f-4697-8ab1-88ad03dd2951"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 13:03:36 crc kubenswrapper[4881]: I0126 13:03:36.987358 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb687a6e-7e1f-4697-8ab1-88ad03dd2951-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "fb687a6e-7e1f-4697-8ab1-88ad03dd2951" (UID: "fb687a6e-7e1f-4697-8ab1-88ad03dd2951"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 13:03:36 crc kubenswrapper[4881]: I0126 13:03:36.987854 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/fb687a6e-7e1f-4697-8ab1-88ad03dd2951-pod-info" (OuterVolumeSpecName: "pod-info") pod "fb687a6e-7e1f-4697-8ab1-88ad03dd2951" (UID: "fb687a6e-7e1f-4697-8ab1-88ad03dd2951"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue ""
Jan 26 13:03:36 crc kubenswrapper[4881]: I0126 13:03:36.987859 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "persistence") pod "fb687a6e-7e1f-4697-8ab1-88ad03dd2951" (UID: "fb687a6e-7e1f-4697-8ab1-88ad03dd2951"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Jan 26 13:03:36 crc kubenswrapper[4881]: I0126 13:03:36.989246 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb687a6e-7e1f-4697-8ab1-88ad03dd2951-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "fb687a6e-7e1f-4697-8ab1-88ad03dd2951" (UID: "fb687a6e-7e1f-4697-8ab1-88ad03dd2951"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 13:03:36 crc kubenswrapper[4881]: I0126 13:03:36.991760 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb687a6e-7e1f-4697-8ab1-88ad03dd2951-kube-api-access-b22mj" (OuterVolumeSpecName: "kube-api-access-b22mj") pod "fb687a6e-7e1f-4697-8ab1-88ad03dd2951" (UID: "fb687a6e-7e1f-4697-8ab1-88ad03dd2951"). InnerVolumeSpecName "kube-api-access-b22mj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 13:03:37 crc kubenswrapper[4881]: I0126 13:03:37.024596 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb687a6e-7e1f-4697-8ab1-88ad03dd2951-config-data" (OuterVolumeSpecName: "config-data") pod "fb687a6e-7e1f-4697-8ab1-88ad03dd2951" (UID: "fb687a6e-7e1f-4697-8ab1-88ad03dd2951"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 13:03:37 crc kubenswrapper[4881]: I0126 13:03:37.035794 4881 generic.go:334] "Generic (PLEG): container finished" podID="fb687a6e-7e1f-4697-8ab1-88ad03dd2951" containerID="8f52705d21d02026b990bfeff9344bb0298572378db854544d7a243fac416eb9" exitCode=0
Jan 26 13:03:37 crc kubenswrapper[4881]: I0126 13:03:37.035856 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"fb687a6e-7e1f-4697-8ab1-88ad03dd2951","Type":"ContainerDied","Data":"8f52705d21d02026b990bfeff9344bb0298572378db854544d7a243fac416eb9"}
Jan 26 13:03:37 crc kubenswrapper[4881]: I0126 13:03:37.035976 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"fb687a6e-7e1f-4697-8ab1-88ad03dd2951","Type":"ContainerDied","Data":"bbc2b54d76f963912cc62f1957efe0cfca240f59a2c4c01117f97e433c1741f3"}
Jan 26 13:03:37 crc kubenswrapper[4881]: I0126 13:03:37.035931 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Jan 26 13:03:37 crc kubenswrapper[4881]: I0126 13:03:37.035993 4881 scope.go:117] "RemoveContainer" containerID="8f52705d21d02026b990bfeff9344bb0298572378db854544d7a243fac416eb9"
Jan 26 13:03:37 crc kubenswrapper[4881]: I0126 13:03:37.060550 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb687a6e-7e1f-4697-8ab1-88ad03dd2951-server-conf" (OuterVolumeSpecName: "server-conf") pod "fb687a6e-7e1f-4697-8ab1-88ad03dd2951" (UID: "fb687a6e-7e1f-4697-8ab1-88ad03dd2951"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 13:03:37 crc kubenswrapper[4881]: I0126 13:03:37.071429 4881 scope.go:117] "RemoveContainer" containerID="b51af514578b9120c05cfe232e2a4c498bd4e40f69c0be322b0907415bfed6a9"
Jan 26 13:03:37 crc kubenswrapper[4881]: I0126 13:03:37.074646 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b22mj\" (UniqueName: \"kubernetes.io/projected/fb687a6e-7e1f-4697-8ab1-88ad03dd2951-kube-api-access-b22mj\") on node \"crc\" DevicePath \"\""
Jan 26 13:03:37 crc kubenswrapper[4881]: I0126 13:03:37.074686 4881 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" "
Jan 26 13:03:37 crc kubenswrapper[4881]: I0126 13:03:37.074711 4881 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fb687a6e-7e1f-4697-8ab1-88ad03dd2951-config-data\") on node \"crc\" DevicePath \"\""
Jan 26 13:03:37 crc kubenswrapper[4881]: I0126 13:03:37.074721 4881 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/fb687a6e-7e1f-4697-8ab1-88ad03dd2951-server-conf\") on node \"crc\" DevicePath \"\""
Jan 26 13:03:37 crc kubenswrapper[4881]: I0126 13:03:37.074729 4881 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/fb687a6e-7e1f-4697-8ab1-88ad03dd2951-erlang-cookie-secret\") on node \"crc\" DevicePath \"\""
Jan 26 13:03:37 crc kubenswrapper[4881]: I0126 13:03:37.074737 4881 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/fb687a6e-7e1f-4697-8ab1-88ad03dd2951-rabbitmq-plugins\") on node \"crc\" DevicePath \"\""
Jan 26 13:03:37 crc kubenswrapper[4881]: I0126 13:03:37.074744 4881 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/fb687a6e-7e1f-4697-8ab1-88ad03dd2951-plugins-conf\") on node \"crc\" DevicePath \"\""
Jan 26 13:03:37 crc kubenswrapper[4881]: I0126 13:03:37.074751 4881 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/fb687a6e-7e1f-4697-8ab1-88ad03dd2951-pod-info\") on node \"crc\" DevicePath \"\""
Jan 26 13:03:37 crc kubenswrapper[4881]: I0126 13:03:37.074760 4881 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/fb687a6e-7e1f-4697-8ab1-88ad03dd2951-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\""
Jan 26 13:03:37 crc kubenswrapper[4881]: I0126 13:03:37.074768 4881 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/fb687a6e-7e1f-4697-8ab1-88ad03dd2951-rabbitmq-tls\") on node \"crc\" DevicePath \"\""
Jan 26 13:03:37 crc kubenswrapper[4881]: I0126 13:03:37.109043 4881 scope.go:117] "RemoveContainer" containerID="8f52705d21d02026b990bfeff9344bb0298572378db854544d7a243fac416eb9"
Jan 26 13:03:37 crc kubenswrapper[4881]: E0126 13:03:37.109860 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f52705d21d02026b990bfeff9344bb0298572378db854544d7a243fac416eb9\": container with ID starting with 8f52705d21d02026b990bfeff9344bb0298572378db854544d7a243fac416eb9 not found: ID does not exist" containerID="8f52705d21d02026b990bfeff9344bb0298572378db854544d7a243fac416eb9"
Jan 26 13:03:37 crc kubenswrapper[4881]: I0126 13:03:37.109924 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f52705d21d02026b990bfeff9344bb0298572378db854544d7a243fac416eb9"} err="failed to get container status \"8f52705d21d02026b990bfeff9344bb0298572378db854544d7a243fac416eb9\": rpc error: code = NotFound desc = could not find container \"8f52705d21d02026b990bfeff9344bb0298572378db854544d7a243fac416eb9\": container with ID starting with 8f52705d21d02026b990bfeff9344bb0298572378db854544d7a243fac416eb9 not found: ID does not exist"
Jan 26 13:03:37 crc kubenswrapper[4881]: I0126 13:03:37.109951 4881 scope.go:117] "RemoveContainer" containerID="b51af514578b9120c05cfe232e2a4c498bd4e40f69c0be322b0907415bfed6a9"
Jan 26 13:03:37 crc kubenswrapper[4881]: E0126 13:03:37.110329 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b51af514578b9120c05cfe232e2a4c498bd4e40f69c0be322b0907415bfed6a9\": container with ID starting with b51af514578b9120c05cfe232e2a4c498bd4e40f69c0be322b0907415bfed6a9 not found: ID does not exist" containerID="b51af514578b9120c05cfe232e2a4c498bd4e40f69c0be322b0907415bfed6a9"
Jan 26 13:03:37 crc kubenswrapper[4881]: I0126 13:03:37.110375 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b51af514578b9120c05cfe232e2a4c498bd4e40f69c0be322b0907415bfed6a9"} err="failed to get container status \"b51af514578b9120c05cfe232e2a4c498bd4e40f69c0be322b0907415bfed6a9\": rpc error: code = NotFound desc = could not find container \"b51af514578b9120c05cfe232e2a4c498bd4e40f69c0be322b0907415bfed6a9\": container with ID starting with b51af514578b9120c05cfe232e2a4c498bd4e40f69c0be322b0907415bfed6a9 not found: ID does not exist"
Jan 26 13:03:37 crc kubenswrapper[4881]: I0126 13:03:37.111814 4881 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc"
Jan 26 13:03:37 crc kubenswrapper[4881]: I0126 13:03:37.116057 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb687a6e-7e1f-4697-8ab1-88ad03dd2951-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "fb687a6e-7e1f-4697-8ab1-88ad03dd2951" (UID: "fb687a6e-7e1f-4697-8ab1-88ad03dd2951"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 13:03:37 crc kubenswrapper[4881]: I0126 13:03:37.176930 4881 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\""
Jan 26 13:03:37 crc kubenswrapper[4881]: I0126 13:03:37.176954 4881 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/fb687a6e-7e1f-4697-8ab1-88ad03dd2951-rabbitmq-confd\") on node \"crc\" DevicePath \"\""
Jan 26 13:03:37 crc kubenswrapper[4881]: I0126 13:03:37.280605 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 26 13:03:37 crc kubenswrapper[4881]: I0126 13:03:37.394486 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 26 13:03:37 crc kubenswrapper[4881]: I0126 13:03:37.406075 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 26 13:03:37 crc kubenswrapper[4881]: I0126 13:03:37.414203 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 26 13:03:37 crc kubenswrapper[4881]: E0126 13:03:37.414717 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb687a6e-7e1f-4697-8ab1-88ad03dd2951" containerName="rabbitmq"
Jan 26 13:03:37 crc kubenswrapper[4881]: I0126 13:03:37.414738 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb687a6e-7e1f-4697-8ab1-88ad03dd2951" containerName="rabbitmq"
Jan 26 13:03:37 crc kubenswrapper[4881]: E0126 13:03:37.414790 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb687a6e-7e1f-4697-8ab1-88ad03dd2951" containerName="setup-container"
Jan 26 13:03:37 crc kubenswrapper[4881]: I0126 13:03:37.414801 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb687a6e-7e1f-4697-8ab1-88ad03dd2951" containerName="setup-container"
Jan 26 13:03:37 crc kubenswrapper[4881]: I0126 13:03:37.415019 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb687a6e-7e1f-4697-8ab1-88ad03dd2951" containerName="rabbitmq"
Jan 26 13:03:37 crc kubenswrapper[4881]: I0126 13:03:37.416237 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Jan 26 13:03:37 crc kubenswrapper[4881]: I0126 13:03:37.418649 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie"
Jan 26 13:03:37 crc kubenswrapper[4881]: I0126 13:03:37.418840 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf"
Jan 26 13:03:37 crc kubenswrapper[4881]: I0126 13:03:37.420822 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data"
Jan 26 13:03:37 crc kubenswrapper[4881]: I0126 13:03:37.420956 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf"
Jan 26 13:03:37 crc kubenswrapper[4881]: I0126 13:03:37.421084 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-r8cwx"
Jan 26 13:03:37 crc kubenswrapper[4881]: I0126 13:03:37.421174 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user"
Jan 26 13:03:37 crc kubenswrapper[4881]: I0126 13:03:37.421533 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc"
Jan 26 13:03:37 crc kubenswrapper[4881]: I0126 13:03:37.430232 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 26 13:03:37 crc kubenswrapper[4881]: I0126 13:03:37.584920 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a2a9efa3-8ac2-40ec-a543-b3a2013e8b39-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a2a9efa3-8ac2-40ec-a543-b3a2013e8b39\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 13:03:37 crc kubenswrapper[4881]: I0126 13:03:37.584982 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"a2a9efa3-8ac2-40ec-a543-b3a2013e8b39\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 13:03:37 crc kubenswrapper[4881]: I0126 13:03:37.585032 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a2a9efa3-8ac2-40ec-a543-b3a2013e8b39-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"a2a9efa3-8ac2-40ec-a543-b3a2013e8b39\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 13:03:37 crc kubenswrapper[4881]: I0126 13:03:37.585055 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmmwn\" (UniqueName: \"kubernetes.io/projected/a2a9efa3-8ac2-40ec-a543-b3a2013e8b39-kube-api-access-wmmwn\") pod \"rabbitmq-cell1-server-0\" (UID: \"a2a9efa3-8ac2-40ec-a543-b3a2013e8b39\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 13:03:37 crc kubenswrapper[4881]: I0126 13:03:37.585087 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a2a9efa3-8ac2-40ec-a543-b3a2013e8b39-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"a2a9efa3-8ac2-40ec-a543-b3a2013e8b39\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 13:03:37 crc kubenswrapper[4881]: I0126 13:03:37.585109 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a2a9efa3-8ac2-40ec-a543-b3a2013e8b39-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"a2a9efa3-8ac2-40ec-a543-b3a2013e8b39\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 13:03:37 crc kubenswrapper[4881]: I0126 13:03:37.585195 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a2a9efa3-8ac2-40ec-a543-b3a2013e8b39-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"a2a9efa3-8ac2-40ec-a543-b3a2013e8b39\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 13:03:37 crc kubenswrapper[4881]: I0126 13:03:37.585219 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a2a9efa3-8ac2-40ec-a543-b3a2013e8b39-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"a2a9efa3-8ac2-40ec-a543-b3a2013e8b39\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 13:03:37 crc kubenswrapper[4881]: I0126 13:03:37.585256 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a2a9efa3-8ac2-40ec-a543-b3a2013e8b39-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"a2a9efa3-8ac2-40ec-a543-b3a2013e8b39\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 13:03:37 crc kubenswrapper[4881]: I0126 13:03:37.585274 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a2a9efa3-8ac2-40ec-a543-b3a2013e8b39-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a2a9efa3-8ac2-40ec-a543-b3a2013e8b39\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 13:03:37 crc kubenswrapper[4881]: I0126 13:03:37.585393 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a2a9efa3-8ac2-40ec-a543-b3a2013e8b39-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"a2a9efa3-8ac2-40ec-a543-b3a2013e8b39\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 13:03:37 crc kubenswrapper[4881]: I0126 13:03:37.686793 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a2a9efa3-8ac2-40ec-a543-b3a2013e8b39-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"a2a9efa3-8ac2-40ec-a543-b3a2013e8b39\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 13:03:37 crc kubenswrapper[4881]: I0126 13:03:37.686865 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a2a9efa3-8ac2-40ec-a543-b3a2013e8b39-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a2a9efa3-8ac2-40ec-a543-b3a2013e8b39\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 13:03:37 crc kubenswrapper[4881]: I0126 13:03:37.686895 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"a2a9efa3-8ac2-40ec-a543-b3a2013e8b39\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 13:03:37 crc kubenswrapper[4881]: I0126 13:03:37.686935 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a2a9efa3-8ac2-40ec-a543-b3a2013e8b39-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"a2a9efa3-8ac2-40ec-a543-b3a2013e8b39\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 13:03:37 crc kubenswrapper[4881]: I0126 13:03:37.686958 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmmwn\" (UniqueName: \"kubernetes.io/projected/a2a9efa3-8ac2-40ec-a543-b3a2013e8b39-kube-api-access-wmmwn\") pod \"rabbitmq-cell1-server-0\" (UID: \"a2a9efa3-8ac2-40ec-a543-b3a2013e8b39\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 13:03:37 crc kubenswrapper[4881]: I0126 13:03:37.686984 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a2a9efa3-8ac2-40ec-a543-b3a2013e8b39-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"a2a9efa3-8ac2-40ec-a543-b3a2013e8b39\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 13:03:37 crc kubenswrapper[4881]: I0126 13:03:37.687003 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a2a9efa3-8ac2-40ec-a543-b3a2013e8b39-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"a2a9efa3-8ac2-40ec-a543-b3a2013e8b39\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 13:03:37 crc kubenswrapper[4881]: I0126 13:03:37.687046 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a2a9efa3-8ac2-40ec-a543-b3a2013e8b39-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"a2a9efa3-8ac2-40ec-a543-b3a2013e8b39\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 13:03:37 crc kubenswrapper[4881]: I0126 13:03:37.687069 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a2a9efa3-8ac2-40ec-a543-b3a2013e8b39-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"a2a9efa3-8ac2-40ec-a543-b3a2013e8b39\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 13:03:37 crc kubenswrapper[4881]: I0126 13:03:37.687105 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a2a9efa3-8ac2-40ec-a543-b3a2013e8b39-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"a2a9efa3-8ac2-40ec-a543-b3a2013e8b39\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 13:03:37 crc kubenswrapper[4881]: I0126 13:03:37.687128 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a2a9efa3-8ac2-40ec-a543-b3a2013e8b39-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a2a9efa3-8ac2-40ec-a543-b3a2013e8b39\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 13:03:37 crc kubenswrapper[4881]: I0126 13:03:37.687770 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a2a9efa3-8ac2-40ec-a543-b3a2013e8b39-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"a2a9efa3-8ac2-40ec-a543-b3a2013e8b39\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 13:03:37 crc kubenswrapper[4881]: I0126 13:03:37.687856 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a2a9efa3-8ac2-40ec-a543-b3a2013e8b39-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID:
\"a2a9efa3-8ac2-40ec-a543-b3a2013e8b39\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 13:03:37 crc kubenswrapper[4881]: I0126 13:03:37.688069 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a2a9efa3-8ac2-40ec-a543-b3a2013e8b39-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"a2a9efa3-8ac2-40ec-a543-b3a2013e8b39\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 13:03:37 crc kubenswrapper[4881]: I0126 13:03:37.688802 4881 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"a2a9efa3-8ac2-40ec-a543-b3a2013e8b39\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/rabbitmq-cell1-server-0" Jan 26 13:03:37 crc kubenswrapper[4881]: I0126 13:03:37.688963 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a2a9efa3-8ac2-40ec-a543-b3a2013e8b39-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"a2a9efa3-8ac2-40ec-a543-b3a2013e8b39\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 13:03:37 crc kubenswrapper[4881]: I0126 13:03:37.688974 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a2a9efa3-8ac2-40ec-a543-b3a2013e8b39-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a2a9efa3-8ac2-40ec-a543-b3a2013e8b39\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 13:03:37 crc kubenswrapper[4881]: I0126 13:03:37.691509 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a2a9efa3-8ac2-40ec-a543-b3a2013e8b39-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"a2a9efa3-8ac2-40ec-a543-b3a2013e8b39\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 13:03:37 crc kubenswrapper[4881]: I0126 13:03:37.694315 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a2a9efa3-8ac2-40ec-a543-b3a2013e8b39-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"a2a9efa3-8ac2-40ec-a543-b3a2013e8b39\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 13:03:37 crc kubenswrapper[4881]: I0126 13:03:37.694342 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a2a9efa3-8ac2-40ec-a543-b3a2013e8b39-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"a2a9efa3-8ac2-40ec-a543-b3a2013e8b39\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 13:03:37 crc kubenswrapper[4881]: I0126 13:03:37.703560 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a2a9efa3-8ac2-40ec-a543-b3a2013e8b39-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"a2a9efa3-8ac2-40ec-a543-b3a2013e8b39\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 13:03:37 crc kubenswrapper[4881]: I0126 13:03:37.708446 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmmwn\" (UniqueName: \"kubernetes.io/projected/a2a9efa3-8ac2-40ec-a543-b3a2013e8b39-kube-api-access-wmmwn\") pod \"rabbitmq-cell1-server-0\" (UID: \"a2a9efa3-8ac2-40ec-a543-b3a2013e8b39\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 13:03:37 crc kubenswrapper[4881]: I0126 13:03:37.734491 4881 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"a2a9efa3-8ac2-40ec-a543-b3a2013e8b39\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 13:03:38 crc kubenswrapper[4881]: I0126 13:03:38.032859 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 26 13:03:38 crc kubenswrapper[4881]: I0126 13:03:38.047694 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"a258482a-e394-4833-9bef-1fc3abc0c6a7","Type":"ContainerStarted","Data":"a44cd4e70a25819395ece120717c4e37bbc7406cbd62e0184d5f0cbd66c3b321"} Jan 26 13:03:38 crc kubenswrapper[4881]: I0126 13:03:38.114024 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a455dd78-e351-449c-903a-5c0e0c50faf5" path="/var/lib/kubelet/pods/a455dd78-e351-449c-903a-5c0e0c50faf5/volumes" Jan 26 13:03:38 crc kubenswrapper[4881]: I0126 13:03:38.114817 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb687a6e-7e1f-4697-8ab1-88ad03dd2951" path="/var/lib/kubelet/pods/fb687a6e-7e1f-4697-8ab1-88ad03dd2951/volumes" Jan 26 13:03:38 crc kubenswrapper[4881]: I0126 13:03:38.544855 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 26 13:03:38 crc kubenswrapper[4881]: W0126 13:03:38.548765 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda2a9efa3_8ac2_40ec_a543_b3a2013e8b39.slice/crio-86340ae9ef71e5186058f4adaa9c4f65476f63d6aa14fe874eec4b8920cbdfda WatchSource:0}: Error finding container 86340ae9ef71e5186058f4adaa9c4f65476f63d6aa14fe874eec4b8920cbdfda: Status 404 returned error can't find the container with id 86340ae9ef71e5186058f4adaa9c4f65476f63d6aa14fe874eec4b8920cbdfda Jan 26 13:03:39 crc kubenswrapper[4881]: I0126 13:03:39.063272 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"a258482a-e394-4833-9bef-1fc3abc0c6a7","Type":"ContainerStarted","Data":"5ecb3c10448e3a4a1bfb3255000aac7c635d53750e98269b35e3f9564a89af3d"} Jan 26 13:03:39 crc kubenswrapper[4881]: I0126 13:03:39.065139 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a2a9efa3-8ac2-40ec-a543-b3a2013e8b39","Type":"ContainerStarted","Data":"86340ae9ef71e5186058f4adaa9c4f65476f63d6aa14fe874eec4b8920cbdfda"} Jan 26 13:03:41 crc kubenswrapper[4881]: I0126 13:03:41.094998 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a2a9efa3-8ac2-40ec-a543-b3a2013e8b39","Type":"ContainerStarted","Data":"91ce3461ffe9178168e64ba3bcad146204f2d1c9f1856afdefa6f2cd6c0396cc"} Jan 26 13:03:45 crc kubenswrapper[4881]: I0126 13:03:45.391255 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8545fb859-fwqgs"] Jan 26 13:03:45 crc kubenswrapper[4881]: I0126 13:03:45.393347 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8545fb859-fwqgs" Jan 26 13:03:45 crc kubenswrapper[4881]: I0126 13:03:45.396016 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Jan 26 13:03:45 crc kubenswrapper[4881]: I0126 13:03:45.407203 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8545fb859-fwqgs"] Jan 26 13:03:45 crc kubenswrapper[4881]: I0126 13:03:45.444623 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/b919d749-dc4f-4906-862d-782f2098d940-openstack-edpm-ipam\") pod \"dnsmasq-dns-8545fb859-fwqgs\" (UID: \"b919d749-dc4f-4906-862d-782f2098d940\") " pod="openstack/dnsmasq-dns-8545fb859-fwqgs" Jan 26 13:03:45 crc kubenswrapper[4881]: I0126 13:03:45.444942 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b919d749-dc4f-4906-862d-782f2098d940-dns-swift-storage-0\") pod \"dnsmasq-dns-8545fb859-fwqgs\" (UID: \"b919d749-dc4f-4906-862d-782f2098d940\") " pod="openstack/dnsmasq-dns-8545fb859-fwqgs" Jan 26 13:03:45 crc kubenswrapper[4881]: I0126 13:03:45.444993 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b919d749-dc4f-4906-862d-782f2098d940-config\") pod \"dnsmasq-dns-8545fb859-fwqgs\" (UID: \"b919d749-dc4f-4906-862d-782f2098d940\") " pod="openstack/dnsmasq-dns-8545fb859-fwqgs" Jan 26 13:03:45 crc kubenswrapper[4881]: I0126 13:03:45.445034 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b919d749-dc4f-4906-862d-782f2098d940-ovsdbserver-sb\") pod \"dnsmasq-dns-8545fb859-fwqgs\" (UID: \"b919d749-dc4f-4906-862d-782f2098d940\") " pod="openstack/dnsmasq-dns-8545fb859-fwqgs" Jan 26 13:03:45 crc kubenswrapper[4881]: I0126 13:03:45.445096 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b919d749-dc4f-4906-862d-782f2098d940-dns-svc\") pod \"dnsmasq-dns-8545fb859-fwqgs\" (UID: \"b919d749-dc4f-4906-862d-782f2098d940\") " pod="openstack/dnsmasq-dns-8545fb859-fwqgs" Jan 26 13:03:45 crc kubenswrapper[4881]: I0126 13:03:45.445119 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grt82\" (UniqueName: \"kubernetes.io/projected/b919d749-dc4f-4906-862d-782f2098d940-kube-api-access-grt82\") pod \"dnsmasq-dns-8545fb859-fwqgs\" (UID: \"b919d749-dc4f-4906-862d-782f2098d940\") " pod="openstack/dnsmasq-dns-8545fb859-fwqgs" Jan 26 13:03:45 crc kubenswrapper[4881]: I0126 13:03:45.445175 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b919d749-dc4f-4906-862d-782f2098d940-ovsdbserver-nb\") pod \"dnsmasq-dns-8545fb859-fwqgs\" (UID: \"b919d749-dc4f-4906-862d-782f2098d940\") " pod="openstack/dnsmasq-dns-8545fb859-fwqgs" Jan 26 13:03:45 crc kubenswrapper[4881]: I0126 13:03:45.552667 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b919d749-dc4f-4906-862d-782f2098d940-config\") pod \"dnsmasq-dns-8545fb859-fwqgs\" (UID: 
\"b919d749-dc4f-4906-862d-782f2098d940\") " pod="openstack/dnsmasq-dns-8545fb859-fwqgs" Jan 26 13:03:45 crc kubenswrapper[4881]: I0126 13:03:45.552733 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b919d749-dc4f-4906-862d-782f2098d940-ovsdbserver-sb\") pod \"dnsmasq-dns-8545fb859-fwqgs\" (UID: \"b919d749-dc4f-4906-862d-782f2098d940\") " pod="openstack/dnsmasq-dns-8545fb859-fwqgs" Jan 26 13:03:45 crc kubenswrapper[4881]: I0126 13:03:45.552798 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b919d749-dc4f-4906-862d-782f2098d940-dns-svc\") pod \"dnsmasq-dns-8545fb859-fwqgs\" (UID: \"b919d749-dc4f-4906-862d-782f2098d940\") " pod="openstack/dnsmasq-dns-8545fb859-fwqgs" Jan 26 13:03:45 crc kubenswrapper[4881]: I0126 13:03:45.552823 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-grt82\" (UniqueName: \"kubernetes.io/projected/b919d749-dc4f-4906-862d-782f2098d940-kube-api-access-grt82\") pod \"dnsmasq-dns-8545fb859-fwqgs\" (UID: \"b919d749-dc4f-4906-862d-782f2098d940\") " pod="openstack/dnsmasq-dns-8545fb859-fwqgs" Jan 26 13:03:45 crc kubenswrapper[4881]: I0126 13:03:45.552877 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b919d749-dc4f-4906-862d-782f2098d940-ovsdbserver-nb\") pod \"dnsmasq-dns-8545fb859-fwqgs\" (UID: \"b919d749-dc4f-4906-862d-782f2098d940\") " pod="openstack/dnsmasq-dns-8545fb859-fwqgs" Jan 26 13:03:45 crc kubenswrapper[4881]: I0126 13:03:45.552921 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/b919d749-dc4f-4906-862d-782f2098d940-openstack-edpm-ipam\") pod \"dnsmasq-dns-8545fb859-fwqgs\" (UID: \"b919d749-dc4f-4906-862d-782f2098d940\") " pod="openstack/dnsmasq-dns-8545fb859-fwqgs" Jan 26 13:03:45 crc kubenswrapper[4881]: I0126 13:03:45.552936 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b919d749-dc4f-4906-862d-782f2098d940-dns-swift-storage-0\") pod \"dnsmasq-dns-8545fb859-fwqgs\" (UID: \"b919d749-dc4f-4906-862d-782f2098d940\") " pod="openstack/dnsmasq-dns-8545fb859-fwqgs" Jan 26 13:03:45 crc kubenswrapper[4881]: I0126 13:03:45.553817 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b919d749-dc4f-4906-862d-782f2098d940-dns-swift-storage-0\") pod \"dnsmasq-dns-8545fb859-fwqgs\" (UID: \"b919d749-dc4f-4906-862d-782f2098d940\") " pod="openstack/dnsmasq-dns-8545fb859-fwqgs" Jan 26 13:03:45 crc kubenswrapper[4881]: I0126 13:03:45.554388 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b919d749-dc4f-4906-862d-782f2098d940-config\") pod \"dnsmasq-dns-8545fb859-fwqgs\" (UID: \"b919d749-dc4f-4906-862d-782f2098d940\") " pod="openstack/dnsmasq-dns-8545fb859-fwqgs" Jan 26 13:03:45 crc kubenswrapper[4881]: I0126 13:03:45.554625 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/b919d749-dc4f-4906-862d-782f2098d940-openstack-edpm-ipam\") pod \"dnsmasq-dns-8545fb859-fwqgs\" (UID: \"b919d749-dc4f-4906-862d-782f2098d940\") " 
pod="openstack/dnsmasq-dns-8545fb859-fwqgs" Jan 26 13:03:45 crc kubenswrapper[4881]: I0126 13:03:45.554817 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b919d749-dc4f-4906-862d-782f2098d940-ovsdbserver-nb\") pod \"dnsmasq-dns-8545fb859-fwqgs\" (UID: \"b919d749-dc4f-4906-862d-782f2098d940\") " pod="openstack/dnsmasq-dns-8545fb859-fwqgs" Jan 26 13:03:45 crc kubenswrapper[4881]: I0126 13:03:45.554911 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b919d749-dc4f-4906-862d-782f2098d940-ovsdbserver-sb\") pod \"dnsmasq-dns-8545fb859-fwqgs\" (UID: \"b919d749-dc4f-4906-862d-782f2098d940\") " pod="openstack/dnsmasq-dns-8545fb859-fwqgs" Jan 26 13:03:45 crc kubenswrapper[4881]: I0126 13:03:45.555014 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b919d749-dc4f-4906-862d-782f2098d940-dns-svc\") pod \"dnsmasq-dns-8545fb859-fwqgs\" (UID: \"b919d749-dc4f-4906-862d-782f2098d940\") " pod="openstack/dnsmasq-dns-8545fb859-fwqgs" Jan 26 13:03:45 crc kubenswrapper[4881]: I0126 13:03:45.582071 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-grt82\" (UniqueName: \"kubernetes.io/projected/b919d749-dc4f-4906-862d-782f2098d940-kube-api-access-grt82\") pod \"dnsmasq-dns-8545fb859-fwqgs\" (UID: \"b919d749-dc4f-4906-862d-782f2098d940\") " pod="openstack/dnsmasq-dns-8545fb859-fwqgs" Jan 26 13:03:45 crc kubenswrapper[4881]: I0126 13:03:45.744902 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8545fb859-fwqgs" Jan 26 13:03:46 crc kubenswrapper[4881]: I0126 13:03:46.202409 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8545fb859-fwqgs"] Jan 26 13:03:47 crc kubenswrapper[4881]: I0126 13:03:47.175746 4881 generic.go:334] "Generic (PLEG): container finished" podID="b919d749-dc4f-4906-862d-782f2098d940" containerID="f47b8f32bfc9fc285f861cabcf529edc9a6268b41ec7d81cc1aa2a4c93880ee8" exitCode=0 Jan 26 13:03:47 crc kubenswrapper[4881]: I0126 13:03:47.175859 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8545fb859-fwqgs" event={"ID":"b919d749-dc4f-4906-862d-782f2098d940","Type":"ContainerDied","Data":"f47b8f32bfc9fc285f861cabcf529edc9a6268b41ec7d81cc1aa2a4c93880ee8"} Jan 26 13:03:47 crc kubenswrapper[4881]: I0126 13:03:47.176383 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8545fb859-fwqgs" event={"ID":"b919d749-dc4f-4906-862d-782f2098d940","Type":"ContainerStarted","Data":"7f6a5766a3fc1008233ed7295355a194ca778ba0fd302028ce26983cdca34b30"} Jan 26 13:03:48 crc kubenswrapper[4881]: I0126 13:03:48.193897 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8545fb859-fwqgs" event={"ID":"b919d749-dc4f-4906-862d-782f2098d940","Type":"ContainerStarted","Data":"9894636b812e2abc66913cc90107ba3e96748f50dc44d38262623c7b828bf000"} Jan 26 13:03:48 crc kubenswrapper[4881]: I0126 13:03:48.194438 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8545fb859-fwqgs" Jan 26 13:03:48 crc kubenswrapper[4881]: I0126 13:03:48.235824 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8545fb859-fwqgs" podStartSLOduration=3.23579186 podStartE2EDuration="3.23579186s" podCreationTimestamp="2026-01-26 
13:03:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 13:03:48.221174436 +0000 UTC m=+1700.700484472" watchObservedRunningTime="2026-01-26 13:03:48.23579186 +0000 UTC m=+1700.715101916" Jan 26 13:03:54 crc kubenswrapper[4881]: I0126 13:03:54.789721 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fwlbz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 13:03:54 crc kubenswrapper[4881]: I0126 13:03:54.790483 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 13:03:54 crc kubenswrapper[4881]: I0126 13:03:54.790588 4881 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" Jan 26 13:03:54 crc kubenswrapper[4881]: I0126 13:03:54.791753 4881 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a161c3072a6f7e76b03b9d4b408880fa8105ae8ab9bede27b8975663150557e3"} pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 13:03:54 crc kubenswrapper[4881]: I0126 13:03:54.791859 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" containerName="machine-config-daemon" containerID="cri-o://a161c3072a6f7e76b03b9d4b408880fa8105ae8ab9bede27b8975663150557e3" gracePeriod=600 Jan 26 13:03:54 crc kubenswrapper[4881]: E0126 13:03:54.929297 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" Jan 26 13:03:55 crc kubenswrapper[4881]: I0126 13:03:55.280060 4881 generic.go:334] "Generic (PLEG): container finished" podID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" containerID="a161c3072a6f7e76b03b9d4b408880fa8105ae8ab9bede27b8975663150557e3" exitCode=0 Jan 26 13:03:55 crc kubenswrapper[4881]: I0126 13:03:55.280129 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" event={"ID":"ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19","Type":"ContainerDied","Data":"a161c3072a6f7e76b03b9d4b408880fa8105ae8ab9bede27b8975663150557e3"} Jan 26 13:03:55 crc kubenswrapper[4881]: I0126 13:03:55.280281 4881 scope.go:117] "RemoveContainer" containerID="7c058347a35682b737f4fed8273f3335b15404e9e17abbca4140d7f0cbd3f241" Jan 26 13:03:55 crc kubenswrapper[4881]: I0126 13:03:55.281724 4881 scope.go:117] "RemoveContainer" containerID="a161c3072a6f7e76b03b9d4b408880fa8105ae8ab9bede27b8975663150557e3" Jan 26 13:03:55 crc kubenswrapper[4881]: 
E0126 13:03:55.282212 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" Jan 26 13:03:55 crc kubenswrapper[4881]: I0126 13:03:55.746779 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8545fb859-fwqgs" Jan 26 13:03:55 crc kubenswrapper[4881]: I0126 13:03:55.826550 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6999845677-q7hsz"] Jan 26 13:03:55 crc kubenswrapper[4881]: I0126 13:03:55.826819 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6999845677-q7hsz" podUID="79368344-12b4-4647-bce3-f74ede4f953a" containerName="dnsmasq-dns" containerID="cri-o://915bed7a2d6c4043946625c843b68c119fd64cd9e10e25880aca9a696dc5993a" gracePeriod=10 Jan 26 13:03:55 crc kubenswrapper[4881]: I0126 13:03:55.982032 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-66968b76ff-2mrpm"] Jan 26 13:03:55 crc kubenswrapper[4881]: I0126 13:03:55.992131 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-66968b76ff-2mrpm" Jan 26 13:03:56 crc kubenswrapper[4881]: I0126 13:03:56.017937 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-66968b76ff-2mrpm"] Jan 26 13:03:56 crc kubenswrapper[4881]: I0126 13:03:56.205704 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ca4b205e-5485-43e7-ab0c-b6cfae7c9a18-ovsdbserver-nb\") pod \"dnsmasq-dns-66968b76ff-2mrpm\" (UID: \"ca4b205e-5485-43e7-ab0c-b6cfae7c9a18\") " pod="openstack/dnsmasq-dns-66968b76ff-2mrpm" Jan 26 13:03:56 crc kubenswrapper[4881]: I0126 13:03:56.206322 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fz56v\" (UniqueName: \"kubernetes.io/projected/ca4b205e-5485-43e7-ab0c-b6cfae7c9a18-kube-api-access-fz56v\") pod \"dnsmasq-dns-66968b76ff-2mrpm\" (UID: \"ca4b205e-5485-43e7-ab0c-b6cfae7c9a18\") " pod="openstack/dnsmasq-dns-66968b76ff-2mrpm" Jan 26 13:03:56 crc kubenswrapper[4881]: I0126 13:03:56.206407 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ca4b205e-5485-43e7-ab0c-b6cfae7c9a18-dns-svc\") pod \"dnsmasq-dns-66968b76ff-2mrpm\" (UID: \"ca4b205e-5485-43e7-ab0c-b6cfae7c9a18\") " pod="openstack/dnsmasq-dns-66968b76ff-2mrpm" Jan 26 13:03:56 crc kubenswrapper[4881]: I0126 13:03:56.206483 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ca4b205e-5485-43e7-ab0c-b6cfae7c9a18-dns-swift-storage-0\") pod \"dnsmasq-dns-66968b76ff-2mrpm\" (UID: \"ca4b205e-5485-43e7-ab0c-b6cfae7c9a18\") " pod="openstack/dnsmasq-dns-66968b76ff-2mrpm" Jan 26 13:03:56 crc kubenswrapper[4881]: I0126 13:03:56.206585 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/configmap/ca4b205e-5485-43e7-ab0c-b6cfae7c9a18-openstack-edpm-ipam\") pod \"dnsmasq-dns-66968b76ff-2mrpm\" (UID: \"ca4b205e-5485-43e7-ab0c-b6cfae7c9a18\") " pod="openstack/dnsmasq-dns-66968b76ff-2mrpm" Jan 26 13:03:56 crc kubenswrapper[4881]: I0126 13:03:56.206771 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca4b205e-5485-43e7-ab0c-b6cfae7c9a18-config\") pod \"dnsmasq-dns-66968b76ff-2mrpm\" (UID: \"ca4b205e-5485-43e7-ab0c-b6cfae7c9a18\") " pod="openstack/dnsmasq-dns-66968b76ff-2mrpm" Jan 26 13:03:56 crc kubenswrapper[4881]: I0126 13:03:56.206817 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ca4b205e-5485-43e7-ab0c-b6cfae7c9a18-ovsdbserver-sb\") pod \"dnsmasq-dns-66968b76ff-2mrpm\" (UID: \"ca4b205e-5485-43e7-ab0c-b6cfae7c9a18\") " pod="openstack/dnsmasq-dns-66968b76ff-2mrpm" Jan 26 13:03:56 crc kubenswrapper[4881]: I0126 13:03:56.300277 4881 generic.go:334] "Generic (PLEG): container finished" podID="79368344-12b4-4647-bce3-f74ede4f953a" containerID="915bed7a2d6c4043946625c843b68c119fd64cd9e10e25880aca9a696dc5993a" exitCode=0 Jan 26 13:03:56 crc kubenswrapper[4881]: I0126 13:03:56.300316 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6999845677-q7hsz" event={"ID":"79368344-12b4-4647-bce3-f74ede4f953a","Type":"ContainerDied","Data":"915bed7a2d6c4043946625c843b68c119fd64cd9e10e25880aca9a696dc5993a"} Jan 26 13:03:56 crc kubenswrapper[4881]: I0126 13:03:56.312066 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fz56v\" (UniqueName: \"kubernetes.io/projected/ca4b205e-5485-43e7-ab0c-b6cfae7c9a18-kube-api-access-fz56v\") pod \"dnsmasq-dns-66968b76ff-2mrpm\" (UID: \"ca4b205e-5485-43e7-ab0c-b6cfae7c9a18\") " pod="openstack/dnsmasq-dns-66968b76ff-2mrpm" Jan 26 13:03:56 crc kubenswrapper[4881]: I0126 13:03:56.312125 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ca4b205e-5485-43e7-ab0c-b6cfae7c9a18-dns-svc\") pod \"dnsmasq-dns-66968b76ff-2mrpm\" (UID: \"ca4b205e-5485-43e7-ab0c-b6cfae7c9a18\") " pod="openstack/dnsmasq-dns-66968b76ff-2mrpm" Jan 26 13:03:56 crc kubenswrapper[4881]: I0126 13:03:56.312165 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ca4b205e-5485-43e7-ab0c-b6cfae7c9a18-dns-swift-storage-0\") pod \"dnsmasq-dns-66968b76ff-2mrpm\" (UID: \"ca4b205e-5485-43e7-ab0c-b6cfae7c9a18\") " pod="openstack/dnsmasq-dns-66968b76ff-2mrpm" Jan 26 13:03:56 crc kubenswrapper[4881]: I0126 13:03:56.312191 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/ca4b205e-5485-43e7-ab0c-b6cfae7c9a18-openstack-edpm-ipam\") pod \"dnsmasq-dns-66968b76ff-2mrpm\" (UID: \"ca4b205e-5485-43e7-ab0c-b6cfae7c9a18\") " pod="openstack/dnsmasq-dns-66968b76ff-2mrpm" Jan 26 13:03:56 crc kubenswrapper[4881]: I0126 13:03:56.312229 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca4b205e-5485-43e7-ab0c-b6cfae7c9a18-config\") pod \"dnsmasq-dns-66968b76ff-2mrpm\" (UID: \"ca4b205e-5485-43e7-ab0c-b6cfae7c9a18\") " pod="openstack/dnsmasq-dns-66968b76ff-2mrpm" Jan 26 13:03:56 crc 
kubenswrapper[4881]: I0126 13:03:56.312248 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ca4b205e-5485-43e7-ab0c-b6cfae7c9a18-ovsdbserver-sb\") pod \"dnsmasq-dns-66968b76ff-2mrpm\" (UID: \"ca4b205e-5485-43e7-ab0c-b6cfae7c9a18\") " pod="openstack/dnsmasq-dns-66968b76ff-2mrpm" Jan 26 13:03:56 crc kubenswrapper[4881]: I0126 13:03:56.312345 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ca4b205e-5485-43e7-ab0c-b6cfae7c9a18-ovsdbserver-nb\") pod \"dnsmasq-dns-66968b76ff-2mrpm\" (UID: \"ca4b205e-5485-43e7-ab0c-b6cfae7c9a18\") " pod="openstack/dnsmasq-dns-66968b76ff-2mrpm" Jan 26 13:03:56 crc kubenswrapper[4881]: I0126 13:03:56.313169 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ca4b205e-5485-43e7-ab0c-b6cfae7c9a18-ovsdbserver-nb\") pod \"dnsmasq-dns-66968b76ff-2mrpm\" (UID: \"ca4b205e-5485-43e7-ab0c-b6cfae7c9a18\") " pod="openstack/dnsmasq-dns-66968b76ff-2mrpm" Jan 26 13:03:56 crc kubenswrapper[4881]: I0126 13:03:56.313169 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ca4b205e-5485-43e7-ab0c-b6cfae7c9a18-dns-swift-storage-0\") pod \"dnsmasq-dns-66968b76ff-2mrpm\" (UID: \"ca4b205e-5485-43e7-ab0c-b6cfae7c9a18\") " pod="openstack/dnsmasq-dns-66968b76ff-2mrpm" Jan 26 13:03:56 crc kubenswrapper[4881]: I0126 13:03:56.314461 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ca4b205e-5485-43e7-ab0c-b6cfae7c9a18-dns-svc\") pod \"dnsmasq-dns-66968b76ff-2mrpm\" (UID: \"ca4b205e-5485-43e7-ab0c-b6cfae7c9a18\") " pod="openstack/dnsmasq-dns-66968b76ff-2mrpm" Jan 26 13:03:56 crc kubenswrapper[4881]: I0126 13:03:56.315168 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca4b205e-5485-43e7-ab0c-b6cfae7c9a18-config\") pod \"dnsmasq-dns-66968b76ff-2mrpm\" (UID: \"ca4b205e-5485-43e7-ab0c-b6cfae7c9a18\") " pod="openstack/dnsmasq-dns-66968b76ff-2mrpm" Jan 26 13:03:56 crc kubenswrapper[4881]: I0126 13:03:56.315808 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/ca4b205e-5485-43e7-ab0c-b6cfae7c9a18-openstack-edpm-ipam\") pod \"dnsmasq-dns-66968b76ff-2mrpm\" (UID: \"ca4b205e-5485-43e7-ab0c-b6cfae7c9a18\") " pod="openstack/dnsmasq-dns-66968b76ff-2mrpm" Jan 26 13:03:56 crc kubenswrapper[4881]: I0126 13:03:56.315866 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ca4b205e-5485-43e7-ab0c-b6cfae7c9a18-ovsdbserver-sb\") pod \"dnsmasq-dns-66968b76ff-2mrpm\" (UID: \"ca4b205e-5485-43e7-ab0c-b6cfae7c9a18\") " pod="openstack/dnsmasq-dns-66968b76ff-2mrpm" Jan 26 13:03:56 crc kubenswrapper[4881]: I0126 13:03:56.330635 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fz56v\" (UniqueName: \"kubernetes.io/projected/ca4b205e-5485-43e7-ab0c-b6cfae7c9a18-kube-api-access-fz56v\") pod \"dnsmasq-dns-66968b76ff-2mrpm\" (UID: \"ca4b205e-5485-43e7-ab0c-b6cfae7c9a18\") " pod="openstack/dnsmasq-dns-66968b76ff-2mrpm" Jan 26 13:03:56 crc kubenswrapper[4881]: I0126 13:03:56.408911 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6999845677-q7hsz" Jan 26 13:03:56 crc kubenswrapper[4881]: I0126 13:03:56.413666 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/79368344-12b4-4647-bce3-f74ede4f953a-ovsdbserver-sb\") pod \"79368344-12b4-4647-bce3-f74ede4f953a\" (UID: \"79368344-12b4-4647-bce3-f74ede4f953a\") " Jan 26 13:03:56 crc kubenswrapper[4881]: I0126 13:03:56.413720 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/79368344-12b4-4647-bce3-f74ede4f953a-dns-svc\") pod \"79368344-12b4-4647-bce3-f74ede4f953a\" (UID: \"79368344-12b4-4647-bce3-f74ede4f953a\") " Jan 26 13:03:56 crc kubenswrapper[4881]: I0126 13:03:56.413780 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79368344-12b4-4647-bce3-f74ede4f953a-config\") pod \"79368344-12b4-4647-bce3-f74ede4f953a\" (UID: \"79368344-12b4-4647-bce3-f74ede4f953a\") " Jan 26 13:03:56 crc kubenswrapper[4881]: I0126 13:03:56.413802 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/79368344-12b4-4647-bce3-f74ede4f953a-dns-swift-storage-0\") pod \"79368344-12b4-4647-bce3-f74ede4f953a\" (UID: \"79368344-12b4-4647-bce3-f74ede4f953a\") " Jan 26 13:03:56 crc kubenswrapper[4881]: I0126 13:03:56.413828 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/79368344-12b4-4647-bce3-f74ede4f953a-ovsdbserver-nb\") pod \"79368344-12b4-4647-bce3-f74ede4f953a\" (UID: \"79368344-12b4-4647-bce3-f74ede4f953a\") " Jan 26 13:03:56 crc kubenswrapper[4881]: I0126 13:03:56.413847 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lrqft\" (UniqueName: \"kubernetes.io/projected/79368344-12b4-4647-bce3-f74ede4f953a-kube-api-access-lrqft\") pod \"79368344-12b4-4647-bce3-f74ede4f953a\" (UID: \"79368344-12b4-4647-bce3-f74ede4f953a\") " Jan 26 13:03:56 crc kubenswrapper[4881]: I0126 13:03:56.419774 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79368344-12b4-4647-bce3-f74ede4f953a-kube-api-access-lrqft" (OuterVolumeSpecName: "kube-api-access-lrqft") pod "79368344-12b4-4647-bce3-f74ede4f953a" (UID: "79368344-12b4-4647-bce3-f74ede4f953a"). InnerVolumeSpecName "kube-api-access-lrqft". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:03:56 crc kubenswrapper[4881]: I0126 13:03:56.475708 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79368344-12b4-4647-bce3-f74ede4f953a-config" (OuterVolumeSpecName: "config") pod "79368344-12b4-4647-bce3-f74ede4f953a" (UID: "79368344-12b4-4647-bce3-f74ede4f953a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:03:56 crc kubenswrapper[4881]: I0126 13:03:56.493256 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79368344-12b4-4647-bce3-f74ede4f953a-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "79368344-12b4-4647-bce3-f74ede4f953a" (UID: "79368344-12b4-4647-bce3-f74ede4f953a"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:03:56 crc kubenswrapper[4881]: I0126 13:03:56.506254 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79368344-12b4-4647-bce3-f74ede4f953a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "79368344-12b4-4647-bce3-f74ede4f953a" (UID: "79368344-12b4-4647-bce3-f74ede4f953a"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:03:56 crc kubenswrapper[4881]: I0126 13:03:56.515309 4881 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79368344-12b4-4647-bce3-f74ede4f953a-config\") on node \"crc\" DevicePath \"\"" Jan 26 13:03:56 crc kubenswrapper[4881]: I0126 13:03:56.515334 4881 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/79368344-12b4-4647-bce3-f74ede4f953a-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 26 13:03:56 crc kubenswrapper[4881]: I0126 13:03:56.515344 4881 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/79368344-12b4-4647-bce3-f74ede4f953a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 13:03:56 crc kubenswrapper[4881]: I0126 13:03:56.515352 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lrqft\" (UniqueName: \"kubernetes.io/projected/79368344-12b4-4647-bce3-f74ede4f953a-kube-api-access-lrqft\") on node \"crc\" DevicePath \"\"" Jan 26 13:03:56 crc kubenswrapper[4881]: I0126 13:03:56.520230 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79368344-12b4-4647-bce3-f74ede4f953a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "79368344-12b4-4647-bce3-f74ede4f953a" (UID: "79368344-12b4-4647-bce3-f74ede4f953a"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:03:56 crc kubenswrapper[4881]: I0126 13:03:56.526164 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79368344-12b4-4647-bce3-f74ede4f953a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "79368344-12b4-4647-bce3-f74ede4f953a" (UID: "79368344-12b4-4647-bce3-f74ede4f953a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:03:56 crc kubenswrapper[4881]: I0126 13:03:56.616646 4881 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/79368344-12b4-4647-bce3-f74ede4f953a-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 13:03:56 crc kubenswrapper[4881]: I0126 13:03:56.616674 4881 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/79368344-12b4-4647-bce3-f74ede4f953a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 13:03:56 crc kubenswrapper[4881]: I0126 13:03:56.630247 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-66968b76ff-2mrpm" Jan 26 13:03:57 crc kubenswrapper[4881]: I0126 13:03:57.148999 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-66968b76ff-2mrpm"] Jan 26 13:03:57 crc kubenswrapper[4881]: I0126 13:03:57.317460 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6999845677-q7hsz" event={"ID":"79368344-12b4-4647-bce3-f74ede4f953a","Type":"ContainerDied","Data":"e433ef8c68cbdaf41c49a8c35fd84c4c9014dd5f7ba03db7800d752b6e350524"} Jan 26 13:03:57 crc kubenswrapper[4881]: I0126 13:03:57.317567 4881 scope.go:117] "RemoveContainer" containerID="915bed7a2d6c4043946625c843b68c119fd64cd9e10e25880aca9a696dc5993a" Jan 26 13:03:57 crc kubenswrapper[4881]: I0126 13:03:57.317748 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6999845677-q7hsz" Jan 26 13:03:57 crc kubenswrapper[4881]: I0126 13:03:57.326462 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66968b76ff-2mrpm" event={"ID":"ca4b205e-5485-43e7-ab0c-b6cfae7c9a18","Type":"ContainerStarted","Data":"b1b5198d34712237b24480cc6d558f482e7a818d8b3a0a23be69edae68be0422"} Jan 26 13:03:57 crc kubenswrapper[4881]: I0126 13:03:57.375584 4881 scope.go:117] "RemoveContainer" containerID="7c9e8b3e90a16ac3a415ec320a421d9d197bcb7d41fddfcf5d746b10f5be414d" Jan 26 13:03:57 crc kubenswrapper[4881]: I0126 13:03:57.405957 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6999845677-q7hsz"] Jan 26 13:03:57 crc kubenswrapper[4881]: I0126 13:03:57.417163 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6999845677-q7hsz"] Jan 26 13:03:58 crc kubenswrapper[4881]: I0126 13:03:58.096893 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79368344-12b4-4647-bce3-f74ede4f953a" path="/var/lib/kubelet/pods/79368344-12b4-4647-bce3-f74ede4f953a/volumes" Jan 26 13:03:58 crc kubenswrapper[4881]: I0126 13:03:58.343161 4881 generic.go:334] "Generic (PLEG): container finished" podID="ca4b205e-5485-43e7-ab0c-b6cfae7c9a18" containerID="f326ac29d714eec0fa4215a79f3ea9cf548257643ff054aae483dc39811bc0d9" exitCode=0 Jan 26 13:03:58 crc kubenswrapper[4881]: I0126 13:03:58.343234 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66968b76ff-2mrpm" event={"ID":"ca4b205e-5485-43e7-ab0c-b6cfae7c9a18","Type":"ContainerDied","Data":"f326ac29d714eec0fa4215a79f3ea9cf548257643ff054aae483dc39811bc0d9"} Jan 26 13:03:59 crc kubenswrapper[4881]: I0126 13:03:59.362655 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66968b76ff-2mrpm" event={"ID":"ca4b205e-5485-43e7-ab0c-b6cfae7c9a18","Type":"ContainerStarted","Data":"aedbae9d3f982dd41504a135a0379b5ffa9d6153ab4cb422b120a569a8a7b994"} Jan 26 13:03:59 crc kubenswrapper[4881]: I0126 13:03:59.363844 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-66968b76ff-2mrpm" Jan 26 13:03:59 crc kubenswrapper[4881]: I0126 13:03:59.398253 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-66968b76ff-2mrpm" podStartSLOduration=4.398219773 podStartE2EDuration="4.398219773s" podCreationTimestamp="2026-01-26 13:03:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 13:03:59.389978784 +0000 UTC m=+1711.869288840" 
watchObservedRunningTime="2026-01-26 13:03:59.398219773 +0000 UTC m=+1711.877529839" Jan 26 13:04:06 crc kubenswrapper[4881]: I0126 13:04:06.633408 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-66968b76ff-2mrpm" Jan 26 13:04:06 crc kubenswrapper[4881]: I0126 13:04:06.702861 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8545fb859-fwqgs"] Jan 26 13:04:06 crc kubenswrapper[4881]: I0126 13:04:06.703564 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8545fb859-fwqgs" podUID="b919d749-dc4f-4906-862d-782f2098d940" containerName="dnsmasq-dns" containerID="cri-o://9894636b812e2abc66913cc90107ba3e96748f50dc44d38262623c7b828bf000" gracePeriod=10 Jan 26 13:04:07 crc kubenswrapper[4881]: I0126 13:04:07.083553 4881 scope.go:117] "RemoveContainer" containerID="a161c3072a6f7e76b03b9d4b408880fa8105ae8ab9bede27b8975663150557e3" Jan 26 13:04:07 crc kubenswrapper[4881]: E0126 13:04:07.084081 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" Jan 26 13:04:07 crc kubenswrapper[4881]: I0126 13:04:07.177436 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8545fb859-fwqgs" Jan 26 13:04:07 crc kubenswrapper[4881]: I0126 13:04:07.329946 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-grt82\" (UniqueName: \"kubernetes.io/projected/b919d749-dc4f-4906-862d-782f2098d940-kube-api-access-grt82\") pod \"b919d749-dc4f-4906-862d-782f2098d940\" (UID: \"b919d749-dc4f-4906-862d-782f2098d940\") " Jan 26 13:04:07 crc kubenswrapper[4881]: I0126 13:04:07.330186 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b919d749-dc4f-4906-862d-782f2098d940-dns-swift-storage-0\") pod \"b919d749-dc4f-4906-862d-782f2098d940\" (UID: \"b919d749-dc4f-4906-862d-782f2098d940\") " Jan 26 13:04:07 crc kubenswrapper[4881]: I0126 13:04:07.330397 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b919d749-dc4f-4906-862d-782f2098d940-ovsdbserver-nb\") pod \"b919d749-dc4f-4906-862d-782f2098d940\" (UID: \"b919d749-dc4f-4906-862d-782f2098d940\") " Jan 26 13:04:07 crc kubenswrapper[4881]: I0126 13:04:07.330496 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b919d749-dc4f-4906-862d-782f2098d940-ovsdbserver-sb\") pod \"b919d749-dc4f-4906-862d-782f2098d940\" (UID: \"b919d749-dc4f-4906-862d-782f2098d940\") " Jan 26 13:04:07 crc kubenswrapper[4881]: I0126 13:04:07.330584 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/b919d749-dc4f-4906-862d-782f2098d940-openstack-edpm-ipam\") pod \"b919d749-dc4f-4906-862d-782f2098d940\" (UID: \"b919d749-dc4f-4906-862d-782f2098d940\") " Jan 26 13:04:07 crc kubenswrapper[4881]: I0126 13:04:07.330694 4881 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b919d749-dc4f-4906-862d-782f2098d940-config\") pod \"b919d749-dc4f-4906-862d-782f2098d940\" (UID: \"b919d749-dc4f-4906-862d-782f2098d940\") " Jan 26 13:04:07 crc kubenswrapper[4881]: I0126 13:04:07.330768 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b919d749-dc4f-4906-862d-782f2098d940-dns-svc\") pod \"b919d749-dc4f-4906-862d-782f2098d940\" (UID: \"b919d749-dc4f-4906-862d-782f2098d940\") " Jan 26 13:04:07 crc kubenswrapper[4881]: I0126 13:04:07.342737 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b919d749-dc4f-4906-862d-782f2098d940-kube-api-access-grt82" (OuterVolumeSpecName: "kube-api-access-grt82") pod "b919d749-dc4f-4906-862d-782f2098d940" (UID: "b919d749-dc4f-4906-862d-782f2098d940"). InnerVolumeSpecName "kube-api-access-grt82". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:04:07 crc kubenswrapper[4881]: I0126 13:04:07.386479 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b919d749-dc4f-4906-862d-782f2098d940-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "b919d749-dc4f-4906-862d-782f2098d940" (UID: "b919d749-dc4f-4906-862d-782f2098d940"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:04:07 crc kubenswrapper[4881]: I0126 13:04:07.414000 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b919d749-dc4f-4906-862d-782f2098d940-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "b919d749-dc4f-4906-862d-782f2098d940" (UID: "b919d749-dc4f-4906-862d-782f2098d940"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:04:07 crc kubenswrapper[4881]: I0126 13:04:07.417614 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b919d749-dc4f-4906-862d-782f2098d940-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b919d749-dc4f-4906-862d-782f2098d940" (UID: "b919d749-dc4f-4906-862d-782f2098d940"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:04:07 crc kubenswrapper[4881]: I0126 13:04:07.419170 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b919d749-dc4f-4906-862d-782f2098d940-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b919d749-dc4f-4906-862d-782f2098d940" (UID: "b919d749-dc4f-4906-862d-782f2098d940"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:04:07 crc kubenswrapper[4881]: I0126 13:04:07.422021 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b919d749-dc4f-4906-862d-782f2098d940-config" (OuterVolumeSpecName: "config") pod "b919d749-dc4f-4906-862d-782f2098d940" (UID: "b919d749-dc4f-4906-862d-782f2098d940"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:04:07 crc kubenswrapper[4881]: I0126 13:04:07.428585 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b919d749-dc4f-4906-862d-782f2098d940-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b919d749-dc4f-4906-862d-782f2098d940" (UID: "b919d749-dc4f-4906-862d-782f2098d940"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:04:07 crc kubenswrapper[4881]: I0126 13:04:07.433256 4881 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b919d749-dc4f-4906-862d-782f2098d940-config\") on node \"crc\" DevicePath \"\"" Jan 26 13:04:07 crc kubenswrapper[4881]: I0126 13:04:07.433283 4881 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b919d749-dc4f-4906-862d-782f2098d940-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 13:04:07 crc kubenswrapper[4881]: I0126 13:04:07.433293 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-grt82\" (UniqueName: \"kubernetes.io/projected/b919d749-dc4f-4906-862d-782f2098d940-kube-api-access-grt82\") on node \"crc\" DevicePath \"\"" Jan 26 13:04:07 crc kubenswrapper[4881]: I0126 13:04:07.433304 4881 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b919d749-dc4f-4906-862d-782f2098d940-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 26 13:04:07 crc kubenswrapper[4881]: I0126 13:04:07.433313 4881 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b919d749-dc4f-4906-862d-782f2098d940-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 13:04:07 crc kubenswrapper[4881]: I0126 13:04:07.433322 4881 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b919d749-dc4f-4906-862d-782f2098d940-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 13:04:07 crc kubenswrapper[4881]: I0126 13:04:07.433330 4881 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/b919d749-dc4f-4906-862d-782f2098d940-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 13:04:07 crc kubenswrapper[4881]: I0126 13:04:07.465552 4881 generic.go:334] "Generic (PLEG): container finished" podID="b919d749-dc4f-4906-862d-782f2098d940" containerID="9894636b812e2abc66913cc90107ba3e96748f50dc44d38262623c7b828bf000" exitCode=0 Jan 26 13:04:07 crc kubenswrapper[4881]: I0126 13:04:07.465595 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8545fb859-fwqgs" event={"ID":"b919d749-dc4f-4906-862d-782f2098d940","Type":"ContainerDied","Data":"9894636b812e2abc66913cc90107ba3e96748f50dc44d38262623c7b828bf000"} Jan 26 13:04:07 crc kubenswrapper[4881]: I0126 13:04:07.465609 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8545fb859-fwqgs" Jan 26 13:04:07 crc kubenswrapper[4881]: I0126 13:04:07.465622 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8545fb859-fwqgs" event={"ID":"b919d749-dc4f-4906-862d-782f2098d940","Type":"ContainerDied","Data":"7f6a5766a3fc1008233ed7295355a194ca778ba0fd302028ce26983cdca34b30"} Jan 26 13:04:07 crc kubenswrapper[4881]: I0126 13:04:07.465639 4881 scope.go:117] "RemoveContainer" containerID="9894636b812e2abc66913cc90107ba3e96748f50dc44d38262623c7b828bf000" Jan 26 13:04:07 crc kubenswrapper[4881]: I0126 13:04:07.501318 4881 scope.go:117] "RemoveContainer" containerID="f47b8f32bfc9fc285f861cabcf529edc9a6268b41ec7d81cc1aa2a4c93880ee8" Jan 26 13:04:07 crc kubenswrapper[4881]: I0126 13:04:07.501657 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8545fb859-fwqgs"] Jan 26 13:04:07 crc kubenswrapper[4881]: I0126 13:04:07.509690 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8545fb859-fwqgs"] Jan 26 13:04:07 crc kubenswrapper[4881]: I0126 13:04:07.530226 4881 scope.go:117] "RemoveContainer" containerID="9894636b812e2abc66913cc90107ba3e96748f50dc44d38262623c7b828bf000" Jan 26 13:04:07 crc kubenswrapper[4881]: E0126 13:04:07.535040 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9894636b812e2abc66913cc90107ba3e96748f50dc44d38262623c7b828bf000\": container with ID starting with 9894636b812e2abc66913cc90107ba3e96748f50dc44d38262623c7b828bf000 not found: ID does not exist" containerID="9894636b812e2abc66913cc90107ba3e96748f50dc44d38262623c7b828bf000" Jan 26 13:04:07 crc kubenswrapper[4881]: I0126 13:04:07.535234 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9894636b812e2abc66913cc90107ba3e96748f50dc44d38262623c7b828bf000"} err="failed to get container status \"9894636b812e2abc66913cc90107ba3e96748f50dc44d38262623c7b828bf000\": rpc error: code = NotFound desc = could not find container \"9894636b812e2abc66913cc90107ba3e96748f50dc44d38262623c7b828bf000\": container with ID starting with 9894636b812e2abc66913cc90107ba3e96748f50dc44d38262623c7b828bf000 not found: ID does not exist" Jan 26 13:04:07 crc kubenswrapper[4881]: I0126 13:04:07.535381 4881 scope.go:117] "RemoveContainer" containerID="f47b8f32bfc9fc285f861cabcf529edc9a6268b41ec7d81cc1aa2a4c93880ee8" Jan 26 13:04:07 crc kubenswrapper[4881]: E0126 13:04:07.538928 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f47b8f32bfc9fc285f861cabcf529edc9a6268b41ec7d81cc1aa2a4c93880ee8\": container with ID starting with f47b8f32bfc9fc285f861cabcf529edc9a6268b41ec7d81cc1aa2a4c93880ee8 not found: ID does not exist" containerID="f47b8f32bfc9fc285f861cabcf529edc9a6268b41ec7d81cc1aa2a4c93880ee8" Jan 26 13:04:07 crc kubenswrapper[4881]: I0126 13:04:07.538979 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f47b8f32bfc9fc285f861cabcf529edc9a6268b41ec7d81cc1aa2a4c93880ee8"} err="failed to get container status \"f47b8f32bfc9fc285f861cabcf529edc9a6268b41ec7d81cc1aa2a4c93880ee8\": rpc error: code = NotFound desc = could not find container \"f47b8f32bfc9fc285f861cabcf529edc9a6268b41ec7d81cc1aa2a4c93880ee8\": container with ID starting with f47b8f32bfc9fc285f861cabcf529edc9a6268b41ec7d81cc1aa2a4c93880ee8 not found: ID does not exist" Jan 26 13:04:07 
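
The sequence above is the normal teardown of the dnsmasq-dns pod: the reconciler unmounts each volume, PLEG reports ContainerDied for the container and its sandbox, and the kubelet then removes the dead containers. The two "ContainerStatus from runtime service failed ... NotFound" errors are a benign race: CRI-O had already garbage-collected the containers by the time the kubelet asked after them, so the delete is treated as already done. A minimal sketch of that idempotent-delete pattern, with removeContainer as a hypothetical stand-in for the CRI call:

    package main

    import (
    	"fmt"

    	"google.golang.org/grpc/codes"
    	"google.golang.org/grpc/status"
    )

    // removeContainer is a hypothetical stand-in for a CRI RemoveContainer
    // call; here it always reports NotFound, as in the records above.
    func removeContainer(id string) error {
    	return status.Error(codes.NotFound, "could not find container "+id)
    }

    func main() {
    	err := removeContainer("9894636b812e")
    	// Idempotent delete: NotFound means the container is already gone,
    	// so removal counts as success rather than an error to retry.
    	if status.Code(err) == codes.NotFound {
    		fmt.Println("container already removed, nothing to do")
    		return
    	}
    	if err != nil {
    		fmt.Println("removal failed:", err)
    	}
    }
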
crc kubenswrapper[4881]: E0126 13:04:07.601669 4881 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb919d749_dc4f_4906_862d_782f2098d940.slice/crio-7f6a5766a3fc1008233ed7295355a194ca778ba0fd302028ce26983cdca34b30\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb919d749_dc4f_4906_862d_782f2098d940.slice\": RecentStats: unable to find data in memory cache]" Jan 26 13:04:08 crc kubenswrapper[4881]: I0126 13:04:08.096841 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b919d749-dc4f-4906-862d-782f2098d940" path="/var/lib/kubelet/pods/b919d749-dc4f-4906-862d-782f2098d940/volumes" Jan 26 13:04:12 crc kubenswrapper[4881]: I0126 13:04:12.534733 4881 generic.go:334] "Generic (PLEG): container finished" podID="a258482a-e394-4833-9bef-1fc3abc0c6a7" containerID="5ecb3c10448e3a4a1bfb3255000aac7c635d53750e98269b35e3f9564a89af3d" exitCode=0 Jan 26 13:04:12 crc kubenswrapper[4881]: I0126 13:04:12.534812 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"a258482a-e394-4833-9bef-1fc3abc0c6a7","Type":"ContainerDied","Data":"5ecb3c10448e3a4a1bfb3255000aac7c635d53750e98269b35e3f9564a89af3d"} Jan 26 13:04:13 crc kubenswrapper[4881]: I0126 13:04:13.560246 4881 generic.go:334] "Generic (PLEG): container finished" podID="a2a9efa3-8ac2-40ec-a543-b3a2013e8b39" containerID="91ce3461ffe9178168e64ba3bcad146204f2d1c9f1856afdefa6f2cd6c0396cc" exitCode=0 Jan 26 13:04:13 crc kubenswrapper[4881]: I0126 13:04:13.560650 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a2a9efa3-8ac2-40ec-a543-b3a2013e8b39","Type":"ContainerDied","Data":"91ce3461ffe9178168e64ba3bcad146204f2d1c9f1856afdefa6f2cd6c0396cc"} Jan 26 13:04:13 crc kubenswrapper[4881]: I0126 13:04:13.564902 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"a258482a-e394-4833-9bef-1fc3abc0c6a7","Type":"ContainerStarted","Data":"889f544883d82616e694d4d1ae855db1be898c3f7aa59184978dd5c2b9e0933e"} Jan 26 13:04:13 crc kubenswrapper[4881]: I0126 13:04:13.565358 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 26 13:04:13 crc kubenswrapper[4881]: I0126 13:04:13.641284 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.6412592 podStartE2EDuration="37.6412592s" podCreationTimestamp="2026-01-26 13:03:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 13:04:13.62639062 +0000 UTC m=+1726.105700636" watchObservedRunningTime="2026-01-26 13:04:13.6412592 +0000 UTC m=+1726.120569266" Jan 26 13:04:14 crc kubenswrapper[4881]: I0126 13:04:14.576770 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a2a9efa3-8ac2-40ec-a543-b3a2013e8b39","Type":"ContainerStarted","Data":"4c123c7114847fa47994c16496330461ef36feb9f93e274c1da31b17132dc5fa"} Jan 26 13:04:14 crc kubenswrapper[4881]: I0126 13:04:14.577818 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 26 13:04:21 crc kubenswrapper[4881]: I0126 13:04:21.083347 4881 scope.go:117] "RemoveContainer" 
containerID="a161c3072a6f7e76b03b9d4b408880fa8105ae8ab9bede27b8975663150557e3" Jan 26 13:04:21 crc kubenswrapper[4881]: E0126 13:04:21.084712 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" Jan 26 13:04:24 crc kubenswrapper[4881]: I0126 13:04:24.783145 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=47.783120056 podStartE2EDuration="47.783120056s" podCreationTimestamp="2026-01-26 13:03:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 13:04:14.61123514 +0000 UTC m=+1727.090545196" watchObservedRunningTime="2026-01-26 13:04:24.783120056 +0000 UTC m=+1737.262430082" Jan 26 13:04:24 crc kubenswrapper[4881]: I0126 13:04:24.793313 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-l8p22"] Jan 26 13:04:24 crc kubenswrapper[4881]: E0126 13:04:24.793856 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b919d749-dc4f-4906-862d-782f2098d940" containerName="init" Jan 26 13:04:24 crc kubenswrapper[4881]: I0126 13:04:24.793878 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="b919d749-dc4f-4906-862d-782f2098d940" containerName="init" Jan 26 13:04:24 crc kubenswrapper[4881]: E0126 13:04:24.793897 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79368344-12b4-4647-bce3-f74ede4f953a" containerName="dnsmasq-dns" Jan 26 13:04:24 crc kubenswrapper[4881]: I0126 13:04:24.793906 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="79368344-12b4-4647-bce3-f74ede4f953a" containerName="dnsmasq-dns" Jan 26 13:04:24 crc kubenswrapper[4881]: E0126 13:04:24.793926 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b919d749-dc4f-4906-862d-782f2098d940" containerName="dnsmasq-dns" Jan 26 13:04:24 crc kubenswrapper[4881]: I0126 13:04:24.793933 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="b919d749-dc4f-4906-862d-782f2098d940" containerName="dnsmasq-dns" Jan 26 13:04:24 crc kubenswrapper[4881]: E0126 13:04:24.793965 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79368344-12b4-4647-bce3-f74ede4f953a" containerName="init" Jan 26 13:04:24 crc kubenswrapper[4881]: I0126 13:04:24.793971 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="79368344-12b4-4647-bce3-f74ede4f953a" containerName="init" Jan 26 13:04:24 crc kubenswrapper[4881]: I0126 13:04:24.794148 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="79368344-12b4-4647-bce3-f74ede4f953a" containerName="dnsmasq-dns" Jan 26 13:04:24 crc kubenswrapper[4881]: I0126 13:04:24.794167 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="b919d749-dc4f-4906-862d-782f2098d940" containerName="dnsmasq-dns" Jan 26 13:04:24 crc kubenswrapper[4881]: I0126 13:04:24.795118 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-l8p22" Jan 26 13:04:24 crc kubenswrapper[4881]: I0126 13:04:24.798501 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 13:04:24 crc kubenswrapper[4881]: I0126 13:04:24.798575 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 13:04:24 crc kubenswrapper[4881]: I0126 13:04:24.798710 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-2krn6" Jan 26 13:04:24 crc kubenswrapper[4881]: I0126 13:04:24.801543 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 13:04:24 crc kubenswrapper[4881]: I0126 13:04:24.819192 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-l8p22"] Jan 26 13:04:24 crc kubenswrapper[4881]: I0126 13:04:24.905987 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3aaf2506-3e1b-4edd-af45-c98419359e59-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-l8p22\" (UID: \"3aaf2506-3e1b-4edd-af45-c98419359e59\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-l8p22" Jan 26 13:04:24 crc kubenswrapper[4881]: I0126 13:04:24.906223 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2qrg\" (UniqueName: \"kubernetes.io/projected/3aaf2506-3e1b-4edd-af45-c98419359e59-kube-api-access-h2qrg\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-l8p22\" (UID: \"3aaf2506-3e1b-4edd-af45-c98419359e59\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-l8p22" Jan 26 13:04:24 crc kubenswrapper[4881]: I0126 13:04:24.906348 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3aaf2506-3e1b-4edd-af45-c98419359e59-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-l8p22\" (UID: \"3aaf2506-3e1b-4edd-af45-c98419359e59\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-l8p22" Jan 26 13:04:24 crc kubenswrapper[4881]: I0126 13:04:24.906431 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3aaf2506-3e1b-4edd-af45-c98419359e59-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-l8p22\" (UID: \"3aaf2506-3e1b-4edd-af45-c98419359e59\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-l8p22" Jan 26 13:04:25 crc kubenswrapper[4881]: I0126 13:04:25.008711 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3aaf2506-3e1b-4edd-af45-c98419359e59-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-l8p22\" (UID: \"3aaf2506-3e1b-4edd-af45-c98419359e59\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-l8p22" Jan 26 13:04:25 crc kubenswrapper[4881]: I0126 13:04:25.008798 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h2qrg\" (UniqueName: 
\"kubernetes.io/projected/3aaf2506-3e1b-4edd-af45-c98419359e59-kube-api-access-h2qrg\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-l8p22\" (UID: \"3aaf2506-3e1b-4edd-af45-c98419359e59\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-l8p22" Jan 26 13:04:25 crc kubenswrapper[4881]: I0126 13:04:25.008852 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3aaf2506-3e1b-4edd-af45-c98419359e59-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-l8p22\" (UID: \"3aaf2506-3e1b-4edd-af45-c98419359e59\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-l8p22" Jan 26 13:04:25 crc kubenswrapper[4881]: I0126 13:04:25.008893 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3aaf2506-3e1b-4edd-af45-c98419359e59-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-l8p22\" (UID: \"3aaf2506-3e1b-4edd-af45-c98419359e59\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-l8p22" Jan 26 13:04:25 crc kubenswrapper[4881]: I0126 13:04:25.014828 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3aaf2506-3e1b-4edd-af45-c98419359e59-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-l8p22\" (UID: \"3aaf2506-3e1b-4edd-af45-c98419359e59\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-l8p22" Jan 26 13:04:25 crc kubenswrapper[4881]: I0126 13:04:25.020488 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3aaf2506-3e1b-4edd-af45-c98419359e59-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-l8p22\" (UID: \"3aaf2506-3e1b-4edd-af45-c98419359e59\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-l8p22" Jan 26 13:04:25 crc kubenswrapper[4881]: I0126 13:04:25.020816 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3aaf2506-3e1b-4edd-af45-c98419359e59-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-l8p22\" (UID: \"3aaf2506-3e1b-4edd-af45-c98419359e59\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-l8p22" Jan 26 13:04:25 crc kubenswrapper[4881]: I0126 13:04:25.026051 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2qrg\" (UniqueName: \"kubernetes.io/projected/3aaf2506-3e1b-4edd-af45-c98419359e59-kube-api-access-h2qrg\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-l8p22\" (UID: \"3aaf2506-3e1b-4edd-af45-c98419359e59\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-l8p22" Jan 26 13:04:25 crc kubenswrapper[4881]: I0126 13:04:25.120771 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-l8p22" Jan 26 13:04:25 crc kubenswrapper[4881]: I0126 13:04:25.730829 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-l8p22"] Jan 26 13:04:26 crc kubenswrapper[4881]: I0126 13:04:26.684716 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-l8p22" event={"ID":"3aaf2506-3e1b-4edd-af45-c98419359e59","Type":"ContainerStarted","Data":"68a1c7ecf99aa7131270542b6728084772a813c4651e665b21552305c5857512"} Jan 26 13:04:26 crc kubenswrapper[4881]: I0126 13:04:26.775015 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 26 13:04:28 crc kubenswrapper[4881]: I0126 13:04:28.038160 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 26 13:04:36 crc kubenswrapper[4881]: I0126 13:04:36.084407 4881 scope.go:117] "RemoveContainer" containerID="a161c3072a6f7e76b03b9d4b408880fa8105ae8ab9bede27b8975663150557e3" Jan 26 13:04:36 crc kubenswrapper[4881]: E0126 13:04:36.085916 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" Jan 26 13:04:37 crc kubenswrapper[4881]: I0126 13:04:37.671162 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 13:04:38 crc kubenswrapper[4881]: I0126 13:04:38.798276 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-l8p22" event={"ID":"3aaf2506-3e1b-4edd-af45-c98419359e59","Type":"ContainerStarted","Data":"c26e817f2f9cc5394a5440e814fa8b9e2ecc0620eb04f3fc7cce78c1bafd2101"} Jan 26 13:04:38 crc kubenswrapper[4881]: I0126 13:04:38.832333 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-l8p22" podStartSLOduration=2.898098729 podStartE2EDuration="14.832315143s" podCreationTimestamp="2026-01-26 13:04:24 +0000 UTC" firstStartedPulling="2026-01-26 13:04:25.734416205 +0000 UTC m=+1738.213726241" lastFinishedPulling="2026-01-26 13:04:37.668632609 +0000 UTC m=+1750.147942655" observedRunningTime="2026-01-26 13:04:38.822155113 +0000 UTC m=+1751.301465149" watchObservedRunningTime="2026-01-26 13:04:38.832315143 +0000 UTC m=+1751.311625169" Jan 26 13:04:47 crc kubenswrapper[4881]: I0126 13:04:47.082696 4881 scope.go:117] "RemoveContainer" containerID="a161c3072a6f7e76b03b9d4b408880fa8105ae8ab9bede27b8975663150557e3" Jan 26 13:04:47 crc kubenswrapper[4881]: E0126 13:04:47.083596 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" Jan 26 13:04:49 crc kubenswrapper[4881]: I0126 13:04:49.920328 4881 
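
The pod_startup_latency_tracker record above for repo-setup-edpm-deployment-openstack-edpm-ipam-l8p22 shows how the two durations relate: podStartE2EDuration (14.832315143s) runs from pod creation to the observed running time, while podStartSLOduration subtracts the image-pull window. Using the monotonic offsets (the m=+... values), the pull took 1750.147942655 - 1738.213726241 = 11.934216414s, and 14.832315143 - 11.934216414 = 2.898098729s, exactly the logged podStartSLOduration. The same arithmetic as a runnable check (floating-point rounding aside):

    package main

    import "fmt"

    func main() {
    	// Monotonic offsets (m=+...) from the repo-setup pod's
    	// pod_startup_latency_tracker record above.
    	firstStartedPulling := 1738.213726241
    	lastFinishedPulling := 1750.147942655
    	e2e := 14.832315143 // watchObservedRunningTime - podCreationTimestamp

    	pull := lastFinishedPulling - firstStartedPulling
    	fmt.Printf("image pull: %.9fs\n", pull)            // 11.934216414s
    	fmt.Printf("podStartSLOduration=%.9f\n", e2e-pull) // 2.898098729
    }
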
generic.go:334] "Generic (PLEG): container finished" podID="3aaf2506-3e1b-4edd-af45-c98419359e59" containerID="c26e817f2f9cc5394a5440e814fa8b9e2ecc0620eb04f3fc7cce78c1bafd2101" exitCode=0 Jan 26 13:04:49 crc kubenswrapper[4881]: I0126 13:04:49.920470 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-l8p22" event={"ID":"3aaf2506-3e1b-4edd-af45-c98419359e59","Type":"ContainerDied","Data":"c26e817f2f9cc5394a5440e814fa8b9e2ecc0620eb04f3fc7cce78c1bafd2101"} Jan 26 13:04:51 crc kubenswrapper[4881]: I0126 13:04:51.437858 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-l8p22" Jan 26 13:04:51 crc kubenswrapper[4881]: I0126 13:04:51.491862 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h2qrg\" (UniqueName: \"kubernetes.io/projected/3aaf2506-3e1b-4edd-af45-c98419359e59-kube-api-access-h2qrg\") pod \"3aaf2506-3e1b-4edd-af45-c98419359e59\" (UID: \"3aaf2506-3e1b-4edd-af45-c98419359e59\") " Jan 26 13:04:51 crc kubenswrapper[4881]: I0126 13:04:51.491993 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3aaf2506-3e1b-4edd-af45-c98419359e59-ssh-key-openstack-edpm-ipam\") pod \"3aaf2506-3e1b-4edd-af45-c98419359e59\" (UID: \"3aaf2506-3e1b-4edd-af45-c98419359e59\") " Jan 26 13:04:51 crc kubenswrapper[4881]: I0126 13:04:51.492118 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3aaf2506-3e1b-4edd-af45-c98419359e59-inventory\") pod \"3aaf2506-3e1b-4edd-af45-c98419359e59\" (UID: \"3aaf2506-3e1b-4edd-af45-c98419359e59\") " Jan 26 13:04:51 crc kubenswrapper[4881]: I0126 13:04:51.492226 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3aaf2506-3e1b-4edd-af45-c98419359e59-repo-setup-combined-ca-bundle\") pod \"3aaf2506-3e1b-4edd-af45-c98419359e59\" (UID: \"3aaf2506-3e1b-4edd-af45-c98419359e59\") " Jan 26 13:04:51 crc kubenswrapper[4881]: I0126 13:04:51.502844 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3aaf2506-3e1b-4edd-af45-c98419359e59-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "3aaf2506-3e1b-4edd-af45-c98419359e59" (UID: "3aaf2506-3e1b-4edd-af45-c98419359e59"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:04:51 crc kubenswrapper[4881]: I0126 13:04:51.502949 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3aaf2506-3e1b-4edd-af45-c98419359e59-kube-api-access-h2qrg" (OuterVolumeSpecName: "kube-api-access-h2qrg") pod "3aaf2506-3e1b-4edd-af45-c98419359e59" (UID: "3aaf2506-3e1b-4edd-af45-c98419359e59"). InnerVolumeSpecName "kube-api-access-h2qrg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:04:51 crc kubenswrapper[4881]: I0126 13:04:51.523681 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3aaf2506-3e1b-4edd-af45-c98419359e59-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "3aaf2506-3e1b-4edd-af45-c98419359e59" (UID: "3aaf2506-3e1b-4edd-af45-c98419359e59"). 
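
The UnmountVolume/TearDown records around this point all follow the kubelet volume manager's reconciler pattern: it continuously diffs the desired state of the world (volumes that pods still need) against the actual state (volumes currently mounted), and anything still mounted for the now-finished repo-setup pod is torn down and then reported as "Volume detached". The cpu_manager and memory_manager "RemoveStaleState" lines that appear when the next pod is admitted are the same idea applied to CPU and memory pinning state. A toy version of the unmount pass, with illustrative names rather than the kubelet's real types:

    package main

    import "fmt"

    // reconcile tears down anything in actual that is no longer in desired,
    // mirroring reconciler_common.go's UnmountVolume pass.
    func reconcile(desired, actual map[string]bool, tearDown func(string) error) {
    	for vol := range actual {
    		if !desired[vol] {
    			if err := tearDown(vol); err != nil {
    				fmt.Println("UnmountVolume failed, will retry:", vol, err)
    				continue
    			}
    			delete(actual, vol) // the "Volume detached" step
    		}
    	}
    }

    func main() {
    	desired := map[string]bool{} // pod deleted: nothing is wanted
    	actual := map[string]bool{
    		"inventory":                     true,
    		"kube-api-access-h2qrg":         true,
    		"ssh-key-openstack-edpm-ipam":   true,
    		"repo-setup-combined-ca-bundle": true,
    	}
    	reconcile(desired, actual, func(vol string) error {
    		fmt.Println("UnmountVolume.TearDown succeeded for", vol)
    		return nil
    	})
    	fmt.Println("volumes still mounted:", len(actual))
    }
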
InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:04:51 crc kubenswrapper[4881]: I0126 13:04:51.524797 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3aaf2506-3e1b-4edd-af45-c98419359e59-inventory" (OuterVolumeSpecName: "inventory") pod "3aaf2506-3e1b-4edd-af45-c98419359e59" (UID: "3aaf2506-3e1b-4edd-af45-c98419359e59"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:04:51 crc kubenswrapper[4881]: I0126 13:04:51.595122 4881 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3aaf2506-3e1b-4edd-af45-c98419359e59-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 13:04:51 crc kubenswrapper[4881]: I0126 13:04:51.595158 4881 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3aaf2506-3e1b-4edd-af45-c98419359e59-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 13:04:51 crc kubenswrapper[4881]: I0126 13:04:51.595174 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h2qrg\" (UniqueName: \"kubernetes.io/projected/3aaf2506-3e1b-4edd-af45-c98419359e59-kube-api-access-h2qrg\") on node \"crc\" DevicePath \"\"" Jan 26 13:04:51 crc kubenswrapper[4881]: I0126 13:04:51.595188 4881 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3aaf2506-3e1b-4edd-af45-c98419359e59-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 13:04:51 crc kubenswrapper[4881]: I0126 13:04:51.948491 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-l8p22" event={"ID":"3aaf2506-3e1b-4edd-af45-c98419359e59","Type":"ContainerDied","Data":"68a1c7ecf99aa7131270542b6728084772a813c4651e665b21552305c5857512"} Jan 26 13:04:51 crc kubenswrapper[4881]: I0126 13:04:51.948559 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="68a1c7ecf99aa7131270542b6728084772a813c4651e665b21552305c5857512" Jan 26 13:04:51 crc kubenswrapper[4881]: I0126 13:04:51.948595 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-l8p22" Jan 26 13:04:52 crc kubenswrapper[4881]: I0126 13:04:52.041042 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-gjsdk"] Jan 26 13:04:52 crc kubenswrapper[4881]: E0126 13:04:52.041457 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3aaf2506-3e1b-4edd-af45-c98419359e59" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 26 13:04:52 crc kubenswrapper[4881]: I0126 13:04:52.041476 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="3aaf2506-3e1b-4edd-af45-c98419359e59" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 26 13:04:52 crc kubenswrapper[4881]: I0126 13:04:52.041676 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="3aaf2506-3e1b-4edd-af45-c98419359e59" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 26 13:04:52 crc kubenswrapper[4881]: I0126 13:04:52.042254 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-gjsdk" Jan 26 13:04:52 crc kubenswrapper[4881]: I0126 13:04:52.046483 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 13:04:52 crc kubenswrapper[4881]: I0126 13:04:52.046621 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 13:04:52 crc kubenswrapper[4881]: I0126 13:04:52.046990 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-2krn6" Jan 26 13:04:52 crc kubenswrapper[4881]: I0126 13:04:52.060229 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 13:04:52 crc kubenswrapper[4881]: I0126 13:04:52.062808 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-gjsdk"] Jan 26 13:04:52 crc kubenswrapper[4881]: I0126 13:04:52.107173 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/07589eac-a07d-4781-a341-cbe2e35872a4-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-gjsdk\" (UID: \"07589eac-a07d-4781-a341-cbe2e35872a4\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-gjsdk" Jan 26 13:04:52 crc kubenswrapper[4881]: I0126 13:04:52.107424 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/07589eac-a07d-4781-a341-cbe2e35872a4-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-gjsdk\" (UID: \"07589eac-a07d-4781-a341-cbe2e35872a4\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-gjsdk" Jan 26 13:04:52 crc kubenswrapper[4881]: I0126 13:04:52.107550 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgzq5\" (UniqueName: \"kubernetes.io/projected/07589eac-a07d-4781-a341-cbe2e35872a4-kube-api-access-fgzq5\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-gjsdk\" (UID: \"07589eac-a07d-4781-a341-cbe2e35872a4\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-gjsdk" Jan 26 13:04:52 crc kubenswrapper[4881]: I0126 13:04:52.208759 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/07589eac-a07d-4781-a341-cbe2e35872a4-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-gjsdk\" (UID: \"07589eac-a07d-4781-a341-cbe2e35872a4\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-gjsdk" Jan 26 13:04:52 crc kubenswrapper[4881]: I0126 13:04:52.208861 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fgzq5\" (UniqueName: \"kubernetes.io/projected/07589eac-a07d-4781-a341-cbe2e35872a4-kube-api-access-fgzq5\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-gjsdk\" (UID: \"07589eac-a07d-4781-a341-cbe2e35872a4\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-gjsdk" Jan 26 13:04:52 crc kubenswrapper[4881]: I0126 13:04:52.208979 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/07589eac-a07d-4781-a341-cbe2e35872a4-ssh-key-openstack-edpm-ipam\") pod 
\"redhat-edpm-deployment-openstack-edpm-ipam-gjsdk\" (UID: \"07589eac-a07d-4781-a341-cbe2e35872a4\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-gjsdk" Jan 26 13:04:52 crc kubenswrapper[4881]: I0126 13:04:52.213036 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/07589eac-a07d-4781-a341-cbe2e35872a4-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-gjsdk\" (UID: \"07589eac-a07d-4781-a341-cbe2e35872a4\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-gjsdk" Jan 26 13:04:52 crc kubenswrapper[4881]: I0126 13:04:52.217120 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/07589eac-a07d-4781-a341-cbe2e35872a4-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-gjsdk\" (UID: \"07589eac-a07d-4781-a341-cbe2e35872a4\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-gjsdk" Jan 26 13:04:52 crc kubenswrapper[4881]: I0126 13:04:52.240167 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fgzq5\" (UniqueName: \"kubernetes.io/projected/07589eac-a07d-4781-a341-cbe2e35872a4-kube-api-access-fgzq5\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-gjsdk\" (UID: \"07589eac-a07d-4781-a341-cbe2e35872a4\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-gjsdk" Jan 26 13:04:52 crc kubenswrapper[4881]: I0126 13:04:52.381099 4881 scope.go:117] "RemoveContainer" containerID="16c788f298df69084e3d5e43594a8f99ceaadc84ac8a99b306917fc30b8e4dda" Jan 26 13:04:52 crc kubenswrapper[4881]: I0126 13:04:52.381611 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-gjsdk" Jan 26 13:04:52 crc kubenswrapper[4881]: I0126 13:04:52.434132 4881 scope.go:117] "RemoveContainer" containerID="da38eba2ba81cd55099cfc1c7a17fc1b90857284abb696c7e5080c25a9bd0ea6" Jan 26 13:04:52 crc kubenswrapper[4881]: I0126 13:04:52.549076 4881 scope.go:117] "RemoveContainer" containerID="0a439d874dcffdca61343bbdbf25fac25e2ac6d8ea89b7838b2a6e891896aeee" Jan 26 13:04:52 crc kubenswrapper[4881]: I0126 13:04:52.617886 4881 scope.go:117] "RemoveContainer" containerID="fd520d6bd78adb4ac5c7506b5231dfec9cbeb7c3df32e3b013567655502a5fbf" Jan 26 13:04:52 crc kubenswrapper[4881]: W0126 13:04:52.953339 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod07589eac_a07d_4781_a341_cbe2e35872a4.slice/crio-d2e9c7903a65f7181cba8b5ba83c9e8f31d2c2a27a2732912de7321760bdb730 WatchSource:0}: Error finding container d2e9c7903a65f7181cba8b5ba83c9e8f31d2c2a27a2732912de7321760bdb730: Status 404 returned error can't find the container with id d2e9c7903a65f7181cba8b5ba83c9e8f31d2c2a27a2732912de7321760bdb730 Jan 26 13:04:52 crc kubenswrapper[4881]: I0126 13:04:52.953457 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-gjsdk"] Jan 26 13:04:53 crc kubenswrapper[4881]: I0126 13:04:53.975422 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-gjsdk" event={"ID":"07589eac-a07d-4781-a341-cbe2e35872a4","Type":"ContainerStarted","Data":"d5e887c34337abb29ba98d4903e9775fdf6acada4e7ef093f720c0d71722743c"} Jan 26 13:04:53 crc kubenswrapper[4881]: I0126 13:04:53.975909 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-gjsdk" event={"ID":"07589eac-a07d-4781-a341-cbe2e35872a4","Type":"ContainerStarted","Data":"d2e9c7903a65f7181cba8b5ba83c9e8f31d2c2a27a2732912de7321760bdb730"} Jan 26 13:04:54 crc kubenswrapper[4881]: I0126 13:04:54.000933 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-gjsdk" podStartSLOduration=1.354644049 podStartE2EDuration="2.000914124s" podCreationTimestamp="2026-01-26 13:04:52 +0000 UTC" firstStartedPulling="2026-01-26 13:04:52.957841072 +0000 UTC m=+1765.437151098" lastFinishedPulling="2026-01-26 13:04:53.604111107 +0000 UTC m=+1766.083421173" observedRunningTime="2026-01-26 13:04:53.995009045 +0000 UTC m=+1766.474319101" watchObservedRunningTime="2026-01-26 13:04:54.000914124 +0000 UTC m=+1766.480224150" Jan 26 13:04:57 crc kubenswrapper[4881]: I0126 13:04:57.010966 4881 generic.go:334] "Generic (PLEG): container finished" podID="07589eac-a07d-4781-a341-cbe2e35872a4" containerID="d5e887c34337abb29ba98d4903e9775fdf6acada4e7ef093f720c0d71722743c" exitCode=0 Jan 26 13:04:57 crc kubenswrapper[4881]: I0126 13:04:57.011014 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-gjsdk" event={"ID":"07589eac-a07d-4781-a341-cbe2e35872a4","Type":"ContainerDied","Data":"d5e887c34337abb29ba98d4903e9775fdf6acada4e7ef093f720c0d71722743c"} Jan 26 13:04:58 crc kubenswrapper[4881]: I0126 13:04:58.621947 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-gjsdk" Jan 26 13:04:58 crc kubenswrapper[4881]: I0126 13:04:58.648726 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/07589eac-a07d-4781-a341-cbe2e35872a4-inventory\") pod \"07589eac-a07d-4781-a341-cbe2e35872a4\" (UID: \"07589eac-a07d-4781-a341-cbe2e35872a4\") " Jan 26 13:04:58 crc kubenswrapper[4881]: I0126 13:04:58.648816 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fgzq5\" (UniqueName: \"kubernetes.io/projected/07589eac-a07d-4781-a341-cbe2e35872a4-kube-api-access-fgzq5\") pod \"07589eac-a07d-4781-a341-cbe2e35872a4\" (UID: \"07589eac-a07d-4781-a341-cbe2e35872a4\") " Jan 26 13:04:58 crc kubenswrapper[4881]: I0126 13:04:58.648944 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/07589eac-a07d-4781-a341-cbe2e35872a4-ssh-key-openstack-edpm-ipam\") pod \"07589eac-a07d-4781-a341-cbe2e35872a4\" (UID: \"07589eac-a07d-4781-a341-cbe2e35872a4\") " Jan 26 13:04:58 crc kubenswrapper[4881]: I0126 13:04:58.660931 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07589eac-a07d-4781-a341-cbe2e35872a4-kube-api-access-fgzq5" (OuterVolumeSpecName: "kube-api-access-fgzq5") pod "07589eac-a07d-4781-a341-cbe2e35872a4" (UID: "07589eac-a07d-4781-a341-cbe2e35872a4"). InnerVolumeSpecName "kube-api-access-fgzq5". 
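
The kube-api-access-* volumes being mounted and unmounted throughout these records are projected service-account token volumes: every pod gets one, and inside the container it is presented at a well-known path holding three files (token, ca.crt, namespace). A small probe, runnable inside any pod, that reads that path; outside a pod it simply reports the files as unavailable:

    package main

    import (
    	"fmt"
    	"os"
    )

    func main() {
    	// Well-known mount point of the projected kube-api-access volume.
    	const root = "/var/run/secrets/kubernetes.io/serviceaccount"
    	for _, name := range []string{"token", "ca.crt", "namespace"} {
    		b, err := os.ReadFile(root + "/" + name)
    		if err != nil {
    			fmt.Println(name, "not available (not in a pod?):", err)
    			continue
    		}
    		fmt.Printf("%s: %d bytes\n", name, len(b))
    	}
    }
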
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:04:58 crc kubenswrapper[4881]: I0126 13:04:58.679246 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07589eac-a07d-4781-a341-cbe2e35872a4-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "07589eac-a07d-4781-a341-cbe2e35872a4" (UID: "07589eac-a07d-4781-a341-cbe2e35872a4"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:04:58 crc kubenswrapper[4881]: I0126 13:04:58.683826 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07589eac-a07d-4781-a341-cbe2e35872a4-inventory" (OuterVolumeSpecName: "inventory") pod "07589eac-a07d-4781-a341-cbe2e35872a4" (UID: "07589eac-a07d-4781-a341-cbe2e35872a4"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:04:58 crc kubenswrapper[4881]: I0126 13:04:58.750558 4881 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/07589eac-a07d-4781-a341-cbe2e35872a4-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 13:04:58 crc kubenswrapper[4881]: I0126 13:04:58.750589 4881 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/07589eac-a07d-4781-a341-cbe2e35872a4-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 13:04:58 crc kubenswrapper[4881]: I0126 13:04:58.750598 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fgzq5\" (UniqueName: \"kubernetes.io/projected/07589eac-a07d-4781-a341-cbe2e35872a4-kube-api-access-fgzq5\") on node \"crc\" DevicePath \"\"" Jan 26 13:04:59 crc kubenswrapper[4881]: I0126 13:04:59.037481 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-gjsdk" event={"ID":"07589eac-a07d-4781-a341-cbe2e35872a4","Type":"ContainerDied","Data":"d2e9c7903a65f7181cba8b5ba83c9e8f31d2c2a27a2732912de7321760bdb730"} Jan 26 13:04:59 crc kubenswrapper[4881]: I0126 13:04:59.037543 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d2e9c7903a65f7181cba8b5ba83c9e8f31d2c2a27a2732912de7321760bdb730" Jan 26 13:04:59 crc kubenswrapper[4881]: I0126 13:04:59.037548 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-gjsdk" Jan 26 13:04:59 crc kubenswrapper[4881]: I0126 13:04:59.143752 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fxdff"] Jan 26 13:04:59 crc kubenswrapper[4881]: E0126 13:04:59.144127 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07589eac-a07d-4781-a341-cbe2e35872a4" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 26 13:04:59 crc kubenswrapper[4881]: I0126 13:04:59.144144 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="07589eac-a07d-4781-a341-cbe2e35872a4" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 26 13:04:59 crc kubenswrapper[4881]: I0126 13:04:59.144365 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="07589eac-a07d-4781-a341-cbe2e35872a4" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 26 13:04:59 crc kubenswrapper[4881]: I0126 13:04:59.145022 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fxdff" Jan 26 13:04:59 crc kubenswrapper[4881]: I0126 13:04:59.147166 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-2krn6" Jan 26 13:04:59 crc kubenswrapper[4881]: I0126 13:04:59.147223 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 13:04:59 crc kubenswrapper[4881]: I0126 13:04:59.147225 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 13:04:59 crc kubenswrapper[4881]: I0126 13:04:59.148271 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 13:04:59 crc kubenswrapper[4881]: I0126 13:04:59.160065 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fxdff"] Jan 26 13:04:59 crc kubenswrapper[4881]: I0126 13:04:59.260285 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d817df95-5b02-462d-86b6-289f9decf3d3-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-fxdff\" (UID: \"d817df95-5b02-462d-86b6-289f9decf3d3\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fxdff" Jan 26 13:04:59 crc kubenswrapper[4881]: I0126 13:04:59.260698 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d817df95-5b02-462d-86b6-289f9decf3d3-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-fxdff\" (UID: \"d817df95-5b02-462d-86b6-289f9decf3d3\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fxdff" Jan 26 13:04:59 crc kubenswrapper[4881]: I0126 13:04:59.260767 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d817df95-5b02-462d-86b6-289f9decf3d3-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-fxdff\" (UID: \"d817df95-5b02-462d-86b6-289f9decf3d3\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fxdff" Jan 26 13:04:59 crc kubenswrapper[4881]: I0126 13:04:59.260794 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkgfr\" (UniqueName: \"kubernetes.io/projected/d817df95-5b02-462d-86b6-289f9decf3d3-kube-api-access-qkgfr\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-fxdff\" (UID: \"d817df95-5b02-462d-86b6-289f9decf3d3\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fxdff" Jan 26 13:04:59 crc kubenswrapper[4881]: I0126 13:04:59.363167 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d817df95-5b02-462d-86b6-289f9decf3d3-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-fxdff\" (UID: \"d817df95-5b02-462d-86b6-289f9decf3d3\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fxdff" Jan 26 13:04:59 crc kubenswrapper[4881]: I0126 13:04:59.363322 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/d817df95-5b02-462d-86b6-289f9decf3d3-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-fxdff\" (UID: \"d817df95-5b02-462d-86b6-289f9decf3d3\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fxdff" Jan 26 13:04:59 crc kubenswrapper[4881]: I0126 13:04:59.363347 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d817df95-5b02-462d-86b6-289f9decf3d3-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-fxdff\" (UID: \"d817df95-5b02-462d-86b6-289f9decf3d3\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fxdff" Jan 26 13:04:59 crc kubenswrapper[4881]: I0126 13:04:59.363377 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qkgfr\" (UniqueName: \"kubernetes.io/projected/d817df95-5b02-462d-86b6-289f9decf3d3-kube-api-access-qkgfr\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-fxdff\" (UID: \"d817df95-5b02-462d-86b6-289f9decf3d3\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fxdff" Jan 26 13:04:59 crc kubenswrapper[4881]: I0126 13:04:59.372946 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d817df95-5b02-462d-86b6-289f9decf3d3-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-fxdff\" (UID: \"d817df95-5b02-462d-86b6-289f9decf3d3\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fxdff" Jan 26 13:04:59 crc kubenswrapper[4881]: I0126 13:04:59.373135 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d817df95-5b02-462d-86b6-289f9decf3d3-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-fxdff\" (UID: \"d817df95-5b02-462d-86b6-289f9decf3d3\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fxdff" Jan 26 13:04:59 crc kubenswrapper[4881]: I0126 13:04:59.377441 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d817df95-5b02-462d-86b6-289f9decf3d3-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-fxdff\" (UID: \"d817df95-5b02-462d-86b6-289f9decf3d3\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fxdff" Jan 26 13:04:59 crc kubenswrapper[4881]: I0126 13:04:59.390489 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qkgfr\" (UniqueName: \"kubernetes.io/projected/d817df95-5b02-462d-86b6-289f9decf3d3-kube-api-access-qkgfr\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-fxdff\" (UID: \"d817df95-5b02-462d-86b6-289f9decf3d3\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fxdff" Jan 26 13:04:59 crc kubenswrapper[4881]: I0126 13:04:59.473660 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fxdff" Jan 26 13:05:00 crc kubenswrapper[4881]: I0126 13:05:00.067267 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fxdff"] Jan 26 13:05:01 crc kubenswrapper[4881]: I0126 13:05:01.059565 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fxdff" event={"ID":"d817df95-5b02-462d-86b6-289f9decf3d3","Type":"ContainerStarted","Data":"c256587822b7d955df7ad88256a8d309a96a8e828ab8a0676d5361cf81f52595"} Jan 26 13:05:01 crc kubenswrapper[4881]: I0126 13:05:01.059940 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fxdff" event={"ID":"d817df95-5b02-462d-86b6-289f9decf3d3","Type":"ContainerStarted","Data":"6e19115008d509d391b7bb32f53545f121742c5f9fae9468966b5bcaeb220651"} Jan 26 13:05:01 crc kubenswrapper[4881]: I0126 13:05:01.082857 4881 scope.go:117] "RemoveContainer" containerID="a161c3072a6f7e76b03b9d4b408880fa8105ae8ab9bede27b8975663150557e3" Jan 26 13:05:01 crc kubenswrapper[4881]: E0126 13:05:01.083584 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" Jan 26 13:05:01 crc kubenswrapper[4881]: I0126 13:05:01.092957 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fxdff" podStartSLOduration=1.682108334 podStartE2EDuration="2.092933072s" podCreationTimestamp="2026-01-26 13:04:59 +0000 UTC" firstStartedPulling="2026-01-26 13:05:00.080729458 +0000 UTC m=+1772.560039514" lastFinishedPulling="2026-01-26 13:05:00.491554216 +0000 UTC m=+1772.970864252" observedRunningTime="2026-01-26 13:05:01.085483317 +0000 UTC m=+1773.564793353" watchObservedRunningTime="2026-01-26 13:05:01.092933072 +0000 UTC m=+1773.572243138" Jan 26 13:05:15 crc kubenswrapper[4881]: I0126 13:05:15.083691 4881 scope.go:117] "RemoveContainer" containerID="a161c3072a6f7e76b03b9d4b408880fa8105ae8ab9bede27b8975663150557e3" Jan 26 13:05:15 crc kubenswrapper[4881]: E0126 13:05:15.084978 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" Jan 26 13:05:30 crc kubenswrapper[4881]: I0126 13:05:30.083872 4881 scope.go:117] "RemoveContainer" containerID="a161c3072a6f7e76b03b9d4b408880fa8105ae8ab9bede27b8975663150557e3" Jan 26 13:05:30 crc kubenswrapper[4881]: E0126 13:05:30.085267 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" Jan 26 13:05:45 crc kubenswrapper[4881]: I0126 13:05:45.083656 4881 scope.go:117] "RemoveContainer" containerID="a161c3072a6f7e76b03b9d4b408880fa8105ae8ab9bede27b8975663150557e3" Jan 26 13:05:45 crc kubenswrapper[4881]: E0126 13:05:45.084798 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" Jan 26 13:05:52 crc kubenswrapper[4881]: I0126 13:05:52.742009 4881 scope.go:117] "RemoveContainer" containerID="6a1c38c4b9acacbaed8099e6a11132e442fd80b20502374d87f5c65a3cf8a94a" Jan 26 13:05:52 crc kubenswrapper[4881]: I0126 13:05:52.775362 4881 scope.go:117] "RemoveContainer" containerID="585971640c82eeec51d4672a38687cebf4e71823e8caa145aebb5ea5be4ac0ac" Jan 26 13:05:52 crc kubenswrapper[4881]: I0126 13:05:52.862412 4881 scope.go:117] "RemoveContainer" containerID="216a2a795a5284e3fbc1a9d169045b67d442a3e7252b4aec3690c2243f4dc0d9" Jan 26 13:05:57 crc kubenswrapper[4881]: I0126 13:05:57.082445 4881 scope.go:117] "RemoveContainer" containerID="a161c3072a6f7e76b03b9d4b408880fa8105ae8ab9bede27b8975663150557e3" Jan 26 13:05:57 crc kubenswrapper[4881]: E0126 13:05:57.083267 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" Jan 26 13:06:10 crc kubenswrapper[4881]: I0126 13:06:10.083036 4881 scope.go:117] "RemoveContainer" containerID="a161c3072a6f7e76b03b9d4b408880fa8105ae8ab9bede27b8975663150557e3" Jan 26 13:06:10 crc kubenswrapper[4881]: E0126 13:06:10.083946 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" Jan 26 13:06:23 crc kubenswrapper[4881]: I0126 13:06:23.082676 4881 scope.go:117] "RemoveContainer" containerID="a161c3072a6f7e76b03b9d4b408880fa8105ae8ab9bede27b8975663150557e3" Jan 26 13:06:23 crc kubenswrapper[4881]: E0126 13:06:23.083390 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" Jan 26 13:06:36 crc kubenswrapper[4881]: I0126 13:06:36.083981 4881 scope.go:117] "RemoveContainer" containerID="a161c3072a6f7e76b03b9d4b408880fa8105ae8ab9bede27b8975663150557e3" Jan 26 13:06:36 crc 
kubenswrapper[4881]: E0126 13:06:36.085039 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" Jan 26 13:06:48 crc kubenswrapper[4881]: I0126 13:06:48.095893 4881 scope.go:117] "RemoveContainer" containerID="a161c3072a6f7e76b03b9d4b408880fa8105ae8ab9bede27b8975663150557e3" Jan 26 13:06:48 crc kubenswrapper[4881]: E0126 13:06:48.096712 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" Jan 26 13:06:52 crc kubenswrapper[4881]: I0126 13:06:52.932294 4881 scope.go:117] "RemoveContainer" containerID="eaab8fbfe5cce69e6e4b69beaac271f2e2ad2bf6920445f85880548fd8ffc611" Jan 26 13:06:52 crc kubenswrapper[4881]: I0126 13:06:52.954554 4881 scope.go:117] "RemoveContainer" containerID="40ff402a56985b1666dcfc8811d8f475690409f2673e05e00beee846c2fcea06" Jan 26 13:07:01 crc kubenswrapper[4881]: I0126 13:07:01.083961 4881 scope.go:117] "RemoveContainer" containerID="a161c3072a6f7e76b03b9d4b408880fa8105ae8ab9bede27b8975663150557e3" Jan 26 13:07:01 crc kubenswrapper[4881]: E0126 13:07:01.085147 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" Jan 26 13:07:13 crc kubenswrapper[4881]: I0126 13:07:13.083285 4881 scope.go:117] "RemoveContainer" containerID="a161c3072a6f7e76b03b9d4b408880fa8105ae8ab9bede27b8975663150557e3" Jan 26 13:07:13 crc kubenswrapper[4881]: E0126 13:07:13.084399 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" Jan 26 13:07:24 crc kubenswrapper[4881]: I0126 13:07:24.083590 4881 scope.go:117] "RemoveContainer" containerID="a161c3072a6f7e76b03b9d4b408880fa8105ae8ab9bede27b8975663150557e3" Jan 26 13:07:24 crc kubenswrapper[4881]: E0126 13:07:24.084543 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" Jan 26 13:07:36 crc 
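
The machine-config-daemon records repeating through this stretch are one pod stuck in CrashLoopBackOff: each sync attempt logs RemoveContainer for the dead container, then "Error syncing pod, skipping" because the restart backoff has not yet expired. The kubelet doubles the restart delay per crash up to a cap, which is what "back-off 5m0s" means; once at the cap, the pod is retried at most every five minutes until the container stays up. A sketch of that capped doubling, assuming the common 10s base (the base is not visible in this log):

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	// Capped exponential backoff in the style of the kubelet's
    	// container restart backoff. The 5m cap matches "back-off 5m0s"
    	// in the log; the 10s base is an assumption.
    	base := 10 * time.Second
    	maxDelay := 5 * time.Minute

    	delay := base
    	for attempt := 1; attempt <= 8; attempt++ {
    		fmt.Printf("crash %d: next restart in %v\n", attempt, delay)
    		delay *= 2
    		if delay > maxDelay {
    			delay = maxDelay
    		}
    	}
    }
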
kubenswrapper[4881]: I0126 13:07:36.090275 4881 scope.go:117] "RemoveContainer" containerID="a161c3072a6f7e76b03b9d4b408880fa8105ae8ab9bede27b8975663150557e3" Jan 26 13:07:36 crc kubenswrapper[4881]: E0126 13:07:36.091369 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" Jan 26 13:07:48 crc kubenswrapper[4881]: I0126 13:07:48.099827 4881 scope.go:117] "RemoveContainer" containerID="a161c3072a6f7e76b03b9d4b408880fa8105ae8ab9bede27b8975663150557e3" Jan 26 13:07:48 crc kubenswrapper[4881]: E0126 13:07:48.102053 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" Jan 26 13:08:00 crc kubenswrapper[4881]: I0126 13:08:00.083799 4881 scope.go:117] "RemoveContainer" containerID="a161c3072a6f7e76b03b9d4b408880fa8105ae8ab9bede27b8975663150557e3" Jan 26 13:08:00 crc kubenswrapper[4881]: E0126 13:08:00.084553 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" Jan 26 13:08:13 crc kubenswrapper[4881]: I0126 13:08:13.082310 4881 scope.go:117] "RemoveContainer" containerID="a161c3072a6f7e76b03b9d4b408880fa8105ae8ab9bede27b8975663150557e3" Jan 26 13:08:13 crc kubenswrapper[4881]: E0126 13:08:13.083294 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" Jan 26 13:08:24 crc kubenswrapper[4881]: I0126 13:08:24.083307 4881 scope.go:117] "RemoveContainer" containerID="a161c3072a6f7e76b03b9d4b408880fa8105ae8ab9bede27b8975663150557e3" Jan 26 13:08:24 crc kubenswrapper[4881]: E0126 13:08:24.084567 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" Jan 26 13:08:25 crc kubenswrapper[4881]: I0126 13:08:25.047085 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-dzvpm"] Jan 26 13:08:25 crc kubenswrapper[4881]: I0126 
13:08:25.056287 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-dzvpm"] Jan 26 13:08:26 crc kubenswrapper[4881]: I0126 13:08:26.073240 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-37af-account-create-update-5p95p"] Jan 26 13:08:26 crc kubenswrapper[4881]: I0126 13:08:26.095579 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01a846b5-2564-476e-aed0-d658864b48cc" path="/var/lib/kubelet/pods/01a846b5-2564-476e-aed0-d658864b48cc/volumes" Jan 26 13:08:26 crc kubenswrapper[4881]: I0126 13:08:26.096865 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-wd8nz"] Jan 26 13:08:26 crc kubenswrapper[4881]: I0126 13:08:26.101584 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-660e-account-create-update-6rxjm"] Jan 26 13:08:26 crc kubenswrapper[4881]: I0126 13:08:26.112353 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-37af-account-create-update-5p95p"] Jan 26 13:08:26 crc kubenswrapper[4881]: I0126 13:08:26.138018 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-db-create-b46kb"] Jan 26 13:08:26 crc kubenswrapper[4881]: I0126 13:08:26.148954 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-wd8nz"] Jan 26 13:08:26 crc kubenswrapper[4881]: I0126 13:08:26.157921 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-660e-account-create-update-6rxjm"] Jan 26 13:08:26 crc kubenswrapper[4881]: I0126 13:08:26.166682 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-db-create-b46kb"] Jan 26 13:08:27 crc kubenswrapper[4881]: I0126 13:08:27.031769 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-d805-account-create-update-r5zgz"] Jan 26 13:08:27 crc kubenswrapper[4881]: I0126 13:08:27.041044 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-d805-account-create-update-r5zgz"] Jan 26 13:08:28 crc kubenswrapper[4881]: I0126 13:08:28.104577 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2cb1e605-5606-436c-8095-47263b851c49" path="/var/lib/kubelet/pods/2cb1e605-5606-436c-8095-47263b851c49/volumes" Jan 26 13:08:28 crc kubenswrapper[4881]: I0126 13:08:28.106089 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ead8e57-515d-4130-a679-ee2a2a148e45" path="/var/lib/kubelet/pods/4ead8e57-515d-4130-a679-ee2a2a148e45/volumes" Jan 26 13:08:28 crc kubenswrapper[4881]: I0126 13:08:28.107672 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b4c479a-9fc9-4021-9557-c85b90ee39a3" path="/var/lib/kubelet/pods/6b4c479a-9fc9-4021-9557-c85b90ee39a3/volumes" Jan 26 13:08:28 crc kubenswrapper[4881]: I0126 13:08:28.109121 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad31881f-a01e-473c-a495-612e76bf3ecf" path="/var/lib/kubelet/pods/ad31881f-a01e-473c-a495-612e76bf3ecf/volumes" Jan 26 13:08:28 crc kubenswrapper[4881]: I0126 13:08:28.111296 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7efba59-5af3-4544-9121-8a8a88859aea" path="/var/lib/kubelet/pods/d7efba59-5af3-4544-9121-8a8a88859aea/volumes" Jan 26 13:08:30 crc kubenswrapper[4881]: I0126 13:08:30.552219 4881 generic.go:334] "Generic (PLEG): container finished" podID="d817df95-5b02-462d-86b6-289f9decf3d3" 
containerID="c256587822b7d955df7ad88256a8d309a96a8e828ab8a0676d5361cf81f52595" exitCode=0 Jan 26 13:08:30 crc kubenswrapper[4881]: I0126 13:08:30.552754 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fxdff" event={"ID":"d817df95-5b02-462d-86b6-289f9decf3d3","Type":"ContainerDied","Data":"c256587822b7d955df7ad88256a8d309a96a8e828ab8a0676d5361cf81f52595"} Jan 26 13:08:32 crc kubenswrapper[4881]: I0126 13:08:32.080318 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fxdff" Jan 26 13:08:32 crc kubenswrapper[4881]: I0126 13:08:32.176576 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qkgfr\" (UniqueName: \"kubernetes.io/projected/d817df95-5b02-462d-86b6-289f9decf3d3-kube-api-access-qkgfr\") pod \"d817df95-5b02-462d-86b6-289f9decf3d3\" (UID: \"d817df95-5b02-462d-86b6-289f9decf3d3\") " Jan 26 13:08:32 crc kubenswrapper[4881]: I0126 13:08:32.177639 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d817df95-5b02-462d-86b6-289f9decf3d3-bootstrap-combined-ca-bundle\") pod \"d817df95-5b02-462d-86b6-289f9decf3d3\" (UID: \"d817df95-5b02-462d-86b6-289f9decf3d3\") " Jan 26 13:08:32 crc kubenswrapper[4881]: I0126 13:08:32.178073 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d817df95-5b02-462d-86b6-289f9decf3d3-ssh-key-openstack-edpm-ipam\") pod \"d817df95-5b02-462d-86b6-289f9decf3d3\" (UID: \"d817df95-5b02-462d-86b6-289f9decf3d3\") " Jan 26 13:08:32 crc kubenswrapper[4881]: I0126 13:08:32.178103 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d817df95-5b02-462d-86b6-289f9decf3d3-inventory\") pod \"d817df95-5b02-462d-86b6-289f9decf3d3\" (UID: \"d817df95-5b02-462d-86b6-289f9decf3d3\") " Jan 26 13:08:32 crc kubenswrapper[4881]: I0126 13:08:32.184277 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d817df95-5b02-462d-86b6-289f9decf3d3-kube-api-access-qkgfr" (OuterVolumeSpecName: "kube-api-access-qkgfr") pod "d817df95-5b02-462d-86b6-289f9decf3d3" (UID: "d817df95-5b02-462d-86b6-289f9decf3d3"). InnerVolumeSpecName "kube-api-access-qkgfr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:08:32 crc kubenswrapper[4881]: I0126 13:08:32.196976 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d817df95-5b02-462d-86b6-289f9decf3d3-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "d817df95-5b02-462d-86b6-289f9decf3d3" (UID: "d817df95-5b02-462d-86b6-289f9decf3d3"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:08:32 crc kubenswrapper[4881]: I0126 13:08:32.211638 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d817df95-5b02-462d-86b6-289f9decf3d3-inventory" (OuterVolumeSpecName: "inventory") pod "d817df95-5b02-462d-86b6-289f9decf3d3" (UID: "d817df95-5b02-462d-86b6-289f9decf3d3"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:08:32 crc kubenswrapper[4881]: I0126 13:08:32.215862 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d817df95-5b02-462d-86b6-289f9decf3d3-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "d817df95-5b02-462d-86b6-289f9decf3d3" (UID: "d817df95-5b02-462d-86b6-289f9decf3d3"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:08:32 crc kubenswrapper[4881]: I0126 13:08:32.280426 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qkgfr\" (UniqueName: \"kubernetes.io/projected/d817df95-5b02-462d-86b6-289f9decf3d3-kube-api-access-qkgfr\") on node \"crc\" DevicePath \"\"" Jan 26 13:08:32 crc kubenswrapper[4881]: I0126 13:08:32.280454 4881 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d817df95-5b02-462d-86b6-289f9decf3d3-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 13:08:32 crc kubenswrapper[4881]: I0126 13:08:32.280463 4881 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d817df95-5b02-462d-86b6-289f9decf3d3-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 13:08:32 crc kubenswrapper[4881]: I0126 13:08:32.280472 4881 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d817df95-5b02-462d-86b6-289f9decf3d3-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 13:08:32 crc kubenswrapper[4881]: I0126 13:08:32.575981 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fxdff" event={"ID":"d817df95-5b02-462d-86b6-289f9decf3d3","Type":"ContainerDied","Data":"6e19115008d509d391b7bb32f53545f121742c5f9fae9468966b5bcaeb220651"} Jan 26 13:08:32 crc kubenswrapper[4881]: I0126 13:08:32.576023 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e19115008d509d391b7bb32f53545f121742c5f9fae9468966b5bcaeb220651" Jan 26 13:08:32 crc kubenswrapper[4881]: I0126 13:08:32.576039 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fxdff" Jan 26 13:08:32 crc kubenswrapper[4881]: I0126 13:08:32.673745 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-brjhw"] Jan 26 13:08:32 crc kubenswrapper[4881]: E0126 13:08:32.674353 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d817df95-5b02-462d-86b6-289f9decf3d3" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 26 13:08:32 crc kubenswrapper[4881]: I0126 13:08:32.674382 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="d817df95-5b02-462d-86b6-289f9decf3d3" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 26 13:08:32 crc kubenswrapper[4881]: I0126 13:08:32.674688 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="d817df95-5b02-462d-86b6-289f9decf3d3" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 26 13:08:32 crc kubenswrapper[4881]: I0126 13:08:32.675364 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-brjhw" Jan 26 13:08:32 crc kubenswrapper[4881]: I0126 13:08:32.683982 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-2krn6" Jan 26 13:08:32 crc kubenswrapper[4881]: I0126 13:08:32.684176 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 13:08:32 crc kubenswrapper[4881]: I0126 13:08:32.684196 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-brjhw"] Jan 26 13:08:32 crc kubenswrapper[4881]: I0126 13:08:32.684305 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 13:08:32 crc kubenswrapper[4881]: I0126 13:08:32.684344 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 13:08:32 crc kubenswrapper[4881]: I0126 13:08:32.791620 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b4d6e825-d231-4128-bd5d-3db56fbef5ec-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-brjhw\" (UID: \"b4d6e825-d231-4128-bd5d-3db56fbef5ec\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-brjhw" Jan 26 13:08:32 crc kubenswrapper[4881]: I0126 13:08:32.791738 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w85hw\" (UniqueName: \"kubernetes.io/projected/b4d6e825-d231-4128-bd5d-3db56fbef5ec-kube-api-access-w85hw\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-brjhw\" (UID: \"b4d6e825-d231-4128-bd5d-3db56fbef5ec\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-brjhw" Jan 26 13:08:32 crc kubenswrapper[4881]: I0126 13:08:32.791773 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b4d6e825-d231-4128-bd5d-3db56fbef5ec-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-brjhw\" (UID: \"b4d6e825-d231-4128-bd5d-3db56fbef5ec\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-brjhw" Jan 26 13:08:32 crc kubenswrapper[4881]: I0126 13:08:32.893496 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b4d6e825-d231-4128-bd5d-3db56fbef5ec-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-brjhw\" (UID: \"b4d6e825-d231-4128-bd5d-3db56fbef5ec\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-brjhw" Jan 26 13:08:32 crc kubenswrapper[4881]: I0126 13:08:32.893785 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w85hw\" (UniqueName: \"kubernetes.io/projected/b4d6e825-d231-4128-bd5d-3db56fbef5ec-kube-api-access-w85hw\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-brjhw\" (UID: \"b4d6e825-d231-4128-bd5d-3db56fbef5ec\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-brjhw" Jan 26 13:08:32 crc kubenswrapper[4881]: I0126 13:08:32.894697 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/b4d6e825-d231-4128-bd5d-3db56fbef5ec-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-brjhw\" (UID: \"b4d6e825-d231-4128-bd5d-3db56fbef5ec\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-brjhw" Jan 26 13:08:32 crc kubenswrapper[4881]: I0126 13:08:32.900063 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b4d6e825-d231-4128-bd5d-3db56fbef5ec-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-brjhw\" (UID: \"b4d6e825-d231-4128-bd5d-3db56fbef5ec\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-brjhw" Jan 26 13:08:32 crc kubenswrapper[4881]: I0126 13:08:32.900233 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b4d6e825-d231-4128-bd5d-3db56fbef5ec-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-brjhw\" (UID: \"b4d6e825-d231-4128-bd5d-3db56fbef5ec\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-brjhw" Jan 26 13:08:32 crc kubenswrapper[4881]: I0126 13:08:32.920285 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w85hw\" (UniqueName: \"kubernetes.io/projected/b4d6e825-d231-4128-bd5d-3db56fbef5ec-kube-api-access-w85hw\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-brjhw\" (UID: \"b4d6e825-d231-4128-bd5d-3db56fbef5ec\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-brjhw" Jan 26 13:08:33 crc kubenswrapper[4881]: I0126 13:08:33.006399 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-brjhw" Jan 26 13:08:34 crc kubenswrapper[4881]: I0126 13:08:34.583317 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-brjhw"] Jan 26 13:08:34 crc kubenswrapper[4881]: I0126 13:08:34.590292 4881 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 13:08:34 crc kubenswrapper[4881]: I0126 13:08:34.601907 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-brjhw" event={"ID":"b4d6e825-d231-4128-bd5d-3db56fbef5ec","Type":"ContainerStarted","Data":"2147039b9b629032153a25f0d9b52a6e58e7c01ea9e0c4b580c6800b2804181d"} Jan 26 13:08:36 crc kubenswrapper[4881]: I0126 13:08:36.082957 4881 scope.go:117] "RemoveContainer" containerID="a161c3072a6f7e76b03b9d4b408880fa8105ae8ab9bede27b8975663150557e3" Jan 26 13:08:36 crc kubenswrapper[4881]: E0126 13:08:36.083930 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" Jan 26 13:08:36 crc kubenswrapper[4881]: I0126 13:08:36.632569 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-brjhw" event={"ID":"b4d6e825-d231-4128-bd5d-3db56fbef5ec","Type":"ContainerStarted","Data":"bcbbc121004652dae111823c30757e1f9f4168745760d886561a03fbf8386232"} Jan 26 13:08:36 crc 
kubenswrapper[4881]: I0126 13:08:36.657044 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-brjhw" podStartSLOduration=3.607841681 podStartE2EDuration="4.657015267s" podCreationTimestamp="2026-01-26 13:08:32 +0000 UTC" firstStartedPulling="2026-01-26 13:08:34.590047156 +0000 UTC m=+1987.069357182" lastFinishedPulling="2026-01-26 13:08:35.639220732 +0000 UTC m=+1988.118530768" observedRunningTime="2026-01-26 13:08:36.653959963 +0000 UTC m=+1989.133269999" watchObservedRunningTime="2026-01-26 13:08:36.657015267 +0000 UTC m=+1989.136325323" Jan 26 13:08:47 crc kubenswrapper[4881]: I0126 13:08:47.083616 4881 scope.go:117] "RemoveContainer" containerID="a161c3072a6f7e76b03b9d4b408880fa8105ae8ab9bede27b8975663150557e3" Jan 26 13:08:47 crc kubenswrapper[4881]: E0126 13:08:47.084686 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" Jan 26 13:08:49 crc kubenswrapper[4881]: I0126 13:08:49.075289 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-47fkz"] Jan 26 13:08:49 crc kubenswrapper[4881]: I0126 13:08:49.087371 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-47fkz"] Jan 26 13:08:50 crc kubenswrapper[4881]: I0126 13:08:50.096203 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12e90465-1c1a-409f-b312-85859d8b0a52" path="/var/lib/kubelet/pods/12e90465-1c1a-409f-b312-85859d8b0a52/volumes" Jan 26 13:08:53 crc kubenswrapper[4881]: I0126 13:08:53.030587 4881 scope.go:117] "RemoveContainer" containerID="8bb3e925f67509b4bf3e46fc3adf2e07a3cbfe351f33b717e77d82d679f6defe" Jan 26 13:08:53 crc kubenswrapper[4881]: I0126 13:08:53.065642 4881 scope.go:117] "RemoveContainer" containerID="86bb777ab8e7552d57653bd091314db34a81fa654abe75fe9120c0df42d3eca0" Jan 26 13:08:53 crc kubenswrapper[4881]: I0126 13:08:53.135545 4881 scope.go:117] "RemoveContainer" containerID="0af25594a404692287871a6ca5b698ccd06291967a2d575c4dd54b27fafdf611" Jan 26 13:08:53 crc kubenswrapper[4881]: I0126 13:08:53.188819 4881 scope.go:117] "RemoveContainer" containerID="2a426c0f8f27813bb00129d76f29658873e5726cbe7e41c70fce685a597a0fb2" Jan 26 13:08:53 crc kubenswrapper[4881]: I0126 13:08:53.240360 4881 scope.go:117] "RemoveContainer" containerID="90d07ff7c7b86b3c241fd96dc9de8bed049bbf6f08f370febdd263fd653de7ce" Jan 26 13:08:53 crc kubenswrapper[4881]: I0126 13:08:53.300837 4881 scope.go:117] "RemoveContainer" containerID="505a889e439fd7438d7783206924e15a93e5af2528f0d6e63446242b47c433b2" Jan 26 13:08:53 crc kubenswrapper[4881]: I0126 13:08:53.342405 4881 scope.go:117] "RemoveContainer" containerID="f73cbde60c8c592e4bfaa86c59e453cd9e1d63c0f1c3d8a0da2c2b1ee487f975" Jan 26 13:09:02 crc kubenswrapper[4881]: I0126 13:09:02.083112 4881 scope.go:117] "RemoveContainer" containerID="a161c3072a6f7e76b03b9d4b408880fa8105ae8ab9bede27b8975663150557e3" Jan 26 13:09:02 crc kubenswrapper[4881]: I0126 13:09:02.934729 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" 
event={"ID":"ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19","Type":"ContainerStarted","Data":"7750edf22d4cbb66ace9f47e5c6d27c40449613083712f392ec90e1aada14c4a"} Jan 26 13:09:04 crc kubenswrapper[4881]: I0126 13:09:04.044946 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-96f0-account-create-update-lqrtl"] Jan 26 13:09:04 crc kubenswrapper[4881]: I0126 13:09:04.060040 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-kzz6j"] Jan 26 13:09:04 crc kubenswrapper[4881]: I0126 13:09:04.071069 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-24db-account-create-update-2mfcv"] Jan 26 13:09:04 crc kubenswrapper[4881]: I0126 13:09:04.078756 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-87bfw"] Jan 26 13:09:04 crc kubenswrapper[4881]: I0126 13:09:04.132349 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-24db-account-create-update-2mfcv"] Jan 26 13:09:04 crc kubenswrapper[4881]: I0126 13:09:04.132386 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-96f0-account-create-update-lqrtl"] Jan 26 13:09:04 crc kubenswrapper[4881]: I0126 13:09:04.132398 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-kzz6j"] Jan 26 13:09:04 crc kubenswrapper[4881]: I0126 13:09:04.132409 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-87bfw"] Jan 26 13:09:06 crc kubenswrapper[4881]: I0126 13:09:06.096263 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27ac0661-2a31-41bb-9dad-adee7c8dddf5" path="/var/lib/kubelet/pods/27ac0661-2a31-41bb-9dad-adee7c8dddf5/volumes" Jan 26 13:09:06 crc kubenswrapper[4881]: I0126 13:09:06.097767 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="413e9cd5-53f0-4aa0-b58f-2dd1c3d4a329" path="/var/lib/kubelet/pods/413e9cd5-53f0-4aa0-b58f-2dd1c3d4a329/volumes" Jan 26 13:09:06 crc kubenswrapper[4881]: I0126 13:09:06.098444 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79b83426-8fd6-49cd-8788-b4f7c0bb2216" path="/var/lib/kubelet/pods/79b83426-8fd6-49cd-8788-b4f7c0bb2216/volumes" Jan 26 13:09:06 crc kubenswrapper[4881]: I0126 13:09:06.099235 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a8b530f-4ae9-45a2-9a70-bba160dec46c" path="/var/lib/kubelet/pods/8a8b530f-4ae9-45a2-9a70-bba160dec46c/volumes" Jan 26 13:09:10 crc kubenswrapper[4881]: I0126 13:09:10.049333 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-x5w44"] Jan 26 13:09:10 crc kubenswrapper[4881]: I0126 13:09:10.067705 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-x5w44"] Jan 26 13:09:10 crc kubenswrapper[4881]: I0126 13:09:10.098504 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="817c084d-f62f-49b2-8482-e37c799af743" path="/var/lib/kubelet/pods/817c084d-f62f-49b2-8482-e37c799af743/volumes" Jan 26 13:09:14 crc kubenswrapper[4881]: I0126 13:09:14.049953 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-ddrsw"] Jan 26 13:09:14 crc kubenswrapper[4881]: I0126 13:09:14.066899 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-ad3b-account-create-update-cf2rd"] Jan 26 13:09:14 crc kubenswrapper[4881]: I0126 13:09:14.078110 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-ddrsw"] Jan 
26 13:09:14 crc kubenswrapper[4881]: I0126 13:09:14.115565 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87493a10-28dd-468d-82df-d225543ffd0e" path="/var/lib/kubelet/pods/87493a10-28dd-468d-82df-d225543ffd0e/volumes" Jan 26 13:09:14 crc kubenswrapper[4881]: I0126 13:09:14.116398 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-ad3b-account-create-update-cf2rd"] Jan 26 13:09:15 crc kubenswrapper[4881]: I0126 13:09:15.049612 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-2sp7f"] Jan 26 13:09:15 crc kubenswrapper[4881]: I0126 13:09:15.068618 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-2sp7f"] Jan 26 13:09:16 crc kubenswrapper[4881]: I0126 13:09:16.054854 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-445e-account-create-update-hgkn9"] Jan 26 13:09:16 crc kubenswrapper[4881]: I0126 13:09:16.065577 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-445e-account-create-update-hgkn9"] Jan 26 13:09:16 crc kubenswrapper[4881]: I0126 13:09:16.103142 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18188e05-15cd-421d-9ad4-a68243fa2d84" path="/var/lib/kubelet/pods/18188e05-15cd-421d-9ad4-a68243fa2d84/volumes" Jan 26 13:09:16 crc kubenswrapper[4881]: I0126 13:09:16.104050 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8395a76c-6569-43c6-ba18-438efdb98980" path="/var/lib/kubelet/pods/8395a76c-6569-43c6-ba18-438efdb98980/volumes" Jan 26 13:09:16 crc kubenswrapper[4881]: I0126 13:09:16.104843 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a3edaef-3ff0-45b1-b037-ca545d1bd9af" path="/var/lib/kubelet/pods/8a3edaef-3ff0-45b1-b037-ca545d1bd9af/volumes" Jan 26 13:09:53 crc kubenswrapper[4881]: I0126 13:09:53.530473 4881 scope.go:117] "RemoveContainer" containerID="f70b08bc1805fc4ccd041611f50ce229a8150aded2efe88707bab5a05b6aee68" Jan 26 13:09:53 crc kubenswrapper[4881]: I0126 13:09:53.561040 4881 scope.go:117] "RemoveContainer" containerID="fe28d77e3a215a8b83779ab2b04599b91d3b63debd71596cf507483db235d1d1" Jan 26 13:09:53 crc kubenswrapper[4881]: I0126 13:09:53.615584 4881 scope.go:117] "RemoveContainer" containerID="43de24a7d1b0034ce8c703e1f91de72dda23259326e9e287d3de168b7104d5c3" Jan 26 13:09:53 crc kubenswrapper[4881]: I0126 13:09:53.668594 4881 scope.go:117] "RemoveContainer" containerID="41abfaf409a233b07eeecd7c94fc06887a401e865fdbbafd767a35924a9c1d2f" Jan 26 13:09:53 crc kubenswrapper[4881]: I0126 13:09:53.713357 4881 scope.go:117] "RemoveContainer" containerID="4c3c13247a0ecc4fdb4f19bec6e9968d0a020c4dad3ad125a98a30a0669473f2" Jan 26 13:09:53 crc kubenswrapper[4881]: I0126 13:09:53.756671 4881 scope.go:117] "RemoveContainer" containerID="bebedd510aaa4fdc24d4500b04b59fb6022593d94c5a9ee0e91f7ad4de2bb9d1" Jan 26 13:09:53 crc kubenswrapper[4881]: I0126 13:09:53.800766 4881 scope.go:117] "RemoveContainer" containerID="9e3689c99cb55c2a380e1c7d9d475b86b1b65c071da7533e6ea5cce0bb2bf71f" Jan 26 13:09:53 crc kubenswrapper[4881]: I0126 13:09:53.825594 4881 scope.go:117] "RemoveContainer" containerID="f7814b853c18f408a846236c0f55207ac7fe2c9e1d2dcf4cfb564c03ac5621dc" Jan 26 13:09:53 crc kubenswrapper[4881]: I0126 13:09:53.857034 4881 scope.go:117] "RemoveContainer" containerID="3e43e683d4f86cfa99637f44e3d2a0f8278f06332d914db1dfaabe6205cf4905" Jan 26 13:10:09 crc kubenswrapper[4881]: I0126 13:10:09.101289 4881 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/keystone-bootstrap-wmvz7"] Jan 26 13:10:09 crc kubenswrapper[4881]: I0126 13:10:09.107818 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-t85k4"] Jan 26 13:10:09 crc kubenswrapper[4881]: I0126 13:10:09.116551 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-tqpzh"] Jan 26 13:10:09 crc kubenswrapper[4881]: I0126 13:10:09.124938 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-wmvz7"] Jan 26 13:10:09 crc kubenswrapper[4881]: I0126 13:10:09.134490 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-t85k4"] Jan 26 13:10:09 crc kubenswrapper[4881]: I0126 13:10:09.142608 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-tqpzh"] Jan 26 13:10:10 crc kubenswrapper[4881]: I0126 13:10:10.107084 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="741cf6ae-617a-440b-b6ec-63dc4e87ff4a" path="/var/lib/kubelet/pods/741cf6ae-617a-440b-b6ec-63dc4e87ff4a/volumes" Jan 26 13:10:10 crc kubenswrapper[4881]: I0126 13:10:10.110065 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce1a489b-0795-4817-ad32-7fdf1ea68559" path="/var/lib/kubelet/pods/ce1a489b-0795-4817-ad32-7fdf1ea68559/volumes" Jan 26 13:10:10 crc kubenswrapper[4881]: I0126 13:10:10.111430 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0fc8471-ad65-44cf-bf03-1c037aafdf11" path="/var/lib/kubelet/pods/d0fc8471-ad65-44cf-bf03-1c037aafdf11/volumes" Jan 26 13:10:19 crc kubenswrapper[4881]: I0126 13:10:19.045128 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-db-sync-5pg9p"] Jan 26 13:10:19 crc kubenswrapper[4881]: I0126 13:10:19.062907 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-db-sync-5pg9p"] Jan 26 13:10:20 crc kubenswrapper[4881]: I0126 13:10:20.104310 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="adf01549-e1d0-46a7-a141-bdc0f5c81458" path="/var/lib/kubelet/pods/adf01549-e1d0-46a7-a141-bdc0f5c81458/volumes" Jan 26 13:10:24 crc kubenswrapper[4881]: I0126 13:10:24.045893 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-lgk7s"] Jan 26 13:10:24 crc kubenswrapper[4881]: I0126 13:10:24.063735 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-lgk7s"] Jan 26 13:10:24 crc kubenswrapper[4881]: I0126 13:10:24.105866 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="132298e2-a2f4-4311-9f7a-3e4e08abe34b" path="/var/lib/kubelet/pods/132298e2-a2f4-4311-9f7a-3e4e08abe34b/volumes" Jan 26 13:10:39 crc kubenswrapper[4881]: I0126 13:10:39.060335 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-hmsfj"] Jan 26 13:10:39 crc kubenswrapper[4881]: I0126 13:10:39.070200 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-hmsfj"] Jan 26 13:10:40 crc kubenswrapper[4881]: I0126 13:10:40.038998 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-nmggw"] Jan 26 13:10:40 crc kubenswrapper[4881]: I0126 13:10:40.049181 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-nmggw"] Jan 26 13:10:40 crc kubenswrapper[4881]: I0126 13:10:40.095442 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a410393d-b0c5-45bf-b9f7-897ad16759d4" 
path="/var/lib/kubelet/pods/a410393d-b0c5-45bf-b9f7-897ad16759d4/volumes" Jan 26 13:10:40 crc kubenswrapper[4881]: I0126 13:10:40.096724 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf634913-5017-4a94-a3e7-0c337bb9fb4d" path="/var/lib/kubelet/pods/cf634913-5017-4a94-a3e7-0c337bb9fb4d/volumes" Jan 26 13:10:40 crc kubenswrapper[4881]: I0126 13:10:40.107299 4881 generic.go:334] "Generic (PLEG): container finished" podID="b4d6e825-d231-4128-bd5d-3db56fbef5ec" containerID="bcbbc121004652dae111823c30757e1f9f4168745760d886561a03fbf8386232" exitCode=0 Jan 26 13:10:40 crc kubenswrapper[4881]: I0126 13:10:40.107358 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-brjhw" event={"ID":"b4d6e825-d231-4128-bd5d-3db56fbef5ec","Type":"ContainerDied","Data":"bcbbc121004652dae111823c30757e1f9f4168745760d886561a03fbf8386232"} Jan 26 13:10:41 crc kubenswrapper[4881]: I0126 13:10:41.967417 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-brjhw" Jan 26 13:10:42 crc kubenswrapper[4881]: I0126 13:10:42.107589 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b4d6e825-d231-4128-bd5d-3db56fbef5ec-inventory\") pod \"b4d6e825-d231-4128-bd5d-3db56fbef5ec\" (UID: \"b4d6e825-d231-4128-bd5d-3db56fbef5ec\") " Jan 26 13:10:42 crc kubenswrapper[4881]: I0126 13:10:42.107790 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b4d6e825-d231-4128-bd5d-3db56fbef5ec-ssh-key-openstack-edpm-ipam\") pod \"b4d6e825-d231-4128-bd5d-3db56fbef5ec\" (UID: \"b4d6e825-d231-4128-bd5d-3db56fbef5ec\") " Jan 26 13:10:42 crc kubenswrapper[4881]: I0126 13:10:42.108866 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w85hw\" (UniqueName: \"kubernetes.io/projected/b4d6e825-d231-4128-bd5d-3db56fbef5ec-kube-api-access-w85hw\") pod \"b4d6e825-d231-4128-bd5d-3db56fbef5ec\" (UID: \"b4d6e825-d231-4128-bd5d-3db56fbef5ec\") " Jan 26 13:10:42 crc kubenswrapper[4881]: I0126 13:10:42.116893 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4d6e825-d231-4128-bd5d-3db56fbef5ec-kube-api-access-w85hw" (OuterVolumeSpecName: "kube-api-access-w85hw") pod "b4d6e825-d231-4128-bd5d-3db56fbef5ec" (UID: "b4d6e825-d231-4128-bd5d-3db56fbef5ec"). InnerVolumeSpecName "kube-api-access-w85hw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:10:42 crc kubenswrapper[4881]: I0126 13:10:42.130234 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-brjhw" Jan 26 13:10:42 crc kubenswrapper[4881]: I0126 13:10:42.130274 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-brjhw" event={"ID":"b4d6e825-d231-4128-bd5d-3db56fbef5ec","Type":"ContainerDied","Data":"2147039b9b629032153a25f0d9b52a6e58e7c01ea9e0c4b580c6800b2804181d"} Jan 26 13:10:42 crc kubenswrapper[4881]: I0126 13:10:42.130316 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2147039b9b629032153a25f0d9b52a6e58e7c01ea9e0c4b580c6800b2804181d" Jan 26 13:10:42 crc kubenswrapper[4881]: I0126 13:10:42.191742 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4d6e825-d231-4128-bd5d-3db56fbef5ec-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "b4d6e825-d231-4128-bd5d-3db56fbef5ec" (UID: "b4d6e825-d231-4128-bd5d-3db56fbef5ec"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:10:42 crc kubenswrapper[4881]: I0126 13:10:42.194883 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4d6e825-d231-4128-bd5d-3db56fbef5ec-inventory" (OuterVolumeSpecName: "inventory") pod "b4d6e825-d231-4128-bd5d-3db56fbef5ec" (UID: "b4d6e825-d231-4128-bd5d-3db56fbef5ec"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:10:42 crc kubenswrapper[4881]: I0126 13:10:42.230869 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w85hw\" (UniqueName: \"kubernetes.io/projected/b4d6e825-d231-4128-bd5d-3db56fbef5ec-kube-api-access-w85hw\") on node \"crc\" DevicePath \"\"" Jan 26 13:10:42 crc kubenswrapper[4881]: I0126 13:10:42.230913 4881 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b4d6e825-d231-4128-bd5d-3db56fbef5ec-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 13:10:42 crc kubenswrapper[4881]: I0126 13:10:42.230952 4881 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b4d6e825-d231-4128-bd5d-3db56fbef5ec-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 13:10:42 crc kubenswrapper[4881]: I0126 13:10:42.256302 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7qlfp"] Jan 26 13:10:42 crc kubenswrapper[4881]: E0126 13:10:42.256773 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4d6e825-d231-4128-bd5d-3db56fbef5ec" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 26 13:10:42 crc kubenswrapper[4881]: I0126 13:10:42.256791 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4d6e825-d231-4128-bd5d-3db56fbef5ec" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 26 13:10:42 crc kubenswrapper[4881]: I0126 13:10:42.257002 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4d6e825-d231-4128-bd5d-3db56fbef5ec" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 26 13:10:42 crc kubenswrapper[4881]: I0126 13:10:42.257721 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7qlfp" Jan 26 13:10:42 crc kubenswrapper[4881]: I0126 13:10:42.272925 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7qlfp"] Jan 26 13:10:42 crc kubenswrapper[4881]: I0126 13:10:42.332688 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4r2rs\" (UniqueName: \"kubernetes.io/projected/d828b00c-a7ee-47c8-b98a-2529ccf16cc6-kube-api-access-4r2rs\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-7qlfp\" (UID: \"d828b00c-a7ee-47c8-b98a-2529ccf16cc6\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7qlfp" Jan 26 13:10:42 crc kubenswrapper[4881]: I0126 13:10:42.332792 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d828b00c-a7ee-47c8-b98a-2529ccf16cc6-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-7qlfp\" (UID: \"d828b00c-a7ee-47c8-b98a-2529ccf16cc6\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7qlfp" Jan 26 13:10:42 crc kubenswrapper[4881]: I0126 13:10:42.332838 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d828b00c-a7ee-47c8-b98a-2529ccf16cc6-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-7qlfp\" (UID: \"d828b00c-a7ee-47c8-b98a-2529ccf16cc6\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7qlfp" Jan 26 13:10:42 crc kubenswrapper[4881]: I0126 13:10:42.436167 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d828b00c-a7ee-47c8-b98a-2529ccf16cc6-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-7qlfp\" (UID: \"d828b00c-a7ee-47c8-b98a-2529ccf16cc6\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7qlfp" Jan 26 13:10:42 crc kubenswrapper[4881]: I0126 13:10:42.436260 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d828b00c-a7ee-47c8-b98a-2529ccf16cc6-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-7qlfp\" (UID: \"d828b00c-a7ee-47c8-b98a-2529ccf16cc6\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7qlfp" Jan 26 13:10:42 crc kubenswrapper[4881]: I0126 13:10:42.436433 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4r2rs\" (UniqueName: \"kubernetes.io/projected/d828b00c-a7ee-47c8-b98a-2529ccf16cc6-kube-api-access-4r2rs\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-7qlfp\" (UID: \"d828b00c-a7ee-47c8-b98a-2529ccf16cc6\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7qlfp" Jan 26 13:10:42 crc kubenswrapper[4881]: I0126 13:10:42.441267 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d828b00c-a7ee-47c8-b98a-2529ccf16cc6-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-7qlfp\" (UID: \"d828b00c-a7ee-47c8-b98a-2529ccf16cc6\") " 
pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7qlfp" Jan 26 13:10:42 crc kubenswrapper[4881]: I0126 13:10:42.441380 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d828b00c-a7ee-47c8-b98a-2529ccf16cc6-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-7qlfp\" (UID: \"d828b00c-a7ee-47c8-b98a-2529ccf16cc6\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7qlfp" Jan 26 13:10:42 crc kubenswrapper[4881]: I0126 13:10:42.459484 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4r2rs\" (UniqueName: \"kubernetes.io/projected/d828b00c-a7ee-47c8-b98a-2529ccf16cc6-kube-api-access-4r2rs\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-7qlfp\" (UID: \"d828b00c-a7ee-47c8-b98a-2529ccf16cc6\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7qlfp" Jan 26 13:10:42 crc kubenswrapper[4881]: I0126 13:10:42.611965 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7qlfp" Jan 26 13:10:43 crc kubenswrapper[4881]: I0126 13:10:43.165296 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7qlfp"] Jan 26 13:10:44 crc kubenswrapper[4881]: I0126 13:10:44.149192 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7qlfp" event={"ID":"d828b00c-a7ee-47c8-b98a-2529ccf16cc6","Type":"ContainerStarted","Data":"0ae7f6ea6913897e76e0c8c335e1a3c392d6d52c7cb2f678d9e6a3adb7435404"} Jan 26 13:10:44 crc kubenswrapper[4881]: I0126 13:10:44.149466 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7qlfp" event={"ID":"d828b00c-a7ee-47c8-b98a-2529ccf16cc6","Type":"ContainerStarted","Data":"ea9da090c81f082b78d13e123cb7caaaaee55b4b7b3e6db160f18f68e4dae41c"} Jan 26 13:10:44 crc kubenswrapper[4881]: I0126 13:10:44.165665 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7qlfp" podStartSLOduration=1.7663412360000001 podStartE2EDuration="2.165650102s" podCreationTimestamp="2026-01-26 13:10:42 +0000 UTC" firstStartedPulling="2026-01-26 13:10:43.17883673 +0000 UTC m=+2115.658146756" lastFinishedPulling="2026-01-26 13:10:43.578145556 +0000 UTC m=+2116.057455622" observedRunningTime="2026-01-26 13:10:44.164018663 +0000 UTC m=+2116.643328709" watchObservedRunningTime="2026-01-26 13:10:44.165650102 +0000 UTC m=+2116.644960128" Jan 26 13:10:54 crc kubenswrapper[4881]: I0126 13:10:54.033304 4881 scope.go:117] "RemoveContainer" containerID="1571eab50c624a5e28dcac81856d0e028f5740073bc1caf0ff053add137d39b6" Jan 26 13:10:54 crc kubenswrapper[4881]: I0126 13:10:54.218482 4881 scope.go:117] "RemoveContainer" containerID="7c4f86fa4c3c9b13b85178d0fb4974e800c168588a5a7177759de691d923a06a" Jan 26 13:10:54 crc kubenswrapper[4881]: I0126 13:10:54.274803 4881 scope.go:117] "RemoveContainer" containerID="575479e07039a478f0162256f3814334444e55407e26307b5894052039080d79" Jan 26 13:10:54 crc kubenswrapper[4881]: I0126 13:10:54.320851 4881 scope.go:117] "RemoveContainer" containerID="41785da4333665e17bd7fbffdd5da2933857073e2ad8332d4ec8db10278f0fac" Jan 26 13:10:54 crc kubenswrapper[4881]: I0126 13:10:54.363396 4881 scope.go:117] "RemoveContainer" 
containerID="19567a9db0844a952d781ac403219913683d6c92d32d850768773e9cc2919707" Jan 26 13:10:54 crc kubenswrapper[4881]: I0126 13:10:54.408252 4881 scope.go:117] "RemoveContainer" containerID="bd0fbc078d487b6644022e19f21a266beb30fb96b50007588c33cb0891afcb86" Jan 26 13:10:54 crc kubenswrapper[4881]: I0126 13:10:54.449027 4881 scope.go:117] "RemoveContainer" containerID="22a5fcab15992d9e4368d136726c23e8a26a70e1fe25bd5a0a19b101d471cdff" Jan 26 13:11:01 crc kubenswrapper[4881]: I0126 13:11:01.714959 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-zx76t"] Jan 26 13:11:01 crc kubenswrapper[4881]: I0126 13:11:01.718179 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zx76t" Jan 26 13:11:01 crc kubenswrapper[4881]: I0126 13:11:01.727469 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zx76t"] Jan 26 13:11:01 crc kubenswrapper[4881]: I0126 13:11:01.836860 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53c4ab87-65d4-4bf3-9264-ae26135e48cf-utilities\") pod \"certified-operators-zx76t\" (UID: \"53c4ab87-65d4-4bf3-9264-ae26135e48cf\") " pod="openshift-marketplace/certified-operators-zx76t" Jan 26 13:11:01 crc kubenswrapper[4881]: I0126 13:11:01.836970 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53c4ab87-65d4-4bf3-9264-ae26135e48cf-catalog-content\") pod \"certified-operators-zx76t\" (UID: \"53c4ab87-65d4-4bf3-9264-ae26135e48cf\") " pod="openshift-marketplace/certified-operators-zx76t" Jan 26 13:11:01 crc kubenswrapper[4881]: I0126 13:11:01.837135 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bp95p\" (UniqueName: \"kubernetes.io/projected/53c4ab87-65d4-4bf3-9264-ae26135e48cf-kube-api-access-bp95p\") pod \"certified-operators-zx76t\" (UID: \"53c4ab87-65d4-4bf3-9264-ae26135e48cf\") " pod="openshift-marketplace/certified-operators-zx76t" Jan 26 13:11:01 crc kubenswrapper[4881]: I0126 13:11:01.939803 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53c4ab87-65d4-4bf3-9264-ae26135e48cf-utilities\") pod \"certified-operators-zx76t\" (UID: \"53c4ab87-65d4-4bf3-9264-ae26135e48cf\") " pod="openshift-marketplace/certified-operators-zx76t" Jan 26 13:11:01 crc kubenswrapper[4881]: I0126 13:11:01.939965 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53c4ab87-65d4-4bf3-9264-ae26135e48cf-catalog-content\") pod \"certified-operators-zx76t\" (UID: \"53c4ab87-65d4-4bf3-9264-ae26135e48cf\") " pod="openshift-marketplace/certified-operators-zx76t" Jan 26 13:11:01 crc kubenswrapper[4881]: I0126 13:11:01.940083 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bp95p\" (UniqueName: \"kubernetes.io/projected/53c4ab87-65d4-4bf3-9264-ae26135e48cf-kube-api-access-bp95p\") pod \"certified-operators-zx76t\" (UID: \"53c4ab87-65d4-4bf3-9264-ae26135e48cf\") " pod="openshift-marketplace/certified-operators-zx76t" Jan 26 13:11:01 crc kubenswrapper[4881]: I0126 13:11:01.940899 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53c4ab87-65d4-4bf3-9264-ae26135e48cf-utilities\") pod \"certified-operators-zx76t\" (UID: \"53c4ab87-65d4-4bf3-9264-ae26135e48cf\") " pod="openshift-marketplace/certified-operators-zx76t" Jan 26 13:11:01 crc kubenswrapper[4881]: I0126 13:11:01.941306 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53c4ab87-65d4-4bf3-9264-ae26135e48cf-catalog-content\") pod \"certified-operators-zx76t\" (UID: \"53c4ab87-65d4-4bf3-9264-ae26135e48cf\") " pod="openshift-marketplace/certified-operators-zx76t" Jan 26 13:11:01 crc kubenswrapper[4881]: I0126 13:11:01.967477 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bp95p\" (UniqueName: \"kubernetes.io/projected/53c4ab87-65d4-4bf3-9264-ae26135e48cf-kube-api-access-bp95p\") pod \"certified-operators-zx76t\" (UID: \"53c4ab87-65d4-4bf3-9264-ae26135e48cf\") " pod="openshift-marketplace/certified-operators-zx76t" Jan 26 13:11:02 crc kubenswrapper[4881]: I0126 13:11:02.047140 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zx76t" Jan 26 13:11:02 crc kubenswrapper[4881]: I0126 13:11:02.637818 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zx76t"] Jan 26 13:11:03 crc kubenswrapper[4881]: I0126 13:11:03.371234 4881 generic.go:334] "Generic (PLEG): container finished" podID="53c4ab87-65d4-4bf3-9264-ae26135e48cf" containerID="947659509f8f5d28c4c81cd6c4eaa867d32716651e6a928dd5589abd7654aafe" exitCode=0 Jan 26 13:11:03 crc kubenswrapper[4881]: I0126 13:11:03.371313 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zx76t" event={"ID":"53c4ab87-65d4-4bf3-9264-ae26135e48cf","Type":"ContainerDied","Data":"947659509f8f5d28c4c81cd6c4eaa867d32716651e6a928dd5589abd7654aafe"} Jan 26 13:11:03 crc kubenswrapper[4881]: I0126 13:11:03.371610 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zx76t" event={"ID":"53c4ab87-65d4-4bf3-9264-ae26135e48cf","Type":"ContainerStarted","Data":"b13a23f617eacc145b376a14d0093caada9d2b409181b521631cde8b090a965c"} Jan 26 13:11:04 crc kubenswrapper[4881]: I0126 13:11:04.381278 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zx76t" event={"ID":"53c4ab87-65d4-4bf3-9264-ae26135e48cf","Type":"ContainerStarted","Data":"d29f0c6ebca31facd14482adc2202d42778750fc67d736a812971e3ef01a0b45"} Jan 26 13:11:05 crc kubenswrapper[4881]: I0126 13:11:05.395917 4881 generic.go:334] "Generic (PLEG): container finished" podID="53c4ab87-65d4-4bf3-9264-ae26135e48cf" containerID="d29f0c6ebca31facd14482adc2202d42778750fc67d736a812971e3ef01a0b45" exitCode=0 Jan 26 13:11:05 crc kubenswrapper[4881]: I0126 13:11:05.395989 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zx76t" event={"ID":"53c4ab87-65d4-4bf3-9264-ae26135e48cf","Type":"ContainerDied","Data":"d29f0c6ebca31facd14482adc2202d42778750fc67d736a812971e3ef01a0b45"} Jan 26 13:11:06 crc kubenswrapper[4881]: I0126 13:11:06.435686 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zx76t" event={"ID":"53c4ab87-65d4-4bf3-9264-ae26135e48cf","Type":"ContainerStarted","Data":"a760237dacfb07a025dfcf450e3cb4c1a1f692308936e477f19cd688e60ae735"} Jan 26 13:11:06 crc 
kubenswrapper[4881]: I0126 13:11:06.452285 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-zx76t" podStartSLOduration=3.021720017 podStartE2EDuration="5.452269919s" podCreationTimestamp="2026-01-26 13:11:01 +0000 UTC" firstStartedPulling="2026-01-26 13:11:03.373753801 +0000 UTC m=+2135.853063827" lastFinishedPulling="2026-01-26 13:11:05.804303683 +0000 UTC m=+2138.283613729" observedRunningTime="2026-01-26 13:11:06.450959577 +0000 UTC m=+2138.930269623" watchObservedRunningTime="2026-01-26 13:11:06.452269919 +0000 UTC m=+2138.931579945" Jan 26 13:11:12 crc kubenswrapper[4881]: I0126 13:11:12.048333 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-zx76t" Jan 26 13:11:12 crc kubenswrapper[4881]: I0126 13:11:12.048959 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-zx76t" Jan 26 13:11:12 crc kubenswrapper[4881]: I0126 13:11:12.103291 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-zx76t" Jan 26 13:11:12 crc kubenswrapper[4881]: I0126 13:11:12.603600 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-zx76t" Jan 26 13:11:12 crc kubenswrapper[4881]: I0126 13:11:12.670362 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zx76t"] Jan 26 13:11:14 crc kubenswrapper[4881]: I0126 13:11:14.547385 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-zx76t" podUID="53c4ab87-65d4-4bf3-9264-ae26135e48cf" containerName="registry-server" containerID="cri-o://a760237dacfb07a025dfcf450e3cb4c1a1f692308936e477f19cd688e60ae735" gracePeriod=2 Jan 26 13:11:15 crc kubenswrapper[4881]: I0126 13:11:15.058333 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zx76t" Jan 26 13:11:15 crc kubenswrapper[4881]: I0126 13:11:15.213397 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bp95p\" (UniqueName: \"kubernetes.io/projected/53c4ab87-65d4-4bf3-9264-ae26135e48cf-kube-api-access-bp95p\") pod \"53c4ab87-65d4-4bf3-9264-ae26135e48cf\" (UID: \"53c4ab87-65d4-4bf3-9264-ae26135e48cf\") " Jan 26 13:11:15 crc kubenswrapper[4881]: I0126 13:11:15.213492 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53c4ab87-65d4-4bf3-9264-ae26135e48cf-catalog-content\") pod \"53c4ab87-65d4-4bf3-9264-ae26135e48cf\" (UID: \"53c4ab87-65d4-4bf3-9264-ae26135e48cf\") " Jan 26 13:11:15 crc kubenswrapper[4881]: I0126 13:11:15.213695 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53c4ab87-65d4-4bf3-9264-ae26135e48cf-utilities\") pod \"53c4ab87-65d4-4bf3-9264-ae26135e48cf\" (UID: \"53c4ab87-65d4-4bf3-9264-ae26135e48cf\") " Jan 26 13:11:15 crc kubenswrapper[4881]: I0126 13:11:15.214772 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53c4ab87-65d4-4bf3-9264-ae26135e48cf-utilities" (OuterVolumeSpecName: "utilities") pod "53c4ab87-65d4-4bf3-9264-ae26135e48cf" (UID: "53c4ab87-65d4-4bf3-9264-ae26135e48cf"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:11:15 crc kubenswrapper[4881]: I0126 13:11:15.222679 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53c4ab87-65d4-4bf3-9264-ae26135e48cf-kube-api-access-bp95p" (OuterVolumeSpecName: "kube-api-access-bp95p") pod "53c4ab87-65d4-4bf3-9264-ae26135e48cf" (UID: "53c4ab87-65d4-4bf3-9264-ae26135e48cf"). InnerVolumeSpecName "kube-api-access-bp95p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:11:15 crc kubenswrapper[4881]: I0126 13:11:15.293814 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53c4ab87-65d4-4bf3-9264-ae26135e48cf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "53c4ab87-65d4-4bf3-9264-ae26135e48cf" (UID: "53c4ab87-65d4-4bf3-9264-ae26135e48cf"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:11:15 crc kubenswrapper[4881]: I0126 13:11:15.317679 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bp95p\" (UniqueName: \"kubernetes.io/projected/53c4ab87-65d4-4bf3-9264-ae26135e48cf-kube-api-access-bp95p\") on node \"crc\" DevicePath \"\"" Jan 26 13:11:15 crc kubenswrapper[4881]: I0126 13:11:15.317719 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53c4ab87-65d4-4bf3-9264-ae26135e48cf-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 13:11:15 crc kubenswrapper[4881]: I0126 13:11:15.317732 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53c4ab87-65d4-4bf3-9264-ae26135e48cf-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 13:11:15 crc kubenswrapper[4881]: I0126 13:11:15.558540 4881 generic.go:334] "Generic (PLEG): container finished" podID="53c4ab87-65d4-4bf3-9264-ae26135e48cf" containerID="a760237dacfb07a025dfcf450e3cb4c1a1f692308936e477f19cd688e60ae735" exitCode=0 Jan 26 13:11:15 crc kubenswrapper[4881]: I0126 13:11:15.558594 4881 util.go:48] "No ready sandbox for pod can be found. 
Jan 26 13:11:15 crc kubenswrapper[4881]: I0126 13:11:15.558610 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zx76t" event={"ID":"53c4ab87-65d4-4bf3-9264-ae26135e48cf","Type":"ContainerDied","Data":"a760237dacfb07a025dfcf450e3cb4c1a1f692308936e477f19cd688e60ae735"}
Jan 26 13:11:15 crc kubenswrapper[4881]: I0126 13:11:15.558661 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zx76t" event={"ID":"53c4ab87-65d4-4bf3-9264-ae26135e48cf","Type":"ContainerDied","Data":"b13a23f617eacc145b376a14d0093caada9d2b409181b521631cde8b090a965c"}
Jan 26 13:11:15 crc kubenswrapper[4881]: I0126 13:11:15.558688 4881 scope.go:117] "RemoveContainer" containerID="a760237dacfb07a025dfcf450e3cb4c1a1f692308936e477f19cd688e60ae735"
Jan 26 13:11:15 crc kubenswrapper[4881]: I0126 13:11:15.599344 4881 scope.go:117] "RemoveContainer" containerID="d29f0c6ebca31facd14482adc2202d42778750fc67d736a812971e3ef01a0b45"
Jan 26 13:11:15 crc kubenswrapper[4881]: I0126 13:11:15.618660 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zx76t"]
Jan 26 13:11:15 crc kubenswrapper[4881]: I0126 13:11:15.631087 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-zx76t"]
Jan 26 13:11:15 crc kubenswrapper[4881]: I0126 13:11:15.633194 4881 scope.go:117] "RemoveContainer" containerID="947659509f8f5d28c4c81cd6c4eaa867d32716651e6a928dd5589abd7654aafe"
Jan 26 13:11:15 crc kubenswrapper[4881]: I0126 13:11:15.681896 4881 scope.go:117] "RemoveContainer" containerID="a760237dacfb07a025dfcf450e3cb4c1a1f692308936e477f19cd688e60ae735"
Jan 26 13:11:15 crc kubenswrapper[4881]: E0126 13:11:15.682758 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a760237dacfb07a025dfcf450e3cb4c1a1f692308936e477f19cd688e60ae735\": container with ID starting with a760237dacfb07a025dfcf450e3cb4c1a1f692308936e477f19cd688e60ae735 not found: ID does not exist" containerID="a760237dacfb07a025dfcf450e3cb4c1a1f692308936e477f19cd688e60ae735"
Jan 26 13:11:15 crc kubenswrapper[4881]: I0126 13:11:15.682843 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a760237dacfb07a025dfcf450e3cb4c1a1f692308936e477f19cd688e60ae735"} err="failed to get container status \"a760237dacfb07a025dfcf450e3cb4c1a1f692308936e477f19cd688e60ae735\": rpc error: code = NotFound desc = could not find container \"a760237dacfb07a025dfcf450e3cb4c1a1f692308936e477f19cd688e60ae735\": container with ID starting with a760237dacfb07a025dfcf450e3cb4c1a1f692308936e477f19cd688e60ae735 not found: ID does not exist"
Jan 26 13:11:15 crc kubenswrapper[4881]: I0126 13:11:15.682897 4881 scope.go:117] "RemoveContainer" containerID="d29f0c6ebca31facd14482adc2202d42778750fc67d736a812971e3ef01a0b45"
Jan 26 13:11:15 crc kubenswrapper[4881]: E0126 13:11:15.683646 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d29f0c6ebca31facd14482adc2202d42778750fc67d736a812971e3ef01a0b45\": container with ID starting with d29f0c6ebca31facd14482adc2202d42778750fc67d736a812971e3ef01a0b45 not found: ID does not exist" containerID="d29f0c6ebca31facd14482adc2202d42778750fc67d736a812971e3ef01a0b45"
Jan 26 13:11:15 crc kubenswrapper[4881]: I0126 13:11:15.683734 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d29f0c6ebca31facd14482adc2202d42778750fc67d736a812971e3ef01a0b45"} err="failed to get container status \"d29f0c6ebca31facd14482adc2202d42778750fc67d736a812971e3ef01a0b45\": rpc error: code = NotFound desc = could not find container \"d29f0c6ebca31facd14482adc2202d42778750fc67d736a812971e3ef01a0b45\": container with ID starting with d29f0c6ebca31facd14482adc2202d42778750fc67d736a812971e3ef01a0b45 not found: ID does not exist"
Jan 26 13:11:15 crc kubenswrapper[4881]: I0126 13:11:15.683773 4881 scope.go:117] "RemoveContainer" containerID="947659509f8f5d28c4c81cd6c4eaa867d32716651e6a928dd5589abd7654aafe"
Jan 26 13:11:15 crc kubenswrapper[4881]: E0126 13:11:15.684337 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"947659509f8f5d28c4c81cd6c4eaa867d32716651e6a928dd5589abd7654aafe\": container with ID starting with 947659509f8f5d28c4c81cd6c4eaa867d32716651e6a928dd5589abd7654aafe not found: ID does not exist" containerID="947659509f8f5d28c4c81cd6c4eaa867d32716651e6a928dd5589abd7654aafe"
Jan 26 13:11:15 crc kubenswrapper[4881]: I0126 13:11:15.684367 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"947659509f8f5d28c4c81cd6c4eaa867d32716651e6a928dd5589abd7654aafe"} err="failed to get container status \"947659509f8f5d28c4c81cd6c4eaa867d32716651e6a928dd5589abd7654aafe\": rpc error: code = NotFound desc = could not find container \"947659509f8f5d28c4c81cd6c4eaa867d32716651e6a928dd5589abd7654aafe\": container with ID starting with 947659509f8f5d28c4c81cd6c4eaa867d32716651e6a928dd5589abd7654aafe not found: ID does not exist"
Jan 26 13:11:16 crc kubenswrapper[4881]: I0126 13:11:16.098218 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53c4ab87-65d4-4bf3-9264-ae26135e48cf" path="/var/lib/kubelet/pods/53c4ab87-65d4-4bf3-9264-ae26135e48cf/volumes"
Jan 26 13:11:18 crc kubenswrapper[4881]: I0126 13:11:18.055290 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-nzs96"]
Jan 26 13:11:18 crc kubenswrapper[4881]: I0126 13:11:18.076877 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-2kfpk"]
Jan 26 13:11:18 crc kubenswrapper[4881]: I0126 13:11:18.095085 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-nzs96"]
Jan 26 13:11:18 crc kubenswrapper[4881]: I0126 13:11:18.110837 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-kqmwz"]
Jan 26 13:11:18 crc kubenswrapper[4881]: I0126 13:11:18.120220 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-2kfpk"]
Jan 26 13:11:18 crc kubenswrapper[4881]: I0126 13:11:18.127094 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-d401-account-create-update-6knzf"]
Jan 26 13:11:18 crc kubenswrapper[4881]: I0126 13:11:18.133705 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-kqmwz"]
Jan 26 13:11:18 crc kubenswrapper[4881]: I0126 13:11:18.140401 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-d401-account-create-update-6knzf"]
Jan 26 13:11:19 crc kubenswrapper[4881]: I0126 13:11:19.053159 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-0e1c-account-create-update-t8hhb"]
Jan 26 13:11:19 crc kubenswrapper[4881]: I0126 13:11:19.062658 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-0e1c-account-create-update-t8hhb"]
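The E-level "ContainerStatus from runtime service failed ... NotFound" errors above are benign: the containers had already been removed through another path, so the explicit DeleteContainer finds nothing. Cleanup code typically treats gRPC NotFound as success so the operation stays idempotent; a minimal sketch with grpc-go's status/codes helpers (removeContainer here is a hypothetical stand-in for the CRI round trip, not the kubelet's code):

```go
package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// removeContainer always reports the error shape seen in the log above.
func removeContainer(id string) error {
	return status.Errorf(codes.NotFound, "could not find container %q", id)
}

func main() {
	err := removeContainer("a760237dacfb07a025dfcf450e3cb4c1a1f692308936e477f19cd688e60ae735")
	if s, ok := status.FromError(err); ok && s.Code() == codes.NotFound {
		// Someone else already deleted it; the end state is what we wanted.
		err = nil
	}
	if err != nil {
		fmt.Println("cleanup failed:", err)
		return
	}
	fmt.Println("container gone; NotFound treated as success")
}
```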
Jan 26 13:11:19 crc kubenswrapper[4881]: I0126 13:11:19.074022 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-70ca-account-create-update-r686d"]
Jan 26 13:11:19 crc kubenswrapper[4881]: I0126 13:11:19.085781 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-70ca-account-create-update-r686d"]
Jan 26 13:11:20 crc kubenswrapper[4881]: I0126 13:11:20.099805 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4376f3bd-20c9-41f8-a1d1-eae76560d137" path="/var/lib/kubelet/pods/4376f3bd-20c9-41f8-a1d1-eae76560d137/volumes"
Jan 26 13:11:20 crc kubenswrapper[4881]: I0126 13:11:20.100998 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="66bc1d35-a96d-4cce-98be-5d65886c6f83" path="/var/lib/kubelet/pods/66bc1d35-a96d-4cce-98be-5d65886c6f83/volumes"
Jan 26 13:11:20 crc kubenswrapper[4881]: I0126 13:11:20.102216 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96bc90dc-d5c2-412e-8ac8-a60fb254cd7e" path="/var/lib/kubelet/pods/96bc90dc-d5c2-412e-8ac8-a60fb254cd7e/volumes"
Jan 26 13:11:20 crc kubenswrapper[4881]: I0126 13:11:20.103316 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98008dc5-f3ef-436d-af31-cec258fe5743" path="/var/lib/kubelet/pods/98008dc5-f3ef-436d-af31-cec258fe5743/volumes"
Jan 26 13:11:20 crc kubenswrapper[4881]: I0126 13:11:20.105296 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9089843-0624-44fe-a41f-78746490b5be" path="/var/lib/kubelet/pods/c9089843-0624-44fe-a41f-78746490b5be/volumes"
Jan 26 13:11:20 crc kubenswrapper[4881]: I0126 13:11:20.106413 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3bca628-c4a8-4a09-bc59-3b0f2627adf4" path="/var/lib/kubelet/pods/d3bca628-c4a8-4a09-bc59-3b0f2627adf4/volumes"
Jan 26 13:11:24 crc kubenswrapper[4881]: I0126 13:11:24.789984 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fwlbz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 13:11:24 crc kubenswrapper[4881]: I0126 13:11:24.790650 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 13:11:49 crc kubenswrapper[4881]: I0126 13:11:49.051916 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-vbjb6"]
Jan 26 13:11:49 crc kubenswrapper[4881]: I0126 13:11:49.068910 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-vbjb6"]
Jan 26 13:11:50 crc kubenswrapper[4881]: I0126 13:11:50.096058 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e0ba53f-5583-413c-bd4d-beb5b5c803ea" path="/var/lib/kubelet/pods/6e0ba53f-5583-413c-bd4d-beb5b5c803ea/volumes"
Jan 26 13:11:54 crc kubenswrapper[4881]: I0126 13:11:54.631621 4881 scope.go:117] "RemoveContainer" containerID="89b12c9203c78ecd99ca06a673b823dcc2ab211b39c7d2451ee234f590e867db"
Jan 26 13:11:54 crc kubenswrapper[4881]: I0126 13:11:54.659894 4881 scope.go:117] "RemoveContainer" containerID="6f30d0c94b89e9971cf376705fc4f984a75d6ab61254e0a573d568139d3f18a9"
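The recurring machine-config-daemon failure above is an HTTP liveness probe hitting a closed port. The kubelet counts any transport error (such as the "connection refused" here) or a status outside 200-399 as a failure; a stdlib sketch of that check, with the URL and timeout mirroring the log entry (the timeout value itself is an assumption):

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// probe approximates an HTTP liveness check: a transport error or a
// status code outside 200-399 counts as a failure.
func probe(url string, timeout time.Duration) error {
	client := &http.Client{Timeout: timeout}
	resp, err := client.Get(url)
	if err != nil {
		return err // e.g. "dial tcp 127.0.0.1:8798: connect: connection refused"
	}
	defer resp.Body.Close()
	if resp.StatusCode < 200 || resp.StatusCode >= 400 {
		return fmt.Errorf("unexpected status %d", resp.StatusCode)
	}
	return nil
}

func main() {
	if err := probe("http://127.0.0.1:8798/health", time.Second); err != nil {
		fmt.Println("Probe failed:", err)
	}
}
```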
Jan 26 13:11:54 crc kubenswrapper[4881]: I0126 13:11:54.726414 4881 scope.go:117] "RemoveContainer" containerID="4f61cd5e59bc8b32b6617da04a49ec4937697b918b0b86e12a132c54e2b66346"
Jan 26 13:11:54 crc kubenswrapper[4881]: I0126 13:11:54.788161 4881 scope.go:117] "RemoveContainer" containerID="4558be3a5c19909c4d4ff23b5349f75d511a9f14407815e6423933f0f927d7bd"
Jan 26 13:11:54 crc kubenswrapper[4881]: I0126 13:11:54.789062 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fwlbz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 13:11:54 crc kubenswrapper[4881]: I0126 13:11:54.789109 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 13:11:54 crc kubenswrapper[4881]: I0126 13:11:54.859896 4881 scope.go:117] "RemoveContainer" containerID="b187ed03cef959cc7d3b78e596f9e215e4dbea8a9ca9f53bbaf12e2e28caa6fc"
Jan 26 13:11:54 crc kubenswrapper[4881]: I0126 13:11:54.879750 4881 scope.go:117] "RemoveContainer" containerID="a2be2e490346aaf1e6e2bdedeb964943e63d69910dbdc49f232182b9095f1b29"
Jan 26 13:11:54 crc kubenswrapper[4881]: I0126 13:11:54.922699 4881 scope.go:117] "RemoveContainer" containerID="d27fd28e1895233826fd4ccf94574a6e5ae357e98dd77d9c8f6d1d9c896f4791"
Jan 26 13:12:00 crc kubenswrapper[4881]: I0126 13:12:00.091675 4881 generic.go:334] "Generic (PLEG): container finished" podID="d828b00c-a7ee-47c8-b98a-2529ccf16cc6" containerID="0ae7f6ea6913897e76e0c8c335e1a3c392d6d52c7cb2f678d9e6a3adb7435404" exitCode=0
Jan 26 13:12:00 crc kubenswrapper[4881]: I0126 13:12:00.104620 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7qlfp" event={"ID":"d828b00c-a7ee-47c8-b98a-2529ccf16cc6","Type":"ContainerDied","Data":"0ae7f6ea6913897e76e0c8c335e1a3c392d6d52c7cb2f678d9e6a3adb7435404"}
Jan 26 13:12:01 crc kubenswrapper[4881]: I0126 13:12:01.599765 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7qlfp"
Jan 26 13:12:01 crc kubenswrapper[4881]: I0126 13:12:01.708592 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d828b00c-a7ee-47c8-b98a-2529ccf16cc6-ssh-key-openstack-edpm-ipam\") pod \"d828b00c-a7ee-47c8-b98a-2529ccf16cc6\" (UID: \"d828b00c-a7ee-47c8-b98a-2529ccf16cc6\") "
Jan 26 13:12:01 crc kubenswrapper[4881]: I0126 13:12:01.708679 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d828b00c-a7ee-47c8-b98a-2529ccf16cc6-inventory\") pod \"d828b00c-a7ee-47c8-b98a-2529ccf16cc6\" (UID: \"d828b00c-a7ee-47c8-b98a-2529ccf16cc6\") "
Jan 26 13:12:01 crc kubenswrapper[4881]: I0126 13:12:01.709566 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4r2rs\" (UniqueName: \"kubernetes.io/projected/d828b00c-a7ee-47c8-b98a-2529ccf16cc6-kube-api-access-4r2rs\") pod \"d828b00c-a7ee-47c8-b98a-2529ccf16cc6\" (UID: \"d828b00c-a7ee-47c8-b98a-2529ccf16cc6\") "
Jan 26 13:12:01 crc kubenswrapper[4881]: I0126 13:12:01.714909 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d828b00c-a7ee-47c8-b98a-2529ccf16cc6-kube-api-access-4r2rs" (OuterVolumeSpecName: "kube-api-access-4r2rs") pod "d828b00c-a7ee-47c8-b98a-2529ccf16cc6" (UID: "d828b00c-a7ee-47c8-b98a-2529ccf16cc6"). InnerVolumeSpecName "kube-api-access-4r2rs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 13:12:01 crc kubenswrapper[4881]: I0126 13:12:01.742281 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d828b00c-a7ee-47c8-b98a-2529ccf16cc6-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "d828b00c-a7ee-47c8-b98a-2529ccf16cc6" (UID: "d828b00c-a7ee-47c8-b98a-2529ccf16cc6"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 13:12:01 crc kubenswrapper[4881]: I0126 13:12:01.744297 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d828b00c-a7ee-47c8-b98a-2529ccf16cc6-inventory" (OuterVolumeSpecName: "inventory") pod "d828b00c-a7ee-47c8-b98a-2529ccf16cc6" (UID: "d828b00c-a7ee-47c8-b98a-2529ccf16cc6"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
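The UnmountVolume / TearDown / "Volume detached" progression in these entries is the volume manager's reconciler at work: once the pod is deleted, the desired set of mounts for it is empty, so everything still in the actual set gets torn down. A toy rendering of that desired-vs-actual diff (data structures are illustrative, not kubelet types):

```go
package main

import "fmt"

func main() {
	// Desired state: volumes needed by pods that should still be running.
	// The pod was deleted, so nothing is desired anymore.
	desired := map[string]bool{}
	// Actual state: volumes still mounted for the departed pod.
	actual := []string{"ssh-key-openstack-edpm-ipam", "inventory", "kube-api-access-4r2rs"}

	for _, vol := range actual {
		if !desired[vol] { // mounted but no longer desired: unmount it
			fmt.Printf("operationExecutor.UnmountVolume started for volume %q\n", vol)
		}
	}
}
```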
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:12:01 crc kubenswrapper[4881]: I0126 13:12:01.812100 4881 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d828b00c-a7ee-47c8-b98a-2529ccf16cc6-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 13:12:01 crc kubenswrapper[4881]: I0126 13:12:01.812125 4881 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d828b00c-a7ee-47c8-b98a-2529ccf16cc6-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 13:12:01 crc kubenswrapper[4881]: I0126 13:12:01.812135 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4r2rs\" (UniqueName: \"kubernetes.io/projected/d828b00c-a7ee-47c8-b98a-2529ccf16cc6-kube-api-access-4r2rs\") on node \"crc\" DevicePath \"\"" Jan 26 13:12:02 crc kubenswrapper[4881]: I0126 13:12:02.123865 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7qlfp" event={"ID":"d828b00c-a7ee-47c8-b98a-2529ccf16cc6","Type":"ContainerDied","Data":"ea9da090c81f082b78d13e123cb7caaaaee55b4b7b3e6db160f18f68e4dae41c"} Jan 26 13:12:02 crc kubenswrapper[4881]: I0126 13:12:02.123909 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ea9da090c81f082b78d13e123cb7caaaaee55b4b7b3e6db160f18f68e4dae41c" Jan 26 13:12:02 crc kubenswrapper[4881]: I0126 13:12:02.123987 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7qlfp" Jan 26 13:12:02 crc kubenswrapper[4881]: I0126 13:12:02.229890 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-m2nvh"] Jan 26 13:12:02 crc kubenswrapper[4881]: E0126 13:12:02.230640 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53c4ab87-65d4-4bf3-9264-ae26135e48cf" containerName="extract-utilities" Jan 26 13:12:02 crc kubenswrapper[4881]: I0126 13:12:02.230660 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="53c4ab87-65d4-4bf3-9264-ae26135e48cf" containerName="extract-utilities" Jan 26 13:12:02 crc kubenswrapper[4881]: E0126 13:12:02.230674 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d828b00c-a7ee-47c8-b98a-2529ccf16cc6" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 26 13:12:02 crc kubenswrapper[4881]: I0126 13:12:02.230681 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="d828b00c-a7ee-47c8-b98a-2529ccf16cc6" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 26 13:12:02 crc kubenswrapper[4881]: E0126 13:12:02.230707 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53c4ab87-65d4-4bf3-9264-ae26135e48cf" containerName="registry-server" Jan 26 13:12:02 crc kubenswrapper[4881]: I0126 13:12:02.230713 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="53c4ab87-65d4-4bf3-9264-ae26135e48cf" containerName="registry-server" Jan 26 13:12:02 crc kubenswrapper[4881]: E0126 13:12:02.230724 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53c4ab87-65d4-4bf3-9264-ae26135e48cf" containerName="extract-content" Jan 26 13:12:02 crc kubenswrapper[4881]: I0126 13:12:02.230730 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="53c4ab87-65d4-4bf3-9264-ae26135e48cf" containerName="extract-content" Jan 26 13:12:02 crc 
kubenswrapper[4881]: I0126 13:12:02.230918 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="d828b00c-a7ee-47c8-b98a-2529ccf16cc6" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 26 13:12:02 crc kubenswrapper[4881]: I0126 13:12:02.230933 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="53c4ab87-65d4-4bf3-9264-ae26135e48cf" containerName="registry-server" Jan 26 13:12:02 crc kubenswrapper[4881]: I0126 13:12:02.231563 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-m2nvh" Jan 26 13:12:02 crc kubenswrapper[4881]: I0126 13:12:02.235876 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 13:12:02 crc kubenswrapper[4881]: I0126 13:12:02.236141 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 13:12:02 crc kubenswrapper[4881]: I0126 13:12:02.236536 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 13:12:02 crc kubenswrapper[4881]: I0126 13:12:02.238806 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-2krn6" Jan 26 13:12:02 crc kubenswrapper[4881]: I0126 13:12:02.248800 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-m2nvh"] Jan 26 13:12:02 crc kubenswrapper[4881]: I0126 13:12:02.325415 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f8d3d257-2cd9-42b9-aeb9-462b635c53dc-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-m2nvh\" (UID: \"f8d3d257-2cd9-42b9-aeb9-462b635c53dc\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-m2nvh" Jan 26 13:12:02 crc kubenswrapper[4881]: I0126 13:12:02.325490 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2thw\" (UniqueName: \"kubernetes.io/projected/f8d3d257-2cd9-42b9-aeb9-462b635c53dc-kube-api-access-s2thw\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-m2nvh\" (UID: \"f8d3d257-2cd9-42b9-aeb9-462b635c53dc\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-m2nvh" Jan 26 13:12:02 crc kubenswrapper[4881]: I0126 13:12:02.325578 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f8d3d257-2cd9-42b9-aeb9-462b635c53dc-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-m2nvh\" (UID: \"f8d3d257-2cd9-42b9-aeb9-462b635c53dc\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-m2nvh" Jan 26 13:12:02 crc kubenswrapper[4881]: I0126 13:12:02.427660 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2thw\" (UniqueName: \"kubernetes.io/projected/f8d3d257-2cd9-42b9-aeb9-462b635c53dc-kube-api-access-s2thw\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-m2nvh\" (UID: \"f8d3d257-2cd9-42b9-aeb9-462b635c53dc\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-m2nvh" Jan 26 13:12:02 crc kubenswrapper[4881]: I0126 13:12:02.427788 4881 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f8d3d257-2cd9-42b9-aeb9-462b635c53dc-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-m2nvh\" (UID: \"f8d3d257-2cd9-42b9-aeb9-462b635c53dc\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-m2nvh" Jan 26 13:12:02 crc kubenswrapper[4881]: I0126 13:12:02.427935 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f8d3d257-2cd9-42b9-aeb9-462b635c53dc-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-m2nvh\" (UID: \"f8d3d257-2cd9-42b9-aeb9-462b635c53dc\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-m2nvh" Jan 26 13:12:02 crc kubenswrapper[4881]: I0126 13:12:02.433025 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f8d3d257-2cd9-42b9-aeb9-462b635c53dc-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-m2nvh\" (UID: \"f8d3d257-2cd9-42b9-aeb9-462b635c53dc\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-m2nvh" Jan 26 13:12:02 crc kubenswrapper[4881]: I0126 13:12:02.434648 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f8d3d257-2cd9-42b9-aeb9-462b635c53dc-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-m2nvh\" (UID: \"f8d3d257-2cd9-42b9-aeb9-462b635c53dc\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-m2nvh" Jan 26 13:12:02 crc kubenswrapper[4881]: I0126 13:12:02.461348 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2thw\" (UniqueName: \"kubernetes.io/projected/f8d3d257-2cd9-42b9-aeb9-462b635c53dc-kube-api-access-s2thw\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-m2nvh\" (UID: \"f8d3d257-2cd9-42b9-aeb9-462b635c53dc\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-m2nvh" Jan 26 13:12:02 crc kubenswrapper[4881]: I0126 13:12:02.551863 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-m2nvh" Jan 26 13:12:03 crc kubenswrapper[4881]: I0126 13:12:03.111391 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-m2nvh"] Jan 26 13:12:03 crc kubenswrapper[4881]: I0126 13:12:03.133969 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-m2nvh" event={"ID":"f8d3d257-2cd9-42b9-aeb9-462b635c53dc","Type":"ContainerStarted","Data":"37b4cbceed0c72b83d3ffb59a8d208350d4d2ed3fbd3a68da312c3229199df1a"} Jan 26 13:12:04 crc kubenswrapper[4881]: I0126 13:12:04.146884 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-m2nvh" event={"ID":"f8d3d257-2cd9-42b9-aeb9-462b635c53dc","Type":"ContainerStarted","Data":"ae541cef15cd64921dd35f482e6412296652cae3180a215c1fe33036d290014a"} Jan 26 13:12:04 crc kubenswrapper[4881]: I0126 13:12:04.180081 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-m2nvh" podStartSLOduration=1.6891994989999999 podStartE2EDuration="2.180059185s" podCreationTimestamp="2026-01-26 13:12:02 +0000 UTC" firstStartedPulling="2026-01-26 13:12:03.12109619 +0000 UTC m=+2195.600406236" lastFinishedPulling="2026-01-26 13:12:03.611955896 +0000 UTC m=+2196.091265922" observedRunningTime="2026-01-26 13:12:04.169175491 +0000 UTC m=+2196.648485527" watchObservedRunningTime="2026-01-26 13:12:04.180059185 +0000 UTC m=+2196.659369211" Jan 26 13:12:09 crc kubenswrapper[4881]: I0126 13:12:09.204056 4881 generic.go:334] "Generic (PLEG): container finished" podID="f8d3d257-2cd9-42b9-aeb9-462b635c53dc" containerID="ae541cef15cd64921dd35f482e6412296652cae3180a215c1fe33036d290014a" exitCode=0 Jan 26 13:12:09 crc kubenswrapper[4881]: I0126 13:12:09.204110 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-m2nvh" event={"ID":"f8d3d257-2cd9-42b9-aeb9-462b635c53dc","Type":"ContainerDied","Data":"ae541cef15cd64921dd35f482e6412296652cae3180a215c1fe33036d290014a"} Jan 26 13:12:10 crc kubenswrapper[4881]: I0126 13:12:10.645068 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-m2nvh" Jan 26 13:12:10 crc kubenswrapper[4881]: I0126 13:12:10.822090 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s2thw\" (UniqueName: \"kubernetes.io/projected/f8d3d257-2cd9-42b9-aeb9-462b635c53dc-kube-api-access-s2thw\") pod \"f8d3d257-2cd9-42b9-aeb9-462b635c53dc\" (UID: \"f8d3d257-2cd9-42b9-aeb9-462b635c53dc\") " Jan 26 13:12:10 crc kubenswrapper[4881]: I0126 13:12:10.822678 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f8d3d257-2cd9-42b9-aeb9-462b635c53dc-ssh-key-openstack-edpm-ipam\") pod \"f8d3d257-2cd9-42b9-aeb9-462b635c53dc\" (UID: \"f8d3d257-2cd9-42b9-aeb9-462b635c53dc\") " Jan 26 13:12:10 crc kubenswrapper[4881]: I0126 13:12:10.822894 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f8d3d257-2cd9-42b9-aeb9-462b635c53dc-inventory\") pod \"f8d3d257-2cd9-42b9-aeb9-462b635c53dc\" (UID: \"f8d3d257-2cd9-42b9-aeb9-462b635c53dc\") " Jan 26 13:12:10 crc kubenswrapper[4881]: I0126 13:12:10.828760 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8d3d257-2cd9-42b9-aeb9-462b635c53dc-kube-api-access-s2thw" (OuterVolumeSpecName: "kube-api-access-s2thw") pod "f8d3d257-2cd9-42b9-aeb9-462b635c53dc" (UID: "f8d3d257-2cd9-42b9-aeb9-462b635c53dc"). InnerVolumeSpecName "kube-api-access-s2thw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:12:10 crc kubenswrapper[4881]: I0126 13:12:10.851940 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8d3d257-2cd9-42b9-aeb9-462b635c53dc-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "f8d3d257-2cd9-42b9-aeb9-462b635c53dc" (UID: "f8d3d257-2cd9-42b9-aeb9-462b635c53dc"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:12:10 crc kubenswrapper[4881]: I0126 13:12:10.855547 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8d3d257-2cd9-42b9-aeb9-462b635c53dc-inventory" (OuterVolumeSpecName: "inventory") pod "f8d3d257-2cd9-42b9-aeb9-462b635c53dc" (UID: "f8d3d257-2cd9-42b9-aeb9-462b635c53dc"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:12:10 crc kubenswrapper[4881]: I0126 13:12:10.925328 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s2thw\" (UniqueName: \"kubernetes.io/projected/f8d3d257-2cd9-42b9-aeb9-462b635c53dc-kube-api-access-s2thw\") on node \"crc\" DevicePath \"\"" Jan 26 13:12:10 crc kubenswrapper[4881]: I0126 13:12:10.925620 4881 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f8d3d257-2cd9-42b9-aeb9-462b635c53dc-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 13:12:10 crc kubenswrapper[4881]: I0126 13:12:10.925687 4881 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f8d3d257-2cd9-42b9-aeb9-462b635c53dc-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 13:12:11 crc kubenswrapper[4881]: I0126 13:12:11.260775 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-m2nvh" event={"ID":"f8d3d257-2cd9-42b9-aeb9-462b635c53dc","Type":"ContainerDied","Data":"37b4cbceed0c72b83d3ffb59a8d208350d4d2ed3fbd3a68da312c3229199df1a"} Jan 26 13:12:11 crc kubenswrapper[4881]: I0126 13:12:11.260860 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="37b4cbceed0c72b83d3ffb59a8d208350d4d2ed3fbd3a68da312c3229199df1a" Jan 26 13:12:11 crc kubenswrapper[4881]: I0126 13:12:11.260982 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-m2nvh" Jan 26 13:12:11 crc kubenswrapper[4881]: I0126 13:12:11.344185 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-lbcjj"] Jan 26 13:12:11 crc kubenswrapper[4881]: E0126 13:12:11.344714 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8d3d257-2cd9-42b9-aeb9-462b635c53dc" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 26 13:12:11 crc kubenswrapper[4881]: I0126 13:12:11.344737 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8d3d257-2cd9-42b9-aeb9-462b635c53dc" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 26 13:12:11 crc kubenswrapper[4881]: I0126 13:12:11.345009 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8d3d257-2cd9-42b9-aeb9-462b635c53dc" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 26 13:12:11 crc kubenswrapper[4881]: I0126 13:12:11.345904 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lbcjj" Jan 26 13:12:11 crc kubenswrapper[4881]: I0126 13:12:11.353913 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-lbcjj"] Jan 26 13:12:11 crc kubenswrapper[4881]: I0126 13:12:11.384307 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 13:12:11 crc kubenswrapper[4881]: I0126 13:12:11.384740 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-2krn6" Jan 26 13:12:11 crc kubenswrapper[4881]: I0126 13:12:11.384800 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 13:12:11 crc kubenswrapper[4881]: I0126 13:12:11.385789 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 13:12:11 crc kubenswrapper[4881]: I0126 13:12:11.445284 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpq6c\" (UniqueName: \"kubernetes.io/projected/ceaad5ff-3f46-431b-817c-669e0f038898-kube-api-access-zpq6c\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-lbcjj\" (UID: \"ceaad5ff-3f46-431b-817c-669e0f038898\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lbcjj" Jan 26 13:12:11 crc kubenswrapper[4881]: I0126 13:12:11.445431 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ceaad5ff-3f46-431b-817c-669e0f038898-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-lbcjj\" (UID: \"ceaad5ff-3f46-431b-817c-669e0f038898\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lbcjj" Jan 26 13:12:11 crc kubenswrapper[4881]: I0126 13:12:11.445460 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ceaad5ff-3f46-431b-817c-669e0f038898-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-lbcjj\" (UID: \"ceaad5ff-3f46-431b-817c-669e0f038898\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lbcjj" Jan 26 13:12:11 crc kubenswrapper[4881]: I0126 13:12:11.547354 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ceaad5ff-3f46-431b-817c-669e0f038898-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-lbcjj\" (UID: \"ceaad5ff-3f46-431b-817c-669e0f038898\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lbcjj" Jan 26 13:12:11 crc kubenswrapper[4881]: I0126 13:12:11.547603 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zpq6c\" (UniqueName: \"kubernetes.io/projected/ceaad5ff-3f46-431b-817c-669e0f038898-kube-api-access-zpq6c\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-lbcjj\" (UID: \"ceaad5ff-3f46-431b-817c-669e0f038898\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lbcjj" Jan 26 13:12:11 crc kubenswrapper[4881]: I0126 13:12:11.547702 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ceaad5ff-3f46-431b-817c-669e0f038898-inventory\") pod 
\"install-os-edpm-deployment-openstack-edpm-ipam-lbcjj\" (UID: \"ceaad5ff-3f46-431b-817c-669e0f038898\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lbcjj" Jan 26 13:12:11 crc kubenswrapper[4881]: I0126 13:12:11.552039 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ceaad5ff-3f46-431b-817c-669e0f038898-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-lbcjj\" (UID: \"ceaad5ff-3f46-431b-817c-669e0f038898\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lbcjj" Jan 26 13:12:11 crc kubenswrapper[4881]: I0126 13:12:11.552759 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ceaad5ff-3f46-431b-817c-669e0f038898-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-lbcjj\" (UID: \"ceaad5ff-3f46-431b-817c-669e0f038898\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lbcjj" Jan 26 13:12:11 crc kubenswrapper[4881]: I0126 13:12:11.568821 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zpq6c\" (UniqueName: \"kubernetes.io/projected/ceaad5ff-3f46-431b-817c-669e0f038898-kube-api-access-zpq6c\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-lbcjj\" (UID: \"ceaad5ff-3f46-431b-817c-669e0f038898\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lbcjj" Jan 26 13:12:11 crc kubenswrapper[4881]: I0126 13:12:11.710787 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lbcjj" Jan 26 13:12:12 crc kubenswrapper[4881]: I0126 13:12:12.336942 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-lbcjj"] Jan 26 13:12:13 crc kubenswrapper[4881]: I0126 13:12:13.051588 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-nzdjt"] Jan 26 13:12:13 crc kubenswrapper[4881]: I0126 13:12:13.059820 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-nzdjt"] Jan 26 13:12:13 crc kubenswrapper[4881]: I0126 13:12:13.279839 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lbcjj" event={"ID":"ceaad5ff-3f46-431b-817c-669e0f038898","Type":"ContainerStarted","Data":"828b51f722a6e19a4d41ac56bee599c7ab83330dfbf83729f79c0544113efe01"} Jan 26 13:12:14 crc kubenswrapper[4881]: I0126 13:12:14.095138 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36c2d906-18af-4025-ac9d-b142b34586f3" path="/var/lib/kubelet/pods/36c2d906-18af-4025-ac9d-b142b34586f3/volumes" Jan 26 13:12:14 crc kubenswrapper[4881]: I0126 13:12:14.290764 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lbcjj" event={"ID":"ceaad5ff-3f46-431b-817c-669e0f038898","Type":"ContainerStarted","Data":"f000aa176960624681fe03be383722a4c6019ca2bd322c4bc9cc85069c2a021c"} Jan 26 13:12:14 crc kubenswrapper[4881]: I0126 13:12:14.316911 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lbcjj" podStartSLOduration=2.564243752 podStartE2EDuration="3.316893508s" podCreationTimestamp="2026-01-26 13:12:11 +0000 UTC" firstStartedPulling="2026-01-26 13:12:12.339813075 +0000 UTC m=+2204.819123101" 
lastFinishedPulling="2026-01-26 13:12:13.092462831 +0000 UTC m=+2205.571772857" observedRunningTime="2026-01-26 13:12:14.308775481 +0000 UTC m=+2206.788085597" watchObservedRunningTime="2026-01-26 13:12:14.316893508 +0000 UTC m=+2206.796203534" Jan 26 13:12:24 crc kubenswrapper[4881]: I0126 13:12:24.789201 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fwlbz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 13:12:24 crc kubenswrapper[4881]: I0126 13:12:24.790086 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 13:12:24 crc kubenswrapper[4881]: I0126 13:12:24.790169 4881 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" Jan 26 13:12:24 crc kubenswrapper[4881]: I0126 13:12:24.791258 4881 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7750edf22d4cbb66ace9f47e5c6d27c40449613083712f392ec90e1aada14c4a"} pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 13:12:24 crc kubenswrapper[4881]: I0126 13:12:24.791401 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" containerName="machine-config-daemon" containerID="cri-o://7750edf22d4cbb66ace9f47e5c6d27c40449613083712f392ec90e1aada14c4a" gracePeriod=600 Jan 26 13:12:25 crc kubenswrapper[4881]: I0126 13:12:25.410314 4881 generic.go:334] "Generic (PLEG): container finished" podID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" containerID="7750edf22d4cbb66ace9f47e5c6d27c40449613083712f392ec90e1aada14c4a" exitCode=0 Jan 26 13:12:25 crc kubenswrapper[4881]: I0126 13:12:25.410323 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" event={"ID":"ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19","Type":"ContainerDied","Data":"7750edf22d4cbb66ace9f47e5c6d27c40449613083712f392ec90e1aada14c4a"} Jan 26 13:12:25 crc kubenswrapper[4881]: I0126 13:12:25.410682 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" event={"ID":"ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19","Type":"ContainerStarted","Data":"cfbf2717bd5606e33fa4a304b006f0f82f182052a66313b15f98a11bb12d2b4c"} Jan 26 13:12:25 crc kubenswrapper[4881]: I0126 13:12:25.410722 4881 scope.go:117] "RemoveContainer" containerID="a161c3072a6f7e76b03b9d4b408880fa8105ae8ab9bede27b8975663150557e3" Jan 26 13:12:37 crc kubenswrapper[4881]: I0126 13:12:37.050623 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-bhqzs"] Jan 26 13:12:37 crc kubenswrapper[4881]: I0126 13:12:37.066354 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-bhqzs"] Jan 26 13:12:38 crc kubenswrapper[4881]: I0126 13:12:38.099286 4881 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07b4140b-bd1d-4295-961e-99d18c4406c3" path="/var/lib/kubelet/pods/07b4140b-bd1d-4295-961e-99d18c4406c3/volumes" Jan 26 13:12:55 crc kubenswrapper[4881]: I0126 13:12:55.079551 4881 scope.go:117] "RemoveContainer" containerID="38f68b0a65c4307752767c55853fb1a2715f331edc183bd70b07536a1159e5b4" Jan 26 13:12:55 crc kubenswrapper[4881]: I0126 13:12:55.143248 4881 scope.go:117] "RemoveContainer" containerID="094fe90e94e3e35810d16d888a4028224036b85f76f85e1088a1f49e9407b9b4" Jan 26 13:12:57 crc kubenswrapper[4881]: I0126 13:12:57.751314 4881 generic.go:334] "Generic (PLEG): container finished" podID="ceaad5ff-3f46-431b-817c-669e0f038898" containerID="f000aa176960624681fe03be383722a4c6019ca2bd322c4bc9cc85069c2a021c" exitCode=0 Jan 26 13:12:57 crc kubenswrapper[4881]: I0126 13:12:57.751441 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lbcjj" event={"ID":"ceaad5ff-3f46-431b-817c-669e0f038898","Type":"ContainerDied","Data":"f000aa176960624681fe03be383722a4c6019ca2bd322c4bc9cc85069c2a021c"} Jan 26 13:12:59 crc kubenswrapper[4881]: I0126 13:12:59.059356 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-m65pm"] Jan 26 13:12:59 crc kubenswrapper[4881]: I0126 13:12:59.069778 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-m65pm"] Jan 26 13:12:59 crc kubenswrapper[4881]: I0126 13:12:59.183249 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lbcjj" Jan 26 13:12:59 crc kubenswrapper[4881]: I0126 13:12:59.212109 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ceaad5ff-3f46-431b-817c-669e0f038898-inventory\") pod \"ceaad5ff-3f46-431b-817c-669e0f038898\" (UID: \"ceaad5ff-3f46-431b-817c-669e0f038898\") " Jan 26 13:12:59 crc kubenswrapper[4881]: I0126 13:12:59.212647 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ceaad5ff-3f46-431b-817c-669e0f038898-ssh-key-openstack-edpm-ipam\") pod \"ceaad5ff-3f46-431b-817c-669e0f038898\" (UID: \"ceaad5ff-3f46-431b-817c-669e0f038898\") " Jan 26 13:12:59 crc kubenswrapper[4881]: I0126 13:12:59.212878 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zpq6c\" (UniqueName: \"kubernetes.io/projected/ceaad5ff-3f46-431b-817c-669e0f038898-kube-api-access-zpq6c\") pod \"ceaad5ff-3f46-431b-817c-669e0f038898\" (UID: \"ceaad5ff-3f46-431b-817c-669e0f038898\") " Jan 26 13:12:59 crc kubenswrapper[4881]: I0126 13:12:59.219740 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ceaad5ff-3f46-431b-817c-669e0f038898-kube-api-access-zpq6c" (OuterVolumeSpecName: "kube-api-access-zpq6c") pod "ceaad5ff-3f46-431b-817c-669e0f038898" (UID: "ceaad5ff-3f46-431b-817c-669e0f038898"). InnerVolumeSpecName "kube-api-access-zpq6c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:12:59 crc kubenswrapper[4881]: I0126 13:12:59.248622 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ceaad5ff-3f46-431b-817c-669e0f038898-inventory" (OuterVolumeSpecName: "inventory") pod "ceaad5ff-3f46-431b-817c-669e0f038898" (UID: "ceaad5ff-3f46-431b-817c-669e0f038898"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:12:59 crc kubenswrapper[4881]: I0126 13:12:59.254819 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ceaad5ff-3f46-431b-817c-669e0f038898-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ceaad5ff-3f46-431b-817c-669e0f038898" (UID: "ceaad5ff-3f46-431b-817c-669e0f038898"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:12:59 crc kubenswrapper[4881]: I0126 13:12:59.315404 4881 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ceaad5ff-3f46-431b-817c-669e0f038898-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 13:12:59 crc kubenswrapper[4881]: I0126 13:12:59.315719 4881 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ceaad5ff-3f46-431b-817c-669e0f038898-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 13:12:59 crc kubenswrapper[4881]: I0126 13:12:59.315731 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zpq6c\" (UniqueName: \"kubernetes.io/projected/ceaad5ff-3f46-431b-817c-669e0f038898-kube-api-access-zpq6c\") on node \"crc\" DevicePath \"\"" Jan 26 13:12:59 crc kubenswrapper[4881]: I0126 13:12:59.774919 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lbcjj" Jan 26 13:12:59 crc kubenswrapper[4881]: I0126 13:12:59.774914 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lbcjj" event={"ID":"ceaad5ff-3f46-431b-817c-669e0f038898","Type":"ContainerDied","Data":"828b51f722a6e19a4d41ac56bee599c7ab83330dfbf83729f79c0544113efe01"} Jan 26 13:12:59 crc kubenswrapper[4881]: I0126 13:12:59.775061 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="828b51f722a6e19a4d41ac56bee599c7ab83330dfbf83729f79c0544113efe01" Jan 26 13:12:59 crc kubenswrapper[4881]: I0126 13:12:59.881563 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-t8v57"] Jan 26 13:12:59 crc kubenswrapper[4881]: E0126 13:12:59.882073 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ceaad5ff-3f46-431b-817c-669e0f038898" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 26 13:12:59 crc kubenswrapper[4881]: I0126 13:12:59.882095 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="ceaad5ff-3f46-431b-817c-669e0f038898" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 26 13:12:59 crc kubenswrapper[4881]: I0126 13:12:59.882276 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="ceaad5ff-3f46-431b-817c-669e0f038898" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 26 13:12:59 crc kubenswrapper[4881]: I0126 13:12:59.882999 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-t8v57" Jan 26 13:12:59 crc kubenswrapper[4881]: I0126 13:12:59.884687 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 13:12:59 crc kubenswrapper[4881]: I0126 13:12:59.886095 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 13:12:59 crc kubenswrapper[4881]: I0126 13:12:59.886348 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-2krn6" Jan 26 13:12:59 crc kubenswrapper[4881]: I0126 13:12:59.886476 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 13:12:59 crc kubenswrapper[4881]: I0126 13:12:59.903132 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-t8v57"] Jan 26 13:12:59 crc kubenswrapper[4881]: I0126 13:12:59.932364 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/12cb0be5-3cea-4264-9b33-42194fe4991c-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-t8v57\" (UID: \"12cb0be5-3cea-4264-9b33-42194fe4991c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-t8v57" Jan 26 13:12:59 crc kubenswrapper[4881]: I0126 13:12:59.932476 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/12cb0be5-3cea-4264-9b33-42194fe4991c-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-t8v57\" (UID: \"12cb0be5-3cea-4264-9b33-42194fe4991c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-t8v57" Jan 26 13:12:59 crc kubenswrapper[4881]: I0126 13:12:59.932614 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qs2kc\" (UniqueName: \"kubernetes.io/projected/12cb0be5-3cea-4264-9b33-42194fe4991c-kube-api-access-qs2kc\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-t8v57\" (UID: \"12cb0be5-3cea-4264-9b33-42194fe4991c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-t8v57" Jan 26 13:13:00 crc kubenswrapper[4881]: I0126 13:13:00.034043 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qs2kc\" (UniqueName: \"kubernetes.io/projected/12cb0be5-3cea-4264-9b33-42194fe4991c-kube-api-access-qs2kc\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-t8v57\" (UID: \"12cb0be5-3cea-4264-9b33-42194fe4991c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-t8v57" Jan 26 13:13:00 crc kubenswrapper[4881]: I0126 13:13:00.034116 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/12cb0be5-3cea-4264-9b33-42194fe4991c-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-t8v57\" (UID: \"12cb0be5-3cea-4264-9b33-42194fe4991c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-t8v57" Jan 26 13:13:00 crc kubenswrapper[4881]: I0126 13:13:00.034177 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/12cb0be5-3cea-4264-9b33-42194fe4991c-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-t8v57\" (UID: \"12cb0be5-3cea-4264-9b33-42194fe4991c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-t8v57" Jan 26 13:13:00 crc kubenswrapper[4881]: I0126 13:13:00.039161 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/12cb0be5-3cea-4264-9b33-42194fe4991c-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-t8v57\" (UID: \"12cb0be5-3cea-4264-9b33-42194fe4991c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-t8v57" Jan 26 13:13:00 crc kubenswrapper[4881]: I0126 13:13:00.040003 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/12cb0be5-3cea-4264-9b33-42194fe4991c-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-t8v57\" (UID: \"12cb0be5-3cea-4264-9b33-42194fe4991c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-t8v57" Jan 26 13:13:00 crc kubenswrapper[4881]: I0126 13:13:00.059950 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qs2kc\" (UniqueName: \"kubernetes.io/projected/12cb0be5-3cea-4264-9b33-42194fe4991c-kube-api-access-qs2kc\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-t8v57\" (UID: \"12cb0be5-3cea-4264-9b33-42194fe4991c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-t8v57" Jan 26 13:13:00 crc kubenswrapper[4881]: I0126 13:13:00.093999 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7b77227-f137-4a0e-84bc-383c0facf6b9" path="/var/lib/kubelet/pods/e7b77227-f137-4a0e-84bc-383c0facf6b9/volumes" Jan 26 13:13:00 crc kubenswrapper[4881]: I0126 13:13:00.215688 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-t8v57" Jan 26 13:13:00 crc kubenswrapper[4881]: W0126 13:13:00.808987 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod12cb0be5_3cea_4264_9b33_42194fe4991c.slice/crio-c459faa3acfee59d1229070d6338113eed5ad02a244478fcf00219ca5b999cf5 WatchSource:0}: Error finding container c459faa3acfee59d1229070d6338113eed5ad02a244478fcf00219ca5b999cf5: Status 404 returned error can't find the container with id c459faa3acfee59d1229070d6338113eed5ad02a244478fcf00219ca5b999cf5 Jan 26 13:13:00 crc kubenswrapper[4881]: I0126 13:13:00.809743 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-t8v57"] Jan 26 13:13:01 crc kubenswrapper[4881]: I0126 13:13:01.807501 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-t8v57" event={"ID":"12cb0be5-3cea-4264-9b33-42194fe4991c","Type":"ContainerStarted","Data":"0a8d5c022bc581b796a1e8d943db7d1d0304f439838398c5b7fcdeb583563a65"} Jan 26 13:13:01 crc kubenswrapper[4881]: I0126 13:13:01.807822 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-t8v57" event={"ID":"12cb0be5-3cea-4264-9b33-42194fe4991c","Type":"ContainerStarted","Data":"c459faa3acfee59d1229070d6338113eed5ad02a244478fcf00219ca5b999cf5"} Jan 26 13:13:55 crc kubenswrapper[4881]: I0126 13:13:55.261838 4881 scope.go:117] "RemoveContainer" containerID="62d6b06cdc7085c02cfb8f7661419cca91af57b207f114cdb79d2ad156a5ed39" Jan 26 13:14:00 crc kubenswrapper[4881]: I0126 13:14:00.412810 4881 generic.go:334] "Generic (PLEG): container finished" podID="12cb0be5-3cea-4264-9b33-42194fe4991c" containerID="0a8d5c022bc581b796a1e8d943db7d1d0304f439838398c5b7fcdeb583563a65" exitCode=0 Jan 26 13:14:00 crc kubenswrapper[4881]: I0126 13:14:00.412955 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-t8v57" event={"ID":"12cb0be5-3cea-4264-9b33-42194fe4991c","Type":"ContainerDied","Data":"0a8d5c022bc581b796a1e8d943db7d1d0304f439838398c5b7fcdeb583563a65"} Jan 26 13:14:01 crc kubenswrapper[4881]: I0126 13:14:01.847594 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-t8v57" Jan 26 13:14:01 crc kubenswrapper[4881]: I0126 13:14:01.852357 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs2kc\" (UniqueName: \"kubernetes.io/projected/12cb0be5-3cea-4264-9b33-42194fe4991c-kube-api-access-qs2kc\") pod \"12cb0be5-3cea-4264-9b33-42194fe4991c\" (UID: \"12cb0be5-3cea-4264-9b33-42194fe4991c\") " Jan 26 13:14:01 crc kubenswrapper[4881]: I0126 13:14:01.852399 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/12cb0be5-3cea-4264-9b33-42194fe4991c-ssh-key-openstack-edpm-ipam\") pod \"12cb0be5-3cea-4264-9b33-42194fe4991c\" (UID: \"12cb0be5-3cea-4264-9b33-42194fe4991c\") " Jan 26 13:14:01 crc kubenswrapper[4881]: I0126 13:14:01.852496 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/12cb0be5-3cea-4264-9b33-42194fe4991c-inventory\") pod \"12cb0be5-3cea-4264-9b33-42194fe4991c\" (UID: \"12cb0be5-3cea-4264-9b33-42194fe4991c\") " Jan 26 13:14:01 crc kubenswrapper[4881]: I0126 13:14:01.861115 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12cb0be5-3cea-4264-9b33-42194fe4991c-kube-api-access-qs2kc" (OuterVolumeSpecName: "kube-api-access-qs2kc") pod "12cb0be5-3cea-4264-9b33-42194fe4991c" (UID: "12cb0be5-3cea-4264-9b33-42194fe4991c"). InnerVolumeSpecName "kube-api-access-qs2kc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:14:01 crc kubenswrapper[4881]: I0126 13:14:01.903449 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12cb0be5-3cea-4264-9b33-42194fe4991c-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "12cb0be5-3cea-4264-9b33-42194fe4991c" (UID: "12cb0be5-3cea-4264-9b33-42194fe4991c"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:14:01 crc kubenswrapper[4881]: I0126 13:14:01.909306 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12cb0be5-3cea-4264-9b33-42194fe4991c-inventory" (OuterVolumeSpecName: "inventory") pod "12cb0be5-3cea-4264-9b33-42194fe4991c" (UID: "12cb0be5-3cea-4264-9b33-42194fe4991c"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:14:01 crc kubenswrapper[4881]: I0126 13:14:01.954788 4881 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/12cb0be5-3cea-4264-9b33-42194fe4991c-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 13:14:01 crc kubenswrapper[4881]: I0126 13:14:01.954823 4881 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/12cb0be5-3cea-4264-9b33-42194fe4991c-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 13:14:01 crc kubenswrapper[4881]: I0126 13:14:01.954835 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs2kc\" (UniqueName: \"kubernetes.io/projected/12cb0be5-3cea-4264-9b33-42194fe4991c-kube-api-access-qs2kc\") on node \"crc\" DevicePath \"\"" Jan 26 13:14:02 crc kubenswrapper[4881]: I0126 13:14:02.434972 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-t8v57" event={"ID":"12cb0be5-3cea-4264-9b33-42194fe4991c","Type":"ContainerDied","Data":"c459faa3acfee59d1229070d6338113eed5ad02a244478fcf00219ca5b999cf5"} Jan 26 13:14:02 crc kubenswrapper[4881]: I0126 13:14:02.435261 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c459faa3acfee59d1229070d6338113eed5ad02a244478fcf00219ca5b999cf5" Jan 26 13:14:02 crc kubenswrapper[4881]: I0126 13:14:02.435050 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-t8v57" Jan 26 13:14:02 crc kubenswrapper[4881]: I0126 13:14:02.528935 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-n5zvt"] Jan 26 13:14:02 crc kubenswrapper[4881]: E0126 13:14:02.529445 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12cb0be5-3cea-4264-9b33-42194fe4991c" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 26 13:14:02 crc kubenswrapper[4881]: I0126 13:14:02.529470 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="12cb0be5-3cea-4264-9b33-42194fe4991c" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 26 13:14:02 crc kubenswrapper[4881]: I0126 13:14:02.529785 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="12cb0be5-3cea-4264-9b33-42194fe4991c" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 26 13:14:02 crc kubenswrapper[4881]: I0126 13:14:02.530623 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-n5zvt" Jan 26 13:14:02 crc kubenswrapper[4881]: I0126 13:14:02.536128 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-2krn6" Jan 26 13:14:02 crc kubenswrapper[4881]: I0126 13:14:02.540352 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 13:14:02 crc kubenswrapper[4881]: I0126 13:14:02.540442 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 13:14:02 crc kubenswrapper[4881]: I0126 13:14:02.540592 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 13:14:02 crc kubenswrapper[4881]: I0126 13:14:02.546737 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-n5zvt"] Jan 26 13:14:02 crc kubenswrapper[4881]: I0126 13:14:02.670637 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/39dc7ed5-dafd-4ef4-94a7-509fc1568f5a-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-n5zvt\" (UID: \"39dc7ed5-dafd-4ef4-94a7-509fc1568f5a\") " pod="openstack/ssh-known-hosts-edpm-deployment-n5zvt" Jan 26 13:14:02 crc kubenswrapper[4881]: I0126 13:14:02.670862 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/39dc7ed5-dafd-4ef4-94a7-509fc1568f5a-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-n5zvt\" (UID: \"39dc7ed5-dafd-4ef4-94a7-509fc1568f5a\") " pod="openstack/ssh-known-hosts-edpm-deployment-n5zvt" Jan 26 13:14:02 crc kubenswrapper[4881]: I0126 13:14:02.670906 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbjrg\" (UniqueName: \"kubernetes.io/projected/39dc7ed5-dafd-4ef4-94a7-509fc1568f5a-kube-api-access-hbjrg\") pod \"ssh-known-hosts-edpm-deployment-n5zvt\" (UID: \"39dc7ed5-dafd-4ef4-94a7-509fc1568f5a\") " pod="openstack/ssh-known-hosts-edpm-deployment-n5zvt" Jan 26 13:14:02 crc kubenswrapper[4881]: I0126 13:14:02.772761 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/39dc7ed5-dafd-4ef4-94a7-509fc1568f5a-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-n5zvt\" (UID: \"39dc7ed5-dafd-4ef4-94a7-509fc1568f5a\") " pod="openstack/ssh-known-hosts-edpm-deployment-n5zvt" Jan 26 13:14:02 crc kubenswrapper[4881]: I0126 13:14:02.773076 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/39dc7ed5-dafd-4ef4-94a7-509fc1568f5a-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-n5zvt\" (UID: \"39dc7ed5-dafd-4ef4-94a7-509fc1568f5a\") " pod="openstack/ssh-known-hosts-edpm-deployment-n5zvt" Jan 26 13:14:02 crc kubenswrapper[4881]: I0126 13:14:02.773136 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbjrg\" (UniqueName: \"kubernetes.io/projected/39dc7ed5-dafd-4ef4-94a7-509fc1568f5a-kube-api-access-hbjrg\") pod \"ssh-known-hosts-edpm-deployment-n5zvt\" (UID: \"39dc7ed5-dafd-4ef4-94a7-509fc1568f5a\") " pod="openstack/ssh-known-hosts-edpm-deployment-n5zvt" Jan 26 13:14:02 crc 
kubenswrapper[4881]: I0126 13:14:02.777620 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/39dc7ed5-dafd-4ef4-94a7-509fc1568f5a-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-n5zvt\" (UID: \"39dc7ed5-dafd-4ef4-94a7-509fc1568f5a\") " pod="openstack/ssh-known-hosts-edpm-deployment-n5zvt" Jan 26 13:14:02 crc kubenswrapper[4881]: I0126 13:14:02.777642 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/39dc7ed5-dafd-4ef4-94a7-509fc1568f5a-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-n5zvt\" (UID: \"39dc7ed5-dafd-4ef4-94a7-509fc1568f5a\") " pod="openstack/ssh-known-hosts-edpm-deployment-n5zvt" Jan 26 13:14:02 crc kubenswrapper[4881]: I0126 13:14:02.794730 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbjrg\" (UniqueName: \"kubernetes.io/projected/39dc7ed5-dafd-4ef4-94a7-509fc1568f5a-kube-api-access-hbjrg\") pod \"ssh-known-hosts-edpm-deployment-n5zvt\" (UID: \"39dc7ed5-dafd-4ef4-94a7-509fc1568f5a\") " pod="openstack/ssh-known-hosts-edpm-deployment-n5zvt" Jan 26 13:14:02 crc kubenswrapper[4881]: I0126 13:14:02.858787 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-n5zvt" Jan 26 13:14:03 crc kubenswrapper[4881]: I0126 13:14:03.211635 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-n5zvt"] Jan 26 13:14:03 crc kubenswrapper[4881]: I0126 13:14:03.215488 4881 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 13:14:03 crc kubenswrapper[4881]: I0126 13:14:03.448477 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-n5zvt" event={"ID":"39dc7ed5-dafd-4ef4-94a7-509fc1568f5a","Type":"ContainerStarted","Data":"71a5075bc32d000003c6a975b1dba7a70dea557c85f15dc2320f5a712098438e"} Jan 26 13:14:05 crc kubenswrapper[4881]: I0126 13:14:05.474849 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-n5zvt" event={"ID":"39dc7ed5-dafd-4ef4-94a7-509fc1568f5a","Type":"ContainerStarted","Data":"8c53c2b9c3c214361f3f91a5e1cdc2c77ae2d70afcb166b5bf4f8594fdb56a18"} Jan 26 13:14:05 crc kubenswrapper[4881]: I0126 13:14:05.506020 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-n5zvt" podStartSLOduration=2.1324884219999998 podStartE2EDuration="3.505995356s" podCreationTimestamp="2026-01-26 13:14:02 +0000 UTC" firstStartedPulling="2026-01-26 13:14:03.215301097 +0000 UTC m=+2315.694611123" lastFinishedPulling="2026-01-26 13:14:04.588808021 +0000 UTC m=+2317.068118057" observedRunningTime="2026-01-26 13:14:05.49626904 +0000 UTC m=+2317.975579096" watchObservedRunningTime="2026-01-26 13:14:05.505995356 +0000 UTC m=+2317.985305412" Jan 26 13:14:12 crc kubenswrapper[4881]: I0126 13:14:12.552696 4881 generic.go:334] "Generic (PLEG): container finished" podID="39dc7ed5-dafd-4ef4-94a7-509fc1568f5a" containerID="8c53c2b9c3c214361f3f91a5e1cdc2c77ae2d70afcb166b5bf4f8594fdb56a18" exitCode=0 Jan 26 13:14:12 crc kubenswrapper[4881]: I0126 13:14:12.553256 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-n5zvt" 
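pod_startup_latency_tracker decomposes the startup of ssh-known-hosts-edpm-deployment-n5zvt: podStartE2EDuration (3.506 s) spans pod creation to observed running, while podStartSLOduration (2.132 s) excludes the image pull window, which is bounded by firstStartedPulling and lastFinishedPulling (about 1.374 s here, beginning right after the credential-provider cache refresh). A sketch reproducing the arithmetic from the logged timestamps:

    // pulltime.go - recompute the image pull window and show that
    // E2E duration minus pull time matches podStartSLOduration
    // (up to float rounding in the logged value).
    package main

    import (
    	"fmt"
    	"time"
    )

    // Go's default time.Time.String() format, which is what the log carries.
    const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

    func main() {
    	first, err := time.Parse(layout, "2026-01-26 13:14:03.215301097 +0000 UTC")
    	if err != nil {
    		panic(err)
    	}
    	last, err := time.Parse(layout, "2026-01-26 13:14:04.588808021 +0000 UTC")
    	if err != nil {
    		panic(err)
    	}
    	pull := last.Sub(first)
    	fmt.Println("image pull window:", pull) // ~1.373506924s
    	e2e := 3505995356 * time.Nanosecond     // podStartE2EDuration="3.505995356s"
    	fmt.Println("E2E minus pull:   ", e2e-pull) // ≈ podStartSLOduration (2.132488422)
    }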
event={"ID":"39dc7ed5-dafd-4ef4-94a7-509fc1568f5a","Type":"ContainerDied","Data":"8c53c2b9c3c214361f3f91a5e1cdc2c77ae2d70afcb166b5bf4f8594fdb56a18"} Jan 26 13:14:14 crc kubenswrapper[4881]: I0126 13:14:14.021640 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-n5zvt" Jan 26 13:14:14 crc kubenswrapper[4881]: I0126 13:14:14.047783 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/39dc7ed5-dafd-4ef4-94a7-509fc1568f5a-inventory-0\") pod \"39dc7ed5-dafd-4ef4-94a7-509fc1568f5a\" (UID: \"39dc7ed5-dafd-4ef4-94a7-509fc1568f5a\") " Jan 26 13:14:14 crc kubenswrapper[4881]: I0126 13:14:14.047852 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hbjrg\" (UniqueName: \"kubernetes.io/projected/39dc7ed5-dafd-4ef4-94a7-509fc1568f5a-kube-api-access-hbjrg\") pod \"39dc7ed5-dafd-4ef4-94a7-509fc1568f5a\" (UID: \"39dc7ed5-dafd-4ef4-94a7-509fc1568f5a\") " Jan 26 13:14:14 crc kubenswrapper[4881]: I0126 13:14:14.047892 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/39dc7ed5-dafd-4ef4-94a7-509fc1568f5a-ssh-key-openstack-edpm-ipam\") pod \"39dc7ed5-dafd-4ef4-94a7-509fc1568f5a\" (UID: \"39dc7ed5-dafd-4ef4-94a7-509fc1568f5a\") " Jan 26 13:14:14 crc kubenswrapper[4881]: I0126 13:14:14.056269 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39dc7ed5-dafd-4ef4-94a7-509fc1568f5a-kube-api-access-hbjrg" (OuterVolumeSpecName: "kube-api-access-hbjrg") pod "39dc7ed5-dafd-4ef4-94a7-509fc1568f5a" (UID: "39dc7ed5-dafd-4ef4-94a7-509fc1568f5a"). InnerVolumeSpecName "kube-api-access-hbjrg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:14:14 crc kubenswrapper[4881]: I0126 13:14:14.082249 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39dc7ed5-dafd-4ef4-94a7-509fc1568f5a-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "39dc7ed5-dafd-4ef4-94a7-509fc1568f5a" (UID: "39dc7ed5-dafd-4ef4-94a7-509fc1568f5a"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:14:14 crc kubenswrapper[4881]: I0126 13:14:14.091607 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39dc7ed5-dafd-4ef4-94a7-509fc1568f5a-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "39dc7ed5-dafd-4ef4-94a7-509fc1568f5a" (UID: "39dc7ed5-dafd-4ef4-94a7-509fc1568f5a"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:14:14 crc kubenswrapper[4881]: I0126 13:14:14.151486 4881 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/39dc7ed5-dafd-4ef4-94a7-509fc1568f5a-inventory-0\") on node \"crc\" DevicePath \"\"" Jan 26 13:14:14 crc kubenswrapper[4881]: I0126 13:14:14.151532 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hbjrg\" (UniqueName: \"kubernetes.io/projected/39dc7ed5-dafd-4ef4-94a7-509fc1568f5a-kube-api-access-hbjrg\") on node \"crc\" DevicePath \"\"" Jan 26 13:14:14 crc kubenswrapper[4881]: I0126 13:14:14.151548 4881 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/39dc7ed5-dafd-4ef4-94a7-509fc1568f5a-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 13:14:14 crc kubenswrapper[4881]: I0126 13:14:14.579719 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-n5zvt" event={"ID":"39dc7ed5-dafd-4ef4-94a7-509fc1568f5a","Type":"ContainerDied","Data":"71a5075bc32d000003c6a975b1dba7a70dea557c85f15dc2320f5a712098438e"} Jan 26 13:14:14 crc kubenswrapper[4881]: I0126 13:14:14.579765 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="71a5075bc32d000003c6a975b1dba7a70dea557c85f15dc2320f5a712098438e" Jan 26 13:14:14 crc kubenswrapper[4881]: I0126 13:14:14.579811 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-n5zvt" Jan 26 13:14:14 crc kubenswrapper[4881]: I0126 13:14:14.688864 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-54xff"] Jan 26 13:14:14 crc kubenswrapper[4881]: E0126 13:14:14.689566 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39dc7ed5-dafd-4ef4-94a7-509fc1568f5a" containerName="ssh-known-hosts-edpm-deployment" Jan 26 13:14:14 crc kubenswrapper[4881]: I0126 13:14:14.689597 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="39dc7ed5-dafd-4ef4-94a7-509fc1568f5a" containerName="ssh-known-hosts-edpm-deployment" Jan 26 13:14:14 crc kubenswrapper[4881]: I0126 13:14:14.689987 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="39dc7ed5-dafd-4ef4-94a7-509fc1568f5a" containerName="ssh-known-hosts-edpm-deployment" Jan 26 13:14:14 crc kubenswrapper[4881]: I0126 13:14:14.691091 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-54xff" Jan 26 13:14:14 crc kubenswrapper[4881]: I0126 13:14:14.693787 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 13:14:14 crc kubenswrapper[4881]: I0126 13:14:14.693993 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 13:14:14 crc kubenswrapper[4881]: I0126 13:14:14.694031 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 13:14:14 crc kubenswrapper[4881]: I0126 13:14:14.694534 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-2krn6" Jan 26 13:14:14 crc kubenswrapper[4881]: I0126 13:14:14.700147 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-54xff"] Jan 26 13:14:14 crc kubenswrapper[4881]: I0126 13:14:14.762042 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/81f1ca9f-e8b5-477d-b628-8a60848a7fe2-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-54xff\" (UID: \"81f1ca9f-e8b5-477d-b628-8a60848a7fe2\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-54xff" Jan 26 13:14:14 crc kubenswrapper[4881]: I0126 13:14:14.762311 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/81f1ca9f-e8b5-477d-b628-8a60848a7fe2-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-54xff\" (UID: \"81f1ca9f-e8b5-477d-b628-8a60848a7fe2\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-54xff" Jan 26 13:14:14 crc kubenswrapper[4881]: I0126 13:14:14.762558 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwmjw\" (UniqueName: \"kubernetes.io/projected/81f1ca9f-e8b5-477d-b628-8a60848a7fe2-kube-api-access-rwmjw\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-54xff\" (UID: \"81f1ca9f-e8b5-477d-b628-8a60848a7fe2\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-54xff" Jan 26 13:14:14 crc kubenswrapper[4881]: I0126 13:14:14.863398 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/81f1ca9f-e8b5-477d-b628-8a60848a7fe2-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-54xff\" (UID: \"81f1ca9f-e8b5-477d-b628-8a60848a7fe2\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-54xff" Jan 26 13:14:14 crc kubenswrapper[4881]: I0126 13:14:14.863479 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/81f1ca9f-e8b5-477d-b628-8a60848a7fe2-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-54xff\" (UID: \"81f1ca9f-e8b5-477d-b628-8a60848a7fe2\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-54xff" Jan 26 13:14:14 crc kubenswrapper[4881]: I0126 13:14:14.863611 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rwmjw\" (UniqueName: \"kubernetes.io/projected/81f1ca9f-e8b5-477d-b628-8a60848a7fe2-kube-api-access-rwmjw\") pod 
\"run-os-edpm-deployment-openstack-edpm-ipam-54xff\" (UID: \"81f1ca9f-e8b5-477d-b628-8a60848a7fe2\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-54xff" Jan 26 13:14:14 crc kubenswrapper[4881]: I0126 13:14:14.871399 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/81f1ca9f-e8b5-477d-b628-8a60848a7fe2-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-54xff\" (UID: \"81f1ca9f-e8b5-477d-b628-8a60848a7fe2\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-54xff" Jan 26 13:14:14 crc kubenswrapper[4881]: I0126 13:14:14.879662 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/81f1ca9f-e8b5-477d-b628-8a60848a7fe2-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-54xff\" (UID: \"81f1ca9f-e8b5-477d-b628-8a60848a7fe2\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-54xff" Jan 26 13:14:14 crc kubenswrapper[4881]: I0126 13:14:14.882119 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rwmjw\" (UniqueName: \"kubernetes.io/projected/81f1ca9f-e8b5-477d-b628-8a60848a7fe2-kube-api-access-rwmjw\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-54xff\" (UID: \"81f1ca9f-e8b5-477d-b628-8a60848a7fe2\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-54xff" Jan 26 13:14:15 crc kubenswrapper[4881]: I0126 13:14:15.021245 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-54xff" Jan 26 13:14:15 crc kubenswrapper[4881]: I0126 13:14:15.630838 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-54xff"] Jan 26 13:14:16 crc kubenswrapper[4881]: I0126 13:14:16.605224 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-54xff" event={"ID":"81f1ca9f-e8b5-477d-b628-8a60848a7fe2","Type":"ContainerStarted","Data":"9777f391d4543bf31d267dabb44fa9784497faa7c105d9c01f50527b1df314f4"} Jan 26 13:14:16 crc kubenswrapper[4881]: I0126 13:14:16.605600 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-54xff" event={"ID":"81f1ca9f-e8b5-477d-b628-8a60848a7fe2","Type":"ContainerStarted","Data":"057c4e86bc9452206882a74431c5a95dc9a5f9dd6f5c494374fa4ce1953f7e6d"} Jan 26 13:14:16 crc kubenswrapper[4881]: I0126 13:14:16.635825 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-54xff" podStartSLOduration=2.198204628 podStartE2EDuration="2.635800532s" podCreationTimestamp="2026-01-26 13:14:14 +0000 UTC" firstStartedPulling="2026-01-26 13:14:15.63629847 +0000 UTC m=+2328.115608496" lastFinishedPulling="2026-01-26 13:14:16.073894374 +0000 UTC m=+2328.553204400" observedRunningTime="2026-01-26 13:14:16.627705036 +0000 UTC m=+2329.107015062" watchObservedRunningTime="2026-01-26 13:14:16.635800532 +0000 UTC m=+2329.115110568" Jan 26 13:14:26 crc kubenswrapper[4881]: I0126 13:14:26.720896 4881 generic.go:334] "Generic (PLEG): container finished" podID="81f1ca9f-e8b5-477d-b628-8a60848a7fe2" containerID="9777f391d4543bf31d267dabb44fa9784497faa7c105d9c01f50527b1df314f4" exitCode=0 Jan 26 13:14:26 crc kubenswrapper[4881]: I0126 13:14:26.720976 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-54xff" event={"ID":"81f1ca9f-e8b5-477d-b628-8a60848a7fe2","Type":"ContainerDied","Data":"9777f391d4543bf31d267dabb44fa9784497faa7c105d9c01f50527b1df314f4"} Jan 26 13:14:28 crc kubenswrapper[4881]: I0126 13:14:28.209050 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-54xff" Jan 26 13:14:28 crc kubenswrapper[4881]: I0126 13:14:28.345904 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/81f1ca9f-e8b5-477d-b628-8a60848a7fe2-inventory\") pod \"81f1ca9f-e8b5-477d-b628-8a60848a7fe2\" (UID: \"81f1ca9f-e8b5-477d-b628-8a60848a7fe2\") " Jan 26 13:14:28 crc kubenswrapper[4881]: I0126 13:14:28.346078 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rwmjw\" (UniqueName: \"kubernetes.io/projected/81f1ca9f-e8b5-477d-b628-8a60848a7fe2-kube-api-access-rwmjw\") pod \"81f1ca9f-e8b5-477d-b628-8a60848a7fe2\" (UID: \"81f1ca9f-e8b5-477d-b628-8a60848a7fe2\") " Jan 26 13:14:28 crc kubenswrapper[4881]: I0126 13:14:28.346189 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/81f1ca9f-e8b5-477d-b628-8a60848a7fe2-ssh-key-openstack-edpm-ipam\") pod \"81f1ca9f-e8b5-477d-b628-8a60848a7fe2\" (UID: \"81f1ca9f-e8b5-477d-b628-8a60848a7fe2\") " Jan 26 13:14:28 crc kubenswrapper[4881]: I0126 13:14:28.351801 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81f1ca9f-e8b5-477d-b628-8a60848a7fe2-kube-api-access-rwmjw" (OuterVolumeSpecName: "kube-api-access-rwmjw") pod "81f1ca9f-e8b5-477d-b628-8a60848a7fe2" (UID: "81f1ca9f-e8b5-477d-b628-8a60848a7fe2"). InnerVolumeSpecName "kube-api-access-rwmjw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:14:28 crc kubenswrapper[4881]: I0126 13:14:28.371646 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81f1ca9f-e8b5-477d-b628-8a60848a7fe2-inventory" (OuterVolumeSpecName: "inventory") pod "81f1ca9f-e8b5-477d-b628-8a60848a7fe2" (UID: "81f1ca9f-e8b5-477d-b628-8a60848a7fe2"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:14:28 crc kubenswrapper[4881]: I0126 13:14:28.391838 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81f1ca9f-e8b5-477d-b628-8a60848a7fe2-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "81f1ca9f-e8b5-477d-b628-8a60848a7fe2" (UID: "81f1ca9f-e8b5-477d-b628-8a60848a7fe2"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:14:28 crc kubenswrapper[4881]: I0126 13:14:28.453543 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rwmjw\" (UniqueName: \"kubernetes.io/projected/81f1ca9f-e8b5-477d-b628-8a60848a7fe2-kube-api-access-rwmjw\") on node \"crc\" DevicePath \"\"" Jan 26 13:14:28 crc kubenswrapper[4881]: I0126 13:14:28.453585 4881 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/81f1ca9f-e8b5-477d-b628-8a60848a7fe2-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 13:14:28 crc kubenswrapper[4881]: I0126 13:14:28.453595 4881 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/81f1ca9f-e8b5-477d-b628-8a60848a7fe2-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 13:14:28 crc kubenswrapper[4881]: I0126 13:14:28.741687 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-54xff" event={"ID":"81f1ca9f-e8b5-477d-b628-8a60848a7fe2","Type":"ContainerDied","Data":"057c4e86bc9452206882a74431c5a95dc9a5f9dd6f5c494374fa4ce1953f7e6d"} Jan 26 13:14:28 crc kubenswrapper[4881]: I0126 13:14:28.742043 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="057c4e86bc9452206882a74431c5a95dc9a5f9dd6f5c494374fa4ce1953f7e6d" Jan 26 13:14:28 crc kubenswrapper[4881]: I0126 13:14:28.741778 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-54xff" Jan 26 13:14:28 crc kubenswrapper[4881]: I0126 13:14:28.848463 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7xzvw"] Jan 26 13:14:28 crc kubenswrapper[4881]: E0126 13:14:28.849021 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81f1ca9f-e8b5-477d-b628-8a60848a7fe2" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 26 13:14:28 crc kubenswrapper[4881]: I0126 13:14:28.849049 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="81f1ca9f-e8b5-477d-b628-8a60848a7fe2" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 26 13:14:28 crc kubenswrapper[4881]: I0126 13:14:28.849362 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="81f1ca9f-e8b5-477d-b628-8a60848a7fe2" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 26 13:14:28 crc kubenswrapper[4881]: I0126 13:14:28.850194 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7xzvw" Jan 26 13:14:28 crc kubenswrapper[4881]: I0126 13:14:28.852385 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 13:14:28 crc kubenswrapper[4881]: I0126 13:14:28.852456 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 13:14:28 crc kubenswrapper[4881]: I0126 13:14:28.852651 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 13:14:28 crc kubenswrapper[4881]: I0126 13:14:28.852811 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-2krn6" Jan 26 13:14:28 crc kubenswrapper[4881]: I0126 13:14:28.857739 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7xzvw"] Jan 26 13:14:28 crc kubenswrapper[4881]: I0126 13:14:28.963926 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/760f5f9c-04ad-4788-886d-8f301f2f487b-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-7xzvw\" (UID: \"760f5f9c-04ad-4788-886d-8f301f2f487b\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7xzvw" Jan 26 13:14:28 crc kubenswrapper[4881]: I0126 13:14:28.963988 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcgdn\" (UniqueName: \"kubernetes.io/projected/760f5f9c-04ad-4788-886d-8f301f2f487b-kube-api-access-rcgdn\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-7xzvw\" (UID: \"760f5f9c-04ad-4788-886d-8f301f2f487b\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7xzvw" Jan 26 13:14:28 crc kubenswrapper[4881]: I0126 13:14:28.964023 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/760f5f9c-04ad-4788-886d-8f301f2f487b-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-7xzvw\" (UID: \"760f5f9c-04ad-4788-886d-8f301f2f487b\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7xzvw" Jan 26 13:14:29 crc kubenswrapper[4881]: I0126 13:14:29.066240 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/760f5f9c-04ad-4788-886d-8f301f2f487b-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-7xzvw\" (UID: \"760f5f9c-04ad-4788-886d-8f301f2f487b\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7xzvw" Jan 26 13:14:29 crc kubenswrapper[4881]: I0126 13:14:29.066297 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rcgdn\" (UniqueName: \"kubernetes.io/projected/760f5f9c-04ad-4788-886d-8f301f2f487b-kube-api-access-rcgdn\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-7xzvw\" (UID: \"760f5f9c-04ad-4788-886d-8f301f2f487b\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7xzvw" Jan 26 13:14:29 crc kubenswrapper[4881]: I0126 13:14:29.066325 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/760f5f9c-04ad-4788-886d-8f301f2f487b-inventory\") pod 
\"reboot-os-edpm-deployment-openstack-edpm-ipam-7xzvw\" (UID: \"760f5f9c-04ad-4788-886d-8f301f2f487b\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7xzvw" Jan 26 13:14:29 crc kubenswrapper[4881]: I0126 13:14:29.070595 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/760f5f9c-04ad-4788-886d-8f301f2f487b-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-7xzvw\" (UID: \"760f5f9c-04ad-4788-886d-8f301f2f487b\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7xzvw" Jan 26 13:14:29 crc kubenswrapper[4881]: I0126 13:14:29.070804 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/760f5f9c-04ad-4788-886d-8f301f2f487b-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-7xzvw\" (UID: \"760f5f9c-04ad-4788-886d-8f301f2f487b\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7xzvw" Jan 26 13:14:29 crc kubenswrapper[4881]: I0126 13:14:29.082115 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rcgdn\" (UniqueName: \"kubernetes.io/projected/760f5f9c-04ad-4788-886d-8f301f2f487b-kube-api-access-rcgdn\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-7xzvw\" (UID: \"760f5f9c-04ad-4788-886d-8f301f2f487b\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7xzvw" Jan 26 13:14:29 crc kubenswrapper[4881]: I0126 13:14:29.178065 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7xzvw" Jan 26 13:14:29 crc kubenswrapper[4881]: W0126 13:14:29.755395 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod760f5f9c_04ad_4788_886d_8f301f2f487b.slice/crio-7369bbf6ffb92ed6df50f217fd356fcf8f8e58fc55a4ad1ee0c13834e0caa4f5 WatchSource:0}: Error finding container 7369bbf6ffb92ed6df50f217fd356fcf8f8e58fc55a4ad1ee0c13834e0caa4f5: Status 404 returned error can't find the container with id 7369bbf6ffb92ed6df50f217fd356fcf8f8e58fc55a4ad1ee0c13834e0caa4f5 Jan 26 13:14:29 crc kubenswrapper[4881]: I0126 13:14:29.758500 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7xzvw"] Jan 26 13:14:30 crc kubenswrapper[4881]: I0126 13:14:30.770467 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7xzvw" event={"ID":"760f5f9c-04ad-4788-886d-8f301f2f487b","Type":"ContainerStarted","Data":"1e84c3ef45a6aa2b6b07477e1b06a20ef47ec33420b17fbc674c21569756a649"} Jan 26 13:14:30 crc kubenswrapper[4881]: I0126 13:14:30.770797 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7xzvw" event={"ID":"760f5f9c-04ad-4788-886d-8f301f2f487b","Type":"ContainerStarted","Data":"7369bbf6ffb92ed6df50f217fd356fcf8f8e58fc55a4ad1ee0c13834e0caa4f5"} Jan 26 13:14:30 crc kubenswrapper[4881]: I0126 13:14:30.800562 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7xzvw" podStartSLOduration=2.352890784 podStartE2EDuration="2.800499759s" podCreationTimestamp="2026-01-26 13:14:28 +0000 UTC" firstStartedPulling="2026-01-26 13:14:29.761616822 +0000 UTC m=+2342.240926878" lastFinishedPulling="2026-01-26 13:14:30.209225807 +0000 UTC 
m=+2342.688535853" observedRunningTime="2026-01-26 13:14:30.8001371 +0000 UTC m=+2343.279447166" watchObservedRunningTime="2026-01-26 13:14:30.800499759 +0000 UTC m=+2343.279809825" Jan 26 13:14:40 crc kubenswrapper[4881]: I0126 13:14:40.887214 4881 generic.go:334] "Generic (PLEG): container finished" podID="760f5f9c-04ad-4788-886d-8f301f2f487b" containerID="1e84c3ef45a6aa2b6b07477e1b06a20ef47ec33420b17fbc674c21569756a649" exitCode=0 Jan 26 13:14:40 crc kubenswrapper[4881]: I0126 13:14:40.887355 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7xzvw" event={"ID":"760f5f9c-04ad-4788-886d-8f301f2f487b","Type":"ContainerDied","Data":"1e84c3ef45a6aa2b6b07477e1b06a20ef47ec33420b17fbc674c21569756a649"} Jan 26 13:14:42 crc kubenswrapper[4881]: I0126 13:14:42.391180 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7xzvw" Jan 26 13:14:42 crc kubenswrapper[4881]: I0126 13:14:42.471129 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rcgdn\" (UniqueName: \"kubernetes.io/projected/760f5f9c-04ad-4788-886d-8f301f2f487b-kube-api-access-rcgdn\") pod \"760f5f9c-04ad-4788-886d-8f301f2f487b\" (UID: \"760f5f9c-04ad-4788-886d-8f301f2f487b\") " Jan 26 13:14:42 crc kubenswrapper[4881]: I0126 13:14:42.471371 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/760f5f9c-04ad-4788-886d-8f301f2f487b-ssh-key-openstack-edpm-ipam\") pod \"760f5f9c-04ad-4788-886d-8f301f2f487b\" (UID: \"760f5f9c-04ad-4788-886d-8f301f2f487b\") " Jan 26 13:14:42 crc kubenswrapper[4881]: I0126 13:14:42.472416 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/760f5f9c-04ad-4788-886d-8f301f2f487b-inventory\") pod \"760f5f9c-04ad-4788-886d-8f301f2f487b\" (UID: \"760f5f9c-04ad-4788-886d-8f301f2f487b\") " Jan 26 13:14:42 crc kubenswrapper[4881]: I0126 13:14:42.482747 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/760f5f9c-04ad-4788-886d-8f301f2f487b-kube-api-access-rcgdn" (OuterVolumeSpecName: "kube-api-access-rcgdn") pod "760f5f9c-04ad-4788-886d-8f301f2f487b" (UID: "760f5f9c-04ad-4788-886d-8f301f2f487b"). InnerVolumeSpecName "kube-api-access-rcgdn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:14:42 crc kubenswrapper[4881]: I0126 13:14:42.523311 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/760f5f9c-04ad-4788-886d-8f301f2f487b-inventory" (OuterVolumeSpecName: "inventory") pod "760f5f9c-04ad-4788-886d-8f301f2f487b" (UID: "760f5f9c-04ad-4788-886d-8f301f2f487b"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:14:42 crc kubenswrapper[4881]: I0126 13:14:42.523884 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/760f5f9c-04ad-4788-886d-8f301f2f487b-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "760f5f9c-04ad-4788-886d-8f301f2f487b" (UID: "760f5f9c-04ad-4788-886d-8f301f2f487b"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:14:42 crc kubenswrapper[4881]: I0126 13:14:42.575195 4881 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/760f5f9c-04ad-4788-886d-8f301f2f487b-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 13:14:42 crc kubenswrapper[4881]: I0126 13:14:42.575230 4881 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/760f5f9c-04ad-4788-886d-8f301f2f487b-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 13:14:42 crc kubenswrapper[4881]: I0126 13:14:42.575241 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rcgdn\" (UniqueName: \"kubernetes.io/projected/760f5f9c-04ad-4788-886d-8f301f2f487b-kube-api-access-rcgdn\") on node \"crc\" DevicePath \"\"" Jan 26 13:14:42 crc kubenswrapper[4881]: I0126 13:14:42.916918 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7xzvw" event={"ID":"760f5f9c-04ad-4788-886d-8f301f2f487b","Type":"ContainerDied","Data":"7369bbf6ffb92ed6df50f217fd356fcf8f8e58fc55a4ad1ee0c13834e0caa4f5"} Jan 26 13:14:42 crc kubenswrapper[4881]: I0126 13:14:42.916957 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7369bbf6ffb92ed6df50f217fd356fcf8f8e58fc55a4ad1ee0c13834e0caa4f5" Jan 26 13:14:42 crc kubenswrapper[4881]: I0126 13:14:42.917037 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7xzvw" Jan 26 13:14:43 crc kubenswrapper[4881]: I0126 13:14:43.037669 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-42rrp"] Jan 26 13:14:43 crc kubenswrapper[4881]: E0126 13:14:43.038161 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="760f5f9c-04ad-4788-886d-8f301f2f487b" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 26 13:14:43 crc kubenswrapper[4881]: I0126 13:14:43.038183 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="760f5f9c-04ad-4788-886d-8f301f2f487b" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 26 13:14:43 crc kubenswrapper[4881]: I0126 13:14:43.038420 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="760f5f9c-04ad-4788-886d-8f301f2f487b" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 26 13:14:43 crc kubenswrapper[4881]: I0126 13:14:43.039267 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-42rrp" Jan 26 13:14:43 crc kubenswrapper[4881]: I0126 13:14:43.041503 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 13:14:43 crc kubenswrapper[4881]: I0126 13:14:43.042290 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Jan 26 13:14:43 crc kubenswrapper[4881]: I0126 13:14:43.042293 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 13:14:43 crc kubenswrapper[4881]: I0126 13:14:43.042817 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-2krn6" Jan 26 13:14:43 crc kubenswrapper[4881]: I0126 13:14:43.048830 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Jan 26 13:14:43 crc kubenswrapper[4881]: I0126 13:14:43.048981 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Jan 26 13:14:43 crc kubenswrapper[4881]: I0126 13:14:43.054177 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-42rrp"] Jan 26 13:14:43 crc kubenswrapper[4881]: I0126 13:14:43.154976 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Jan 26 13:14:43 crc kubenswrapper[4881]: I0126 13:14:43.155458 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 13:14:43 crc kubenswrapper[4881]: I0126 13:14:43.280980 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f847b90-9682-4b21-8ccc-646996b89f4f-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-42rrp\" (UID: \"0f847b90-9682-4b21-8ccc-646996b89f4f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-42rrp" Jan 26 13:14:43 crc kubenswrapper[4881]: I0126 13:14:43.281077 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/0f847b90-9682-4b21-8ccc-646996b89f4f-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-42rrp\" (UID: \"0f847b90-9682-4b21-8ccc-646996b89f4f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-42rrp" Jan 26 13:14:43 crc kubenswrapper[4881]: I0126 13:14:43.281117 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/0f847b90-9682-4b21-8ccc-646996b89f4f-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-42rrp\" (UID: \"0f847b90-9682-4b21-8ccc-646996b89f4f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-42rrp" Jan 26 13:14:43 crc kubenswrapper[4881]: I0126 13:14:43.281170 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/0f847b90-9682-4b21-8ccc-646996b89f4f-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-42rrp\" (UID: \"0f847b90-9682-4b21-8ccc-646996b89f4f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-42rrp" Jan 26 13:14:43 crc kubenswrapper[4881]: I0126 13:14:43.281278 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f847b90-9682-4b21-8ccc-646996b89f4f-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-42rrp\" (UID: \"0f847b90-9682-4b21-8ccc-646996b89f4f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-42rrp" Jan 26 13:14:43 crc kubenswrapper[4881]: I0126 13:14:43.281323 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f847b90-9682-4b21-8ccc-646996b89f4f-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-42rrp\" (UID: \"0f847b90-9682-4b21-8ccc-646996b89f4f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-42rrp" Jan 26 13:14:43 crc kubenswrapper[4881]: I0126 13:14:43.281362 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0f847b90-9682-4b21-8ccc-646996b89f4f-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-42rrp\" (UID: \"0f847b90-9682-4b21-8ccc-646996b89f4f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-42rrp" Jan 26 13:14:43 crc kubenswrapper[4881]: I0126 13:14:43.281390 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/0f847b90-9682-4b21-8ccc-646996b89f4f-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-42rrp\" (UID: \"0f847b90-9682-4b21-8ccc-646996b89f4f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-42rrp" Jan 26 13:14:43 crc kubenswrapper[4881]: I0126 13:14:43.281418 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/0f847b90-9682-4b21-8ccc-646996b89f4f-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-42rrp\" (UID: \"0f847b90-9682-4b21-8ccc-646996b89f4f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-42rrp" Jan 26 13:14:43 crc kubenswrapper[4881]: I0126 13:14:43.281478 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f847b90-9682-4b21-8ccc-646996b89f4f-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-42rrp\" (UID: \"0f847b90-9682-4b21-8ccc-646996b89f4f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-42rrp" Jan 26 13:14:43 crc kubenswrapper[4881]: I0126 13:14:43.281547 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2s4qd\" (UniqueName: \"kubernetes.io/projected/0f847b90-9682-4b21-8ccc-646996b89f4f-kube-api-access-2s4qd\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-42rrp\" (UID: \"0f847b90-9682-4b21-8ccc-646996b89f4f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-42rrp" Jan 26 13:14:43 crc kubenswrapper[4881]: I0126 13:14:43.281626 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f847b90-9682-4b21-8ccc-646996b89f4f-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-42rrp\" (UID: \"0f847b90-9682-4b21-8ccc-646996b89f4f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-42rrp" Jan 26 13:14:43 crc kubenswrapper[4881]: I0126 13:14:43.281724 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0f847b90-9682-4b21-8ccc-646996b89f4f-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-42rrp\" (UID: \"0f847b90-9682-4b21-8ccc-646996b89f4f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-42rrp" Jan 26 13:14:43 crc kubenswrapper[4881]: I0126 13:14:43.282556 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f847b90-9682-4b21-8ccc-646996b89f4f-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-42rrp\" (UID: \"0f847b90-9682-4b21-8ccc-646996b89f4f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-42rrp" Jan 26 13:14:43 crc kubenswrapper[4881]: I0126 13:14:43.384202 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f847b90-9682-4b21-8ccc-646996b89f4f-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-42rrp\" (UID: \"0f847b90-9682-4b21-8ccc-646996b89f4f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-42rrp" Jan 26 13:14:43 crc kubenswrapper[4881]: I0126 13:14:43.384291 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0f847b90-9682-4b21-8ccc-646996b89f4f-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-42rrp\" (UID: \"0f847b90-9682-4b21-8ccc-646996b89f4f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-42rrp" Jan 26 13:14:43 crc kubenswrapper[4881]: I0126 13:14:43.384332 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f847b90-9682-4b21-8ccc-646996b89f4f-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-42rrp\" (UID: \"0f847b90-9682-4b21-8ccc-646996b89f4f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-42rrp" Jan 26 13:14:43 crc kubenswrapper[4881]: I0126 13:14:43.384375 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f847b90-9682-4b21-8ccc-646996b89f4f-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-42rrp\" (UID: \"0f847b90-9682-4b21-8ccc-646996b89f4f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-42rrp" Jan 26 13:14:43 crc 
kubenswrapper[4881]: I0126 13:14:43.384418 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/0f847b90-9682-4b21-8ccc-646996b89f4f-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-42rrp\" (UID: \"0f847b90-9682-4b21-8ccc-646996b89f4f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-42rrp" Jan 26 13:14:43 crc kubenswrapper[4881]: I0126 13:14:43.384451 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/0f847b90-9682-4b21-8ccc-646996b89f4f-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-42rrp\" (UID: \"0f847b90-9682-4b21-8ccc-646996b89f4f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-42rrp" Jan 26 13:14:43 crc kubenswrapper[4881]: I0126 13:14:43.384485 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f847b90-9682-4b21-8ccc-646996b89f4f-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-42rrp\" (UID: \"0f847b90-9682-4b21-8ccc-646996b89f4f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-42rrp" Jan 26 13:14:43 crc kubenswrapper[4881]: I0126 13:14:43.384565 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f847b90-9682-4b21-8ccc-646996b89f4f-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-42rrp\" (UID: \"0f847b90-9682-4b21-8ccc-646996b89f4f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-42rrp" Jan 26 13:14:43 crc kubenswrapper[4881]: I0126 13:14:43.384599 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f847b90-9682-4b21-8ccc-646996b89f4f-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-42rrp\" (UID: \"0f847b90-9682-4b21-8ccc-646996b89f4f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-42rrp" Jan 26 13:14:43 crc kubenswrapper[4881]: I0126 13:14:43.384629 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0f847b90-9682-4b21-8ccc-646996b89f4f-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-42rrp\" (UID: \"0f847b90-9682-4b21-8ccc-646996b89f4f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-42rrp" Jan 26 13:14:43 crc kubenswrapper[4881]: I0126 13:14:43.384655 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/0f847b90-9682-4b21-8ccc-646996b89f4f-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-42rrp\" (UID: \"0f847b90-9682-4b21-8ccc-646996b89f4f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-42rrp" Jan 26 13:14:43 crc kubenswrapper[4881]: I0126 13:14:43.384678 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/0f847b90-9682-4b21-8ccc-646996b89f4f-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-42rrp\" (UID: \"0f847b90-9682-4b21-8ccc-646996b89f4f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-42rrp" Jan 26 13:14:43 crc kubenswrapper[4881]: I0126 13:14:43.384711 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f847b90-9682-4b21-8ccc-646996b89f4f-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-42rrp\" (UID: \"0f847b90-9682-4b21-8ccc-646996b89f4f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-42rrp" Jan 26 13:14:43 crc kubenswrapper[4881]: I0126 13:14:43.384748 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2s4qd\" (UniqueName: \"kubernetes.io/projected/0f847b90-9682-4b21-8ccc-646996b89f4f-kube-api-access-2s4qd\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-42rrp\" (UID: \"0f847b90-9682-4b21-8ccc-646996b89f4f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-42rrp" Jan 26 13:14:43 crc kubenswrapper[4881]: I0126 13:14:43.390256 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f847b90-9682-4b21-8ccc-646996b89f4f-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-42rrp\" (UID: \"0f847b90-9682-4b21-8ccc-646996b89f4f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-42rrp" Jan 26 13:14:43 crc kubenswrapper[4881]: I0126 13:14:43.391163 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/0f847b90-9682-4b21-8ccc-646996b89f4f-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-42rrp\" (UID: \"0f847b90-9682-4b21-8ccc-646996b89f4f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-42rrp" Jan 26 13:14:43 crc kubenswrapper[4881]: I0126 13:14:43.391377 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/0f847b90-9682-4b21-8ccc-646996b89f4f-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-42rrp\" (UID: \"0f847b90-9682-4b21-8ccc-646996b89f4f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-42rrp" Jan 26 13:14:43 crc kubenswrapper[4881]: I0126 13:14:43.392216 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0f847b90-9682-4b21-8ccc-646996b89f4f-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-42rrp\" (UID: \"0f847b90-9682-4b21-8ccc-646996b89f4f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-42rrp" Jan 26 13:14:43 crc kubenswrapper[4881]: I0126 13:14:43.392596 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0f847b90-9682-4b21-8ccc-646996b89f4f-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-42rrp\" (UID: \"0f847b90-9682-4b21-8ccc-646996b89f4f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-42rrp" Jan 26 
13:14:43 crc kubenswrapper[4881]: I0126 13:14:43.392980 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f847b90-9682-4b21-8ccc-646996b89f4f-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-42rrp\" (UID: \"0f847b90-9682-4b21-8ccc-646996b89f4f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-42rrp" Jan 26 13:14:43 crc kubenswrapper[4881]: I0126 13:14:43.394032 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f847b90-9682-4b21-8ccc-646996b89f4f-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-42rrp\" (UID: \"0f847b90-9682-4b21-8ccc-646996b89f4f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-42rrp" Jan 26 13:14:43 crc kubenswrapper[4881]: I0126 13:14:43.394359 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f847b90-9682-4b21-8ccc-646996b89f4f-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-42rrp\" (UID: \"0f847b90-9682-4b21-8ccc-646996b89f4f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-42rrp" Jan 26 13:14:43 crc kubenswrapper[4881]: I0126 13:14:43.394610 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f847b90-9682-4b21-8ccc-646996b89f4f-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-42rrp\" (UID: \"0f847b90-9682-4b21-8ccc-646996b89f4f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-42rrp" Jan 26 13:14:43 crc kubenswrapper[4881]: I0126 13:14:43.396106 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/0f847b90-9682-4b21-8ccc-646996b89f4f-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-42rrp\" (UID: \"0f847b90-9682-4b21-8ccc-646996b89f4f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-42rrp" Jan 26 13:14:43 crc kubenswrapper[4881]: I0126 13:14:43.397229 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f847b90-9682-4b21-8ccc-646996b89f4f-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-42rrp\" (UID: \"0f847b90-9682-4b21-8ccc-646996b89f4f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-42rrp" Jan 26 13:14:43 crc kubenswrapper[4881]: I0126 13:14:43.398728 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/0f847b90-9682-4b21-8ccc-646996b89f4f-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-42rrp\" (UID: \"0f847b90-9682-4b21-8ccc-646996b89f4f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-42rrp" Jan 26 13:14:43 crc kubenswrapper[4881]: I0126 13:14:43.404249 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f847b90-9682-4b21-8ccc-646996b89f4f-bootstrap-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-42rrp\" (UID: \"0f847b90-9682-4b21-8ccc-646996b89f4f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-42rrp" Jan 26 13:14:43 crc kubenswrapper[4881]: I0126 13:14:43.407033 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2s4qd\" (UniqueName: \"kubernetes.io/projected/0f847b90-9682-4b21-8ccc-646996b89f4f-kube-api-access-2s4qd\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-42rrp\" (UID: \"0f847b90-9682-4b21-8ccc-646996b89f4f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-42rrp" Jan 26 13:14:43 crc kubenswrapper[4881]: I0126 13:14:43.461706 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-42rrp" Jan 26 13:14:44 crc kubenswrapper[4881]: I0126 13:14:44.039008 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-42rrp"] Jan 26 13:14:44 crc kubenswrapper[4881]: I0126 13:14:44.945306 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-42rrp" event={"ID":"0f847b90-9682-4b21-8ccc-646996b89f4f","Type":"ContainerStarted","Data":"7facbe2800857fef610d6cfc9d9398bcca41a8b4db0fe2e62ae6aaf82dbae25d"} Jan 26 13:14:44 crc kubenswrapper[4881]: I0126 13:14:44.945714 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-42rrp" event={"ID":"0f847b90-9682-4b21-8ccc-646996b89f4f","Type":"ContainerStarted","Data":"5fb24df616483228d9b6d5a358ac6f66e3d9a730b2ad13c559c8f242b5d317d9"} Jan 26 13:14:44 crc kubenswrapper[4881]: I0126 13:14:44.979132 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-42rrp" podStartSLOduration=1.483572273 podStartE2EDuration="1.979106322s" podCreationTimestamp="2026-01-26 13:14:43 +0000 UTC" firstStartedPulling="2026-01-26 13:14:44.043950291 +0000 UTC m=+2356.523260317" lastFinishedPulling="2026-01-26 13:14:44.53948429 +0000 UTC m=+2357.018794366" observedRunningTime="2026-01-26 13:14:44.970995715 +0000 UTC m=+2357.450305751" watchObservedRunningTime="2026-01-26 13:14:44.979106322 +0000 UTC m=+2357.458416378" Jan 26 13:14:54 crc kubenswrapper[4881]: I0126 13:14:54.789441 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fwlbz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 13:14:54 crc kubenswrapper[4881]: I0126 13:14:54.790108 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 13:15:00 crc kubenswrapper[4881]: I0126 13:15:00.141001 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490555-vrkwp"] Jan 26 13:15:00 crc kubenswrapper[4881]: I0126 13:15:00.143341 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490555-vrkwp" Jan 26 13:15:00 crc kubenswrapper[4881]: I0126 13:15:00.146377 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 26 13:15:00 crc kubenswrapper[4881]: I0126 13:15:00.147422 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 26 13:15:00 crc kubenswrapper[4881]: I0126 13:15:00.152207 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490555-vrkwp"] Jan 26 13:15:00 crc kubenswrapper[4881]: I0126 13:15:00.253476 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzmql\" (UniqueName: \"kubernetes.io/projected/9f415aaa-154c-4da9-8dd2-a95a009684f6-kube-api-access-fzmql\") pod \"collect-profiles-29490555-vrkwp\" (UID: \"9f415aaa-154c-4da9-8dd2-a95a009684f6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490555-vrkwp" Jan 26 13:15:00 crc kubenswrapper[4881]: I0126 13:15:00.253594 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9f415aaa-154c-4da9-8dd2-a95a009684f6-config-volume\") pod \"collect-profiles-29490555-vrkwp\" (UID: \"9f415aaa-154c-4da9-8dd2-a95a009684f6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490555-vrkwp" Jan 26 13:15:00 crc kubenswrapper[4881]: I0126 13:15:00.253625 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9f415aaa-154c-4da9-8dd2-a95a009684f6-secret-volume\") pod \"collect-profiles-29490555-vrkwp\" (UID: \"9f415aaa-154c-4da9-8dd2-a95a009684f6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490555-vrkwp" Jan 26 13:15:00 crc kubenswrapper[4881]: I0126 13:15:00.356311 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzmql\" (UniqueName: \"kubernetes.io/projected/9f415aaa-154c-4da9-8dd2-a95a009684f6-kube-api-access-fzmql\") pod \"collect-profiles-29490555-vrkwp\" (UID: \"9f415aaa-154c-4da9-8dd2-a95a009684f6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490555-vrkwp" Jan 26 13:15:00 crc kubenswrapper[4881]: I0126 13:15:00.357531 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9f415aaa-154c-4da9-8dd2-a95a009684f6-config-volume\") pod \"collect-profiles-29490555-vrkwp\" (UID: \"9f415aaa-154c-4da9-8dd2-a95a009684f6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490555-vrkwp" Jan 26 13:15:00 crc kubenswrapper[4881]: I0126 13:15:00.357606 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9f415aaa-154c-4da9-8dd2-a95a009684f6-secret-volume\") pod \"collect-profiles-29490555-vrkwp\" (UID: \"9f415aaa-154c-4da9-8dd2-a95a009684f6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490555-vrkwp" Jan 26 13:15:00 crc kubenswrapper[4881]: I0126 13:15:00.359056 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9f415aaa-154c-4da9-8dd2-a95a009684f6-config-volume\") pod 
\"collect-profiles-29490555-vrkwp\" (UID: \"9f415aaa-154c-4da9-8dd2-a95a009684f6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490555-vrkwp" Jan 26 13:15:00 crc kubenswrapper[4881]: I0126 13:15:00.368489 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9f415aaa-154c-4da9-8dd2-a95a009684f6-secret-volume\") pod \"collect-profiles-29490555-vrkwp\" (UID: \"9f415aaa-154c-4da9-8dd2-a95a009684f6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490555-vrkwp" Jan 26 13:15:00 crc kubenswrapper[4881]: I0126 13:15:00.376908 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzmql\" (UniqueName: \"kubernetes.io/projected/9f415aaa-154c-4da9-8dd2-a95a009684f6-kube-api-access-fzmql\") pod \"collect-profiles-29490555-vrkwp\" (UID: \"9f415aaa-154c-4da9-8dd2-a95a009684f6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490555-vrkwp" Jan 26 13:15:00 crc kubenswrapper[4881]: I0126 13:15:00.473600 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490555-vrkwp" Jan 26 13:15:00 crc kubenswrapper[4881]: I0126 13:15:00.938190 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490555-vrkwp"] Jan 26 13:15:00 crc kubenswrapper[4881]: W0126 13:15:00.944852 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9f415aaa_154c_4da9_8dd2_a95a009684f6.slice/crio-9b263056850addbe2ffa57852a70b290a82f4040ab19ce54e0cfa23030c76cd4 WatchSource:0}: Error finding container 9b263056850addbe2ffa57852a70b290a82f4040ab19ce54e0cfa23030c76cd4: Status 404 returned error can't find the container with id 9b263056850addbe2ffa57852a70b290a82f4040ab19ce54e0cfa23030c76cd4 Jan 26 13:15:01 crc kubenswrapper[4881]: I0126 13:15:01.100430 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490555-vrkwp" event={"ID":"9f415aaa-154c-4da9-8dd2-a95a009684f6","Type":"ContainerStarted","Data":"9b263056850addbe2ffa57852a70b290a82f4040ab19ce54e0cfa23030c76cd4"} Jan 26 13:15:02 crc kubenswrapper[4881]: I0126 13:15:02.110665 4881 generic.go:334] "Generic (PLEG): container finished" podID="9f415aaa-154c-4da9-8dd2-a95a009684f6" containerID="28ef0bef1ab055753a7041f45fd552de3dd883e17ca26808138dd379b43361a7" exitCode=0 Jan 26 13:15:02 crc kubenswrapper[4881]: I0126 13:15:02.110931 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490555-vrkwp" event={"ID":"9f415aaa-154c-4da9-8dd2-a95a009684f6","Type":"ContainerDied","Data":"28ef0bef1ab055753a7041f45fd552de3dd883e17ca26808138dd379b43361a7"} Jan 26 13:15:03 crc kubenswrapper[4881]: I0126 13:15:03.455816 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490555-vrkwp" Jan 26 13:15:03 crc kubenswrapper[4881]: I0126 13:15:03.635208 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9f415aaa-154c-4da9-8dd2-a95a009684f6-secret-volume\") pod \"9f415aaa-154c-4da9-8dd2-a95a009684f6\" (UID: \"9f415aaa-154c-4da9-8dd2-a95a009684f6\") " Jan 26 13:15:03 crc kubenswrapper[4881]: I0126 13:15:03.635755 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9f415aaa-154c-4da9-8dd2-a95a009684f6-config-volume\") pod \"9f415aaa-154c-4da9-8dd2-a95a009684f6\" (UID: \"9f415aaa-154c-4da9-8dd2-a95a009684f6\") " Jan 26 13:15:03 crc kubenswrapper[4881]: I0126 13:15:03.635949 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fzmql\" (UniqueName: \"kubernetes.io/projected/9f415aaa-154c-4da9-8dd2-a95a009684f6-kube-api-access-fzmql\") pod \"9f415aaa-154c-4da9-8dd2-a95a009684f6\" (UID: \"9f415aaa-154c-4da9-8dd2-a95a009684f6\") " Jan 26 13:15:03 crc kubenswrapper[4881]: I0126 13:15:03.636599 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f415aaa-154c-4da9-8dd2-a95a009684f6-config-volume" (OuterVolumeSpecName: "config-volume") pod "9f415aaa-154c-4da9-8dd2-a95a009684f6" (UID: "9f415aaa-154c-4da9-8dd2-a95a009684f6"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:15:03 crc kubenswrapper[4881]: I0126 13:15:03.637052 4881 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9f415aaa-154c-4da9-8dd2-a95a009684f6-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 13:15:03 crc kubenswrapper[4881]: I0126 13:15:03.642743 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f415aaa-154c-4da9-8dd2-a95a009684f6-kube-api-access-fzmql" (OuterVolumeSpecName: "kube-api-access-fzmql") pod "9f415aaa-154c-4da9-8dd2-a95a009684f6" (UID: "9f415aaa-154c-4da9-8dd2-a95a009684f6"). InnerVolumeSpecName "kube-api-access-fzmql". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:15:03 crc kubenswrapper[4881]: I0126 13:15:03.643588 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f415aaa-154c-4da9-8dd2-a95a009684f6-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "9f415aaa-154c-4da9-8dd2-a95a009684f6" (UID: "9f415aaa-154c-4da9-8dd2-a95a009684f6"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:15:03 crc kubenswrapper[4881]: I0126 13:15:03.739059 4881 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9f415aaa-154c-4da9-8dd2-a95a009684f6-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 26 13:15:03 crc kubenswrapper[4881]: I0126 13:15:03.739122 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fzmql\" (UniqueName: \"kubernetes.io/projected/9f415aaa-154c-4da9-8dd2-a95a009684f6-kube-api-access-fzmql\") on node \"crc\" DevicePath \"\"" Jan 26 13:15:04 crc kubenswrapper[4881]: I0126 13:15:04.130457 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490555-vrkwp" event={"ID":"9f415aaa-154c-4da9-8dd2-a95a009684f6","Type":"ContainerDied","Data":"9b263056850addbe2ffa57852a70b290a82f4040ab19ce54e0cfa23030c76cd4"} Jan 26 13:15:04 crc kubenswrapper[4881]: I0126 13:15:04.130503 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b263056850addbe2ffa57852a70b290a82f4040ab19ce54e0cfa23030c76cd4" Jan 26 13:15:04 crc kubenswrapper[4881]: I0126 13:15:04.130538 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490555-vrkwp" Jan 26 13:15:04 crc kubenswrapper[4881]: I0126 13:15:04.545433 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490510-6h9v5"] Jan 26 13:15:04 crc kubenswrapper[4881]: I0126 13:15:04.556744 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490510-6h9v5"] Jan 26 13:15:06 crc kubenswrapper[4881]: I0126 13:15:06.096830 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3bf4018e-9078-4293-8f8e-f6ab7567943a" path="/var/lib/kubelet/pods/3bf4018e-9078-4293-8f8e-f6ab7567943a/volumes" Jan 26 13:15:24 crc kubenswrapper[4881]: I0126 13:15:24.790071 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fwlbz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 13:15:24 crc kubenswrapper[4881]: I0126 13:15:24.790962 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 13:15:26 crc kubenswrapper[4881]: I0126 13:15:26.352293 4881 generic.go:334] "Generic (PLEG): container finished" podID="0f847b90-9682-4b21-8ccc-646996b89f4f" containerID="7facbe2800857fef610d6cfc9d9398bcca41a8b4db0fe2e62ae6aaf82dbae25d" exitCode=0 Jan 26 13:15:26 crc kubenswrapper[4881]: I0126 13:15:26.352361 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-42rrp" event={"ID":"0f847b90-9682-4b21-8ccc-646996b89f4f","Type":"ContainerDied","Data":"7facbe2800857fef610d6cfc9d9398bcca41a8b4db0fe2e62ae6aaf82dbae25d"} Jan 26 13:15:27 crc kubenswrapper[4881]: I0126 13:15:27.760063 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-42rrp" Jan 26 13:15:27 crc kubenswrapper[4881]: I0126 13:15:27.901735 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f847b90-9682-4b21-8ccc-646996b89f4f-nova-combined-ca-bundle\") pod \"0f847b90-9682-4b21-8ccc-646996b89f4f\" (UID: \"0f847b90-9682-4b21-8ccc-646996b89f4f\") " Jan 26 13:15:27 crc kubenswrapper[4881]: I0126 13:15:27.901871 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0f847b90-9682-4b21-8ccc-646996b89f4f-ssh-key-openstack-edpm-ipam\") pod \"0f847b90-9682-4b21-8ccc-646996b89f4f\" (UID: \"0f847b90-9682-4b21-8ccc-646996b89f4f\") " Jan 26 13:15:27 crc kubenswrapper[4881]: I0126 13:15:27.901927 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/0f847b90-9682-4b21-8ccc-646996b89f4f-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"0f847b90-9682-4b21-8ccc-646996b89f4f\" (UID: \"0f847b90-9682-4b21-8ccc-646996b89f4f\") " Jan 26 13:15:27 crc kubenswrapper[4881]: I0126 13:15:27.901973 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f847b90-9682-4b21-8ccc-646996b89f4f-ovn-combined-ca-bundle\") pod \"0f847b90-9682-4b21-8ccc-646996b89f4f\" (UID: \"0f847b90-9682-4b21-8ccc-646996b89f4f\") " Jan 26 13:15:27 crc kubenswrapper[4881]: I0126 13:15:27.902100 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f847b90-9682-4b21-8ccc-646996b89f4f-telemetry-combined-ca-bundle\") pod \"0f847b90-9682-4b21-8ccc-646996b89f4f\" (UID: \"0f847b90-9682-4b21-8ccc-646996b89f4f\") " Jan 26 13:15:27 crc kubenswrapper[4881]: I0126 13:15:27.902165 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f847b90-9682-4b21-8ccc-646996b89f4f-libvirt-combined-ca-bundle\") pod \"0f847b90-9682-4b21-8ccc-646996b89f4f\" (UID: \"0f847b90-9682-4b21-8ccc-646996b89f4f\") " Jan 26 13:15:27 crc kubenswrapper[4881]: I0126 13:15:27.902207 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/0f847b90-9682-4b21-8ccc-646996b89f4f-openstack-edpm-ipam-ovn-default-certs-0\") pod \"0f847b90-9682-4b21-8ccc-646996b89f4f\" (UID: \"0f847b90-9682-4b21-8ccc-646996b89f4f\") " Jan 26 13:15:27 crc kubenswrapper[4881]: I0126 13:15:27.902989 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0f847b90-9682-4b21-8ccc-646996b89f4f-inventory\") pod \"0f847b90-9682-4b21-8ccc-646996b89f4f\" (UID: \"0f847b90-9682-4b21-8ccc-646996b89f4f\") " Jan 26 13:15:27 crc kubenswrapper[4881]: I0126 13:15:27.903140 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f847b90-9682-4b21-8ccc-646996b89f4f-bootstrap-combined-ca-bundle\") pod \"0f847b90-9682-4b21-8ccc-646996b89f4f\" (UID: \"0f847b90-9682-4b21-8ccc-646996b89f4f\") " Jan 26 13:15:27 crc kubenswrapper[4881]: I0126 
13:15:27.903243 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/0f847b90-9682-4b21-8ccc-646996b89f4f-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"0f847b90-9682-4b21-8ccc-646996b89f4f\" (UID: \"0f847b90-9682-4b21-8ccc-646996b89f4f\") " Jan 26 13:15:27 crc kubenswrapper[4881]: I0126 13:15:27.903702 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/0f847b90-9682-4b21-8ccc-646996b89f4f-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"0f847b90-9682-4b21-8ccc-646996b89f4f\" (UID: \"0f847b90-9682-4b21-8ccc-646996b89f4f\") " Jan 26 13:15:27 crc kubenswrapper[4881]: I0126 13:15:27.903775 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f847b90-9682-4b21-8ccc-646996b89f4f-neutron-metadata-combined-ca-bundle\") pod \"0f847b90-9682-4b21-8ccc-646996b89f4f\" (UID: \"0f847b90-9682-4b21-8ccc-646996b89f4f\") " Jan 26 13:15:27 crc kubenswrapper[4881]: I0126 13:15:27.903824 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2s4qd\" (UniqueName: \"kubernetes.io/projected/0f847b90-9682-4b21-8ccc-646996b89f4f-kube-api-access-2s4qd\") pod \"0f847b90-9682-4b21-8ccc-646996b89f4f\" (UID: \"0f847b90-9682-4b21-8ccc-646996b89f4f\") " Jan 26 13:15:27 crc kubenswrapper[4881]: I0126 13:15:27.903853 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f847b90-9682-4b21-8ccc-646996b89f4f-repo-setup-combined-ca-bundle\") pod \"0f847b90-9682-4b21-8ccc-646996b89f4f\" (UID: \"0f847b90-9682-4b21-8ccc-646996b89f4f\") " Jan 26 13:15:27 crc kubenswrapper[4881]: I0126 13:15:27.908857 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f847b90-9682-4b21-8ccc-646996b89f4f-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "0f847b90-9682-4b21-8ccc-646996b89f4f" (UID: "0f847b90-9682-4b21-8ccc-646996b89f4f"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:15:27 crc kubenswrapper[4881]: I0126 13:15:27.910939 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f847b90-9682-4b21-8ccc-646996b89f4f-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "0f847b90-9682-4b21-8ccc-646996b89f4f" (UID: "0f847b90-9682-4b21-8ccc-646996b89f4f"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:15:27 crc kubenswrapper[4881]: I0126 13:15:27.911055 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f847b90-9682-4b21-8ccc-646996b89f4f-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "0f847b90-9682-4b21-8ccc-646996b89f4f" (UID: "0f847b90-9682-4b21-8ccc-646996b89f4f"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:15:27 crc kubenswrapper[4881]: I0126 13:15:27.911081 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f847b90-9682-4b21-8ccc-646996b89f4f-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "0f847b90-9682-4b21-8ccc-646996b89f4f" (UID: "0f847b90-9682-4b21-8ccc-646996b89f4f"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:15:27 crc kubenswrapper[4881]: I0126 13:15:27.911179 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f847b90-9682-4b21-8ccc-646996b89f4f-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "0f847b90-9682-4b21-8ccc-646996b89f4f" (UID: "0f847b90-9682-4b21-8ccc-646996b89f4f"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:15:27 crc kubenswrapper[4881]: I0126 13:15:27.911596 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f847b90-9682-4b21-8ccc-646996b89f4f-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "0f847b90-9682-4b21-8ccc-646996b89f4f" (UID: "0f847b90-9682-4b21-8ccc-646996b89f4f"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:15:27 crc kubenswrapper[4881]: I0126 13:15:27.912198 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f847b90-9682-4b21-8ccc-646996b89f4f-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "0f847b90-9682-4b21-8ccc-646996b89f4f" (UID: "0f847b90-9682-4b21-8ccc-646996b89f4f"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:15:27 crc kubenswrapper[4881]: I0126 13:15:27.913093 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f847b90-9682-4b21-8ccc-646996b89f4f-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "0f847b90-9682-4b21-8ccc-646996b89f4f" (UID: "0f847b90-9682-4b21-8ccc-646996b89f4f"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:15:27 crc kubenswrapper[4881]: I0126 13:15:27.913137 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f847b90-9682-4b21-8ccc-646996b89f4f-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "0f847b90-9682-4b21-8ccc-646996b89f4f" (UID: "0f847b90-9682-4b21-8ccc-646996b89f4f"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:15:27 crc kubenswrapper[4881]: I0126 13:15:27.913406 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f847b90-9682-4b21-8ccc-646996b89f4f-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "0f847b90-9682-4b21-8ccc-646996b89f4f" (UID: "0f847b90-9682-4b21-8ccc-646996b89f4f"). InnerVolumeSpecName "nova-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:15:27 crc kubenswrapper[4881]: I0126 13:15:27.914782 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f847b90-9682-4b21-8ccc-646996b89f4f-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "0f847b90-9682-4b21-8ccc-646996b89f4f" (UID: "0f847b90-9682-4b21-8ccc-646996b89f4f"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:15:27 crc kubenswrapper[4881]: I0126 13:15:27.915219 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f847b90-9682-4b21-8ccc-646996b89f4f-kube-api-access-2s4qd" (OuterVolumeSpecName: "kube-api-access-2s4qd") pod "0f847b90-9682-4b21-8ccc-646996b89f4f" (UID: "0f847b90-9682-4b21-8ccc-646996b89f4f"). InnerVolumeSpecName "kube-api-access-2s4qd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:15:27 crc kubenswrapper[4881]: E0126 13:15:27.960959 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0f847b90-9682-4b21-8ccc-646996b89f4f-inventory podName:0f847b90-9682-4b21-8ccc-646996b89f4f nodeName:}" failed. No retries permitted until 2026-01-26 13:15:28.460925911 +0000 UTC m=+2400.940235937 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "inventory" (UniqueName: "kubernetes.io/secret/0f847b90-9682-4b21-8ccc-646996b89f4f-inventory") pod "0f847b90-9682-4b21-8ccc-646996b89f4f" (UID: "0f847b90-9682-4b21-8ccc-646996b89f4f") : error deleting /var/lib/kubelet/pods/0f847b90-9682-4b21-8ccc-646996b89f4f/volume-subpaths: remove /var/lib/kubelet/pods/0f847b90-9682-4b21-8ccc-646996b89f4f/volume-subpaths: no such file or directory Jan 26 13:15:27 crc kubenswrapper[4881]: I0126 13:15:27.965719 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f847b90-9682-4b21-8ccc-646996b89f4f-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "0f847b90-9682-4b21-8ccc-646996b89f4f" (UID: "0f847b90-9682-4b21-8ccc-646996b89f4f"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:15:28 crc kubenswrapper[4881]: I0126 13:15:28.009263 4881 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f847b90-9682-4b21-8ccc-646996b89f4f-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 13:15:28 crc kubenswrapper[4881]: I0126 13:15:28.009333 4881 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/0f847b90-9682-4b21-8ccc-646996b89f4f-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 26 13:15:28 crc kubenswrapper[4881]: I0126 13:15:28.009364 4881 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f847b90-9682-4b21-8ccc-646996b89f4f-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 13:15:28 crc kubenswrapper[4881]: I0126 13:15:28.009394 4881 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/0f847b90-9682-4b21-8ccc-646996b89f4f-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 26 13:15:28 crc kubenswrapper[4881]: I0126 13:15:28.009425 4881 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/0f847b90-9682-4b21-8ccc-646996b89f4f-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 26 13:15:28 crc kubenswrapper[4881]: I0126 13:15:28.009453 4881 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f847b90-9682-4b21-8ccc-646996b89f4f-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 13:15:28 crc kubenswrapper[4881]: I0126 13:15:28.009568 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2s4qd\" (UniqueName: \"kubernetes.io/projected/0f847b90-9682-4b21-8ccc-646996b89f4f-kube-api-access-2s4qd\") on node \"crc\" DevicePath \"\"" Jan 26 13:15:28 crc kubenswrapper[4881]: I0126 13:15:28.009602 4881 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f847b90-9682-4b21-8ccc-646996b89f4f-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 13:15:28 crc kubenswrapper[4881]: I0126 13:15:28.009629 4881 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f847b90-9682-4b21-8ccc-646996b89f4f-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 13:15:28 crc kubenswrapper[4881]: I0126 13:15:28.009654 4881 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0f847b90-9682-4b21-8ccc-646996b89f4f-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 13:15:28 crc kubenswrapper[4881]: I0126 13:15:28.009681 4881 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/0f847b90-9682-4b21-8ccc-646996b89f4f-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 26 13:15:28 crc kubenswrapper[4881]: I0126 13:15:28.009707 4881 reconciler_common.go:293] "Volume detached for volume 
\"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f847b90-9682-4b21-8ccc-646996b89f4f-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 13:15:28 crc kubenswrapper[4881]: I0126 13:15:28.009732 4881 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f847b90-9682-4b21-8ccc-646996b89f4f-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 13:15:28 crc kubenswrapper[4881]: I0126 13:15:28.374262 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-42rrp" event={"ID":"0f847b90-9682-4b21-8ccc-646996b89f4f","Type":"ContainerDied","Data":"5fb24df616483228d9b6d5a358ac6f66e3d9a730b2ad13c559c8f242b5d317d9"} Jan 26 13:15:28 crc kubenswrapper[4881]: I0126 13:15:28.374610 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5fb24df616483228d9b6d5a358ac6f66e3d9a730b2ad13c559c8f242b5d317d9" Jan 26 13:15:28 crc kubenswrapper[4881]: I0126 13:15:28.374344 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-42rrp" Jan 26 13:15:28 crc kubenswrapper[4881]: I0126 13:15:28.498457 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-ltf5b"] Jan 26 13:15:28 crc kubenswrapper[4881]: E0126 13:15:28.498878 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f415aaa-154c-4da9-8dd2-a95a009684f6" containerName="collect-profiles" Jan 26 13:15:28 crc kubenswrapper[4881]: I0126 13:15:28.498902 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f415aaa-154c-4da9-8dd2-a95a009684f6" containerName="collect-profiles" Jan 26 13:15:28 crc kubenswrapper[4881]: E0126 13:15:28.498930 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f847b90-9682-4b21-8ccc-646996b89f4f" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 26 13:15:28 crc kubenswrapper[4881]: I0126 13:15:28.498940 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f847b90-9682-4b21-8ccc-646996b89f4f" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 26 13:15:28 crc kubenswrapper[4881]: I0126 13:15:28.499214 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f847b90-9682-4b21-8ccc-646996b89f4f" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 26 13:15:28 crc kubenswrapper[4881]: I0126 13:15:28.499249 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f415aaa-154c-4da9-8dd2-a95a009684f6" containerName="collect-profiles" Jan 26 13:15:28 crc kubenswrapper[4881]: I0126 13:15:28.499975 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ltf5b" Jan 26 13:15:28 crc kubenswrapper[4881]: I0126 13:15:28.504696 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Jan 26 13:15:28 crc kubenswrapper[4881]: I0126 13:15:28.522850 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0f847b90-9682-4b21-8ccc-646996b89f4f-inventory\") pod \"0f847b90-9682-4b21-8ccc-646996b89f4f\" (UID: \"0f847b90-9682-4b21-8ccc-646996b89f4f\") " Jan 26 13:15:28 crc kubenswrapper[4881]: I0126 13:15:28.533070 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f847b90-9682-4b21-8ccc-646996b89f4f-inventory" (OuterVolumeSpecName: "inventory") pod "0f847b90-9682-4b21-8ccc-646996b89f4f" (UID: "0f847b90-9682-4b21-8ccc-646996b89f4f"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:15:28 crc kubenswrapper[4881]: I0126 13:15:28.544076 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-ltf5b"] Jan 26 13:15:28 crc kubenswrapper[4881]: I0126 13:15:28.629475 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/b36ca725-d6f7-4551-84f6-e912cdc75a5f-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ltf5b\" (UID: \"b36ca725-d6f7-4551-84f6-e912cdc75a5f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ltf5b" Jan 26 13:15:28 crc kubenswrapper[4881]: I0126 13:15:28.629610 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fb8wp\" (UniqueName: \"kubernetes.io/projected/b36ca725-d6f7-4551-84f6-e912cdc75a5f-kube-api-access-fb8wp\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ltf5b\" (UID: \"b36ca725-d6f7-4551-84f6-e912cdc75a5f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ltf5b" Jan 26 13:15:28 crc kubenswrapper[4881]: I0126 13:15:28.629631 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b36ca725-d6f7-4551-84f6-e912cdc75a5f-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ltf5b\" (UID: \"b36ca725-d6f7-4551-84f6-e912cdc75a5f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ltf5b" Jan 26 13:15:28 crc kubenswrapper[4881]: I0126 13:15:28.629664 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b36ca725-d6f7-4551-84f6-e912cdc75a5f-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ltf5b\" (UID: \"b36ca725-d6f7-4551-84f6-e912cdc75a5f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ltf5b" Jan 26 13:15:28 crc kubenswrapper[4881]: I0126 13:15:28.629835 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b36ca725-d6f7-4551-84f6-e912cdc75a5f-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ltf5b\" (UID: \"b36ca725-d6f7-4551-84f6-e912cdc75a5f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ltf5b" Jan 26 13:15:28 crc kubenswrapper[4881]: I0126 13:15:28.629915 4881 reconciler_common.go:293] 
"Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0f847b90-9682-4b21-8ccc-646996b89f4f-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 13:15:28 crc kubenswrapper[4881]: I0126 13:15:28.731999 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fb8wp\" (UniqueName: \"kubernetes.io/projected/b36ca725-d6f7-4551-84f6-e912cdc75a5f-kube-api-access-fb8wp\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ltf5b\" (UID: \"b36ca725-d6f7-4551-84f6-e912cdc75a5f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ltf5b" Jan 26 13:15:28 crc kubenswrapper[4881]: I0126 13:15:28.732043 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b36ca725-d6f7-4551-84f6-e912cdc75a5f-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ltf5b\" (UID: \"b36ca725-d6f7-4551-84f6-e912cdc75a5f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ltf5b" Jan 26 13:15:28 crc kubenswrapper[4881]: I0126 13:15:28.732082 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b36ca725-d6f7-4551-84f6-e912cdc75a5f-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ltf5b\" (UID: \"b36ca725-d6f7-4551-84f6-e912cdc75a5f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ltf5b" Jan 26 13:15:28 crc kubenswrapper[4881]: I0126 13:15:28.732180 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b36ca725-d6f7-4551-84f6-e912cdc75a5f-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ltf5b\" (UID: \"b36ca725-d6f7-4551-84f6-e912cdc75a5f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ltf5b" Jan 26 13:15:28 crc kubenswrapper[4881]: I0126 13:15:28.732238 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/b36ca725-d6f7-4551-84f6-e912cdc75a5f-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ltf5b\" (UID: \"b36ca725-d6f7-4551-84f6-e912cdc75a5f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ltf5b" Jan 26 13:15:28 crc kubenswrapper[4881]: I0126 13:15:28.733205 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/b36ca725-d6f7-4551-84f6-e912cdc75a5f-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ltf5b\" (UID: \"b36ca725-d6f7-4551-84f6-e912cdc75a5f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ltf5b" Jan 26 13:15:28 crc kubenswrapper[4881]: I0126 13:15:28.740623 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b36ca725-d6f7-4551-84f6-e912cdc75a5f-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ltf5b\" (UID: \"b36ca725-d6f7-4551-84f6-e912cdc75a5f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ltf5b" Jan 26 13:15:28 crc kubenswrapper[4881]: I0126 13:15:28.749716 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b36ca725-d6f7-4551-84f6-e912cdc75a5f-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ltf5b\" (UID: \"b36ca725-d6f7-4551-84f6-e912cdc75a5f\") " 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ltf5b" Jan 26 13:15:28 crc kubenswrapper[4881]: I0126 13:15:28.750292 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b36ca725-d6f7-4551-84f6-e912cdc75a5f-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ltf5b\" (UID: \"b36ca725-d6f7-4551-84f6-e912cdc75a5f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ltf5b" Jan 26 13:15:28 crc kubenswrapper[4881]: I0126 13:15:28.750348 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fb8wp\" (UniqueName: \"kubernetes.io/projected/b36ca725-d6f7-4551-84f6-e912cdc75a5f-kube-api-access-fb8wp\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ltf5b\" (UID: \"b36ca725-d6f7-4551-84f6-e912cdc75a5f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ltf5b" Jan 26 13:15:28 crc kubenswrapper[4881]: I0126 13:15:28.835810 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ltf5b" Jan 26 13:15:29 crc kubenswrapper[4881]: I0126 13:15:29.376756 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-ltf5b"] Jan 26 13:15:30 crc kubenswrapper[4881]: I0126 13:15:30.398055 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ltf5b" event={"ID":"b36ca725-d6f7-4551-84f6-e912cdc75a5f","Type":"ContainerStarted","Data":"a6e48ba69d6373a304640bc6afa6f1d73f8d036f85741398f340e2b1ddc7945b"} Jan 26 13:15:30 crc kubenswrapper[4881]: I0126 13:15:30.398378 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ltf5b" event={"ID":"b36ca725-d6f7-4551-84f6-e912cdc75a5f","Type":"ContainerStarted","Data":"54190e1a550835ee98ee8d5ab3e884432bbc95c805a005b4309c9bf40ea069a3"} Jan 26 13:15:30 crc kubenswrapper[4881]: I0126 13:15:30.418319 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ltf5b" podStartSLOduration=1.7406959880000001 podStartE2EDuration="2.418290593s" podCreationTimestamp="2026-01-26 13:15:28 +0000 UTC" firstStartedPulling="2026-01-26 13:15:29.381799473 +0000 UTC m=+2401.861109499" lastFinishedPulling="2026-01-26 13:15:30.059394068 +0000 UTC m=+2402.538704104" observedRunningTime="2026-01-26 13:15:30.412276887 +0000 UTC m=+2402.891586933" watchObservedRunningTime="2026-01-26 13:15:30.418290593 +0000 UTC m=+2402.897600619" Jan 26 13:15:54 crc kubenswrapper[4881]: I0126 13:15:54.789612 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fwlbz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 13:15:54 crc kubenswrapper[4881]: I0126 13:15:54.790223 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 13:15:54 crc kubenswrapper[4881]: I0126 13:15:54.790275 4881 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" Jan 26 13:15:54 crc kubenswrapper[4881]: I0126 13:15:54.791048 4881 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"cfbf2717bd5606e33fa4a304b006f0f82f182052a66313b15f98a11bb12d2b4c"} pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 13:15:54 crc kubenswrapper[4881]: I0126 13:15:54.791101 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" containerName="machine-config-daemon" containerID="cri-o://cfbf2717bd5606e33fa4a304b006f0f82f182052a66313b15f98a11bb12d2b4c" gracePeriod=600 Jan 26 13:15:54 crc kubenswrapper[4881]: E0126 13:15:54.920013 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" Jan 26 13:15:55 crc kubenswrapper[4881]: I0126 13:15:55.352704 4881 scope.go:117] "RemoveContainer" containerID="25bb4b559ced09ec8791787a057e14ef76ba787285a6ce95bdbf493b84cdfd08" Jan 26 13:15:55 crc kubenswrapper[4881]: I0126 13:15:55.721512 4881 generic.go:334] "Generic (PLEG): container finished" podID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" containerID="cfbf2717bd5606e33fa4a304b006f0f82f182052a66313b15f98a11bb12d2b4c" exitCode=0 Jan 26 13:15:55 crc kubenswrapper[4881]: I0126 13:15:55.721838 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" event={"ID":"ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19","Type":"ContainerDied","Data":"cfbf2717bd5606e33fa4a304b006f0f82f182052a66313b15f98a11bb12d2b4c"} Jan 26 13:15:55 crc kubenswrapper[4881]: I0126 13:15:55.721980 4881 scope.go:117] "RemoveContainer" containerID="7750edf22d4cbb66ace9f47e5c6d27c40449613083712f392ec90e1aada14c4a" Jan 26 13:15:55 crc kubenswrapper[4881]: I0126 13:15:55.722861 4881 scope.go:117] "RemoveContainer" containerID="cfbf2717bd5606e33fa4a304b006f0f82f182052a66313b15f98a11bb12d2b4c" Jan 26 13:15:55 crc kubenswrapper[4881]: E0126 13:15:55.723431 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" Jan 26 13:16:09 crc kubenswrapper[4881]: I0126 13:16:09.082871 4881 scope.go:117] "RemoveContainer" containerID="cfbf2717bd5606e33fa4a304b006f0f82f182052a66313b15f98a11bb12d2b4c" Jan 26 13:16:09 crc kubenswrapper[4881]: E0126 13:16:09.083655 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" Jan 26 13:16:22 crc kubenswrapper[4881]: I0126 13:16:22.082458 4881 scope.go:117] "RemoveContainer" containerID="cfbf2717bd5606e33fa4a304b006f0f82f182052a66313b15f98a11bb12d2b4c" Jan 26 13:16:22 crc kubenswrapper[4881]: E0126 13:16:22.083244 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" Jan 26 13:16:36 crc kubenswrapper[4881]: I0126 13:16:36.082094 4881 scope.go:117] "RemoveContainer" containerID="cfbf2717bd5606e33fa4a304b006f0f82f182052a66313b15f98a11bb12d2b4c" Jan 26 13:16:36 crc kubenswrapper[4881]: E0126 13:16:36.082811 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" Jan 26 13:16:42 crc kubenswrapper[4881]: I0126 13:16:42.149445 4881 generic.go:334] "Generic (PLEG): container finished" podID="b36ca725-d6f7-4551-84f6-e912cdc75a5f" containerID="a6e48ba69d6373a304640bc6afa6f1d73f8d036f85741398f340e2b1ddc7945b" exitCode=0 Jan 26 13:16:42 crc kubenswrapper[4881]: I0126 13:16:42.149571 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ltf5b" event={"ID":"b36ca725-d6f7-4551-84f6-e912cdc75a5f","Type":"ContainerDied","Data":"a6e48ba69d6373a304640bc6afa6f1d73f8d036f85741398f340e2b1ddc7945b"} Jan 26 13:16:43 crc kubenswrapper[4881]: I0126 13:16:43.600894 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ltf5b" Jan 26 13:16:43 crc kubenswrapper[4881]: I0126 13:16:43.735197 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b36ca725-d6f7-4551-84f6-e912cdc75a5f-inventory\") pod \"b36ca725-d6f7-4551-84f6-e912cdc75a5f\" (UID: \"b36ca725-d6f7-4551-84f6-e912cdc75a5f\") " Jan 26 13:16:43 crc kubenswrapper[4881]: I0126 13:16:43.735643 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/b36ca725-d6f7-4551-84f6-e912cdc75a5f-ovncontroller-config-0\") pod \"b36ca725-d6f7-4551-84f6-e912cdc75a5f\" (UID: \"b36ca725-d6f7-4551-84f6-e912cdc75a5f\") " Jan 26 13:16:43 crc kubenswrapper[4881]: I0126 13:16:43.735891 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b36ca725-d6f7-4551-84f6-e912cdc75a5f-ovn-combined-ca-bundle\") pod \"b36ca725-d6f7-4551-84f6-e912cdc75a5f\" (UID: \"b36ca725-d6f7-4551-84f6-e912cdc75a5f\") " Jan 26 13:16:43 crc kubenswrapper[4881]: I0126 13:16:43.736057 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b36ca725-d6f7-4551-84f6-e912cdc75a5f-ssh-key-openstack-edpm-ipam\") pod \"b36ca725-d6f7-4551-84f6-e912cdc75a5f\" (UID: \"b36ca725-d6f7-4551-84f6-e912cdc75a5f\") " Jan 26 13:16:43 crc kubenswrapper[4881]: I0126 13:16:43.736287 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fb8wp\" (UniqueName: \"kubernetes.io/projected/b36ca725-d6f7-4551-84f6-e912cdc75a5f-kube-api-access-fb8wp\") pod \"b36ca725-d6f7-4551-84f6-e912cdc75a5f\" (UID: \"b36ca725-d6f7-4551-84f6-e912cdc75a5f\") " Jan 26 13:16:43 crc kubenswrapper[4881]: I0126 13:16:43.746865 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b36ca725-d6f7-4551-84f6-e912cdc75a5f-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "b36ca725-d6f7-4551-84f6-e912cdc75a5f" (UID: "b36ca725-d6f7-4551-84f6-e912cdc75a5f"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:16:43 crc kubenswrapper[4881]: I0126 13:16:43.746931 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b36ca725-d6f7-4551-84f6-e912cdc75a5f-kube-api-access-fb8wp" (OuterVolumeSpecName: "kube-api-access-fb8wp") pod "b36ca725-d6f7-4551-84f6-e912cdc75a5f" (UID: "b36ca725-d6f7-4551-84f6-e912cdc75a5f"). InnerVolumeSpecName "kube-api-access-fb8wp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:16:43 crc kubenswrapper[4881]: I0126 13:16:43.765134 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b36ca725-d6f7-4551-84f6-e912cdc75a5f-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "b36ca725-d6f7-4551-84f6-e912cdc75a5f" (UID: "b36ca725-d6f7-4551-84f6-e912cdc75a5f"). InnerVolumeSpecName "ovncontroller-config-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:16:43 crc kubenswrapper[4881]: E0126 13:16:43.770320 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b36ca725-d6f7-4551-84f6-e912cdc75a5f-ssh-key-openstack-edpm-ipam podName:b36ca725-d6f7-4551-84f6-e912cdc75a5f nodeName:}" failed. No retries permitted until 2026-01-26 13:16:44.270267292 +0000 UTC m=+2476.749577318 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "ssh-key-openstack-edpm-ipam" (UniqueName: "kubernetes.io/secret/b36ca725-d6f7-4551-84f6-e912cdc75a5f-ssh-key-openstack-edpm-ipam") pod "b36ca725-d6f7-4551-84f6-e912cdc75a5f" (UID: "b36ca725-d6f7-4551-84f6-e912cdc75a5f") : error deleting /var/lib/kubelet/pods/b36ca725-d6f7-4551-84f6-e912cdc75a5f/volume-subpaths: remove /var/lib/kubelet/pods/b36ca725-d6f7-4551-84f6-e912cdc75a5f/volume-subpaths: no such file or directory Jan 26 13:16:43 crc kubenswrapper[4881]: I0126 13:16:43.772767 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b36ca725-d6f7-4551-84f6-e912cdc75a5f-inventory" (OuterVolumeSpecName: "inventory") pod "b36ca725-d6f7-4551-84f6-e912cdc75a5f" (UID: "b36ca725-d6f7-4551-84f6-e912cdc75a5f"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:16:43 crc kubenswrapper[4881]: I0126 13:16:43.839043 4881 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b36ca725-d6f7-4551-84f6-e912cdc75a5f-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 13:16:43 crc kubenswrapper[4881]: I0126 13:16:43.839071 4881 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/b36ca725-d6f7-4551-84f6-e912cdc75a5f-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Jan 26 13:16:43 crc kubenswrapper[4881]: I0126 13:16:43.839082 4881 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b36ca725-d6f7-4551-84f6-e912cdc75a5f-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 13:16:43 crc kubenswrapper[4881]: I0126 13:16:43.839091 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fb8wp\" (UniqueName: \"kubernetes.io/projected/b36ca725-d6f7-4551-84f6-e912cdc75a5f-kube-api-access-fb8wp\") on node \"crc\" DevicePath \"\"" Jan 26 13:16:44 crc kubenswrapper[4881]: I0126 13:16:44.170509 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ltf5b" event={"ID":"b36ca725-d6f7-4551-84f6-e912cdc75a5f","Type":"ContainerDied","Data":"54190e1a550835ee98ee8d5ab3e884432bbc95c805a005b4309c9bf40ea069a3"} Jan 26 13:16:44 crc kubenswrapper[4881]: I0126 13:16:44.170559 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="54190e1a550835ee98ee8d5ab3e884432bbc95c805a005b4309c9bf40ea069a3" Jan 26 13:16:44 crc kubenswrapper[4881]: I0126 13:16:44.170675 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ltf5b" Jan 26 13:16:44 crc kubenswrapper[4881]: I0126 13:16:44.317921 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-knqzl"] Jan 26 13:16:44 crc kubenswrapper[4881]: E0126 13:16:44.318599 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b36ca725-d6f7-4551-84f6-e912cdc75a5f" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 26 13:16:44 crc kubenswrapper[4881]: I0126 13:16:44.318623 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="b36ca725-d6f7-4551-84f6-e912cdc75a5f" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 26 13:16:44 crc kubenswrapper[4881]: I0126 13:16:44.318897 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="b36ca725-d6f7-4551-84f6-e912cdc75a5f" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 26 13:16:44 crc kubenswrapper[4881]: I0126 13:16:44.319745 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-knqzl" Jan 26 13:16:44 crc kubenswrapper[4881]: I0126 13:16:44.322016 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Jan 26 13:16:44 crc kubenswrapper[4881]: I0126 13:16:44.322384 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Jan 26 13:16:44 crc kubenswrapper[4881]: I0126 13:16:44.330140 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-knqzl"] Jan 26 13:16:44 crc kubenswrapper[4881]: I0126 13:16:44.348898 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b36ca725-d6f7-4551-84f6-e912cdc75a5f-ssh-key-openstack-edpm-ipam\") pod \"b36ca725-d6f7-4551-84f6-e912cdc75a5f\" (UID: \"b36ca725-d6f7-4551-84f6-e912cdc75a5f\") " Jan 26 13:16:44 crc kubenswrapper[4881]: I0126 13:16:44.354631 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b36ca725-d6f7-4551-84f6-e912cdc75a5f-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "b36ca725-d6f7-4551-84f6-e912cdc75a5f" (UID: "b36ca725-d6f7-4551-84f6-e912cdc75a5f"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:16:44 crc kubenswrapper[4881]: I0126 13:16:44.450695 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xk4f\" (UniqueName: \"kubernetes.io/projected/41c88a4a-b833-417f-90c9-eb0edcf688ec-kube-api-access-8xk4f\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-knqzl\" (UID: \"41c88a4a-b833-417f-90c9-eb0edcf688ec\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-knqzl" Jan 26 13:16:44 crc kubenswrapper[4881]: I0126 13:16:44.450817 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/41c88a4a-b833-417f-90c9-eb0edcf688ec-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-knqzl\" (UID: \"41c88a4a-b833-417f-90c9-eb0edcf688ec\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-knqzl" Jan 26 13:16:44 crc kubenswrapper[4881]: I0126 13:16:44.450922 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/41c88a4a-b833-417f-90c9-eb0edcf688ec-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-knqzl\" (UID: \"41c88a4a-b833-417f-90c9-eb0edcf688ec\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-knqzl" Jan 26 13:16:44 crc kubenswrapper[4881]: I0126 13:16:44.450947 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/41c88a4a-b833-417f-90c9-eb0edcf688ec-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-knqzl\" (UID: \"41c88a4a-b833-417f-90c9-eb0edcf688ec\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-knqzl" Jan 26 13:16:44 crc kubenswrapper[4881]: I0126 13:16:44.451019 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41c88a4a-b833-417f-90c9-eb0edcf688ec-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-knqzl\" (UID: \"41c88a4a-b833-417f-90c9-eb0edcf688ec\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-knqzl" Jan 26 13:16:44 crc kubenswrapper[4881]: I0126 13:16:44.451093 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/41c88a4a-b833-417f-90c9-eb0edcf688ec-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-knqzl\" (UID: \"41c88a4a-b833-417f-90c9-eb0edcf688ec\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-knqzl" Jan 26 13:16:44 crc kubenswrapper[4881]: I0126 13:16:44.451158 4881 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b36ca725-d6f7-4551-84f6-e912cdc75a5f-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 13:16:44 crc kubenswrapper[4881]: I0126 13:16:44.553303 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: 
\"kubernetes.io/secret/41c88a4a-b833-417f-90c9-eb0edcf688ec-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-knqzl\" (UID: \"41c88a4a-b833-417f-90c9-eb0edcf688ec\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-knqzl" Jan 26 13:16:44 crc kubenswrapper[4881]: I0126 13:16:44.553392 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/41c88a4a-b833-417f-90c9-eb0edcf688ec-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-knqzl\" (UID: \"41c88a4a-b833-417f-90c9-eb0edcf688ec\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-knqzl" Jan 26 13:16:44 crc kubenswrapper[4881]: I0126 13:16:44.553427 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/41c88a4a-b833-417f-90c9-eb0edcf688ec-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-knqzl\" (UID: \"41c88a4a-b833-417f-90c9-eb0edcf688ec\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-knqzl" Jan 26 13:16:44 crc kubenswrapper[4881]: I0126 13:16:44.553500 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41c88a4a-b833-417f-90c9-eb0edcf688ec-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-knqzl\" (UID: \"41c88a4a-b833-417f-90c9-eb0edcf688ec\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-knqzl" Jan 26 13:16:44 crc kubenswrapper[4881]: I0126 13:16:44.553589 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/41c88a4a-b833-417f-90c9-eb0edcf688ec-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-knqzl\" (UID: \"41c88a4a-b833-417f-90c9-eb0edcf688ec\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-knqzl" Jan 26 13:16:44 crc kubenswrapper[4881]: I0126 13:16:44.553741 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8xk4f\" (UniqueName: \"kubernetes.io/projected/41c88a4a-b833-417f-90c9-eb0edcf688ec-kube-api-access-8xk4f\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-knqzl\" (UID: \"41c88a4a-b833-417f-90c9-eb0edcf688ec\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-knqzl" Jan 26 13:16:44 crc kubenswrapper[4881]: I0126 13:16:44.558615 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/41c88a4a-b833-417f-90c9-eb0edcf688ec-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-knqzl\" (UID: \"41c88a4a-b833-417f-90c9-eb0edcf688ec\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-knqzl" Jan 26 13:16:44 crc kubenswrapper[4881]: I0126 13:16:44.561156 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/41c88a4a-b833-417f-90c9-eb0edcf688ec-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-knqzl\" (UID: \"41c88a4a-b833-417f-90c9-eb0edcf688ec\") " 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-knqzl" Jan 26 13:16:44 crc kubenswrapper[4881]: I0126 13:16:44.561310 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/41c88a4a-b833-417f-90c9-eb0edcf688ec-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-knqzl\" (UID: \"41c88a4a-b833-417f-90c9-eb0edcf688ec\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-knqzl" Jan 26 13:16:44 crc kubenswrapper[4881]: I0126 13:16:44.562017 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/41c88a4a-b833-417f-90c9-eb0edcf688ec-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-knqzl\" (UID: \"41c88a4a-b833-417f-90c9-eb0edcf688ec\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-knqzl" Jan 26 13:16:44 crc kubenswrapper[4881]: I0126 13:16:44.564952 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41c88a4a-b833-417f-90c9-eb0edcf688ec-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-knqzl\" (UID: \"41c88a4a-b833-417f-90c9-eb0edcf688ec\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-knqzl" Jan 26 13:16:44 crc kubenswrapper[4881]: I0126 13:16:44.572378 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xk4f\" (UniqueName: \"kubernetes.io/projected/41c88a4a-b833-417f-90c9-eb0edcf688ec-kube-api-access-8xk4f\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-knqzl\" (UID: \"41c88a4a-b833-417f-90c9-eb0edcf688ec\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-knqzl" Jan 26 13:16:44 crc kubenswrapper[4881]: I0126 13:16:44.645481 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-knqzl" Jan 26 13:16:45 crc kubenswrapper[4881]: I0126 13:16:45.210833 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-knqzl"] Jan 26 13:16:46 crc kubenswrapper[4881]: I0126 13:16:46.191840 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-knqzl" event={"ID":"41c88a4a-b833-417f-90c9-eb0edcf688ec","Type":"ContainerStarted","Data":"f6a025f1ddbbf5ba77631bbc39f883dfa34c08e0cf826e4299e1fab870292e27"} Jan 26 13:16:46 crc kubenswrapper[4881]: I0126 13:16:46.192243 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-knqzl" event={"ID":"41c88a4a-b833-417f-90c9-eb0edcf688ec","Type":"ContainerStarted","Data":"b39e46e8ccc42727990532c4d6791b5a378f93a1dc7595bdf9fa3e81de7213d9"} Jan 26 13:16:46 crc kubenswrapper[4881]: I0126 13:16:46.214602 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-knqzl" podStartSLOduration=1.5642502820000002 podStartE2EDuration="2.214581916s" podCreationTimestamp="2026-01-26 13:16:44 +0000 UTC" firstStartedPulling="2026-01-26 13:16:45.20913631 +0000 UTC m=+2477.688446336" lastFinishedPulling="2026-01-26 13:16:45.859467944 +0000 UTC m=+2478.338777970" observedRunningTime="2026-01-26 13:16:46.206106881 +0000 UTC m=+2478.685416907" watchObservedRunningTime="2026-01-26 13:16:46.214581916 +0000 UTC m=+2478.693891942" Jan 26 13:16:48 crc kubenswrapper[4881]: I0126 13:16:48.092707 4881 scope.go:117] "RemoveContainer" containerID="cfbf2717bd5606e33fa4a304b006f0f82f182052a66313b15f98a11bb12d2b4c" Jan 26 13:16:48 crc kubenswrapper[4881]: E0126 13:16:48.093397 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" Jan 26 13:17:00 crc kubenswrapper[4881]: I0126 13:17:00.082327 4881 scope.go:117] "RemoveContainer" containerID="cfbf2717bd5606e33fa4a304b006f0f82f182052a66313b15f98a11bb12d2b4c" Jan 26 13:17:00 crc kubenswrapper[4881]: E0126 13:17:00.083167 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" Jan 26 13:17:15 crc kubenswrapper[4881]: I0126 13:17:15.083439 4881 scope.go:117] "RemoveContainer" containerID="cfbf2717bd5606e33fa4a304b006f0f82f182052a66313b15f98a11bb12d2b4c" Jan 26 13:17:15 crc kubenswrapper[4881]: E0126 13:17:15.084361 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" Jan 26 13:17:27 crc kubenswrapper[4881]: I0126 13:17:27.083155 4881 scope.go:117] "RemoveContainer" containerID="cfbf2717bd5606e33fa4a304b006f0f82f182052a66313b15f98a11bb12d2b4c" Jan 26 13:17:27 crc kubenswrapper[4881]: E0126 13:17:27.084008 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" Jan 26 13:17:39 crc kubenswrapper[4881]: I0126 13:17:39.083367 4881 scope.go:117] "RemoveContainer" containerID="cfbf2717bd5606e33fa4a304b006f0f82f182052a66313b15f98a11bb12d2b4c" Jan 26 13:17:39 crc kubenswrapper[4881]: E0126 13:17:39.084177 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" Jan 26 13:17:39 crc kubenswrapper[4881]: E0126 13:17:39.138708 4881 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod41c88a4a_b833_417f_90c9_eb0edcf688ec.slice/crio-f6a025f1ddbbf5ba77631bbc39f883dfa34c08e0cf826e4299e1fab870292e27.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod41c88a4a_b833_417f_90c9_eb0edcf688ec.slice/crio-conmon-f6a025f1ddbbf5ba77631bbc39f883dfa34c08e0cf826e4299e1fab870292e27.scope\": RecentStats: unable to find data in memory cache]" Jan 26 13:17:39 crc kubenswrapper[4881]: I0126 13:17:39.729586 4881 generic.go:334] "Generic (PLEG): container finished" podID="41c88a4a-b833-417f-90c9-eb0edcf688ec" containerID="f6a025f1ddbbf5ba77631bbc39f883dfa34c08e0cf826e4299e1fab870292e27" exitCode=0 Jan 26 13:17:39 crc kubenswrapper[4881]: I0126 13:17:39.729639 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-knqzl" event={"ID":"41c88a4a-b833-417f-90c9-eb0edcf688ec","Type":"ContainerDied","Data":"f6a025f1ddbbf5ba77631bbc39f883dfa34c08e0cf826e4299e1fab870292e27"} Jan 26 13:17:41 crc kubenswrapper[4881]: I0126 13:17:41.284182 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-knqzl" Jan 26 13:17:41 crc kubenswrapper[4881]: I0126 13:17:41.414347 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41c88a4a-b833-417f-90c9-eb0edcf688ec-neutron-metadata-combined-ca-bundle\") pod \"41c88a4a-b833-417f-90c9-eb0edcf688ec\" (UID: \"41c88a4a-b833-417f-90c9-eb0edcf688ec\") " Jan 26 13:17:41 crc kubenswrapper[4881]: I0126 13:17:41.414963 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/41c88a4a-b833-417f-90c9-eb0edcf688ec-inventory\") pod \"41c88a4a-b833-417f-90c9-eb0edcf688ec\" (UID: \"41c88a4a-b833-417f-90c9-eb0edcf688ec\") " Jan 26 13:17:41 crc kubenswrapper[4881]: I0126 13:17:41.415324 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8xk4f\" (UniqueName: \"kubernetes.io/projected/41c88a4a-b833-417f-90c9-eb0edcf688ec-kube-api-access-8xk4f\") pod \"41c88a4a-b833-417f-90c9-eb0edcf688ec\" (UID: \"41c88a4a-b833-417f-90c9-eb0edcf688ec\") " Jan 26 13:17:41 crc kubenswrapper[4881]: I0126 13:17:41.415698 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/41c88a4a-b833-417f-90c9-eb0edcf688ec-nova-metadata-neutron-config-0\") pod \"41c88a4a-b833-417f-90c9-eb0edcf688ec\" (UID: \"41c88a4a-b833-417f-90c9-eb0edcf688ec\") " Jan 26 13:17:41 crc kubenswrapper[4881]: I0126 13:17:41.416651 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/41c88a4a-b833-417f-90c9-eb0edcf688ec-neutron-ovn-metadata-agent-neutron-config-0\") pod \"41c88a4a-b833-417f-90c9-eb0edcf688ec\" (UID: \"41c88a4a-b833-417f-90c9-eb0edcf688ec\") " Jan 26 13:17:41 crc kubenswrapper[4881]: I0126 13:17:41.417002 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/41c88a4a-b833-417f-90c9-eb0edcf688ec-ssh-key-openstack-edpm-ipam\") pod \"41c88a4a-b833-417f-90c9-eb0edcf688ec\" (UID: \"41c88a4a-b833-417f-90c9-eb0edcf688ec\") " Jan 26 13:17:41 crc kubenswrapper[4881]: I0126 13:17:41.422849 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41c88a4a-b833-417f-90c9-eb0edcf688ec-kube-api-access-8xk4f" (OuterVolumeSpecName: "kube-api-access-8xk4f") pod "41c88a4a-b833-417f-90c9-eb0edcf688ec" (UID: "41c88a4a-b833-417f-90c9-eb0edcf688ec"). InnerVolumeSpecName "kube-api-access-8xk4f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:17:41 crc kubenswrapper[4881]: I0126 13:17:41.429474 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41c88a4a-b833-417f-90c9-eb0edcf688ec-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "41c88a4a-b833-417f-90c9-eb0edcf688ec" (UID: "41c88a4a-b833-417f-90c9-eb0edcf688ec"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:17:41 crc kubenswrapper[4881]: I0126 13:17:41.451169 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41c88a4a-b833-417f-90c9-eb0edcf688ec-inventory" (OuterVolumeSpecName: "inventory") pod "41c88a4a-b833-417f-90c9-eb0edcf688ec" (UID: "41c88a4a-b833-417f-90c9-eb0edcf688ec"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:17:41 crc kubenswrapper[4881]: I0126 13:17:41.462899 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41c88a4a-b833-417f-90c9-eb0edcf688ec-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "41c88a4a-b833-417f-90c9-eb0edcf688ec" (UID: "41c88a4a-b833-417f-90c9-eb0edcf688ec"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:17:41 crc kubenswrapper[4881]: I0126 13:17:41.463337 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41c88a4a-b833-417f-90c9-eb0edcf688ec-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "41c88a4a-b833-417f-90c9-eb0edcf688ec" (UID: "41c88a4a-b833-417f-90c9-eb0edcf688ec"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:17:41 crc kubenswrapper[4881]: I0126 13:17:41.469183 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41c88a4a-b833-417f-90c9-eb0edcf688ec-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "41c88a4a-b833-417f-90c9-eb0edcf688ec" (UID: "41c88a4a-b833-417f-90c9-eb0edcf688ec"). InnerVolumeSpecName "nova-metadata-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:17:41 crc kubenswrapper[4881]: I0126 13:17:41.520374 4881 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41c88a4a-b833-417f-90c9-eb0edcf688ec-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 13:17:41 crc kubenswrapper[4881]: I0126 13:17:41.520421 4881 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/41c88a4a-b833-417f-90c9-eb0edcf688ec-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 13:17:41 crc kubenswrapper[4881]: I0126 13:17:41.520439 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8xk4f\" (UniqueName: \"kubernetes.io/projected/41c88a4a-b833-417f-90c9-eb0edcf688ec-kube-api-access-8xk4f\") on node \"crc\" DevicePath \"\"" Jan 26 13:17:41 crc kubenswrapper[4881]: I0126 13:17:41.520458 4881 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/41c88a4a-b833-417f-90c9-eb0edcf688ec-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 26 13:17:41 crc kubenswrapper[4881]: I0126 13:17:41.520471 4881 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/41c88a4a-b833-417f-90c9-eb0edcf688ec-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 26 13:17:41 crc kubenswrapper[4881]: I0126 13:17:41.520486 4881 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/41c88a4a-b833-417f-90c9-eb0edcf688ec-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 13:17:41 crc kubenswrapper[4881]: I0126 13:17:41.765623 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-knqzl" event={"ID":"41c88a4a-b833-417f-90c9-eb0edcf688ec","Type":"ContainerDied","Data":"b39e46e8ccc42727990532c4d6791b5a378f93a1dc7595bdf9fa3e81de7213d9"} Jan 26 13:17:41 crc kubenswrapper[4881]: I0126 13:17:41.765671 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b39e46e8ccc42727990532c4d6791b5a378f93a1dc7595bdf9fa3e81de7213d9" Jan 26 13:17:41 crc kubenswrapper[4881]: I0126 13:17:41.765726 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-knqzl" Jan 26 13:17:41 crc kubenswrapper[4881]: I0126 13:17:41.871249 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9cbj8"] Jan 26 13:17:41 crc kubenswrapper[4881]: E0126 13:17:41.871698 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41c88a4a-b833-417f-90c9-eb0edcf688ec" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 26 13:17:41 crc kubenswrapper[4881]: I0126 13:17:41.871717 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="41c88a4a-b833-417f-90c9-eb0edcf688ec" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 26 13:17:41 crc kubenswrapper[4881]: I0126 13:17:41.871922 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="41c88a4a-b833-417f-90c9-eb0edcf688ec" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 26 13:17:41 crc kubenswrapper[4881]: I0126 13:17:41.872727 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9cbj8" Jan 26 13:17:41 crc kubenswrapper[4881]: I0126 13:17:41.878617 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Jan 26 13:17:41 crc kubenswrapper[4881]: I0126 13:17:41.879300 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 13:17:41 crc kubenswrapper[4881]: I0126 13:17:41.879592 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-2krn6" Jan 26 13:17:41 crc kubenswrapper[4881]: I0126 13:17:41.879717 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 13:17:41 crc kubenswrapper[4881]: I0126 13:17:41.882424 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 13:17:41 crc kubenswrapper[4881]: I0126 13:17:41.884864 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9cbj8"] Jan 26 13:17:41 crc kubenswrapper[4881]: I0126 13:17:41.930170 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2190fb1e-77a2-47d2-a0bb-2aaca7948653-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-9cbj8\" (UID: \"2190fb1e-77a2-47d2-a0bb-2aaca7948653\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9cbj8" Jan 26 13:17:41 crc kubenswrapper[4881]: I0126 13:17:41.930394 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2190fb1e-77a2-47d2-a0bb-2aaca7948653-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-9cbj8\" (UID: \"2190fb1e-77a2-47d2-a0bb-2aaca7948653\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9cbj8" Jan 26 13:17:41 crc kubenswrapper[4881]: I0126 13:17:41.930462 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2190fb1e-77a2-47d2-a0bb-2aaca7948653-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-9cbj8\" (UID: 
\"2190fb1e-77a2-47d2-a0bb-2aaca7948653\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9cbj8" Jan 26 13:17:41 crc kubenswrapper[4881]: I0126 13:17:41.930885 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9l4n\" (UniqueName: \"kubernetes.io/projected/2190fb1e-77a2-47d2-a0bb-2aaca7948653-kube-api-access-n9l4n\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-9cbj8\" (UID: \"2190fb1e-77a2-47d2-a0bb-2aaca7948653\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9cbj8" Jan 26 13:17:41 crc kubenswrapper[4881]: I0126 13:17:41.930930 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/2190fb1e-77a2-47d2-a0bb-2aaca7948653-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-9cbj8\" (UID: \"2190fb1e-77a2-47d2-a0bb-2aaca7948653\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9cbj8" Jan 26 13:17:42 crc kubenswrapper[4881]: I0126 13:17:42.032378 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9l4n\" (UniqueName: \"kubernetes.io/projected/2190fb1e-77a2-47d2-a0bb-2aaca7948653-kube-api-access-n9l4n\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-9cbj8\" (UID: \"2190fb1e-77a2-47d2-a0bb-2aaca7948653\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9cbj8" Jan 26 13:17:42 crc kubenswrapper[4881]: I0126 13:17:42.032435 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/2190fb1e-77a2-47d2-a0bb-2aaca7948653-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-9cbj8\" (UID: \"2190fb1e-77a2-47d2-a0bb-2aaca7948653\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9cbj8" Jan 26 13:17:42 crc kubenswrapper[4881]: I0126 13:17:42.032487 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2190fb1e-77a2-47d2-a0bb-2aaca7948653-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-9cbj8\" (UID: \"2190fb1e-77a2-47d2-a0bb-2aaca7948653\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9cbj8" Jan 26 13:17:42 crc kubenswrapper[4881]: I0126 13:17:42.032582 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2190fb1e-77a2-47d2-a0bb-2aaca7948653-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-9cbj8\" (UID: \"2190fb1e-77a2-47d2-a0bb-2aaca7948653\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9cbj8" Jan 26 13:17:42 crc kubenswrapper[4881]: I0126 13:17:42.032620 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2190fb1e-77a2-47d2-a0bb-2aaca7948653-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-9cbj8\" (UID: \"2190fb1e-77a2-47d2-a0bb-2aaca7948653\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9cbj8" Jan 26 13:17:42 crc kubenswrapper[4881]: I0126 13:17:42.036692 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2190fb1e-77a2-47d2-a0bb-2aaca7948653-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-9cbj8\" (UID: 
\"2190fb1e-77a2-47d2-a0bb-2aaca7948653\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9cbj8" Jan 26 13:17:42 crc kubenswrapper[4881]: I0126 13:17:42.037214 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/2190fb1e-77a2-47d2-a0bb-2aaca7948653-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-9cbj8\" (UID: \"2190fb1e-77a2-47d2-a0bb-2aaca7948653\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9cbj8" Jan 26 13:17:42 crc kubenswrapper[4881]: I0126 13:17:42.039045 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2190fb1e-77a2-47d2-a0bb-2aaca7948653-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-9cbj8\" (UID: \"2190fb1e-77a2-47d2-a0bb-2aaca7948653\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9cbj8" Jan 26 13:17:42 crc kubenswrapper[4881]: I0126 13:17:42.054997 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2190fb1e-77a2-47d2-a0bb-2aaca7948653-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-9cbj8\" (UID: \"2190fb1e-77a2-47d2-a0bb-2aaca7948653\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9cbj8" Jan 26 13:17:42 crc kubenswrapper[4881]: I0126 13:17:42.056309 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9l4n\" (UniqueName: \"kubernetes.io/projected/2190fb1e-77a2-47d2-a0bb-2aaca7948653-kube-api-access-n9l4n\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-9cbj8\" (UID: \"2190fb1e-77a2-47d2-a0bb-2aaca7948653\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9cbj8" Jan 26 13:17:42 crc kubenswrapper[4881]: I0126 13:17:42.191940 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9cbj8" Jan 26 13:17:43 crc kubenswrapper[4881]: I0126 13:17:43.194203 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9cbj8"] Jan 26 13:17:43 crc kubenswrapper[4881]: W0126 13:17:43.205795 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2190fb1e_77a2_47d2_a0bb_2aaca7948653.slice/crio-c97d2a66cee8d56461463c900872d02df11665b4ada28bdf571511d1f12179a6 WatchSource:0}: Error finding container c97d2a66cee8d56461463c900872d02df11665b4ada28bdf571511d1f12179a6: Status 404 returned error can't find the container with id c97d2a66cee8d56461463c900872d02df11665b4ada28bdf571511d1f12179a6 Jan 26 13:17:43 crc kubenswrapper[4881]: I0126 13:17:43.795641 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9cbj8" event={"ID":"2190fb1e-77a2-47d2-a0bb-2aaca7948653","Type":"ContainerStarted","Data":"c97d2a66cee8d56461463c900872d02df11665b4ada28bdf571511d1f12179a6"} Jan 26 13:17:44 crc kubenswrapper[4881]: I0126 13:17:44.811362 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9cbj8" event={"ID":"2190fb1e-77a2-47d2-a0bb-2aaca7948653","Type":"ContainerStarted","Data":"b8899421ce0a78dfebee6ad8754283d5630b1539c01ac56c52fc665322afebf0"} Jan 26 13:17:44 crc kubenswrapper[4881]: I0126 13:17:44.836386 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9cbj8" podStartSLOduration=3.290920683 podStartE2EDuration="3.836366093s" podCreationTimestamp="2026-01-26 13:17:41 +0000 UTC" firstStartedPulling="2026-01-26 13:17:43.214126546 +0000 UTC m=+2535.693436622" lastFinishedPulling="2026-01-26 13:17:43.759571976 +0000 UTC m=+2536.238882032" observedRunningTime="2026-01-26 13:17:44.831882435 +0000 UTC m=+2537.311192451" watchObservedRunningTime="2026-01-26 13:17:44.836366093 +0000 UTC m=+2537.315676109" Jan 26 13:17:52 crc kubenswrapper[4881]: I0126 13:17:52.084057 4881 scope.go:117] "RemoveContainer" containerID="cfbf2717bd5606e33fa4a304b006f0f82f182052a66313b15f98a11bb12d2b4c" Jan 26 13:17:52 crc kubenswrapper[4881]: E0126 13:17:52.085154 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" Jan 26 13:18:06 crc kubenswrapper[4881]: I0126 13:18:06.083577 4881 scope.go:117] "RemoveContainer" containerID="cfbf2717bd5606e33fa4a304b006f0f82f182052a66313b15f98a11bb12d2b4c" Jan 26 13:18:06 crc kubenswrapper[4881]: E0126 13:18:06.084430 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" Jan 26 13:18:18 crc kubenswrapper[4881]: I0126 13:18:18.098430 4881 
scope.go:117] "RemoveContainer" containerID="cfbf2717bd5606e33fa4a304b006f0f82f182052a66313b15f98a11bb12d2b4c" Jan 26 13:18:18 crc kubenswrapper[4881]: E0126 13:18:18.099718 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" Jan 26 13:18:32 crc kubenswrapper[4881]: I0126 13:18:32.083474 4881 scope.go:117] "RemoveContainer" containerID="cfbf2717bd5606e33fa4a304b006f0f82f182052a66313b15f98a11bb12d2b4c" Jan 26 13:18:32 crc kubenswrapper[4881]: E0126 13:18:32.084663 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" Jan 26 13:18:36 crc kubenswrapper[4881]: I0126 13:18:36.528209 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-4dqjq"] Jan 26 13:18:36 crc kubenswrapper[4881]: I0126 13:18:36.531752 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4dqjq" Jan 26 13:18:36 crc kubenswrapper[4881]: I0126 13:18:36.540603 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4dqjq"] Jan 26 13:18:36 crc kubenswrapper[4881]: I0126 13:18:36.623609 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa570e7b-0e6f-4836-b306-8d56939032e5-catalog-content\") pod \"redhat-operators-4dqjq\" (UID: \"fa570e7b-0e6f-4836-b306-8d56939032e5\") " pod="openshift-marketplace/redhat-operators-4dqjq" Jan 26 13:18:36 crc kubenswrapper[4881]: I0126 13:18:36.623721 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9td9d\" (UniqueName: \"kubernetes.io/projected/fa570e7b-0e6f-4836-b306-8d56939032e5-kube-api-access-9td9d\") pod \"redhat-operators-4dqjq\" (UID: \"fa570e7b-0e6f-4836-b306-8d56939032e5\") " pod="openshift-marketplace/redhat-operators-4dqjq" Jan 26 13:18:36 crc kubenswrapper[4881]: I0126 13:18:36.623988 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa570e7b-0e6f-4836-b306-8d56939032e5-utilities\") pod \"redhat-operators-4dqjq\" (UID: \"fa570e7b-0e6f-4836-b306-8d56939032e5\") " pod="openshift-marketplace/redhat-operators-4dqjq" Jan 26 13:18:36 crc kubenswrapper[4881]: I0126 13:18:36.725702 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa570e7b-0e6f-4836-b306-8d56939032e5-catalog-content\") pod \"redhat-operators-4dqjq\" (UID: \"fa570e7b-0e6f-4836-b306-8d56939032e5\") " pod="openshift-marketplace/redhat-operators-4dqjq" Jan 26 13:18:36 crc kubenswrapper[4881]: I0126 13:18:36.725741 4881 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-9td9d\" (UniqueName: \"kubernetes.io/projected/fa570e7b-0e6f-4836-b306-8d56939032e5-kube-api-access-9td9d\") pod \"redhat-operators-4dqjq\" (UID: \"fa570e7b-0e6f-4836-b306-8d56939032e5\") " pod="openshift-marketplace/redhat-operators-4dqjq" Jan 26 13:18:36 crc kubenswrapper[4881]: I0126 13:18:36.725801 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa570e7b-0e6f-4836-b306-8d56939032e5-utilities\") pod \"redhat-operators-4dqjq\" (UID: \"fa570e7b-0e6f-4836-b306-8d56939032e5\") " pod="openshift-marketplace/redhat-operators-4dqjq" Jan 26 13:18:36 crc kubenswrapper[4881]: I0126 13:18:36.726326 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa570e7b-0e6f-4836-b306-8d56939032e5-catalog-content\") pod \"redhat-operators-4dqjq\" (UID: \"fa570e7b-0e6f-4836-b306-8d56939032e5\") " pod="openshift-marketplace/redhat-operators-4dqjq" Jan 26 13:18:36 crc kubenswrapper[4881]: I0126 13:18:36.726374 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa570e7b-0e6f-4836-b306-8d56939032e5-utilities\") pod \"redhat-operators-4dqjq\" (UID: \"fa570e7b-0e6f-4836-b306-8d56939032e5\") " pod="openshift-marketplace/redhat-operators-4dqjq" Jan 26 13:18:36 crc kubenswrapper[4881]: I0126 13:18:36.756101 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9td9d\" (UniqueName: \"kubernetes.io/projected/fa570e7b-0e6f-4836-b306-8d56939032e5-kube-api-access-9td9d\") pod \"redhat-operators-4dqjq\" (UID: \"fa570e7b-0e6f-4836-b306-8d56939032e5\") " pod="openshift-marketplace/redhat-operators-4dqjq" Jan 26 13:18:36 crc kubenswrapper[4881]: I0126 13:18:36.855228 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4dqjq" Jan 26 13:18:37 crc kubenswrapper[4881]: I0126 13:18:37.319273 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4dqjq"] Jan 26 13:18:37 crc kubenswrapper[4881]: I0126 13:18:37.383149 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4dqjq" event={"ID":"fa570e7b-0e6f-4836-b306-8d56939032e5","Type":"ContainerStarted","Data":"15d6854b3ba442a701b559087070eac3e1f13634bdd79c384154335b4616a65a"} Jan 26 13:18:38 crc kubenswrapper[4881]: I0126 13:18:38.402105 4881 generic.go:334] "Generic (PLEG): container finished" podID="fa570e7b-0e6f-4836-b306-8d56939032e5" containerID="b234f5d090d116aaafcf991964179ff25c52532c05fee381acd2cf9edc35bcb2" exitCode=0 Jan 26 13:18:38 crc kubenswrapper[4881]: I0126 13:18:38.402275 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4dqjq" event={"ID":"fa570e7b-0e6f-4836-b306-8d56939032e5","Type":"ContainerDied","Data":"b234f5d090d116aaafcf991964179ff25c52532c05fee381acd2cf9edc35bcb2"} Jan 26 13:18:39 crc kubenswrapper[4881]: I0126 13:18:39.413554 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4dqjq" event={"ID":"fa570e7b-0e6f-4836-b306-8d56939032e5","Type":"ContainerStarted","Data":"be2708c1ad6be3c317bb626816104a53434f8e81f3f87b765e1e9192915d6101"} Jan 26 13:18:42 crc kubenswrapper[4881]: I0126 13:18:42.443763 4881 generic.go:334] "Generic (PLEG): container finished" podID="fa570e7b-0e6f-4836-b306-8d56939032e5" containerID="be2708c1ad6be3c317bb626816104a53434f8e81f3f87b765e1e9192915d6101" exitCode=0 Jan 26 13:18:42 crc kubenswrapper[4881]: I0126 13:18:42.443885 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4dqjq" event={"ID":"fa570e7b-0e6f-4836-b306-8d56939032e5","Type":"ContainerDied","Data":"be2708c1ad6be3c317bb626816104a53434f8e81f3f87b765e1e9192915d6101"} Jan 26 13:18:43 crc kubenswrapper[4881]: I0126 13:18:43.718664 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-qcwb6"] Jan 26 13:18:43 crc kubenswrapper[4881]: I0126 13:18:43.725937 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qcwb6" Jan 26 13:18:43 crc kubenswrapper[4881]: I0126 13:18:43.750715 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qcwb6"] Jan 26 13:18:43 crc kubenswrapper[4881]: I0126 13:18:43.881724 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ec7e79e-fdfe-46af-9508-9953cf5b65c5-utilities\") pod \"community-operators-qcwb6\" (UID: \"1ec7e79e-fdfe-46af-9508-9953cf5b65c5\") " pod="openshift-marketplace/community-operators-qcwb6" Jan 26 13:18:43 crc kubenswrapper[4881]: I0126 13:18:43.882081 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hr24\" (UniqueName: \"kubernetes.io/projected/1ec7e79e-fdfe-46af-9508-9953cf5b65c5-kube-api-access-4hr24\") pod \"community-operators-qcwb6\" (UID: \"1ec7e79e-fdfe-46af-9508-9953cf5b65c5\") " pod="openshift-marketplace/community-operators-qcwb6" Jan 26 13:18:43 crc kubenswrapper[4881]: I0126 13:18:43.882240 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ec7e79e-fdfe-46af-9508-9953cf5b65c5-catalog-content\") pod \"community-operators-qcwb6\" (UID: \"1ec7e79e-fdfe-46af-9508-9953cf5b65c5\") " pod="openshift-marketplace/community-operators-qcwb6" Jan 26 13:18:43 crc kubenswrapper[4881]: I0126 13:18:43.903845 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-bkfmg"] Jan 26 13:18:43 crc kubenswrapper[4881]: I0126 13:18:43.906502 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bkfmg" Jan 26 13:18:43 crc kubenswrapper[4881]: I0126 13:18:43.913981 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bkfmg"] Jan 26 13:18:43 crc kubenswrapper[4881]: I0126 13:18:43.984029 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ec7e79e-fdfe-46af-9508-9953cf5b65c5-utilities\") pod \"community-operators-qcwb6\" (UID: \"1ec7e79e-fdfe-46af-9508-9953cf5b65c5\") " pod="openshift-marketplace/community-operators-qcwb6" Jan 26 13:18:43 crc kubenswrapper[4881]: I0126 13:18:43.984314 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4hr24\" (UniqueName: \"kubernetes.io/projected/1ec7e79e-fdfe-46af-9508-9953cf5b65c5-kube-api-access-4hr24\") pod \"community-operators-qcwb6\" (UID: \"1ec7e79e-fdfe-46af-9508-9953cf5b65c5\") " pod="openshift-marketplace/community-operators-qcwb6" Jan 26 13:18:43 crc kubenswrapper[4881]: I0126 13:18:43.984374 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ec7e79e-fdfe-46af-9508-9953cf5b65c5-catalog-content\") pod \"community-operators-qcwb6\" (UID: \"1ec7e79e-fdfe-46af-9508-9953cf5b65c5\") " pod="openshift-marketplace/community-operators-qcwb6" Jan 26 13:18:43 crc kubenswrapper[4881]: I0126 13:18:43.985195 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ec7e79e-fdfe-46af-9508-9953cf5b65c5-catalog-content\") pod \"community-operators-qcwb6\" (UID: \"1ec7e79e-fdfe-46af-9508-9953cf5b65c5\") " pod="openshift-marketplace/community-operators-qcwb6" Jan 26 13:18:43 crc kubenswrapper[4881]: I0126 13:18:43.985432 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ec7e79e-fdfe-46af-9508-9953cf5b65c5-utilities\") pod \"community-operators-qcwb6\" (UID: \"1ec7e79e-fdfe-46af-9508-9953cf5b65c5\") " pod="openshift-marketplace/community-operators-qcwb6" Jan 26 13:18:44 crc kubenswrapper[4881]: I0126 13:18:44.002648 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hr24\" (UniqueName: \"kubernetes.io/projected/1ec7e79e-fdfe-46af-9508-9953cf5b65c5-kube-api-access-4hr24\") pod \"community-operators-qcwb6\" (UID: \"1ec7e79e-fdfe-46af-9508-9953cf5b65c5\") " pod="openshift-marketplace/community-operators-qcwb6" Jan 26 13:18:44 crc kubenswrapper[4881]: I0126 13:18:44.063062 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qcwb6" Jan 26 13:18:44 crc kubenswrapper[4881]: I0126 13:18:44.086016 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eccb61b2-41fd-49ac-ae31-44a4b1aceff0-catalog-content\") pod \"redhat-marketplace-bkfmg\" (UID: \"eccb61b2-41fd-49ac-ae31-44a4b1aceff0\") " pod="openshift-marketplace/redhat-marketplace-bkfmg" Jan 26 13:18:44 crc kubenswrapper[4881]: I0126 13:18:44.086131 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eccb61b2-41fd-49ac-ae31-44a4b1aceff0-utilities\") pod \"redhat-marketplace-bkfmg\" (UID: \"eccb61b2-41fd-49ac-ae31-44a4b1aceff0\") " pod="openshift-marketplace/redhat-marketplace-bkfmg" Jan 26 13:18:44 crc kubenswrapper[4881]: I0126 13:18:44.086175 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzw6t\" (UniqueName: \"kubernetes.io/projected/eccb61b2-41fd-49ac-ae31-44a4b1aceff0-kube-api-access-lzw6t\") pod \"redhat-marketplace-bkfmg\" (UID: \"eccb61b2-41fd-49ac-ae31-44a4b1aceff0\") " pod="openshift-marketplace/redhat-marketplace-bkfmg" Jan 26 13:18:44 crc kubenswrapper[4881]: I0126 13:18:44.187576 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eccb61b2-41fd-49ac-ae31-44a4b1aceff0-utilities\") pod \"redhat-marketplace-bkfmg\" (UID: \"eccb61b2-41fd-49ac-ae31-44a4b1aceff0\") " pod="openshift-marketplace/redhat-marketplace-bkfmg" Jan 26 13:18:44 crc kubenswrapper[4881]: I0126 13:18:44.187676 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzw6t\" (UniqueName: \"kubernetes.io/projected/eccb61b2-41fd-49ac-ae31-44a4b1aceff0-kube-api-access-lzw6t\") pod \"redhat-marketplace-bkfmg\" (UID: \"eccb61b2-41fd-49ac-ae31-44a4b1aceff0\") " pod="openshift-marketplace/redhat-marketplace-bkfmg" Jan 26 13:18:44 crc kubenswrapper[4881]: I0126 13:18:44.187810 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eccb61b2-41fd-49ac-ae31-44a4b1aceff0-catalog-content\") pod \"redhat-marketplace-bkfmg\" (UID: \"eccb61b2-41fd-49ac-ae31-44a4b1aceff0\") " pod="openshift-marketplace/redhat-marketplace-bkfmg" Jan 26 13:18:44 crc kubenswrapper[4881]: I0126 13:18:44.188027 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eccb61b2-41fd-49ac-ae31-44a4b1aceff0-utilities\") pod \"redhat-marketplace-bkfmg\" (UID: \"eccb61b2-41fd-49ac-ae31-44a4b1aceff0\") " pod="openshift-marketplace/redhat-marketplace-bkfmg" Jan 26 13:18:44 crc kubenswrapper[4881]: I0126 13:18:44.188423 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eccb61b2-41fd-49ac-ae31-44a4b1aceff0-catalog-content\") pod \"redhat-marketplace-bkfmg\" (UID: \"eccb61b2-41fd-49ac-ae31-44a4b1aceff0\") " pod="openshift-marketplace/redhat-marketplace-bkfmg" Jan 26 13:18:44 crc kubenswrapper[4881]: I0126 13:18:44.210157 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzw6t\" (UniqueName: \"kubernetes.io/projected/eccb61b2-41fd-49ac-ae31-44a4b1aceff0-kube-api-access-lzw6t\") pod 
\"redhat-marketplace-bkfmg\" (UID: \"eccb61b2-41fd-49ac-ae31-44a4b1aceff0\") " pod="openshift-marketplace/redhat-marketplace-bkfmg" Jan 26 13:18:44 crc kubenswrapper[4881]: I0126 13:18:44.284100 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bkfmg" Jan 26 13:18:44 crc kubenswrapper[4881]: I0126 13:18:44.679105 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qcwb6"] Jan 26 13:18:44 crc kubenswrapper[4881]: I0126 13:18:44.816608 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bkfmg"] Jan 26 13:18:44 crc kubenswrapper[4881]: W0126 13:18:44.819300 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeccb61b2_41fd_49ac_ae31_44a4b1aceff0.slice/crio-7b3f7d2ead956af4a57283eabee8d3882072ae7addf7d5ff95e0af3ad5b53950 WatchSource:0}: Error finding container 7b3f7d2ead956af4a57283eabee8d3882072ae7addf7d5ff95e0af3ad5b53950: Status 404 returned error can't find the container with id 7b3f7d2ead956af4a57283eabee8d3882072ae7addf7d5ff95e0af3ad5b53950 Jan 26 13:18:45 crc kubenswrapper[4881]: I0126 13:18:45.500421 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bkfmg" event={"ID":"eccb61b2-41fd-49ac-ae31-44a4b1aceff0","Type":"ContainerStarted","Data":"7b3f7d2ead956af4a57283eabee8d3882072ae7addf7d5ff95e0af3ad5b53950"} Jan 26 13:18:45 crc kubenswrapper[4881]: I0126 13:18:45.503773 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qcwb6" event={"ID":"1ec7e79e-fdfe-46af-9508-9953cf5b65c5","Type":"ContainerStarted","Data":"57fe0e95588c4d97e9e7afd422715ae6ded1a51d572900066896e4b47190e15a"} Jan 26 13:18:45 crc kubenswrapper[4881]: I0126 13:18:45.503889 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qcwb6" event={"ID":"1ec7e79e-fdfe-46af-9508-9953cf5b65c5","Type":"ContainerStarted","Data":"473b93cbd0e74ba8e9d77eaa7f83ad69c2571a48e784d6913e33dd843712d210"} Jan 26 13:18:46 crc kubenswrapper[4881]: I0126 13:18:46.518116 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4dqjq" event={"ID":"fa570e7b-0e6f-4836-b306-8d56939032e5","Type":"ContainerStarted","Data":"c8ad72d6795b507c668e4ec4b84845d693f62d4d1734f51f326d5b9abb390479"} Jan 26 13:18:46 crc kubenswrapper[4881]: I0126 13:18:46.519395 4881 generic.go:334] "Generic (PLEG): container finished" podID="eccb61b2-41fd-49ac-ae31-44a4b1aceff0" containerID="cac796995c96be7f4717a83513ee69a9fe4650803350fcb1f109b2fc70369061" exitCode=0 Jan 26 13:18:46 crc kubenswrapper[4881]: I0126 13:18:46.519454 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bkfmg" event={"ID":"eccb61b2-41fd-49ac-ae31-44a4b1aceff0","Type":"ContainerDied","Data":"cac796995c96be7f4717a83513ee69a9fe4650803350fcb1f109b2fc70369061"} Jan 26 13:18:46 crc kubenswrapper[4881]: I0126 13:18:46.524421 4881 generic.go:334] "Generic (PLEG): container finished" podID="1ec7e79e-fdfe-46af-9508-9953cf5b65c5" containerID="57fe0e95588c4d97e9e7afd422715ae6ded1a51d572900066896e4b47190e15a" exitCode=0 Jan 26 13:18:46 crc kubenswrapper[4881]: I0126 13:18:46.524487 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qcwb6" 
event={"ID":"1ec7e79e-fdfe-46af-9508-9953cf5b65c5","Type":"ContainerDied","Data":"57fe0e95588c4d97e9e7afd422715ae6ded1a51d572900066896e4b47190e15a"} Jan 26 13:18:46 crc kubenswrapper[4881]: I0126 13:18:46.563384 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-4dqjq" podStartSLOduration=3.099340018 podStartE2EDuration="10.563358103s" podCreationTimestamp="2026-01-26 13:18:36 +0000 UTC" firstStartedPulling="2026-01-26 13:18:38.405352395 +0000 UTC m=+2590.884662461" lastFinishedPulling="2026-01-26 13:18:45.86937052 +0000 UTC m=+2598.348680546" observedRunningTime="2026-01-26 13:18:46.550624294 +0000 UTC m=+2599.029934320" watchObservedRunningTime="2026-01-26 13:18:46.563358103 +0000 UTC m=+2599.042668129" Jan 26 13:18:46 crc kubenswrapper[4881]: I0126 13:18:46.856151 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-4dqjq" Jan 26 13:18:46 crc kubenswrapper[4881]: I0126 13:18:46.856208 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-4dqjq" Jan 26 13:18:47 crc kubenswrapper[4881]: I0126 13:18:47.082935 4881 scope.go:117] "RemoveContainer" containerID="cfbf2717bd5606e33fa4a304b006f0f82f182052a66313b15f98a11bb12d2b4c" Jan 26 13:18:47 crc kubenswrapper[4881]: E0126 13:18:47.083426 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" Jan 26 13:18:47 crc kubenswrapper[4881]: I0126 13:18:47.934405 4881 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-4dqjq" podUID="fa570e7b-0e6f-4836-b306-8d56939032e5" containerName="registry-server" probeResult="failure" output=< Jan 26 13:18:47 crc kubenswrapper[4881]: timeout: failed to connect service ":50051" within 1s Jan 26 13:18:47 crc kubenswrapper[4881]: > Jan 26 13:18:50 crc kubenswrapper[4881]: I0126 13:18:50.567800 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bkfmg" event={"ID":"eccb61b2-41fd-49ac-ae31-44a4b1aceff0","Type":"ContainerStarted","Data":"d2f2fc1e710007c5f8f663d1e66e2721a359be48bbda69a049c1d1b6e1262c88"} Jan 26 13:18:50 crc kubenswrapper[4881]: I0126 13:18:50.572059 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qcwb6" event={"ID":"1ec7e79e-fdfe-46af-9508-9953cf5b65c5","Type":"ContainerStarted","Data":"2d6a0850800a94a9d4922f8e8409ba511bec0aef3e493489fa9a041d74b8f451"} Jan 26 13:18:53 crc kubenswrapper[4881]: I0126 13:18:53.610336 4881 generic.go:334] "Generic (PLEG): container finished" podID="1ec7e79e-fdfe-46af-9508-9953cf5b65c5" containerID="2d6a0850800a94a9d4922f8e8409ba511bec0aef3e493489fa9a041d74b8f451" exitCode=0 Jan 26 13:18:53 crc kubenswrapper[4881]: I0126 13:18:53.610406 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qcwb6" event={"ID":"1ec7e79e-fdfe-46af-9508-9953cf5b65c5","Type":"ContainerDied","Data":"2d6a0850800a94a9d4922f8e8409ba511bec0aef3e493489fa9a041d74b8f451"} Jan 26 13:18:54 crc kubenswrapper[4881]: I0126 13:18:54.629178 4881 generic.go:334] 
"Generic (PLEG): container finished" podID="eccb61b2-41fd-49ac-ae31-44a4b1aceff0" containerID="d2f2fc1e710007c5f8f663d1e66e2721a359be48bbda69a049c1d1b6e1262c88" exitCode=0 Jan 26 13:18:54 crc kubenswrapper[4881]: I0126 13:18:54.629269 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bkfmg" event={"ID":"eccb61b2-41fd-49ac-ae31-44a4b1aceff0","Type":"ContainerDied","Data":"d2f2fc1e710007c5f8f663d1e66e2721a359be48bbda69a049c1d1b6e1262c88"} Jan 26 13:18:56 crc kubenswrapper[4881]: I0126 13:18:56.652904 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qcwb6" event={"ID":"1ec7e79e-fdfe-46af-9508-9953cf5b65c5","Type":"ContainerStarted","Data":"f6111639c6aba0ae60c3e941c4af10759e0fabbf14b2d40e02d6d583c7c46e1f"} Jan 26 13:18:56 crc kubenswrapper[4881]: I0126 13:18:56.685339 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-qcwb6" podStartSLOduration=4.097667192 podStartE2EDuration="13.685314135s" podCreationTimestamp="2026-01-26 13:18:43 +0000 UTC" firstStartedPulling="2026-01-26 13:18:46.525748531 +0000 UTC m=+2599.005058557" lastFinishedPulling="2026-01-26 13:18:56.113395474 +0000 UTC m=+2608.592705500" observedRunningTime="2026-01-26 13:18:56.680913008 +0000 UTC m=+2609.160223034" watchObservedRunningTime="2026-01-26 13:18:56.685314135 +0000 UTC m=+2609.164624161" Jan 26 13:18:57 crc kubenswrapper[4881]: I0126 13:18:57.667628 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bkfmg" event={"ID":"eccb61b2-41fd-49ac-ae31-44a4b1aceff0","Type":"ContainerStarted","Data":"cc4ff2197f9657813e38d605b802c4b1b28bc0863afaa2dd8ffa6784cb148f8c"} Jan 26 13:18:57 crc kubenswrapper[4881]: I0126 13:18:57.693161 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-bkfmg" podStartSLOduration=4.586296043 podStartE2EDuration="14.693136749s" podCreationTimestamp="2026-01-26 13:18:43 +0000 UTC" firstStartedPulling="2026-01-26 13:18:46.520857532 +0000 UTC m=+2599.000167548" lastFinishedPulling="2026-01-26 13:18:56.627698228 +0000 UTC m=+2609.107008254" observedRunningTime="2026-01-26 13:18:57.684050639 +0000 UTC m=+2610.163360675" watchObservedRunningTime="2026-01-26 13:18:57.693136749 +0000 UTC m=+2610.172446775" Jan 26 13:18:57 crc kubenswrapper[4881]: I0126 13:18:57.903032 4881 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-4dqjq" podUID="fa570e7b-0e6f-4836-b306-8d56939032e5" containerName="registry-server" probeResult="failure" output=< Jan 26 13:18:57 crc kubenswrapper[4881]: timeout: failed to connect service ":50051" within 1s Jan 26 13:18:57 crc kubenswrapper[4881]: > Jan 26 13:19:02 crc kubenswrapper[4881]: I0126 13:19:02.082681 4881 scope.go:117] "RemoveContainer" containerID="cfbf2717bd5606e33fa4a304b006f0f82f182052a66313b15f98a11bb12d2b4c" Jan 26 13:19:02 crc kubenswrapper[4881]: E0126 13:19:02.083737 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" Jan 26 13:19:04 crc kubenswrapper[4881]: I0126 
13:19:04.063640 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-qcwb6" Jan 26 13:19:04 crc kubenswrapper[4881]: I0126 13:19:04.064017 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-qcwb6" Jan 26 13:19:04 crc kubenswrapper[4881]: I0126 13:19:04.140043 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-qcwb6" Jan 26 13:19:04 crc kubenswrapper[4881]: I0126 13:19:04.285107 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-bkfmg" Jan 26 13:19:04 crc kubenswrapper[4881]: I0126 13:19:04.285167 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-bkfmg" Jan 26 13:19:04 crc kubenswrapper[4881]: I0126 13:19:04.361553 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-bkfmg" Jan 26 13:19:04 crc kubenswrapper[4881]: I0126 13:19:04.782379 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-qcwb6" Jan 26 13:19:04 crc kubenswrapper[4881]: I0126 13:19:04.798195 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-bkfmg" Jan 26 13:19:06 crc kubenswrapper[4881]: I0126 13:19:05.999959 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qcwb6"] Jan 26 13:19:06 crc kubenswrapper[4881]: I0126 13:19:06.759468 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-qcwb6" podUID="1ec7e79e-fdfe-46af-9508-9953cf5b65c5" containerName="registry-server" containerID="cri-o://f6111639c6aba0ae60c3e941c4af10759e0fabbf14b2d40e02d6d583c7c46e1f" gracePeriod=2 Jan 26 13:19:06 crc kubenswrapper[4881]: I0126 13:19:06.924035 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-4dqjq" Jan 26 13:19:06 crc kubenswrapper[4881]: I0126 13:19:06.986116 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bkfmg"] Jan 26 13:19:06 crc kubenswrapper[4881]: I0126 13:19:06.986352 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-bkfmg" podUID="eccb61b2-41fd-49ac-ae31-44a4b1aceff0" containerName="registry-server" containerID="cri-o://cc4ff2197f9657813e38d605b802c4b1b28bc0863afaa2dd8ffa6784cb148f8c" gracePeriod=2 Jan 26 13:19:06 crc kubenswrapper[4881]: I0126 13:19:06.999482 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-4dqjq" Jan 26 13:19:07 crc kubenswrapper[4881]: I0126 13:19:07.777187 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bkfmg" event={"ID":"eccb61b2-41fd-49ac-ae31-44a4b1aceff0","Type":"ContainerDied","Data":"cc4ff2197f9657813e38d605b802c4b1b28bc0863afaa2dd8ffa6784cb148f8c"} Jan 26 13:19:07 crc kubenswrapper[4881]: I0126 13:19:07.777107 4881 generic.go:334] "Generic (PLEG): container finished" podID="eccb61b2-41fd-49ac-ae31-44a4b1aceff0" containerID="cc4ff2197f9657813e38d605b802c4b1b28bc0863afaa2dd8ffa6784cb148f8c" exitCode=0 Jan 26 13:19:07 crc kubenswrapper[4881]: 
I0126 13:19:07.781357 4881 generic.go:334] "Generic (PLEG): container finished" podID="1ec7e79e-fdfe-46af-9508-9953cf5b65c5" containerID="f6111639c6aba0ae60c3e941c4af10759e0fabbf14b2d40e02d6d583c7c46e1f" exitCode=0 Jan 26 13:19:07 crc kubenswrapper[4881]: I0126 13:19:07.781423 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qcwb6" event={"ID":"1ec7e79e-fdfe-46af-9508-9953cf5b65c5","Type":"ContainerDied","Data":"f6111639c6aba0ae60c3e941c4af10759e0fabbf14b2d40e02d6d583c7c46e1f"} Jan 26 13:19:07 crc kubenswrapper[4881]: I0126 13:19:07.903029 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qcwb6" Jan 26 13:19:07 crc kubenswrapper[4881]: I0126 13:19:07.982311 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ec7e79e-fdfe-46af-9508-9953cf5b65c5-utilities\") pod \"1ec7e79e-fdfe-46af-9508-9953cf5b65c5\" (UID: \"1ec7e79e-fdfe-46af-9508-9953cf5b65c5\") " Jan 26 13:19:07 crc kubenswrapper[4881]: I0126 13:19:07.982504 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4hr24\" (UniqueName: \"kubernetes.io/projected/1ec7e79e-fdfe-46af-9508-9953cf5b65c5-kube-api-access-4hr24\") pod \"1ec7e79e-fdfe-46af-9508-9953cf5b65c5\" (UID: \"1ec7e79e-fdfe-46af-9508-9953cf5b65c5\") " Jan 26 13:19:07 crc kubenswrapper[4881]: I0126 13:19:07.982598 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ec7e79e-fdfe-46af-9508-9953cf5b65c5-catalog-content\") pod \"1ec7e79e-fdfe-46af-9508-9953cf5b65c5\" (UID: \"1ec7e79e-fdfe-46af-9508-9953cf5b65c5\") " Jan 26 13:19:07 crc kubenswrapper[4881]: I0126 13:19:07.984202 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ec7e79e-fdfe-46af-9508-9953cf5b65c5-utilities" (OuterVolumeSpecName: "utilities") pod "1ec7e79e-fdfe-46af-9508-9953cf5b65c5" (UID: "1ec7e79e-fdfe-46af-9508-9953cf5b65c5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:19:07 crc kubenswrapper[4881]: I0126 13:19:07.988559 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ec7e79e-fdfe-46af-9508-9953cf5b65c5-kube-api-access-4hr24" (OuterVolumeSpecName: "kube-api-access-4hr24") pod "1ec7e79e-fdfe-46af-9508-9953cf5b65c5" (UID: "1ec7e79e-fdfe-46af-9508-9953cf5b65c5"). InnerVolumeSpecName "kube-api-access-4hr24". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:19:08 crc kubenswrapper[4881]: I0126 13:19:08.035062 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ec7e79e-fdfe-46af-9508-9953cf5b65c5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1ec7e79e-fdfe-46af-9508-9953cf5b65c5" (UID: "1ec7e79e-fdfe-46af-9508-9953cf5b65c5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:19:08 crc kubenswrapper[4881]: I0126 13:19:08.066446 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bkfmg" Jan 26 13:19:08 crc kubenswrapper[4881]: I0126 13:19:08.085022 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ec7e79e-fdfe-46af-9508-9953cf5b65c5-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 13:19:08 crc kubenswrapper[4881]: I0126 13:19:08.085062 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4hr24\" (UniqueName: \"kubernetes.io/projected/1ec7e79e-fdfe-46af-9508-9953cf5b65c5-kube-api-access-4hr24\") on node \"crc\" DevicePath \"\"" Jan 26 13:19:08 crc kubenswrapper[4881]: I0126 13:19:08.085076 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ec7e79e-fdfe-46af-9508-9953cf5b65c5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 13:19:08 crc kubenswrapper[4881]: I0126 13:19:08.186018 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eccb61b2-41fd-49ac-ae31-44a4b1aceff0-utilities\") pod \"eccb61b2-41fd-49ac-ae31-44a4b1aceff0\" (UID: \"eccb61b2-41fd-49ac-ae31-44a4b1aceff0\") " Jan 26 13:19:08 crc kubenswrapper[4881]: I0126 13:19:08.186392 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eccb61b2-41fd-49ac-ae31-44a4b1aceff0-catalog-content\") pod \"eccb61b2-41fd-49ac-ae31-44a4b1aceff0\" (UID: \"eccb61b2-41fd-49ac-ae31-44a4b1aceff0\") " Jan 26 13:19:08 crc kubenswrapper[4881]: I0126 13:19:08.186547 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzw6t\" (UniqueName: \"kubernetes.io/projected/eccb61b2-41fd-49ac-ae31-44a4b1aceff0-kube-api-access-lzw6t\") pod \"eccb61b2-41fd-49ac-ae31-44a4b1aceff0\" (UID: \"eccb61b2-41fd-49ac-ae31-44a4b1aceff0\") " Jan 26 13:19:08 crc kubenswrapper[4881]: I0126 13:19:08.187492 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eccb61b2-41fd-49ac-ae31-44a4b1aceff0-utilities" (OuterVolumeSpecName: "utilities") pod "eccb61b2-41fd-49ac-ae31-44a4b1aceff0" (UID: "eccb61b2-41fd-49ac-ae31-44a4b1aceff0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:19:08 crc kubenswrapper[4881]: I0126 13:19:08.192793 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eccb61b2-41fd-49ac-ae31-44a4b1aceff0-kube-api-access-lzw6t" (OuterVolumeSpecName: "kube-api-access-lzw6t") pod "eccb61b2-41fd-49ac-ae31-44a4b1aceff0" (UID: "eccb61b2-41fd-49ac-ae31-44a4b1aceff0"). InnerVolumeSpecName "kube-api-access-lzw6t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:19:08 crc kubenswrapper[4881]: I0126 13:19:08.211900 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eccb61b2-41fd-49ac-ae31-44a4b1aceff0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "eccb61b2-41fd-49ac-ae31-44a4b1aceff0" (UID: "eccb61b2-41fd-49ac-ae31-44a4b1aceff0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:19:08 crc kubenswrapper[4881]: I0126 13:19:08.289468 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eccb61b2-41fd-49ac-ae31-44a4b1aceff0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 13:19:08 crc kubenswrapper[4881]: I0126 13:19:08.289541 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzw6t\" (UniqueName: \"kubernetes.io/projected/eccb61b2-41fd-49ac-ae31-44a4b1aceff0-kube-api-access-lzw6t\") on node \"crc\" DevicePath \"\"" Jan 26 13:19:08 crc kubenswrapper[4881]: I0126 13:19:08.289564 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eccb61b2-41fd-49ac-ae31-44a4b1aceff0-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 13:19:08 crc kubenswrapper[4881]: I0126 13:19:08.797631 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bkfmg" event={"ID":"eccb61b2-41fd-49ac-ae31-44a4b1aceff0","Type":"ContainerDied","Data":"7b3f7d2ead956af4a57283eabee8d3882072ae7addf7d5ff95e0af3ad5b53950"} Jan 26 13:19:08 crc kubenswrapper[4881]: I0126 13:19:08.797651 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bkfmg" Jan 26 13:19:08 crc kubenswrapper[4881]: I0126 13:19:08.798086 4881 scope.go:117] "RemoveContainer" containerID="cc4ff2197f9657813e38d605b802c4b1b28bc0863afaa2dd8ffa6784cb148f8c" Jan 26 13:19:08 crc kubenswrapper[4881]: I0126 13:19:08.802493 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qcwb6" event={"ID":"1ec7e79e-fdfe-46af-9508-9953cf5b65c5","Type":"ContainerDied","Data":"473b93cbd0e74ba8e9d77eaa7f83ad69c2571a48e784d6913e33dd843712d210"} Jan 26 13:19:08 crc kubenswrapper[4881]: I0126 13:19:08.802697 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qcwb6" Jan 26 13:19:08 crc kubenswrapper[4881]: I0126 13:19:08.828258 4881 scope.go:117] "RemoveContainer" containerID="d2f2fc1e710007c5f8f663d1e66e2721a359be48bbda69a049c1d1b6e1262c88" Jan 26 13:19:08 crc kubenswrapper[4881]: I0126 13:19:08.851787 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qcwb6"] Jan 26 13:19:08 crc kubenswrapper[4881]: I0126 13:19:08.875597 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-qcwb6"] Jan 26 13:19:08 crc kubenswrapper[4881]: I0126 13:19:08.878918 4881 scope.go:117] "RemoveContainer" containerID="cac796995c96be7f4717a83513ee69a9fe4650803350fcb1f109b2fc70369061" Jan 26 13:19:08 crc kubenswrapper[4881]: I0126 13:19:08.879750 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bkfmg"] Jan 26 13:19:08 crc kubenswrapper[4881]: I0126 13:19:08.889032 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-bkfmg"] Jan 26 13:19:08 crc kubenswrapper[4881]: I0126 13:19:08.916765 4881 scope.go:117] "RemoveContainer" containerID="f6111639c6aba0ae60c3e941c4af10759e0fabbf14b2d40e02d6d583c7c46e1f" Jan 26 13:19:08 crc kubenswrapper[4881]: I0126 13:19:08.961603 4881 scope.go:117] "RemoveContainer" containerID="2d6a0850800a94a9d4922f8e8409ba511bec0aef3e493489fa9a041d74b8f451" Jan 26 13:19:08 crc kubenswrapper[4881]: I0126 13:19:08.998190 4881 scope.go:117] "RemoveContainer" containerID="57fe0e95588c4d97e9e7afd422715ae6ded1a51d572900066896e4b47190e15a" Jan 26 13:19:09 crc kubenswrapper[4881]: I0126 13:19:09.390283 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4dqjq"] Jan 26 13:19:09 crc kubenswrapper[4881]: I0126 13:19:09.390559 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-4dqjq" podUID="fa570e7b-0e6f-4836-b306-8d56939032e5" containerName="registry-server" containerID="cri-o://c8ad72d6795b507c668e4ec4b84845d693f62d4d1734f51f326d5b9abb390479" gracePeriod=2 Jan 26 13:19:09 crc kubenswrapper[4881]: I0126 13:19:09.821194 4881 generic.go:334] "Generic (PLEG): container finished" podID="fa570e7b-0e6f-4836-b306-8d56939032e5" containerID="c8ad72d6795b507c668e4ec4b84845d693f62d4d1734f51f326d5b9abb390479" exitCode=0 Jan 26 13:19:09 crc kubenswrapper[4881]: I0126 13:19:09.821394 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4dqjq" event={"ID":"fa570e7b-0e6f-4836-b306-8d56939032e5","Type":"ContainerDied","Data":"c8ad72d6795b507c668e4ec4b84845d693f62d4d1734f51f326d5b9abb390479"} Jan 26 13:19:09 crc kubenswrapper[4881]: I0126 13:19:09.821690 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4dqjq" event={"ID":"fa570e7b-0e6f-4836-b306-8d56939032e5","Type":"ContainerDied","Data":"15d6854b3ba442a701b559087070eac3e1f13634bdd79c384154335b4616a65a"} Jan 26 13:19:09 crc kubenswrapper[4881]: I0126 13:19:09.821714 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="15d6854b3ba442a701b559087070eac3e1f13634bdd79c384154335b4616a65a" Jan 26 13:19:09 crc kubenswrapper[4881]: I0126 13:19:09.908959 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4dqjq" Jan 26 13:19:10 crc kubenswrapper[4881]: I0126 13:19:10.026032 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa570e7b-0e6f-4836-b306-8d56939032e5-utilities\") pod \"fa570e7b-0e6f-4836-b306-8d56939032e5\" (UID: \"fa570e7b-0e6f-4836-b306-8d56939032e5\") " Jan 26 13:19:10 crc kubenswrapper[4881]: I0126 13:19:10.026080 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa570e7b-0e6f-4836-b306-8d56939032e5-catalog-content\") pod \"fa570e7b-0e6f-4836-b306-8d56939032e5\" (UID: \"fa570e7b-0e6f-4836-b306-8d56939032e5\") " Jan 26 13:19:10 crc kubenswrapper[4881]: I0126 13:19:10.026107 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9td9d\" (UniqueName: \"kubernetes.io/projected/fa570e7b-0e6f-4836-b306-8d56939032e5-kube-api-access-9td9d\") pod \"fa570e7b-0e6f-4836-b306-8d56939032e5\" (UID: \"fa570e7b-0e6f-4836-b306-8d56939032e5\") " Jan 26 13:19:10 crc kubenswrapper[4881]: I0126 13:19:10.027692 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa570e7b-0e6f-4836-b306-8d56939032e5-utilities" (OuterVolumeSpecName: "utilities") pod "fa570e7b-0e6f-4836-b306-8d56939032e5" (UID: "fa570e7b-0e6f-4836-b306-8d56939032e5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:19:10 crc kubenswrapper[4881]: I0126 13:19:10.033792 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa570e7b-0e6f-4836-b306-8d56939032e5-kube-api-access-9td9d" (OuterVolumeSpecName: "kube-api-access-9td9d") pod "fa570e7b-0e6f-4836-b306-8d56939032e5" (UID: "fa570e7b-0e6f-4836-b306-8d56939032e5"). InnerVolumeSpecName "kube-api-access-9td9d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:19:10 crc kubenswrapper[4881]: I0126 13:19:10.107150 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ec7e79e-fdfe-46af-9508-9953cf5b65c5" path="/var/lib/kubelet/pods/1ec7e79e-fdfe-46af-9508-9953cf5b65c5/volumes" Jan 26 13:19:10 crc kubenswrapper[4881]: I0126 13:19:10.109084 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eccb61b2-41fd-49ac-ae31-44a4b1aceff0" path="/var/lib/kubelet/pods/eccb61b2-41fd-49ac-ae31-44a4b1aceff0/volumes" Jan 26 13:19:10 crc kubenswrapper[4881]: I0126 13:19:10.129452 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa570e7b-0e6f-4836-b306-8d56939032e5-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 13:19:10 crc kubenswrapper[4881]: I0126 13:19:10.129504 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9td9d\" (UniqueName: \"kubernetes.io/projected/fa570e7b-0e6f-4836-b306-8d56939032e5-kube-api-access-9td9d\") on node \"crc\" DevicePath \"\"" Jan 26 13:19:10 crc kubenswrapper[4881]: I0126 13:19:10.141737 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa570e7b-0e6f-4836-b306-8d56939032e5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fa570e7b-0e6f-4836-b306-8d56939032e5" (UID: "fa570e7b-0e6f-4836-b306-8d56939032e5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:19:10 crc kubenswrapper[4881]: I0126 13:19:10.233104 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa570e7b-0e6f-4836-b306-8d56939032e5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 13:19:10 crc kubenswrapper[4881]: I0126 13:19:10.837951 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4dqjq" Jan 26 13:19:10 crc kubenswrapper[4881]: I0126 13:19:10.904460 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4dqjq"] Jan 26 13:19:10 crc kubenswrapper[4881]: I0126 13:19:10.917390 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-4dqjq"] Jan 26 13:19:12 crc kubenswrapper[4881]: I0126 13:19:12.100165 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa570e7b-0e6f-4836-b306-8d56939032e5" path="/var/lib/kubelet/pods/fa570e7b-0e6f-4836-b306-8d56939032e5/volumes" Jan 26 13:19:17 crc kubenswrapper[4881]: I0126 13:19:17.082691 4881 scope.go:117] "RemoveContainer" containerID="cfbf2717bd5606e33fa4a304b006f0f82f182052a66313b15f98a11bb12d2b4c" Jan 26 13:19:17 crc kubenswrapper[4881]: E0126 13:19:17.083775 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" Jan 26 13:19:29 crc kubenswrapper[4881]: I0126 13:19:29.083212 4881 scope.go:117] "RemoveContainer" containerID="cfbf2717bd5606e33fa4a304b006f0f82f182052a66313b15f98a11bb12d2b4c" Jan 26 13:19:29 crc kubenswrapper[4881]: E0126 13:19:29.085330 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" Jan 26 13:19:41 crc kubenswrapper[4881]: I0126 13:19:41.113242 4881 scope.go:117] "RemoveContainer" containerID="cfbf2717bd5606e33fa4a304b006f0f82f182052a66313b15f98a11bb12d2b4c" Jan 26 13:19:41 crc kubenswrapper[4881]: E0126 13:19:41.114217 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" Jan 26 13:19:55 crc kubenswrapper[4881]: I0126 13:19:55.083451 4881 scope.go:117] "RemoveContainer" containerID="cfbf2717bd5606e33fa4a304b006f0f82f182052a66313b15f98a11bb12d2b4c" Jan 26 13:19:55 crc kubenswrapper[4881]: E0126 13:19:55.086624 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" Jan 26 13:20:09 crc kubenswrapper[4881]: I0126 13:20:09.082604 4881 scope.go:117] "RemoveContainer" containerID="cfbf2717bd5606e33fa4a304b006f0f82f182052a66313b15f98a11bb12d2b4c" Jan 26 13:20:09 crc kubenswrapper[4881]: E0126 13:20:09.083384 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" Jan 26 13:20:20 crc kubenswrapper[4881]: I0126 13:20:20.083153 4881 scope.go:117] "RemoveContainer" containerID="cfbf2717bd5606e33fa4a304b006f0f82f182052a66313b15f98a11bb12d2b4c" Jan 26 13:20:20 crc kubenswrapper[4881]: E0126 13:20:20.084064 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" Jan 26 13:20:34 crc kubenswrapper[4881]: I0126 13:20:34.083848 4881 scope.go:117] "RemoveContainer" containerID="cfbf2717bd5606e33fa4a304b006f0f82f182052a66313b15f98a11bb12d2b4c" Jan 26 13:20:34 crc kubenswrapper[4881]: E0126 13:20:34.084932 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" Jan 26 13:20:48 crc kubenswrapper[4881]: I0126 13:20:48.089657 4881 scope.go:117] "RemoveContainer" containerID="cfbf2717bd5606e33fa4a304b006f0f82f182052a66313b15f98a11bb12d2b4c" Jan 26 13:20:48 crc kubenswrapper[4881]: E0126 13:20:48.090922 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" Jan 26 13:21:00 crc kubenswrapper[4881]: I0126 13:21:00.083732 4881 scope.go:117] "RemoveContainer" containerID="cfbf2717bd5606e33fa4a304b006f0f82f182052a66313b15f98a11bb12d2b4c" Jan 26 13:21:01 crc kubenswrapper[4881]: I0126 13:21:01.846234 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" event={"ID":"ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19","Type":"ContainerStarted","Data":"d87f018b7cfe3113cc30adef9c11e46910fcf79249c73d7f009d41973a8ff085"} Jan 26 13:21:39 crc kubenswrapper[4881]: I0126 13:21:39.784030 4881 kubelet.go:2421] "SyncLoop 
ADD" source="api" pods=["openshift-marketplace/certified-operators-7d5wq"] Jan 26 13:21:39 crc kubenswrapper[4881]: E0126 13:21:39.785556 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa570e7b-0e6f-4836-b306-8d56939032e5" containerName="extract-utilities" Jan 26 13:21:39 crc kubenswrapper[4881]: I0126 13:21:39.785572 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa570e7b-0e6f-4836-b306-8d56939032e5" containerName="extract-utilities" Jan 26 13:21:39 crc kubenswrapper[4881]: E0126 13:21:39.785598 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eccb61b2-41fd-49ac-ae31-44a4b1aceff0" containerName="registry-server" Jan 26 13:21:39 crc kubenswrapper[4881]: I0126 13:21:39.785605 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="eccb61b2-41fd-49ac-ae31-44a4b1aceff0" containerName="registry-server" Jan 26 13:21:39 crc kubenswrapper[4881]: E0126 13:21:39.785682 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa570e7b-0e6f-4836-b306-8d56939032e5" containerName="extract-content" Jan 26 13:21:39 crc kubenswrapper[4881]: I0126 13:21:39.785689 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa570e7b-0e6f-4836-b306-8d56939032e5" containerName="extract-content" Jan 26 13:21:39 crc kubenswrapper[4881]: E0126 13:21:39.785704 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eccb61b2-41fd-49ac-ae31-44a4b1aceff0" containerName="extract-content" Jan 26 13:21:39 crc kubenswrapper[4881]: I0126 13:21:39.785709 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="eccb61b2-41fd-49ac-ae31-44a4b1aceff0" containerName="extract-content" Jan 26 13:21:39 crc kubenswrapper[4881]: E0126 13:21:39.785818 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eccb61b2-41fd-49ac-ae31-44a4b1aceff0" containerName="extract-utilities" Jan 26 13:21:39 crc kubenswrapper[4881]: I0126 13:21:39.785838 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="eccb61b2-41fd-49ac-ae31-44a4b1aceff0" containerName="extract-utilities" Jan 26 13:21:39 crc kubenswrapper[4881]: E0126 13:21:39.785876 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ec7e79e-fdfe-46af-9508-9953cf5b65c5" containerName="registry-server" Jan 26 13:21:39 crc kubenswrapper[4881]: I0126 13:21:39.785884 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ec7e79e-fdfe-46af-9508-9953cf5b65c5" containerName="registry-server" Jan 26 13:21:39 crc kubenswrapper[4881]: E0126 13:21:39.785923 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa570e7b-0e6f-4836-b306-8d56939032e5" containerName="registry-server" Jan 26 13:21:39 crc kubenswrapper[4881]: I0126 13:21:39.785934 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa570e7b-0e6f-4836-b306-8d56939032e5" containerName="registry-server" Jan 26 13:21:39 crc kubenswrapper[4881]: E0126 13:21:39.785946 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ec7e79e-fdfe-46af-9508-9953cf5b65c5" containerName="extract-utilities" Jan 26 13:21:39 crc kubenswrapper[4881]: I0126 13:21:39.785952 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ec7e79e-fdfe-46af-9508-9953cf5b65c5" containerName="extract-utilities" Jan 26 13:21:39 crc kubenswrapper[4881]: E0126 13:21:39.785968 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ec7e79e-fdfe-46af-9508-9953cf5b65c5" containerName="extract-content" Jan 26 13:21:39 crc kubenswrapper[4881]: I0126 13:21:39.785976 4881 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="1ec7e79e-fdfe-46af-9508-9953cf5b65c5" containerName="extract-content" Jan 26 13:21:39 crc kubenswrapper[4881]: I0126 13:21:39.786435 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="eccb61b2-41fd-49ac-ae31-44a4b1aceff0" containerName="registry-server" Jan 26 13:21:39 crc kubenswrapper[4881]: I0126 13:21:39.786451 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa570e7b-0e6f-4836-b306-8d56939032e5" containerName="registry-server" Jan 26 13:21:39 crc kubenswrapper[4881]: I0126 13:21:39.786486 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ec7e79e-fdfe-46af-9508-9953cf5b65c5" containerName="registry-server" Jan 26 13:21:39 crc kubenswrapper[4881]: I0126 13:21:39.789476 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7d5wq" Jan 26 13:21:39 crc kubenswrapper[4881]: I0126 13:21:39.814303 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7d5wq"] Jan 26 13:21:39 crc kubenswrapper[4881]: I0126 13:21:39.928681 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b24c559f-8957-4442-8112-874f88d715ec-catalog-content\") pod \"certified-operators-7d5wq\" (UID: \"b24c559f-8957-4442-8112-874f88d715ec\") " pod="openshift-marketplace/certified-operators-7d5wq" Jan 26 13:21:39 crc kubenswrapper[4881]: I0126 13:21:39.928762 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b24c559f-8957-4442-8112-874f88d715ec-utilities\") pod \"certified-operators-7d5wq\" (UID: \"b24c559f-8957-4442-8112-874f88d715ec\") " pod="openshift-marketplace/certified-operators-7d5wq" Jan 26 13:21:39 crc kubenswrapper[4881]: I0126 13:21:39.928875 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8hnb\" (UniqueName: \"kubernetes.io/projected/b24c559f-8957-4442-8112-874f88d715ec-kube-api-access-p8hnb\") pod \"certified-operators-7d5wq\" (UID: \"b24c559f-8957-4442-8112-874f88d715ec\") " pod="openshift-marketplace/certified-operators-7d5wq" Jan 26 13:21:40 crc kubenswrapper[4881]: I0126 13:21:40.030677 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p8hnb\" (UniqueName: \"kubernetes.io/projected/b24c559f-8957-4442-8112-874f88d715ec-kube-api-access-p8hnb\") pod \"certified-operators-7d5wq\" (UID: \"b24c559f-8957-4442-8112-874f88d715ec\") " pod="openshift-marketplace/certified-operators-7d5wq" Jan 26 13:21:40 crc kubenswrapper[4881]: I0126 13:21:40.030941 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b24c559f-8957-4442-8112-874f88d715ec-catalog-content\") pod \"certified-operators-7d5wq\" (UID: \"b24c559f-8957-4442-8112-874f88d715ec\") " pod="openshift-marketplace/certified-operators-7d5wq" Jan 26 13:21:40 crc kubenswrapper[4881]: I0126 13:21:40.031019 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b24c559f-8957-4442-8112-874f88d715ec-utilities\") pod \"certified-operators-7d5wq\" (UID: \"b24c559f-8957-4442-8112-874f88d715ec\") " pod="openshift-marketplace/certified-operators-7d5wq" Jan 26 13:21:40 crc 
kubenswrapper[4881]: I0126 13:21:40.031667 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b24c559f-8957-4442-8112-874f88d715ec-utilities\") pod \"certified-operators-7d5wq\" (UID: \"b24c559f-8957-4442-8112-874f88d715ec\") " pod="openshift-marketplace/certified-operators-7d5wq" Jan 26 13:21:40 crc kubenswrapper[4881]: I0126 13:21:40.031725 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b24c559f-8957-4442-8112-874f88d715ec-catalog-content\") pod \"certified-operators-7d5wq\" (UID: \"b24c559f-8957-4442-8112-874f88d715ec\") " pod="openshift-marketplace/certified-operators-7d5wq" Jan 26 13:21:40 crc kubenswrapper[4881]: I0126 13:21:40.052280 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8hnb\" (UniqueName: \"kubernetes.io/projected/b24c559f-8957-4442-8112-874f88d715ec-kube-api-access-p8hnb\") pod \"certified-operators-7d5wq\" (UID: \"b24c559f-8957-4442-8112-874f88d715ec\") " pod="openshift-marketplace/certified-operators-7d5wq" Jan 26 13:21:40 crc kubenswrapper[4881]: I0126 13:21:40.118645 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7d5wq" Jan 26 13:21:40 crc kubenswrapper[4881]: I0126 13:21:40.644014 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7d5wq"] Jan 26 13:21:41 crc kubenswrapper[4881]: I0126 13:21:41.265982 4881 generic.go:334] "Generic (PLEG): container finished" podID="b24c559f-8957-4442-8112-874f88d715ec" containerID="26e64ca4966dc65721d1a6533aa7b7d9587f863457de5c627bc94023ded8b5c1" exitCode=0 Jan 26 13:21:41 crc kubenswrapper[4881]: I0126 13:21:41.266103 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7d5wq" event={"ID":"b24c559f-8957-4442-8112-874f88d715ec","Type":"ContainerDied","Data":"26e64ca4966dc65721d1a6533aa7b7d9587f863457de5c627bc94023ded8b5c1"} Jan 26 13:21:41 crc kubenswrapper[4881]: I0126 13:21:41.266220 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7d5wq" event={"ID":"b24c559f-8957-4442-8112-874f88d715ec","Type":"ContainerStarted","Data":"0488f736ffbffc03b05a7e2989881aa3b0e27364e36cb78bf909af6bc0b0c133"} Jan 26 13:21:41 crc kubenswrapper[4881]: I0126 13:21:41.269474 4881 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 13:21:42 crc kubenswrapper[4881]: I0126 13:21:42.281961 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7d5wq" event={"ID":"b24c559f-8957-4442-8112-874f88d715ec","Type":"ContainerStarted","Data":"ae615afe7b69464750d142c498e2ec13e538f8e24a31ac1dabd64b8517823ba6"} Jan 26 13:21:43 crc kubenswrapper[4881]: I0126 13:21:43.296103 4881 generic.go:334] "Generic (PLEG): container finished" podID="b24c559f-8957-4442-8112-874f88d715ec" containerID="ae615afe7b69464750d142c498e2ec13e538f8e24a31ac1dabd64b8517823ba6" exitCode=0 Jan 26 13:21:43 crc kubenswrapper[4881]: I0126 13:21:43.296159 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7d5wq" event={"ID":"b24c559f-8957-4442-8112-874f88d715ec","Type":"ContainerDied","Data":"ae615afe7b69464750d142c498e2ec13e538f8e24a31ac1dabd64b8517823ba6"} Jan 26 13:21:45 crc kubenswrapper[4881]: I0126 13:21:45.316177 4881 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7d5wq" event={"ID":"b24c559f-8957-4442-8112-874f88d715ec","Type":"ContainerStarted","Data":"a8590ed703ba7800cb102c7e28328386b00ecb3806f75043bf3935cac1106a18"} Jan 26 13:21:45 crc kubenswrapper[4881]: I0126 13:21:45.342977 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-7d5wq" podStartSLOduration=3.451602358 podStartE2EDuration="6.342956396s" podCreationTimestamp="2026-01-26 13:21:39 +0000 UTC" firstStartedPulling="2026-01-26 13:21:41.269200609 +0000 UTC m=+2773.748510635" lastFinishedPulling="2026-01-26 13:21:44.160554637 +0000 UTC m=+2776.639864673" observedRunningTime="2026-01-26 13:21:45.333691001 +0000 UTC m=+2777.813001047" watchObservedRunningTime="2026-01-26 13:21:45.342956396 +0000 UTC m=+2777.822266422" Jan 26 13:21:50 crc kubenswrapper[4881]: I0126 13:21:50.119555 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-7d5wq" Jan 26 13:21:50 crc kubenswrapper[4881]: I0126 13:21:50.120119 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-7d5wq" Jan 26 13:21:50 crc kubenswrapper[4881]: I0126 13:21:50.179195 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7d5wq" Jan 26 13:21:50 crc kubenswrapper[4881]: I0126 13:21:50.411830 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-7d5wq" Jan 26 13:21:50 crc kubenswrapper[4881]: I0126 13:21:50.459019 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7d5wq"] Jan 26 13:21:52 crc kubenswrapper[4881]: I0126 13:21:52.377164 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-7d5wq" podUID="b24c559f-8957-4442-8112-874f88d715ec" containerName="registry-server" containerID="cri-o://a8590ed703ba7800cb102c7e28328386b00ecb3806f75043bf3935cac1106a18" gracePeriod=2 Jan 26 13:21:52 crc kubenswrapper[4881]: I0126 13:21:52.876189 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7d5wq" Jan 26 13:21:53 crc kubenswrapper[4881]: I0126 13:21:53.000736 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b24c559f-8957-4442-8112-874f88d715ec-catalog-content\") pod \"b24c559f-8957-4442-8112-874f88d715ec\" (UID: \"b24c559f-8957-4442-8112-874f88d715ec\") " Jan 26 13:21:53 crc kubenswrapper[4881]: I0126 13:21:53.001160 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b24c559f-8957-4442-8112-874f88d715ec-utilities\") pod \"b24c559f-8957-4442-8112-874f88d715ec\" (UID: \"b24c559f-8957-4442-8112-874f88d715ec\") " Jan 26 13:21:53 crc kubenswrapper[4881]: I0126 13:21:53.001237 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p8hnb\" (UniqueName: \"kubernetes.io/projected/b24c559f-8957-4442-8112-874f88d715ec-kube-api-access-p8hnb\") pod \"b24c559f-8957-4442-8112-874f88d715ec\" (UID: \"b24c559f-8957-4442-8112-874f88d715ec\") " Jan 26 13:21:53 crc kubenswrapper[4881]: I0126 13:21:53.002053 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b24c559f-8957-4442-8112-874f88d715ec-utilities" (OuterVolumeSpecName: "utilities") pod "b24c559f-8957-4442-8112-874f88d715ec" (UID: "b24c559f-8957-4442-8112-874f88d715ec"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:21:53 crc kubenswrapper[4881]: I0126 13:21:53.002781 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b24c559f-8957-4442-8112-874f88d715ec-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 13:21:53 crc kubenswrapper[4881]: I0126 13:21:53.009946 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b24c559f-8957-4442-8112-874f88d715ec-kube-api-access-p8hnb" (OuterVolumeSpecName: "kube-api-access-p8hnb") pod "b24c559f-8957-4442-8112-874f88d715ec" (UID: "b24c559f-8957-4442-8112-874f88d715ec"). InnerVolumeSpecName "kube-api-access-p8hnb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:21:53 crc kubenswrapper[4881]: I0126 13:21:53.051423 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b24c559f-8957-4442-8112-874f88d715ec-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b24c559f-8957-4442-8112-874f88d715ec" (UID: "b24c559f-8957-4442-8112-874f88d715ec"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:21:53 crc kubenswrapper[4881]: I0126 13:21:53.105747 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p8hnb\" (UniqueName: \"kubernetes.io/projected/b24c559f-8957-4442-8112-874f88d715ec-kube-api-access-p8hnb\") on node \"crc\" DevicePath \"\"" Jan 26 13:21:53 crc kubenswrapper[4881]: I0126 13:21:53.105794 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b24c559f-8957-4442-8112-874f88d715ec-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 13:21:53 crc kubenswrapper[4881]: I0126 13:21:53.390561 4881 generic.go:334] "Generic (PLEG): container finished" podID="b24c559f-8957-4442-8112-874f88d715ec" containerID="a8590ed703ba7800cb102c7e28328386b00ecb3806f75043bf3935cac1106a18" exitCode=0 Jan 26 13:21:53 crc kubenswrapper[4881]: I0126 13:21:53.390601 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7d5wq" event={"ID":"b24c559f-8957-4442-8112-874f88d715ec","Type":"ContainerDied","Data":"a8590ed703ba7800cb102c7e28328386b00ecb3806f75043bf3935cac1106a18"} Jan 26 13:21:53 crc kubenswrapper[4881]: I0126 13:21:53.390627 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7d5wq" event={"ID":"b24c559f-8957-4442-8112-874f88d715ec","Type":"ContainerDied","Data":"0488f736ffbffc03b05a7e2989881aa3b0e27364e36cb78bf909af6bc0b0c133"} Jan 26 13:21:53 crc kubenswrapper[4881]: I0126 13:21:53.390643 4881 scope.go:117] "RemoveContainer" containerID="a8590ed703ba7800cb102c7e28328386b00ecb3806f75043bf3935cac1106a18" Jan 26 13:21:53 crc kubenswrapper[4881]: I0126 13:21:53.390647 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7d5wq" Jan 26 13:21:53 crc kubenswrapper[4881]: I0126 13:21:53.424118 4881 scope.go:117] "RemoveContainer" containerID="ae615afe7b69464750d142c498e2ec13e538f8e24a31ac1dabd64b8517823ba6" Jan 26 13:21:53 crc kubenswrapper[4881]: I0126 13:21:53.425389 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7d5wq"] Jan 26 13:21:53 crc kubenswrapper[4881]: I0126 13:21:53.436630 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-7d5wq"] Jan 26 13:21:53 crc kubenswrapper[4881]: I0126 13:21:53.447389 4881 scope.go:117] "RemoveContainer" containerID="26e64ca4966dc65721d1a6533aa7b7d9587f863457de5c627bc94023ded8b5c1" Jan 26 13:21:53 crc kubenswrapper[4881]: I0126 13:21:53.509071 4881 scope.go:117] "RemoveContainer" containerID="a8590ed703ba7800cb102c7e28328386b00ecb3806f75043bf3935cac1106a18" Jan 26 13:21:53 crc kubenswrapper[4881]: E0126 13:21:53.511045 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a8590ed703ba7800cb102c7e28328386b00ecb3806f75043bf3935cac1106a18\": container with ID starting with a8590ed703ba7800cb102c7e28328386b00ecb3806f75043bf3935cac1106a18 not found: ID does not exist" containerID="a8590ed703ba7800cb102c7e28328386b00ecb3806f75043bf3935cac1106a18" Jan 26 13:21:53 crc kubenswrapper[4881]: I0126 13:21:53.511082 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8590ed703ba7800cb102c7e28328386b00ecb3806f75043bf3935cac1106a18"} err="failed to get container status \"a8590ed703ba7800cb102c7e28328386b00ecb3806f75043bf3935cac1106a18\": rpc error: code = NotFound desc = could not find container \"a8590ed703ba7800cb102c7e28328386b00ecb3806f75043bf3935cac1106a18\": container with ID starting with a8590ed703ba7800cb102c7e28328386b00ecb3806f75043bf3935cac1106a18 not found: ID does not exist" Jan 26 13:21:53 crc kubenswrapper[4881]: I0126 13:21:53.511109 4881 scope.go:117] "RemoveContainer" containerID="ae615afe7b69464750d142c498e2ec13e538f8e24a31ac1dabd64b8517823ba6" Jan 26 13:21:53 crc kubenswrapper[4881]: E0126 13:21:53.511549 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae615afe7b69464750d142c498e2ec13e538f8e24a31ac1dabd64b8517823ba6\": container with ID starting with ae615afe7b69464750d142c498e2ec13e538f8e24a31ac1dabd64b8517823ba6 not found: ID does not exist" containerID="ae615afe7b69464750d142c498e2ec13e538f8e24a31ac1dabd64b8517823ba6" Jan 26 13:21:53 crc kubenswrapper[4881]: I0126 13:21:53.511593 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae615afe7b69464750d142c498e2ec13e538f8e24a31ac1dabd64b8517823ba6"} err="failed to get container status \"ae615afe7b69464750d142c498e2ec13e538f8e24a31ac1dabd64b8517823ba6\": rpc error: code = NotFound desc = could not find container \"ae615afe7b69464750d142c498e2ec13e538f8e24a31ac1dabd64b8517823ba6\": container with ID starting with ae615afe7b69464750d142c498e2ec13e538f8e24a31ac1dabd64b8517823ba6 not found: ID does not exist" Jan 26 13:21:53 crc kubenswrapper[4881]: I0126 13:21:53.511613 4881 scope.go:117] "RemoveContainer" containerID="26e64ca4966dc65721d1a6533aa7b7d9587f863457de5c627bc94023ded8b5c1" Jan 26 13:21:53 crc kubenswrapper[4881]: E0126 13:21:53.511894 4881 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"26e64ca4966dc65721d1a6533aa7b7d9587f863457de5c627bc94023ded8b5c1\": container with ID starting with 26e64ca4966dc65721d1a6533aa7b7d9587f863457de5c627bc94023ded8b5c1 not found: ID does not exist" containerID="26e64ca4966dc65721d1a6533aa7b7d9587f863457de5c627bc94023ded8b5c1" Jan 26 13:21:53 crc kubenswrapper[4881]: I0126 13:21:53.511940 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26e64ca4966dc65721d1a6533aa7b7d9587f863457de5c627bc94023ded8b5c1"} err="failed to get container status \"26e64ca4966dc65721d1a6533aa7b7d9587f863457de5c627bc94023ded8b5c1\": rpc error: code = NotFound desc = could not find container \"26e64ca4966dc65721d1a6533aa7b7d9587f863457de5c627bc94023ded8b5c1\": container with ID starting with 26e64ca4966dc65721d1a6533aa7b7d9587f863457de5c627bc94023ded8b5c1 not found: ID does not exist" Jan 26 13:21:54 crc kubenswrapper[4881]: I0126 13:21:54.099724 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b24c559f-8957-4442-8112-874f88d715ec" path="/var/lib/kubelet/pods/b24c559f-8957-4442-8112-874f88d715ec/volumes" Jan 26 13:22:40 crc kubenswrapper[4881]: I0126 13:22:40.845724 4881 generic.go:334] "Generic (PLEG): container finished" podID="2190fb1e-77a2-47d2-a0bb-2aaca7948653" containerID="b8899421ce0a78dfebee6ad8754283d5630b1539c01ac56c52fc665322afebf0" exitCode=0 Jan 26 13:22:40 crc kubenswrapper[4881]: I0126 13:22:40.845882 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9cbj8" event={"ID":"2190fb1e-77a2-47d2-a0bb-2aaca7948653","Type":"ContainerDied","Data":"b8899421ce0a78dfebee6ad8754283d5630b1539c01ac56c52fc665322afebf0"} Jan 26 13:22:42 crc kubenswrapper[4881]: I0126 13:22:42.411963 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9cbj8" Jan 26 13:22:42 crc kubenswrapper[4881]: I0126 13:22:42.604759 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2190fb1e-77a2-47d2-a0bb-2aaca7948653-libvirt-combined-ca-bundle\") pod \"2190fb1e-77a2-47d2-a0bb-2aaca7948653\" (UID: \"2190fb1e-77a2-47d2-a0bb-2aaca7948653\") " Jan 26 13:22:42 crc kubenswrapper[4881]: I0126 13:22:42.605131 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/2190fb1e-77a2-47d2-a0bb-2aaca7948653-libvirt-secret-0\") pod \"2190fb1e-77a2-47d2-a0bb-2aaca7948653\" (UID: \"2190fb1e-77a2-47d2-a0bb-2aaca7948653\") " Jan 26 13:22:42 crc kubenswrapper[4881]: I0126 13:22:42.605215 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2190fb1e-77a2-47d2-a0bb-2aaca7948653-ssh-key-openstack-edpm-ipam\") pod \"2190fb1e-77a2-47d2-a0bb-2aaca7948653\" (UID: \"2190fb1e-77a2-47d2-a0bb-2aaca7948653\") " Jan 26 13:22:42 crc kubenswrapper[4881]: I0126 13:22:42.605380 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n9l4n\" (UniqueName: \"kubernetes.io/projected/2190fb1e-77a2-47d2-a0bb-2aaca7948653-kube-api-access-n9l4n\") pod \"2190fb1e-77a2-47d2-a0bb-2aaca7948653\" (UID: \"2190fb1e-77a2-47d2-a0bb-2aaca7948653\") " Jan 26 13:22:42 crc kubenswrapper[4881]: I0126 13:22:42.605884 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2190fb1e-77a2-47d2-a0bb-2aaca7948653-inventory\") pod \"2190fb1e-77a2-47d2-a0bb-2aaca7948653\" (UID: \"2190fb1e-77a2-47d2-a0bb-2aaca7948653\") " Jan 26 13:22:42 crc kubenswrapper[4881]: I0126 13:22:42.611156 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2190fb1e-77a2-47d2-a0bb-2aaca7948653-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "2190fb1e-77a2-47d2-a0bb-2aaca7948653" (UID: "2190fb1e-77a2-47d2-a0bb-2aaca7948653"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:22:42 crc kubenswrapper[4881]: I0126 13:22:42.611724 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2190fb1e-77a2-47d2-a0bb-2aaca7948653-kube-api-access-n9l4n" (OuterVolumeSpecName: "kube-api-access-n9l4n") pod "2190fb1e-77a2-47d2-a0bb-2aaca7948653" (UID: "2190fb1e-77a2-47d2-a0bb-2aaca7948653"). InnerVolumeSpecName "kube-api-access-n9l4n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:22:42 crc kubenswrapper[4881]: I0126 13:22:42.633921 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2190fb1e-77a2-47d2-a0bb-2aaca7948653-inventory" (OuterVolumeSpecName: "inventory") pod "2190fb1e-77a2-47d2-a0bb-2aaca7948653" (UID: "2190fb1e-77a2-47d2-a0bb-2aaca7948653"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:22:42 crc kubenswrapper[4881]: I0126 13:22:42.634450 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2190fb1e-77a2-47d2-a0bb-2aaca7948653-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "2190fb1e-77a2-47d2-a0bb-2aaca7948653" (UID: "2190fb1e-77a2-47d2-a0bb-2aaca7948653"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:22:42 crc kubenswrapper[4881]: I0126 13:22:42.640204 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2190fb1e-77a2-47d2-a0bb-2aaca7948653-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "2190fb1e-77a2-47d2-a0bb-2aaca7948653" (UID: "2190fb1e-77a2-47d2-a0bb-2aaca7948653"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:22:42 crc kubenswrapper[4881]: I0126 13:22:42.708543 4881 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2190fb1e-77a2-47d2-a0bb-2aaca7948653-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 13:22:42 crc kubenswrapper[4881]: I0126 13:22:42.708584 4881 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2190fb1e-77a2-47d2-a0bb-2aaca7948653-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 13:22:42 crc kubenswrapper[4881]: I0126 13:22:42.708600 4881 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/2190fb1e-77a2-47d2-a0bb-2aaca7948653-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Jan 26 13:22:42 crc kubenswrapper[4881]: I0126 13:22:42.708613 4881 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2190fb1e-77a2-47d2-a0bb-2aaca7948653-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 13:22:42 crc kubenswrapper[4881]: I0126 13:22:42.708625 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n9l4n\" (UniqueName: \"kubernetes.io/projected/2190fb1e-77a2-47d2-a0bb-2aaca7948653-kube-api-access-n9l4n\") on node \"crc\" DevicePath \"\"" Jan 26 13:22:42 crc kubenswrapper[4881]: I0126 13:22:42.871158 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9cbj8" event={"ID":"2190fb1e-77a2-47d2-a0bb-2aaca7948653","Type":"ContainerDied","Data":"c97d2a66cee8d56461463c900872d02df11665b4ada28bdf571511d1f12179a6"} Jan 26 13:22:42 crc kubenswrapper[4881]: I0126 13:22:42.871204 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c97d2a66cee8d56461463c900872d02df11665b4ada28bdf571511d1f12179a6" Jan 26 13:22:42 crc kubenswrapper[4881]: I0126 13:22:42.871267 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9cbj8" Jan 26 13:22:42 crc kubenswrapper[4881]: I0126 13:22:42.977063 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-xpfht"] Jan 26 13:22:42 crc kubenswrapper[4881]: E0126 13:22:42.977455 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b24c559f-8957-4442-8112-874f88d715ec" containerName="extract-content" Jan 26 13:22:42 crc kubenswrapper[4881]: I0126 13:22:42.977472 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="b24c559f-8957-4442-8112-874f88d715ec" containerName="extract-content" Jan 26 13:22:42 crc kubenswrapper[4881]: E0126 13:22:42.977489 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b24c559f-8957-4442-8112-874f88d715ec" containerName="extract-utilities" Jan 26 13:22:42 crc kubenswrapper[4881]: I0126 13:22:42.977495 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="b24c559f-8957-4442-8112-874f88d715ec" containerName="extract-utilities" Jan 26 13:22:42 crc kubenswrapper[4881]: E0126 13:22:42.977538 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2190fb1e-77a2-47d2-a0bb-2aaca7948653" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 26 13:22:42 crc kubenswrapper[4881]: I0126 13:22:42.977546 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="2190fb1e-77a2-47d2-a0bb-2aaca7948653" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 26 13:22:42 crc kubenswrapper[4881]: E0126 13:22:42.977564 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b24c559f-8957-4442-8112-874f88d715ec" containerName="registry-server" Jan 26 13:22:42 crc kubenswrapper[4881]: I0126 13:22:42.977571 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="b24c559f-8957-4442-8112-874f88d715ec" containerName="registry-server" Jan 26 13:22:42 crc kubenswrapper[4881]: I0126 13:22:42.977730 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="b24c559f-8957-4442-8112-874f88d715ec" containerName="registry-server" Jan 26 13:22:42 crc kubenswrapper[4881]: I0126 13:22:42.977757 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="2190fb1e-77a2-47d2-a0bb-2aaca7948653" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 26 13:22:42 crc kubenswrapper[4881]: I0126 13:22:42.978429 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xpfht" Jan 26 13:22:42 crc kubenswrapper[4881]: I0126 13:22:42.980997 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 13:22:42 crc kubenswrapper[4881]: I0126 13:22:42.981190 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 13:22:42 crc kubenswrapper[4881]: W0126 13:22:42.981291 4881 reflector.go:561] object-"openstack"/"nova-cell1-compute-config": failed to list *v1.Secret: secrets "nova-cell1-compute-config" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openstack": no relationship found between node 'crc' and this object Jan 26 13:22:42 crc kubenswrapper[4881]: E0126 13:22:42.981347 4881 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"nova-cell1-compute-config\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"nova-cell1-compute-config\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openstack\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 26 13:22:42 crc kubenswrapper[4881]: I0126 13:22:42.981805 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 13:22:42 crc kubenswrapper[4881]: W0126 13:22:42.981835 4881 reflector.go:561] object-"openstack"/"nova-migration-ssh-key": failed to list *v1.Secret: secrets "nova-migration-ssh-key" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openstack": no relationship found between node 'crc' and this object Jan 26 13:22:42 crc kubenswrapper[4881]: I0126 13:22:42.981853 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-2krn6" Jan 26 13:22:42 crc kubenswrapper[4881]: E0126 13:22:42.981879 4881 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"nova-migration-ssh-key\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"nova-migration-ssh-key\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openstack\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 26 13:22:42 crc kubenswrapper[4881]: W0126 13:22:42.982040 4881 reflector.go:561] object-"openstack"/"nova-extra-config": failed to list *v1.ConfigMap: configmaps "nova-extra-config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openstack": no relationship found between node 'crc' and this object Jan 26 13:22:42 crc kubenswrapper[4881]: E0126 13:22:42.982075 4881 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"nova-extra-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"nova-extra-config\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openstack\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 26 13:22:43 crc kubenswrapper[4881]: I0126 13:22:43.007198 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-xpfht"] Jan 26 13:22:43 crc kubenswrapper[4881]: I0126 13:22:43.115621 4881 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/7d45d6c2-3b8a-4ff9-9f43-f14011ee3f6f-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xpfht\" (UID: \"7d45d6c2-3b8a-4ff9-9f43-f14011ee3f6f\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xpfht" Jan 26 13:22:43 crc kubenswrapper[4881]: I0126 13:22:43.115683 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/7d45d6c2-3b8a-4ff9-9f43-f14011ee3f6f-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xpfht\" (UID: \"7d45d6c2-3b8a-4ff9-9f43-f14011ee3f6f\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xpfht" Jan 26 13:22:43 crc kubenswrapper[4881]: I0126 13:22:43.115712 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/7d45d6c2-3b8a-4ff9-9f43-f14011ee3f6f-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xpfht\" (UID: \"7d45d6c2-3b8a-4ff9-9f43-f14011ee3f6f\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xpfht" Jan 26 13:22:43 crc kubenswrapper[4881]: I0126 13:22:43.115773 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7d45d6c2-3b8a-4ff9-9f43-f14011ee3f6f-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xpfht\" (UID: \"7d45d6c2-3b8a-4ff9-9f43-f14011ee3f6f\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xpfht" Jan 26 13:22:43 crc kubenswrapper[4881]: I0126 13:22:43.115806 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d45d6c2-3b8a-4ff9-9f43-f14011ee3f6f-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xpfht\" (UID: \"7d45d6c2-3b8a-4ff9-9f43-f14011ee3f6f\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xpfht" Jan 26 13:22:43 crc kubenswrapper[4881]: I0126 13:22:43.116048 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/7d45d6c2-3b8a-4ff9-9f43-f14011ee3f6f-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xpfht\" (UID: \"7d45d6c2-3b8a-4ff9-9f43-f14011ee3f6f\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xpfht" Jan 26 13:22:43 crc kubenswrapper[4881]: I0126 13:22:43.116187 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/7d45d6c2-3b8a-4ff9-9f43-f14011ee3f6f-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xpfht\" (UID: \"7d45d6c2-3b8a-4ff9-9f43-f14011ee3f6f\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xpfht" Jan 26 13:22:43 crc kubenswrapper[4881]: I0126 13:22:43.116233 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7d45d6c2-3b8a-4ff9-9f43-f14011ee3f6f-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xpfht\" (UID: \"7d45d6c2-3b8a-4ff9-9f43-f14011ee3f6f\") " 
pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xpfht" Jan 26 13:22:43 crc kubenswrapper[4881]: I0126 13:22:43.116285 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49bqn\" (UniqueName: \"kubernetes.io/projected/7d45d6c2-3b8a-4ff9-9f43-f14011ee3f6f-kube-api-access-49bqn\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xpfht\" (UID: \"7d45d6c2-3b8a-4ff9-9f43-f14011ee3f6f\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xpfht" Jan 26 13:22:43 crc kubenswrapper[4881]: I0126 13:22:43.218470 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d45d6c2-3b8a-4ff9-9f43-f14011ee3f6f-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xpfht\" (UID: \"7d45d6c2-3b8a-4ff9-9f43-f14011ee3f6f\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xpfht" Jan 26 13:22:43 crc kubenswrapper[4881]: I0126 13:22:43.218936 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/7d45d6c2-3b8a-4ff9-9f43-f14011ee3f6f-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xpfht\" (UID: \"7d45d6c2-3b8a-4ff9-9f43-f14011ee3f6f\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xpfht" Jan 26 13:22:43 crc kubenswrapper[4881]: I0126 13:22:43.218974 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/7d45d6c2-3b8a-4ff9-9f43-f14011ee3f6f-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xpfht\" (UID: \"7d45d6c2-3b8a-4ff9-9f43-f14011ee3f6f\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xpfht" Jan 26 13:22:43 crc kubenswrapper[4881]: I0126 13:22:43.219013 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7d45d6c2-3b8a-4ff9-9f43-f14011ee3f6f-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xpfht\" (UID: \"7d45d6c2-3b8a-4ff9-9f43-f14011ee3f6f\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xpfht" Jan 26 13:22:43 crc kubenswrapper[4881]: I0126 13:22:43.219056 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-49bqn\" (UniqueName: \"kubernetes.io/projected/7d45d6c2-3b8a-4ff9-9f43-f14011ee3f6f-kube-api-access-49bqn\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xpfht\" (UID: \"7d45d6c2-3b8a-4ff9-9f43-f14011ee3f6f\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xpfht" Jan 26 13:22:43 crc kubenswrapper[4881]: I0126 13:22:43.219263 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/7d45d6c2-3b8a-4ff9-9f43-f14011ee3f6f-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xpfht\" (UID: \"7d45d6c2-3b8a-4ff9-9f43-f14011ee3f6f\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xpfht" Jan 26 13:22:43 crc kubenswrapper[4881]: I0126 13:22:43.219290 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/7d45d6c2-3b8a-4ff9-9f43-f14011ee3f6f-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xpfht\" (UID: \"7d45d6c2-3b8a-4ff9-9f43-f14011ee3f6f\") " 
pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xpfht" Jan 26 13:22:43 crc kubenswrapper[4881]: I0126 13:22:43.219317 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/7d45d6c2-3b8a-4ff9-9f43-f14011ee3f6f-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xpfht\" (UID: \"7d45d6c2-3b8a-4ff9-9f43-f14011ee3f6f\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xpfht" Jan 26 13:22:43 crc kubenswrapper[4881]: I0126 13:22:43.219411 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7d45d6c2-3b8a-4ff9-9f43-f14011ee3f6f-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xpfht\" (UID: \"7d45d6c2-3b8a-4ff9-9f43-f14011ee3f6f\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xpfht" Jan 26 13:22:43 crc kubenswrapper[4881]: I0126 13:22:43.223890 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7d45d6c2-3b8a-4ff9-9f43-f14011ee3f6f-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xpfht\" (UID: \"7d45d6c2-3b8a-4ff9-9f43-f14011ee3f6f\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xpfht" Jan 26 13:22:43 crc kubenswrapper[4881]: I0126 13:22:43.224485 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7d45d6c2-3b8a-4ff9-9f43-f14011ee3f6f-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xpfht\" (UID: \"7d45d6c2-3b8a-4ff9-9f43-f14011ee3f6f\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xpfht" Jan 26 13:22:43 crc kubenswrapper[4881]: I0126 13:22:43.227957 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d45d6c2-3b8a-4ff9-9f43-f14011ee3f6f-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xpfht\" (UID: \"7d45d6c2-3b8a-4ff9-9f43-f14011ee3f6f\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xpfht" Jan 26 13:22:43 crc kubenswrapper[4881]: I0126 13:22:43.238658 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-49bqn\" (UniqueName: \"kubernetes.io/projected/7d45d6c2-3b8a-4ff9-9f43-f14011ee3f6f-kube-api-access-49bqn\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xpfht\" (UID: \"7d45d6c2-3b8a-4ff9-9f43-f14011ee3f6f\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xpfht" Jan 26 13:22:44 crc kubenswrapper[4881]: I0126 13:22:44.074875 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Jan 26 13:22:44 crc kubenswrapper[4881]: I0126 13:22:44.081315 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/7d45d6c2-3b8a-4ff9-9f43-f14011ee3f6f-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xpfht\" (UID: \"7d45d6c2-3b8a-4ff9-9f43-f14011ee3f6f\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xpfht" Jan 26 13:22:44 crc kubenswrapper[4881]: I0126 13:22:44.184614 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Jan 26 13:22:44 crc kubenswrapper[4881]: I0126 13:22:44.196231 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/7d45d6c2-3b8a-4ff9-9f43-f14011ee3f6f-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xpfht\" (UID: \"7d45d6c2-3b8a-4ff9-9f43-f14011ee3f6f\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xpfht" Jan 26 13:22:44 crc kubenswrapper[4881]: I0126 13:22:44.196245 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/7d45d6c2-3b8a-4ff9-9f43-f14011ee3f6f-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xpfht\" (UID: \"7d45d6c2-3b8a-4ff9-9f43-f14011ee3f6f\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xpfht" Jan 26 13:22:44 crc kubenswrapper[4881]: E0126 13:22:44.219588 4881 secret.go:188] Couldn't get secret openstack/nova-cell1-compute-config: failed to sync secret cache: timed out waiting for the condition Jan 26 13:22:44 crc kubenswrapper[4881]: E0126 13:22:44.219696 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7d45d6c2-3b8a-4ff9-9f43-f14011ee3f6f-nova-cell1-compute-config-1 podName:7d45d6c2-3b8a-4ff9-9f43-f14011ee3f6f nodeName:}" failed. No retries permitted until 2026-01-26 13:22:44.719674295 +0000 UTC m=+2837.198984321 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nova-cell1-compute-config-1" (UniqueName: "kubernetes.io/secret/7d45d6c2-3b8a-4ff9-9f43-f14011ee3f6f-nova-cell1-compute-config-1") pod "nova-edpm-deployment-openstack-edpm-ipam-xpfht" (UID: "7d45d6c2-3b8a-4ff9-9f43-f14011ee3f6f") : failed to sync secret cache: timed out waiting for the condition Jan 26 13:22:44 crc kubenswrapper[4881]: E0126 13:22:44.219844 4881 secret.go:188] Couldn't get secret openstack/nova-cell1-compute-config: failed to sync secret cache: timed out waiting for the condition Jan 26 13:22:44 crc kubenswrapper[4881]: E0126 13:22:44.219976 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7d45d6c2-3b8a-4ff9-9f43-f14011ee3f6f-nova-cell1-compute-config-0 podName:7d45d6c2-3b8a-4ff9-9f43-f14011ee3f6f nodeName:}" failed. No retries permitted until 2026-01-26 13:22:44.719952522 +0000 UTC m=+2837.199262588 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "nova-cell1-compute-config-0" (UniqueName: "kubernetes.io/secret/7d45d6c2-3b8a-4ff9-9f43-f14011ee3f6f-nova-cell1-compute-config-0") pod "nova-edpm-deployment-openstack-edpm-ipam-xpfht" (UID: "7d45d6c2-3b8a-4ff9-9f43-f14011ee3f6f") : failed to sync secret cache: timed out waiting for the condition Jan 26 13:22:44 crc kubenswrapper[4881]: I0126 13:22:44.567217 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Jan 26 13:22:44 crc kubenswrapper[4881]: I0126 13:22:44.750357 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/7d45d6c2-3b8a-4ff9-9f43-f14011ee3f6f-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xpfht\" (UID: \"7d45d6c2-3b8a-4ff9-9f43-f14011ee3f6f\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xpfht" Jan 26 13:22:44 crc kubenswrapper[4881]: I0126 13:22:44.750875 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/7d45d6c2-3b8a-4ff9-9f43-f14011ee3f6f-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xpfht\" (UID: \"7d45d6c2-3b8a-4ff9-9f43-f14011ee3f6f\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xpfht" Jan 26 13:22:44 crc kubenswrapper[4881]: I0126 13:22:44.754830 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/7d45d6c2-3b8a-4ff9-9f43-f14011ee3f6f-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xpfht\" (UID: \"7d45d6c2-3b8a-4ff9-9f43-f14011ee3f6f\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xpfht" Jan 26 13:22:44 crc kubenswrapper[4881]: I0126 13:22:44.756010 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/7d45d6c2-3b8a-4ff9-9f43-f14011ee3f6f-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xpfht\" (UID: \"7d45d6c2-3b8a-4ff9-9f43-f14011ee3f6f\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xpfht" Jan 26 13:22:44 crc kubenswrapper[4881]: I0126 13:22:44.794138 4881 util.go:30] "No sandbox for pod can be found. 
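
[Annotation] The nova-cell1-compute-config sequence is self-healing: the node is at first forbidden to list the secret ("no relationship found between node 'crc' and this object" from the node authorizer), so the mount fails with "failed to sync secret cache: timed out waiting for the condition" and is re-queued with durationBeforeRetry 500ms; once the authorization graph catches up, the reflector populates its cache and the retried MountVolume.SetUp succeeds. A plain retry loop in that spirit (the 500ms interval echoes the log; the attempt cap is an arbitrary choice for this sketch):

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// mountWithRetry keeps re-trying a mount whose prerequisite (a synced
// secret cache) may only become available after authorization propagates.
func mountWithRetry(mount func() error, interval time.Duration, attempts int) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = mount(); err == nil {
			return nil
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("giving up after %d attempts: %w", attempts, err)
}

func main() {
	calls := 0
	mount := func() error {
		calls++
		if calls < 3 { // cache not yet synced on the first tries
			return errors.New("failed to sync secret cache: timed out waiting for the condition")
		}
		return nil
	}
	fmt.Println(mountWithRetry(mount, 500*time.Millisecond, 5)) // <nil>
}
```
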
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xpfht" Jan 26 13:22:45 crc kubenswrapper[4881]: I0126 13:22:45.371134 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-xpfht"] Jan 26 13:22:45 crc kubenswrapper[4881]: I0126 13:22:45.926731 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xpfht" event={"ID":"7d45d6c2-3b8a-4ff9-9f43-f14011ee3f6f","Type":"ContainerStarted","Data":"8a909abb5f64a7d9866e8cc360644c4ab7e64b3402a8f806c10a424b82530eb7"} Jan 26 13:22:46 crc kubenswrapper[4881]: I0126 13:22:46.935841 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xpfht" event={"ID":"7d45d6c2-3b8a-4ff9-9f43-f14011ee3f6f","Type":"ContainerStarted","Data":"bd391f6cae08402074a679fbc90d2e13101acbeb7f1834d8c34859dde17b57ab"} Jan 26 13:23:24 crc kubenswrapper[4881]: I0126 13:23:24.790157 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fwlbz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 13:23:24 crc kubenswrapper[4881]: I0126 13:23:24.790624 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 13:23:54 crc kubenswrapper[4881]: I0126 13:23:54.789801 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fwlbz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 13:23:54 crc kubenswrapper[4881]: I0126 13:23:54.790511 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 13:24:24 crc kubenswrapper[4881]: I0126 13:24:24.789888 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fwlbz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 13:24:24 crc kubenswrapper[4881]: I0126 13:24:24.790319 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 13:24:24 crc kubenswrapper[4881]: I0126 13:24:24.790369 4881 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" Jan 26 13:24:24 crc kubenswrapper[4881]: I0126 13:24:24.791189 4881 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"d87f018b7cfe3113cc30adef9c11e46910fcf79249c73d7f009d41973a8ff085"} pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 13:24:24 crc kubenswrapper[4881]: I0126 13:24:24.791258 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" containerName="machine-config-daemon" containerID="cri-o://d87f018b7cfe3113cc30adef9c11e46910fcf79249c73d7f009d41973a8ff085" gracePeriod=600 Jan 26 13:24:24 crc kubenswrapper[4881]: I0126 13:24:24.918151 4881 generic.go:334] "Generic (PLEG): container finished" podID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" containerID="d87f018b7cfe3113cc30adef9c11e46910fcf79249c73d7f009d41973a8ff085" exitCode=0 Jan 26 13:24:24 crc kubenswrapper[4881]: I0126 13:24:24.918229 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" event={"ID":"ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19","Type":"ContainerDied","Data":"d87f018b7cfe3113cc30adef9c11e46910fcf79249c73d7f009d41973a8ff085"} Jan 26 13:24:24 crc kubenswrapper[4881]: I0126 13:24:24.918645 4881 scope.go:117] "RemoveContainer" containerID="cfbf2717bd5606e33fa4a304b006f0f82f182052a66313b15f98a11bb12d2b4c" Jan 26 13:24:25 crc kubenswrapper[4881]: I0126 13:24:25.930203 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" event={"ID":"ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19","Type":"ContainerStarted","Data":"742ede60393d7ca8a4393cea6341038ffa8eebf9a1b4a11fff9f1b373a984190"} Jan 26 13:24:25 crc kubenswrapper[4881]: I0126 13:24:25.952884 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xpfht" podStartSLOduration=103.538684488 podStartE2EDuration="1m43.952860264s" podCreationTimestamp="2026-01-26 13:22:42 +0000 UTC" firstStartedPulling="2026-01-26 13:22:45.394165961 +0000 UTC m=+2837.873475987" lastFinishedPulling="2026-01-26 13:22:45.808341697 +0000 UTC m=+2838.287651763" observedRunningTime="2026-01-26 13:22:47.015112817 +0000 UTC m=+2839.494422843" watchObservedRunningTime="2026-01-26 13:24:25.952860264 +0000 UTC m=+2938.432170300" Jan 26 13:24:55 crc kubenswrapper[4881]: I0126 13:24:55.672673 4881 scope.go:117] "RemoveContainer" containerID="c8ad72d6795b507c668e4ec4b84845d693f62d4d1734f51f326d5b9abb390479" Jan 26 13:24:55 crc kubenswrapper[4881]: I0126 13:24:55.695939 4881 scope.go:117] "RemoveContainer" containerID="b234f5d090d116aaafcf991964179ff25c52532c05fee381acd2cf9edc35bcb2" Jan 26 13:24:55 crc kubenswrapper[4881]: I0126 13:24:55.718692 4881 scope.go:117] "RemoveContainer" containerID="be2708c1ad6be3c317bb626816104a53434f8e81f3f87b765e1e9192915d6101" Jan 26 13:25:24 crc kubenswrapper[4881]: I0126 13:25:24.549087 4881 generic.go:334] "Generic (PLEG): container finished" podID="7d45d6c2-3b8a-4ff9-9f43-f14011ee3f6f" containerID="bd391f6cae08402074a679fbc90d2e13101acbeb7f1834d8c34859dde17b57ab" exitCode=0 Jan 26 13:25:24 crc kubenswrapper[4881]: I0126 13:25:24.549197 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xpfht" event={"ID":"7d45d6c2-3b8a-4ff9-9f43-f14011ee3f6f","Type":"ContainerDied","Data":"bd391f6cae08402074a679fbc90d2e13101acbeb7f1834d8c34859dde17b57ab"} Jan 26 
13:25:25 crc kubenswrapper[4881]: I0126 13:25:25.950579 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xpfht" Jan 26 13:25:26 crc kubenswrapper[4881]: I0126 13:25:26.027205 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/7d45d6c2-3b8a-4ff9-9f43-f14011ee3f6f-nova-extra-config-0\") pod \"7d45d6c2-3b8a-4ff9-9f43-f14011ee3f6f\" (UID: \"7d45d6c2-3b8a-4ff9-9f43-f14011ee3f6f\") " Jan 26 13:25:26 crc kubenswrapper[4881]: I0126 13:25:26.027330 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/7d45d6c2-3b8a-4ff9-9f43-f14011ee3f6f-nova-cell1-compute-config-0\") pod \"7d45d6c2-3b8a-4ff9-9f43-f14011ee3f6f\" (UID: \"7d45d6c2-3b8a-4ff9-9f43-f14011ee3f6f\") " Jan 26 13:25:26 crc kubenswrapper[4881]: I0126 13:25:26.027374 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7d45d6c2-3b8a-4ff9-9f43-f14011ee3f6f-ssh-key-openstack-edpm-ipam\") pod \"7d45d6c2-3b8a-4ff9-9f43-f14011ee3f6f\" (UID: \"7d45d6c2-3b8a-4ff9-9f43-f14011ee3f6f\") " Jan 26 13:25:26 crc kubenswrapper[4881]: I0126 13:25:26.027512 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/7d45d6c2-3b8a-4ff9-9f43-f14011ee3f6f-nova-migration-ssh-key-0\") pod \"7d45d6c2-3b8a-4ff9-9f43-f14011ee3f6f\" (UID: \"7d45d6c2-3b8a-4ff9-9f43-f14011ee3f6f\") " Jan 26 13:25:26 crc kubenswrapper[4881]: I0126 13:25:26.027676 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-49bqn\" (UniqueName: \"kubernetes.io/projected/7d45d6c2-3b8a-4ff9-9f43-f14011ee3f6f-kube-api-access-49bqn\") pod \"7d45d6c2-3b8a-4ff9-9f43-f14011ee3f6f\" (UID: \"7d45d6c2-3b8a-4ff9-9f43-f14011ee3f6f\") " Jan 26 13:25:26 crc kubenswrapper[4881]: I0126 13:25:26.027743 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7d45d6c2-3b8a-4ff9-9f43-f14011ee3f6f-inventory\") pod \"7d45d6c2-3b8a-4ff9-9f43-f14011ee3f6f\" (UID: \"7d45d6c2-3b8a-4ff9-9f43-f14011ee3f6f\") " Jan 26 13:25:26 crc kubenswrapper[4881]: I0126 13:25:26.027833 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/7d45d6c2-3b8a-4ff9-9f43-f14011ee3f6f-nova-cell1-compute-config-1\") pod \"7d45d6c2-3b8a-4ff9-9f43-f14011ee3f6f\" (UID: \"7d45d6c2-3b8a-4ff9-9f43-f14011ee3f6f\") " Jan 26 13:25:26 crc kubenswrapper[4881]: I0126 13:25:26.027958 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/7d45d6c2-3b8a-4ff9-9f43-f14011ee3f6f-nova-migration-ssh-key-1\") pod \"7d45d6c2-3b8a-4ff9-9f43-f14011ee3f6f\" (UID: \"7d45d6c2-3b8a-4ff9-9f43-f14011ee3f6f\") " Jan 26 13:25:26 crc kubenswrapper[4881]: I0126 13:25:26.028048 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d45d6c2-3b8a-4ff9-9f43-f14011ee3f6f-nova-combined-ca-bundle\") pod \"7d45d6c2-3b8a-4ff9-9f43-f14011ee3f6f\" (UID: \"7d45d6c2-3b8a-4ff9-9f43-f14011ee3f6f\") " Jan 26 13:25:26 crc 
kubenswrapper[4881]: I0126 13:25:26.033474 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d45d6c2-3b8a-4ff9-9f43-f14011ee3f6f-kube-api-access-49bqn" (OuterVolumeSpecName: "kube-api-access-49bqn") pod "7d45d6c2-3b8a-4ff9-9f43-f14011ee3f6f" (UID: "7d45d6c2-3b8a-4ff9-9f43-f14011ee3f6f"). InnerVolumeSpecName "kube-api-access-49bqn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:25:26 crc kubenswrapper[4881]: I0126 13:25:26.038904 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d45d6c2-3b8a-4ff9-9f43-f14011ee3f6f-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "7d45d6c2-3b8a-4ff9-9f43-f14011ee3f6f" (UID: "7d45d6c2-3b8a-4ff9-9f43-f14011ee3f6f"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:25:26 crc kubenswrapper[4881]: I0126 13:25:26.065428 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d45d6c2-3b8a-4ff9-9f43-f14011ee3f6f-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "7d45d6c2-3b8a-4ff9-9f43-f14011ee3f6f" (UID: "7d45d6c2-3b8a-4ff9-9f43-f14011ee3f6f"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:25:26 crc kubenswrapper[4881]: I0126 13:25:26.068236 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d45d6c2-3b8a-4ff9-9f43-f14011ee3f6f-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "7d45d6c2-3b8a-4ff9-9f43-f14011ee3f6f" (UID: "7d45d6c2-3b8a-4ff9-9f43-f14011ee3f6f"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:25:26 crc kubenswrapper[4881]: I0126 13:25:26.068585 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d45d6c2-3b8a-4ff9-9f43-f14011ee3f6f-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "7d45d6c2-3b8a-4ff9-9f43-f14011ee3f6f" (UID: "7d45d6c2-3b8a-4ff9-9f43-f14011ee3f6f"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:25:26 crc kubenswrapper[4881]: I0126 13:25:26.070605 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d45d6c2-3b8a-4ff9-9f43-f14011ee3f6f-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "7d45d6c2-3b8a-4ff9-9f43-f14011ee3f6f" (UID: "7d45d6c2-3b8a-4ff9-9f43-f14011ee3f6f"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:25:26 crc kubenswrapper[4881]: I0126 13:25:26.072199 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d45d6c2-3b8a-4ff9-9f43-f14011ee3f6f-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "7d45d6c2-3b8a-4ff9-9f43-f14011ee3f6f" (UID: "7d45d6c2-3b8a-4ff9-9f43-f14011ee3f6f"). InnerVolumeSpecName "nova-cell1-compute-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:25:26 crc kubenswrapper[4881]: I0126 13:25:26.072773 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d45d6c2-3b8a-4ff9-9f43-f14011ee3f6f-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "7d45d6c2-3b8a-4ff9-9f43-f14011ee3f6f" (UID: "7d45d6c2-3b8a-4ff9-9f43-f14011ee3f6f"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:25:26 crc kubenswrapper[4881]: I0126 13:25:26.093803 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d45d6c2-3b8a-4ff9-9f43-f14011ee3f6f-inventory" (OuterVolumeSpecName: "inventory") pod "7d45d6c2-3b8a-4ff9-9f43-f14011ee3f6f" (UID: "7d45d6c2-3b8a-4ff9-9f43-f14011ee3f6f"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:25:26 crc kubenswrapper[4881]: I0126 13:25:26.131004 4881 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/7d45d6c2-3b8a-4ff9-9f43-f14011ee3f6f-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Jan 26 13:25:26 crc kubenswrapper[4881]: I0126 13:25:26.131026 4881 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/7d45d6c2-3b8a-4ff9-9f43-f14011ee3f6f-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Jan 26 13:25:26 crc kubenswrapper[4881]: I0126 13:25:26.131036 4881 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7d45d6c2-3b8a-4ff9-9f43-f14011ee3f6f-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 13:25:26 crc kubenswrapper[4881]: I0126 13:25:26.131044 4881 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/7d45d6c2-3b8a-4ff9-9f43-f14011ee3f6f-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Jan 26 13:25:26 crc kubenswrapper[4881]: I0126 13:25:26.131052 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-49bqn\" (UniqueName: \"kubernetes.io/projected/7d45d6c2-3b8a-4ff9-9f43-f14011ee3f6f-kube-api-access-49bqn\") on node \"crc\" DevicePath \"\"" Jan 26 13:25:26 crc kubenswrapper[4881]: I0126 13:25:26.131061 4881 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7d45d6c2-3b8a-4ff9-9f43-f14011ee3f6f-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 13:25:26 crc kubenswrapper[4881]: I0126 13:25:26.131070 4881 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/7d45d6c2-3b8a-4ff9-9f43-f14011ee3f6f-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Jan 26 13:25:26 crc kubenswrapper[4881]: I0126 13:25:26.131080 4881 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/7d45d6c2-3b8a-4ff9-9f43-f14011ee3f6f-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Jan 26 13:25:26 crc kubenswrapper[4881]: I0126 13:25:26.131088 4881 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d45d6c2-3b8a-4ff9-9f43-f14011ee3f6f-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 13:25:26 crc kubenswrapper[4881]: I0126 13:25:26.568098 4881 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xpfht" event={"ID":"7d45d6c2-3b8a-4ff9-9f43-f14011ee3f6f","Type":"ContainerDied","Data":"8a909abb5f64a7d9866e8cc360644c4ab7e64b3402a8f806c10a424b82530eb7"} Jan 26 13:25:26 crc kubenswrapper[4881]: I0126 13:25:26.568141 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8a909abb5f64a7d9866e8cc360644c4ab7e64b3402a8f806c10a424b82530eb7" Jan 26 13:25:26 crc kubenswrapper[4881]: I0126 13:25:26.568163 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xpfht" Jan 26 13:25:26 crc kubenswrapper[4881]: I0126 13:25:26.669272 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lfk9w"] Jan 26 13:25:26 crc kubenswrapper[4881]: E0126 13:25:26.669709 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d45d6c2-3b8a-4ff9-9f43-f14011ee3f6f" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 26 13:25:26 crc kubenswrapper[4881]: I0126 13:25:26.669726 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d45d6c2-3b8a-4ff9-9f43-f14011ee3f6f" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 26 13:25:26 crc kubenswrapper[4881]: I0126 13:25:26.669920 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d45d6c2-3b8a-4ff9-9f43-f14011ee3f6f" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 26 13:25:26 crc kubenswrapper[4881]: I0126 13:25:26.670563 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lfk9w" Jan 26 13:25:26 crc kubenswrapper[4881]: I0126 13:25:26.672807 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 13:25:26 crc kubenswrapper[4881]: I0126 13:25:26.673015 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Jan 26 13:25:26 crc kubenswrapper[4881]: I0126 13:25:26.673325 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 13:25:26 crc kubenswrapper[4881]: I0126 13:25:26.673459 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 13:25:26 crc kubenswrapper[4881]: I0126 13:25:26.683897 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-2krn6" Jan 26 13:25:26 crc kubenswrapper[4881]: I0126 13:25:26.683938 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lfk9w"] Jan 26 13:25:26 crc kubenswrapper[4881]: I0126 13:25:26.742196 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/1062b2c8-e4fb-4999-aecf-a04dd5157826-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-lfk9w\" (UID: \"1062b2c8-e4fb-4999-aecf-a04dd5157826\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lfk9w" Jan 26 13:25:26 crc kubenswrapper[4881]: I0126 13:25:26.742284 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/1062b2c8-e4fb-4999-aecf-a04dd5157826-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-lfk9w\" (UID: \"1062b2c8-e4fb-4999-aecf-a04dd5157826\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lfk9w" Jan 26 13:25:26 crc kubenswrapper[4881]: I0126 13:25:26.742332 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8rpb\" (UniqueName: \"kubernetes.io/projected/1062b2c8-e4fb-4999-aecf-a04dd5157826-kube-api-access-m8rpb\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-lfk9w\" (UID: \"1062b2c8-e4fb-4999-aecf-a04dd5157826\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lfk9w" Jan 26 13:25:26 crc kubenswrapper[4881]: I0126 13:25:26.742353 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1062b2c8-e4fb-4999-aecf-a04dd5157826-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-lfk9w\" (UID: \"1062b2c8-e4fb-4999-aecf-a04dd5157826\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lfk9w" Jan 26 13:25:26 crc kubenswrapper[4881]: I0126 13:25:26.742379 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/1062b2c8-e4fb-4999-aecf-a04dd5157826-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-lfk9w\" (UID: \"1062b2c8-e4fb-4999-aecf-a04dd5157826\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lfk9w" Jan 26 13:25:26 crc kubenswrapper[4881]: I0126 13:25:26.742404 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/1062b2c8-e4fb-4999-aecf-a04dd5157826-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-lfk9w\" (UID: \"1062b2c8-e4fb-4999-aecf-a04dd5157826\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lfk9w" Jan 26 13:25:26 crc kubenswrapper[4881]: I0126 13:25:26.742446 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1062b2c8-e4fb-4999-aecf-a04dd5157826-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-lfk9w\" (UID: \"1062b2c8-e4fb-4999-aecf-a04dd5157826\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lfk9w" Jan 26 13:25:26 crc kubenswrapper[4881]: I0126 13:25:26.844358 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1062b2c8-e4fb-4999-aecf-a04dd5157826-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-lfk9w\" (UID: \"1062b2c8-e4fb-4999-aecf-a04dd5157826\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lfk9w" Jan 26 13:25:26 crc kubenswrapper[4881]: I0126 13:25:26.844463 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/1062b2c8-e4fb-4999-aecf-a04dd5157826-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-lfk9w\" (UID: \"1062b2c8-e4fb-4999-aecf-a04dd5157826\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lfk9w" Jan 26 13:25:26 crc kubenswrapper[4881]: I0126 13:25:26.844587 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1062b2c8-e4fb-4999-aecf-a04dd5157826-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-lfk9w\" (UID: \"1062b2c8-e4fb-4999-aecf-a04dd5157826\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lfk9w" Jan 26 13:25:26 crc kubenswrapper[4881]: I0126 13:25:26.844652 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m8rpb\" (UniqueName: \"kubernetes.io/projected/1062b2c8-e4fb-4999-aecf-a04dd5157826-kube-api-access-m8rpb\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-lfk9w\" (UID: \"1062b2c8-e4fb-4999-aecf-a04dd5157826\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lfk9w" Jan 26 13:25:26 crc kubenswrapper[4881]: I0126 13:25:26.844685 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1062b2c8-e4fb-4999-aecf-a04dd5157826-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-lfk9w\" (UID: \"1062b2c8-e4fb-4999-aecf-a04dd5157826\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lfk9w" Jan 26 13:25:26 crc kubenswrapper[4881]: I0126 13:25:26.844721 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/1062b2c8-e4fb-4999-aecf-a04dd5157826-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-lfk9w\" (UID: \"1062b2c8-e4fb-4999-aecf-a04dd5157826\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lfk9w" Jan 26 13:25:26 crc kubenswrapper[4881]: I0126 13:25:26.844757 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/1062b2c8-e4fb-4999-aecf-a04dd5157826-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-lfk9w\" (UID: \"1062b2c8-e4fb-4999-aecf-a04dd5157826\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lfk9w" Jan 26 13:25:26 crc kubenswrapper[4881]: I0126 13:25:26.849399 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1062b2c8-e4fb-4999-aecf-a04dd5157826-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-lfk9w\" (UID: \"1062b2c8-e4fb-4999-aecf-a04dd5157826\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lfk9w" Jan 26 13:25:26 crc kubenswrapper[4881]: I0126 13:25:26.849570 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1062b2c8-e4fb-4999-aecf-a04dd5157826-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-lfk9w\" (UID: \"1062b2c8-e4fb-4999-aecf-a04dd5157826\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lfk9w" Jan 26 13:25:26 crc kubenswrapper[4881]: I0126 13:25:26.849866 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/1062b2c8-e4fb-4999-aecf-a04dd5157826-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-lfk9w\" (UID: 
\"1062b2c8-e4fb-4999-aecf-a04dd5157826\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lfk9w" Jan 26 13:25:26 crc kubenswrapper[4881]: I0126 13:25:26.851681 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1062b2c8-e4fb-4999-aecf-a04dd5157826-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-lfk9w\" (UID: \"1062b2c8-e4fb-4999-aecf-a04dd5157826\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lfk9w" Jan 26 13:25:26 crc kubenswrapper[4881]: I0126 13:25:26.855241 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/1062b2c8-e4fb-4999-aecf-a04dd5157826-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-lfk9w\" (UID: \"1062b2c8-e4fb-4999-aecf-a04dd5157826\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lfk9w" Jan 26 13:25:26 crc kubenswrapper[4881]: I0126 13:25:26.855769 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/1062b2c8-e4fb-4999-aecf-a04dd5157826-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-lfk9w\" (UID: \"1062b2c8-e4fb-4999-aecf-a04dd5157826\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lfk9w" Jan 26 13:25:26 crc kubenswrapper[4881]: I0126 13:25:26.862896 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m8rpb\" (UniqueName: \"kubernetes.io/projected/1062b2c8-e4fb-4999-aecf-a04dd5157826-kube-api-access-m8rpb\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-lfk9w\" (UID: \"1062b2c8-e4fb-4999-aecf-a04dd5157826\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lfk9w" Jan 26 13:25:26 crc kubenswrapper[4881]: I0126 13:25:26.992926 4881 util.go:30] "No sandbox for pod can be found. 
Jan 26 13:25:26 crc kubenswrapper[4881]: I0126 13:25:26.992926 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lfk9w"
Jan 26 13:25:27 crc kubenswrapper[4881]: I0126 13:25:27.567546 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lfk9w"]
Jan 26 13:25:27 crc kubenswrapper[4881]: I0126 13:25:27.579305 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lfk9w" event={"ID":"1062b2c8-e4fb-4999-aecf-a04dd5157826","Type":"ContainerStarted","Data":"a2745afa6ebe03bf93abab54b42575d16b34fd511bdcf478015d2adb879f247e"}
Jan 26 13:25:29 crc kubenswrapper[4881]: I0126 13:25:29.167586 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 26 13:25:29 crc kubenswrapper[4881]: I0126 13:25:29.596275 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lfk9w" event={"ID":"1062b2c8-e4fb-4999-aecf-a04dd5157826","Type":"ContainerStarted","Data":"ea85a968cb8c223782eb648dc5eb19c6f065837628364ac69d6e53534ed586a8"}
Jan 26 13:25:30 crc kubenswrapper[4881]: I0126 13:25:30.634115 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lfk9w" podStartSLOduration=3.037589986 podStartE2EDuration="4.634093498s" podCreationTimestamp="2026-01-26 13:25:26 +0000 UTC" firstStartedPulling="2026-01-26 13:25:27.568649868 +0000 UTC m=+3000.047959894" lastFinishedPulling="2026-01-26 13:25:29.16515336 +0000 UTC m=+3001.644463406" observedRunningTime="2026-01-26 13:25:30.623070163 +0000 UTC m=+3003.102380199" watchObservedRunningTime="2026-01-26 13:25:30.634093498 +0000 UTC m=+3003.113403534"
Jan 26 13:26:54 crc kubenswrapper[4881]: I0126 13:26:54.789816 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fwlbz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 13:26:54 crc kubenswrapper[4881]: I0126 13:26:54.790368 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 13:27:24 crc kubenswrapper[4881]: I0126 13:27:24.789033 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fwlbz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 13:27:24 crc kubenswrapper[4881]: I0126 13:27:24.789764 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 13:27:46 crc kubenswrapper[4881]: I0126 13:27:46.032310 4881 generic.go:334] "Generic (PLEG): container finished" podID="1062b2c8-e4fb-4999-aecf-a04dd5157826" containerID="ea85a968cb8c223782eb648dc5eb19c6f065837628364ac69d6e53534ed586a8" exitCode=0
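[Editor's note] The pod_startup_latency_tracker entry above carries its own arithmetic: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration appears to be that end-to-end duration minus the image-pull window (lastFinishedPulling minus firstStartedPulling). A quick check against the printed timestamps, as a sketch; the values are truncated to microseconds because datetime's %f carries at most six fractional digits, so the last digits differ from the log's nanosecond values:

```python
from datetime import datetime

# Timestamps copied from the pod_startup_latency_tracker entry above,
# truncated to microsecond precision.
FMT = "%Y-%m-%d %H:%M:%S.%f"
created      = datetime.strptime("2026-01-26 13:25:26.000000", FMT)
pull_started = datetime.strptime("2026-01-26 13:25:27.568649", FMT)
pull_done    = datetime.strptime("2026-01-26 13:25:29.165153", FMT)
observed_run = datetime.strptime("2026-01-26 13:25:30.634093", FMT)

e2e  = (observed_run - created).total_seconds()    # podStartE2EDuration
pull = (pull_done - pull_started).total_seconds()  # time spent pulling images
slo  = e2e - pull                                  # podStartSLOduration

print(f"E2E={e2e:.6f}s pull={pull:.6f}s SLO={slo:.6f}s")
# E2E=4.634093s pull=1.596504s SLO=3.037589s
```

4.634093 - 1.596504 = 3.037589, matching podStartSLOduration=3.037589986 to the precision kept here: of the 4.6 s from creation to running, about 1.6 s was image pull.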
Jan 26 13:27:46 crc kubenswrapper[4881]: I0126 13:27:46.032397 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lfk9w" event={"ID":"1062b2c8-e4fb-4999-aecf-a04dd5157826","Type":"ContainerDied","Data":"ea85a968cb8c223782eb648dc5eb19c6f065837628364ac69d6e53534ed586a8"}
Jan 26 13:27:47 crc kubenswrapper[4881]: I0126 13:27:47.489386 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lfk9w"
Jan 26 13:27:47 crc kubenswrapper[4881]: I0126 13:27:47.637361 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1062b2c8-e4fb-4999-aecf-a04dd5157826-telemetry-combined-ca-bundle\") pod \"1062b2c8-e4fb-4999-aecf-a04dd5157826\" (UID: \"1062b2c8-e4fb-4999-aecf-a04dd5157826\") "
Jan 26 13:27:47 crc kubenswrapper[4881]: I0126 13:27:47.637482 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1062b2c8-e4fb-4999-aecf-a04dd5157826-inventory\") pod \"1062b2c8-e4fb-4999-aecf-a04dd5157826\" (UID: \"1062b2c8-e4fb-4999-aecf-a04dd5157826\") "
Jan 26 13:27:47 crc kubenswrapper[4881]: I0126 13:27:47.637614 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1062b2c8-e4fb-4999-aecf-a04dd5157826-ssh-key-openstack-edpm-ipam\") pod \"1062b2c8-e4fb-4999-aecf-a04dd5157826\" (UID: \"1062b2c8-e4fb-4999-aecf-a04dd5157826\") "
Jan 26 13:27:47 crc kubenswrapper[4881]: I0126 13:27:47.637676 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m8rpb\" (UniqueName: \"kubernetes.io/projected/1062b2c8-e4fb-4999-aecf-a04dd5157826-kube-api-access-m8rpb\") pod \"1062b2c8-e4fb-4999-aecf-a04dd5157826\" (UID: \"1062b2c8-e4fb-4999-aecf-a04dd5157826\") "
Jan 26 13:27:47 crc kubenswrapper[4881]: I0126 13:27:47.637735 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/1062b2c8-e4fb-4999-aecf-a04dd5157826-ceilometer-compute-config-data-0\") pod \"1062b2c8-e4fb-4999-aecf-a04dd5157826\" (UID: \"1062b2c8-e4fb-4999-aecf-a04dd5157826\") "
Jan 26 13:27:47 crc kubenswrapper[4881]: I0126 13:27:47.637808 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/1062b2c8-e4fb-4999-aecf-a04dd5157826-ceilometer-compute-config-data-1\") pod \"1062b2c8-e4fb-4999-aecf-a04dd5157826\" (UID: \"1062b2c8-e4fb-4999-aecf-a04dd5157826\") "
Jan 26 13:27:47 crc kubenswrapper[4881]: I0126 13:27:47.637903 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/1062b2c8-e4fb-4999-aecf-a04dd5157826-ceilometer-compute-config-data-2\") pod \"1062b2c8-e4fb-4999-aecf-a04dd5157826\" (UID: \"1062b2c8-e4fb-4999-aecf-a04dd5157826\") "
Jan 26 13:27:47 crc kubenswrapper[4881]: I0126 13:27:47.646140 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1062b2c8-e4fb-4999-aecf-a04dd5157826-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "1062b2c8-e4fb-4999-aecf-a04dd5157826" (UID: "1062b2c8-e4fb-4999-aecf-a04dd5157826"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 13:27:47 crc kubenswrapper[4881]: I0126 13:27:47.646988 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1062b2c8-e4fb-4999-aecf-a04dd5157826-kube-api-access-m8rpb" (OuterVolumeSpecName: "kube-api-access-m8rpb") pod "1062b2c8-e4fb-4999-aecf-a04dd5157826" (UID: "1062b2c8-e4fb-4999-aecf-a04dd5157826"). InnerVolumeSpecName "kube-api-access-m8rpb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 13:27:47 crc kubenswrapper[4881]: I0126 13:27:47.679665 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1062b2c8-e4fb-4999-aecf-a04dd5157826-inventory" (OuterVolumeSpecName: "inventory") pod "1062b2c8-e4fb-4999-aecf-a04dd5157826" (UID: "1062b2c8-e4fb-4999-aecf-a04dd5157826"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 13:27:47 crc kubenswrapper[4881]: I0126 13:27:47.687501 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1062b2c8-e4fb-4999-aecf-a04dd5157826-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "1062b2c8-e4fb-4999-aecf-a04dd5157826" (UID: "1062b2c8-e4fb-4999-aecf-a04dd5157826"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 13:27:47 crc kubenswrapper[4881]: I0126 13:27:47.695747 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1062b2c8-e4fb-4999-aecf-a04dd5157826-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "1062b2c8-e4fb-4999-aecf-a04dd5157826" (UID: "1062b2c8-e4fb-4999-aecf-a04dd5157826"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 13:27:47 crc kubenswrapper[4881]: I0126 13:27:47.701602 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1062b2c8-e4fb-4999-aecf-a04dd5157826-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "1062b2c8-e4fb-4999-aecf-a04dd5157826" (UID: "1062b2c8-e4fb-4999-aecf-a04dd5157826"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 13:27:47 crc kubenswrapper[4881]: I0126 13:27:47.714476 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1062b2c8-e4fb-4999-aecf-a04dd5157826-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "1062b2c8-e4fb-4999-aecf-a04dd5157826" (UID: "1062b2c8-e4fb-4999-aecf-a04dd5157826"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 13:27:47 crc kubenswrapper[4881]: I0126 13:27:47.740666 4881 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/1062b2c8-e4fb-4999-aecf-a04dd5157826-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\""
Jan 26 13:27:47 crc kubenswrapper[4881]: I0126 13:27:47.740714 4881 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/1062b2c8-e4fb-4999-aecf-a04dd5157826-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\""
Jan 26 13:27:47 crc kubenswrapper[4881]: I0126 13:27:47.740733 4881 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1062b2c8-e4fb-4999-aecf-a04dd5157826-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 13:27:47 crc kubenswrapper[4881]: I0126 13:27:47.740752 4881 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1062b2c8-e4fb-4999-aecf-a04dd5157826-inventory\") on node \"crc\" DevicePath \"\""
Jan 26 13:27:47 crc kubenswrapper[4881]: I0126 13:27:47.740767 4881 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1062b2c8-e4fb-4999-aecf-a04dd5157826-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 26 13:27:47 crc kubenswrapper[4881]: I0126 13:27:47.740783 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m8rpb\" (UniqueName: \"kubernetes.io/projected/1062b2c8-e4fb-4999-aecf-a04dd5157826-kube-api-access-m8rpb\") on node \"crc\" DevicePath \"\""
Jan 26 13:27:47 crc kubenswrapper[4881]: I0126 13:27:47.740798 4881 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/1062b2c8-e4fb-4999-aecf-a04dd5157826-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\""
Jan 26 13:27:48 crc kubenswrapper[4881]: I0126 13:27:48.054871 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lfk9w" event={"ID":"1062b2c8-e4fb-4999-aecf-a04dd5157826","Type":"ContainerDied","Data":"a2745afa6ebe03bf93abab54b42575d16b34fd511bdcf478015d2adb879f247e"}
Jan 26 13:27:48 crc kubenswrapper[4881]: I0126 13:27:48.055221 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a2745afa6ebe03bf93abab54b42575d16b34fd511bdcf478015d2adb879f247e"
Jan 26 13:27:48 crc kubenswrapper[4881]: I0126 13:27:48.054969 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lfk9w"
Jan 26 13:27:54 crc kubenswrapper[4881]: I0126 13:27:54.789279 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fwlbz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 13:27:54 crc kubenswrapper[4881]: I0126 13:27:54.789776 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 13:27:54 crc kubenswrapper[4881]: I0126 13:27:54.789820 4881 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz"
Jan 26 13:27:54 crc kubenswrapper[4881]: I0126 13:27:54.790483 4881 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"742ede60393d7ca8a4393cea6341038ffa8eebf9a1b4a11fff9f1b373a984190"} pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 26 13:27:54 crc kubenswrapper[4881]: I0126 13:27:54.790586 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" containerName="machine-config-daemon" containerID="cri-o://742ede60393d7ca8a4393cea6341038ffa8eebf9a1b4a11fff9f1b373a984190" gracePeriod=600
Jan 26 13:27:55 crc kubenswrapper[4881]: I0126 13:27:55.137473 4881 generic.go:334] "Generic (PLEG): container finished" podID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" containerID="742ede60393d7ca8a4393cea6341038ffa8eebf9a1b4a11fff9f1b373a984190" exitCode=0
Jan 26 13:27:55 crc kubenswrapper[4881]: I0126 13:27:55.137595 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" event={"ID":"ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19","Type":"ContainerDied","Data":"742ede60393d7ca8a4393cea6341038ffa8eebf9a1b4a11fff9f1b373a984190"}
Jan 26 13:27:55 crc kubenswrapper[4881]: I0126 13:27:55.137972 4881 scope.go:117] "RemoveContainer" containerID="d87f018b7cfe3113cc30adef9c11e46910fcf79249c73d7f009d41973a8ff085"
Jan 26 13:27:55 crc kubenswrapper[4881]: E0126 13:27:55.155343 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19"
Jan 26 13:27:56 crc kubenswrapper[4881]: I0126 13:27:56.154035 4881 scope.go:117] "RemoveContainer" containerID="742ede60393d7ca8a4393cea6341038ffa8eebf9a1b4a11fff9f1b373a984190"
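[Editor's note] The sequence above is the full liveness-restart loop: patch_prober records the failed HTTP GET, SyncLoop (probe) marks the container unhealthy, kuberuntime_manager queues the restart message, and kuberuntime_container kills the container with the pod's grace period. The probe itself is just an HTTP GET against the endpoint in the log; a rough stdlib equivalent of that check follows, as a sketch and not kubelet's Go prober, with the port and path taken from the entries above:

```python
import urllib.request

# Any connection error or non-2xx status counts as a probe failure,
# which is how "connect: connection refused" shows up above.
def http_liveness(url="http://127.0.0.1:8798/health", timeout=1.0):
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 300, f"status={resp.status}"
    except OSError as exc:  # URLError, connection refused, timeout, ...
        return False, str(exc)

ok, detail = http_liveness()
print("Liveness", "success" if ok else "failure:", detail)
```

Note the container exits with exitCode=0 once killed; it is the probe failure, not the exit code, that drives the restart decision here.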
Jan 26 13:27:56 crc kubenswrapper[4881]: E0126 13:27:56.156645 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19"
Jan 26 13:28:12 crc kubenswrapper[4881]: I0126 13:28:12.083139 4881 scope.go:117] "RemoveContainer" containerID="742ede60393d7ca8a4393cea6341038ffa8eebf9a1b4a11fff9f1b373a984190"
Jan 26 13:28:12 crc kubenswrapper[4881]: E0126 13:28:12.083912 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19"
Jan 26 13:28:23 crc kubenswrapper[4881]: I0126 13:28:23.082903 4881 scope.go:117] "RemoveContainer" containerID="742ede60393d7ca8a4393cea6341038ffa8eebf9a1b4a11fff9f1b373a984190"
Jan 26 13:28:23 crc kubenswrapper[4881]: E0126 13:28:23.083778 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19"
Jan 26 13:28:24 crc kubenswrapper[4881]: I0126 13:28:24.048059 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-volume-nfs-0"]
Jan 26 13:28:24 crc kubenswrapper[4881]: E0126 13:28:24.048557 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1062b2c8-e4fb-4999-aecf-a04dd5157826" containerName="telemetry-edpm-deployment-openstack-edpm-ipam"
Jan 26 13:28:24 crc kubenswrapper[4881]: I0126 13:28:24.048581 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="1062b2c8-e4fb-4999-aecf-a04dd5157826" containerName="telemetry-edpm-deployment-openstack-edpm-ipam"
Jan 26 13:28:24 crc kubenswrapper[4881]: I0126 13:28:24.048803 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="1062b2c8-e4fb-4999-aecf-a04dd5157826" containerName="telemetry-edpm-deployment-openstack-edpm-ipam"
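[Editor's note] The repeated "back-off 5m0s" errors above are kubelet's per-container restart backoff: the delay between restart attempts grows per failure up to a cap, which is why the RemoveContainer retries land farther apart while the same error repeats. A sketch of that schedule, assuming upstream kubelet defaults (10s base, doubling, 300s cap); this log shows only the cap, not the cluster's actual parameters:

```python
# Exponential restart backoff with a ceiling, as the "back-off 5m0s"
# message implies. Parameters here are assumed defaults, not read from
# this cluster's configuration.
def backoff_schedule(base=10, factor=2, cap=300, n=8):
    delay, out = base, []
    for _ in range(n):
        out.append(min(delay, cap))
        delay *= factor
    return out

print(backoff_schedule())  # [10, 20, 40, 80, 160, 300, 300, 300]
```

Once the container has failed enough times, every further attempt waits the full 5 minutes, which matches the message printed on each sync.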
Jan 26 13:28:24 crc kubenswrapper[4881]: I0126 13:28:24.050024 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-nfs-0"
Jan 26 13:28:24 crc kubenswrapper[4881]: I0126 13:28:24.053159 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-volume-nfs-config-data"
Jan 26 13:28:24 crc kubenswrapper[4881]: I0126 13:28:24.059819 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjzrm\" (UniqueName: \"kubernetes.io/projected/7fe1aa7f-b1bd-4777-934a-76e8ba531b1b-kube-api-access-kjzrm\") pod \"cinder-volume-nfs-0\" (UID: \"7fe1aa7f-b1bd-4777-934a-76e8ba531b1b\") " pod="openstack/cinder-volume-nfs-0"
Jan 26 13:28:24 crc kubenswrapper[4881]: I0126 13:28:24.060142 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/7fe1aa7f-b1bd-4777-934a-76e8ba531b1b-var-locks-brick\") pod \"cinder-volume-nfs-0\" (UID: \"7fe1aa7f-b1bd-4777-934a-76e8ba531b1b\") " pod="openstack/cinder-volume-nfs-0"
Jan 26 13:28:24 crc kubenswrapper[4881]: I0126 13:28:24.060183 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7fe1aa7f-b1bd-4777-934a-76e8ba531b1b-lib-modules\") pod \"cinder-volume-nfs-0\" (UID: \"7fe1aa7f-b1bd-4777-934a-76e8ba531b1b\") " pod="openstack/cinder-volume-nfs-0"
Jan 26 13:28:24 crc kubenswrapper[4881]: I0126 13:28:24.060215 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7fe1aa7f-b1bd-4777-934a-76e8ba531b1b-scripts\") pod \"cinder-volume-nfs-0\" (UID: \"7fe1aa7f-b1bd-4777-934a-76e8ba531b1b\") " pod="openstack/cinder-volume-nfs-0"
Jan 26 13:28:24 crc kubenswrapper[4881]: I0126 13:28:24.060230 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7fe1aa7f-b1bd-4777-934a-76e8ba531b1b-config-data-custom\") pod \"cinder-volume-nfs-0\" (UID: \"7fe1aa7f-b1bd-4777-934a-76e8ba531b1b\") " pod="openstack/cinder-volume-nfs-0"
Jan 26 13:28:24 crc kubenswrapper[4881]: I0126 13:28:24.060255 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fe1aa7f-b1bd-4777-934a-76e8ba531b1b-config-data\") pod \"cinder-volume-nfs-0\" (UID: \"7fe1aa7f-b1bd-4777-934a-76e8ba531b1b\") " pod="openstack/cinder-volume-nfs-0"
Jan 26 13:28:24 crc kubenswrapper[4881]: I0126 13:28:24.060319 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/7fe1aa7f-b1bd-4777-934a-76e8ba531b1b-var-locks-cinder\") pod \"cinder-volume-nfs-0\" (UID: \"7fe1aa7f-b1bd-4777-934a-76e8ba531b1b\") " pod="openstack/cinder-volume-nfs-0"
Jan 26 13:28:24 crc kubenswrapper[4881]: I0126 13:28:24.060348 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/7fe1aa7f-b1bd-4777-934a-76e8ba531b1b-sys\") pod \"cinder-volume-nfs-0\" (UID: \"7fe1aa7f-b1bd-4777-934a-76e8ba531b1b\") " pod="openstack/cinder-volume-nfs-0"
Jan 26 13:28:24 crc kubenswrapper[4881]: I0126 13:28:24.060449 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/7fe1aa7f-b1bd-4777-934a-76e8ba531b1b-dev\") pod \"cinder-volume-nfs-0\" (UID: \"7fe1aa7f-b1bd-4777-934a-76e8ba531b1b\") " pod="openstack/cinder-volume-nfs-0"
Jan 26 13:28:24 crc kubenswrapper[4881]: I0126 13:28:24.060479 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7fe1aa7f-b1bd-4777-934a-76e8ba531b1b-etc-machine-id\") pod \"cinder-volume-nfs-0\" (UID: \"7fe1aa7f-b1bd-4777-934a-76e8ba531b1b\") " pod="openstack/cinder-volume-nfs-0"
Jan 26 13:28:24 crc kubenswrapper[4881]: I0126 13:28:24.060529 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/7fe1aa7f-b1bd-4777-934a-76e8ba531b1b-etc-iscsi\") pod \"cinder-volume-nfs-0\" (UID: \"7fe1aa7f-b1bd-4777-934a-76e8ba531b1b\") " pod="openstack/cinder-volume-nfs-0"
Jan 26 13:28:24 crc kubenswrapper[4881]: I0126 13:28:24.060612 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/7fe1aa7f-b1bd-4777-934a-76e8ba531b1b-run\") pod \"cinder-volume-nfs-0\" (UID: \"7fe1aa7f-b1bd-4777-934a-76e8ba531b1b\") " pod="openstack/cinder-volume-nfs-0"
Jan 26 13:28:24 crc kubenswrapper[4881]: I0126 13:28:24.060629 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/7fe1aa7f-b1bd-4777-934a-76e8ba531b1b-etc-nvme\") pod \"cinder-volume-nfs-0\" (UID: \"7fe1aa7f-b1bd-4777-934a-76e8ba531b1b\") " pod="openstack/cinder-volume-nfs-0"
Jan 26 13:28:24 crc kubenswrapper[4881]: I0126 13:28:24.060650 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/7fe1aa7f-b1bd-4777-934a-76e8ba531b1b-var-lib-cinder\") pod \"cinder-volume-nfs-0\" (UID: \"7fe1aa7f-b1bd-4777-934a-76e8ba531b1b\") " pod="openstack/cinder-volume-nfs-0"
Jan 26 13:28:24 crc kubenswrapper[4881]: I0126 13:28:24.060701 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fe1aa7f-b1bd-4777-934a-76e8ba531b1b-combined-ca-bundle\") pod \"cinder-volume-nfs-0\" (UID: \"7fe1aa7f-b1bd-4777-934a-76e8ba531b1b\") " pod="openstack/cinder-volume-nfs-0"
Jan 26 13:28:24 crc kubenswrapper[4881]: I0126 13:28:24.075861 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-nfs-0"]
Jan 26 13:28:24 crc kubenswrapper[4881]: I0126 13:28:24.110746 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-volume-nfs-2-0"]
Jan 26 13:28:24 crc kubenswrapper[4881]: I0126 13:28:24.121098 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-nfs-2-0"
Jan 26 13:28:24 crc kubenswrapper[4881]: I0126 13:28:24.149631 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-volume-nfs-2-config-data"
Jan 26 13:28:24 crc kubenswrapper[4881]: I0126 13:28:24.169590 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kjzrm\" (UniqueName: \"kubernetes.io/projected/7fe1aa7f-b1bd-4777-934a-76e8ba531b1b-kube-api-access-kjzrm\") pod \"cinder-volume-nfs-0\" (UID: \"7fe1aa7f-b1bd-4777-934a-76e8ba531b1b\") " pod="openstack/cinder-volume-nfs-0"
Jan 26 13:28:24 crc kubenswrapper[4881]: I0126 13:28:24.169695 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbxdz\" (UniqueName: \"kubernetes.io/projected/77b81e80-c2b2-418e-b722-2f6ffa1b7103-kube-api-access-tbxdz\") pod \"cinder-volume-nfs-2-0\" (UID: \"77b81e80-c2b2-418e-b722-2f6ffa1b7103\") " pod="openstack/cinder-volume-nfs-2-0"
Jan 26 13:28:24 crc kubenswrapper[4881]: I0126 13:28:24.169730 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77b81e80-c2b2-418e-b722-2f6ffa1b7103-combined-ca-bundle\") pod \"cinder-volume-nfs-2-0\" (UID: \"77b81e80-c2b2-418e-b722-2f6ffa1b7103\") " pod="openstack/cinder-volume-nfs-2-0"
Jan 26 13:28:24 crc kubenswrapper[4881]: I0126 13:28:24.169819 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/77b81e80-c2b2-418e-b722-2f6ffa1b7103-var-locks-brick\") pod \"cinder-volume-nfs-2-0\" (UID: \"77b81e80-c2b2-418e-b722-2f6ffa1b7103\") " pod="openstack/cinder-volume-nfs-2-0"
Jan 26 13:28:24 crc kubenswrapper[4881]: I0126 13:28:24.169892 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/7fe1aa7f-b1bd-4777-934a-76e8ba531b1b-var-locks-brick\") pod \"cinder-volume-nfs-0\" (UID: \"7fe1aa7f-b1bd-4777-934a-76e8ba531b1b\") " pod="openstack/cinder-volume-nfs-0"
Jan 26 13:28:24 crc kubenswrapper[4881]: I0126 13:28:24.169929 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7fe1aa7f-b1bd-4777-934a-76e8ba531b1b-lib-modules\") pod \"cinder-volume-nfs-0\" (UID: \"7fe1aa7f-b1bd-4777-934a-76e8ba531b1b\") " pod="openstack/cinder-volume-nfs-0"
Jan 26 13:28:24 crc kubenswrapper[4881]: I0126 13:28:24.169979 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7fe1aa7f-b1bd-4777-934a-76e8ba531b1b-scripts\") pod \"cinder-volume-nfs-0\" (UID: \"7fe1aa7f-b1bd-4777-934a-76e8ba531b1b\") " pod="openstack/cinder-volume-nfs-0"
Jan 26 13:28:24 crc kubenswrapper[4881]: I0126 13:28:24.170002 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7fe1aa7f-b1bd-4777-934a-76e8ba531b1b-config-data-custom\") pod \"cinder-volume-nfs-0\" (UID: \"7fe1aa7f-b1bd-4777-934a-76e8ba531b1b\") " pod="openstack/cinder-volume-nfs-0"
Jan 26 13:28:24 crc kubenswrapper[4881]: I0126 13:28:24.170035 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/77b81e80-c2b2-418e-b722-2f6ffa1b7103-etc-machine-id\") pod \"cinder-volume-nfs-2-0\" (UID: \"77b81e80-c2b2-418e-b722-2f6ffa1b7103\") " pod="openstack/cinder-volume-nfs-2-0"
Jan 26 13:28:24 crc kubenswrapper[4881]: I0126 13:28:24.170055 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/77b81e80-c2b2-418e-b722-2f6ffa1b7103-config-data-custom\") pod \"cinder-volume-nfs-2-0\" (UID: \"77b81e80-c2b2-418e-b722-2f6ffa1b7103\") " pod="openstack/cinder-volume-nfs-2-0"
Jan 26 13:28:24 crc kubenswrapper[4881]: I0126 13:28:24.170099 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fe1aa7f-b1bd-4777-934a-76e8ba531b1b-config-data\") pod \"cinder-volume-nfs-0\" (UID: \"7fe1aa7f-b1bd-4777-934a-76e8ba531b1b\") " pod="openstack/cinder-volume-nfs-0"
Jan 26 13:28:24 crc kubenswrapper[4881]: I0126 13:28:24.170122 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/77b81e80-c2b2-418e-b722-2f6ffa1b7103-var-lib-cinder\") pod \"cinder-volume-nfs-2-0\" (UID: \"77b81e80-c2b2-418e-b722-2f6ffa1b7103\") " pod="openstack/cinder-volume-nfs-2-0"
Jan 26 13:28:24 crc kubenswrapper[4881]: I0126 13:28:24.170170 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/7fe1aa7f-b1bd-4777-934a-76e8ba531b1b-var-locks-cinder\") pod \"cinder-volume-nfs-0\" (UID: \"7fe1aa7f-b1bd-4777-934a-76e8ba531b1b\") " pod="openstack/cinder-volume-nfs-0"
Jan 26 13:28:24 crc kubenswrapper[4881]: I0126 13:28:24.170197 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/7fe1aa7f-b1bd-4777-934a-76e8ba531b1b-sys\") pod \"cinder-volume-nfs-0\" (UID: \"7fe1aa7f-b1bd-4777-934a-76e8ba531b1b\") " pod="openstack/cinder-volume-nfs-0"
Jan 26 13:28:24 crc kubenswrapper[4881]: I0126 13:28:24.170217 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77b81e80-c2b2-418e-b722-2f6ffa1b7103-config-data\") pod \"cinder-volume-nfs-2-0\" (UID: \"77b81e80-c2b2-418e-b722-2f6ffa1b7103\") " pod="openstack/cinder-volume-nfs-2-0"
Jan 26 13:28:24 crc kubenswrapper[4881]: I0126 13:28:24.170272 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/77b81e80-c2b2-418e-b722-2f6ffa1b7103-etc-iscsi\") pod \"cinder-volume-nfs-2-0\" (UID: \"77b81e80-c2b2-418e-b722-2f6ffa1b7103\") " pod="openstack/cinder-volume-nfs-2-0"
Jan 26 13:28:24 crc kubenswrapper[4881]: I0126 13:28:24.170296 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/77b81e80-c2b2-418e-b722-2f6ffa1b7103-var-locks-cinder\") pod \"cinder-volume-nfs-2-0\" (UID: \"77b81e80-c2b2-418e-b722-2f6ffa1b7103\") " pod="openstack/cinder-volume-nfs-2-0"
Jan 26 13:28:24 crc kubenswrapper[4881]: I0126 13:28:24.170375 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/77b81e80-c2b2-418e-b722-2f6ffa1b7103-etc-nvme\") pod \"cinder-volume-nfs-2-0\" (UID: \"77b81e80-c2b2-418e-b722-2f6ffa1b7103\") " pod="openstack/cinder-volume-nfs-2-0"
Jan 26 13:28:24 crc kubenswrapper[4881]: I0126 13:28:24.170422 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/77b81e80-c2b2-418e-b722-2f6ffa1b7103-dev\") pod \"cinder-volume-nfs-2-0\" (UID: \"77b81e80-c2b2-418e-b722-2f6ffa1b7103\") " pod="openstack/cinder-volume-nfs-2-0"
Jan 26 13:28:24 crc kubenswrapper[4881]: I0126 13:28:24.170454 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/7fe1aa7f-b1bd-4777-934a-76e8ba531b1b-dev\") pod \"cinder-volume-nfs-0\" (UID: \"7fe1aa7f-b1bd-4777-934a-76e8ba531b1b\") " pod="openstack/cinder-volume-nfs-0"
Jan 26 13:28:24 crc kubenswrapper[4881]: I0126 13:28:24.170492 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7fe1aa7f-b1bd-4777-934a-76e8ba531b1b-etc-machine-id\") pod \"cinder-volume-nfs-0\" (UID: \"7fe1aa7f-b1bd-4777-934a-76e8ba531b1b\") " pod="openstack/cinder-volume-nfs-0"
Jan 26 13:28:24 crc kubenswrapper[4881]: I0126 13:28:24.170537 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77b81e80-c2b2-418e-b722-2f6ffa1b7103-scripts\") pod \"cinder-volume-nfs-2-0\" (UID: \"77b81e80-c2b2-418e-b722-2f6ffa1b7103\") " pod="openstack/cinder-volume-nfs-2-0"
Jan 26 13:28:24 crc kubenswrapper[4881]: I0126 13:28:24.170581 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/77b81e80-c2b2-418e-b722-2f6ffa1b7103-run\") pod \"cinder-volume-nfs-2-0\" (UID: \"77b81e80-c2b2-418e-b722-2f6ffa1b7103\") " pod="openstack/cinder-volume-nfs-2-0"
Jan 26 13:28:24 crc kubenswrapper[4881]: I0126 13:28:24.170608 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/77b81e80-c2b2-418e-b722-2f6ffa1b7103-lib-modules\") pod \"cinder-volume-nfs-2-0\" (UID: \"77b81e80-c2b2-418e-b722-2f6ffa1b7103\") " pod="openstack/cinder-volume-nfs-2-0"
Jan 26 13:28:24 crc kubenswrapper[4881]: I0126 13:28:24.170652 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/7fe1aa7f-b1bd-4777-934a-76e8ba531b1b-etc-iscsi\") pod \"cinder-volume-nfs-0\" (UID: \"7fe1aa7f-b1bd-4777-934a-76e8ba531b1b\") " pod="openstack/cinder-volume-nfs-0"
Jan 26 13:28:24 crc kubenswrapper[4881]: I0126 13:28:24.170718 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/7fe1aa7f-b1bd-4777-934a-76e8ba531b1b-run\") pod \"cinder-volume-nfs-0\" (UID: \"7fe1aa7f-b1bd-4777-934a-76e8ba531b1b\") " pod="openstack/cinder-volume-nfs-0"
Jan 26 13:28:24 crc kubenswrapper[4881]: I0126 13:28:24.170739 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/7fe1aa7f-b1bd-4777-934a-76e8ba531b1b-etc-nvme\") pod \"cinder-volume-nfs-0\" (UID: \"7fe1aa7f-b1bd-4777-934a-76e8ba531b1b\") " pod="openstack/cinder-volume-nfs-0"
Jan 26 13:28:24 crc kubenswrapper[4881]: I0126 13:28:24.170782 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/7fe1aa7f-b1bd-4777-934a-76e8ba531b1b-var-lib-cinder\") pod \"cinder-volume-nfs-0\" (UID: \"7fe1aa7f-b1bd-4777-934a-76e8ba531b1b\") " pod="openstack/cinder-volume-nfs-0"
Jan 26 13:28:24 crc kubenswrapper[4881]: I0126 13:28:24.170815 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/77b81e80-c2b2-418e-b722-2f6ffa1b7103-sys\") pod \"cinder-volume-nfs-2-0\" (UID: \"77b81e80-c2b2-418e-b722-2f6ffa1b7103\") " pod="openstack/cinder-volume-nfs-2-0"
Jan 26 13:28:24 crc kubenswrapper[4881]: I0126 13:28:24.170849 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fe1aa7f-b1bd-4777-934a-76e8ba531b1b-combined-ca-bundle\") pod \"cinder-volume-nfs-0\" (UID: \"7fe1aa7f-b1bd-4777-934a-76e8ba531b1b\") " pod="openstack/cinder-volume-nfs-0"
Jan 26 13:28:24 crc kubenswrapper[4881]: I0126 13:28:24.175879 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/7fe1aa7f-b1bd-4777-934a-76e8ba531b1b-var-locks-brick\") pod \"cinder-volume-nfs-0\" (UID: \"7fe1aa7f-b1bd-4777-934a-76e8ba531b1b\") " pod="openstack/cinder-volume-nfs-0"
Jan 26 13:28:24 crc kubenswrapper[4881]: I0126 13:28:24.175938 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7fe1aa7f-b1bd-4777-934a-76e8ba531b1b-lib-modules\") pod \"cinder-volume-nfs-0\" (UID: \"7fe1aa7f-b1bd-4777-934a-76e8ba531b1b\") " pod="openstack/cinder-volume-nfs-0"
Jan 26 13:28:24 crc kubenswrapper[4881]: I0126 13:28:24.177291 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/7fe1aa7f-b1bd-4777-934a-76e8ba531b1b-etc-iscsi\") pod \"cinder-volume-nfs-0\" (UID: \"7fe1aa7f-b1bd-4777-934a-76e8ba531b1b\") " pod="openstack/cinder-volume-nfs-0"
Jan 26 13:28:24 crc kubenswrapper[4881]: I0126 13:28:24.178749 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/7fe1aa7f-b1bd-4777-934a-76e8ba531b1b-dev\") pod \"cinder-volume-nfs-0\" (UID: \"7fe1aa7f-b1bd-4777-934a-76e8ba531b1b\") " pod="openstack/cinder-volume-nfs-0"
Jan 26 13:28:24 crc kubenswrapper[4881]: I0126 13:28:24.178854 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7fe1aa7f-b1bd-4777-934a-76e8ba531b1b-etc-machine-id\") pod \"cinder-volume-nfs-0\" (UID: \"7fe1aa7f-b1bd-4777-934a-76e8ba531b1b\") " pod="openstack/cinder-volume-nfs-0"
Jan 26 13:28:24 crc kubenswrapper[4881]: I0126 13:28:24.178967 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/7fe1aa7f-b1bd-4777-934a-76e8ba531b1b-run\") pod \"cinder-volume-nfs-0\" (UID: \"7fe1aa7f-b1bd-4777-934a-76e8ba531b1b\") " pod="openstack/cinder-volume-nfs-0"
Jan 26 13:28:24 crc kubenswrapper[4881]: I0126 13:28:24.184668 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/7fe1aa7f-b1bd-4777-934a-76e8ba531b1b-sys\") pod \"cinder-volume-nfs-0\" (UID: \"7fe1aa7f-b1bd-4777-934a-76e8ba531b1b\") " pod="openstack/cinder-volume-nfs-0"
Jan 26 13:28:24 crc kubenswrapper[4881]: I0126 13:28:24.184715 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/7fe1aa7f-b1bd-4777-934a-76e8ba531b1b-var-lib-cinder\") pod \"cinder-volume-nfs-0\" (UID: \"7fe1aa7f-b1bd-4777-934a-76e8ba531b1b\") " pod="openstack/cinder-volume-nfs-0"
Jan 26 13:28:24 crc kubenswrapper[4881]: I0126 13:28:24.184873 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/7fe1aa7f-b1bd-4777-934a-76e8ba531b1b-var-locks-cinder\") pod \"cinder-volume-nfs-0\" (UID: \"7fe1aa7f-b1bd-4777-934a-76e8ba531b1b\") " pod="openstack/cinder-volume-nfs-0"
Jan 26 13:28:24 crc kubenswrapper[4881]: I0126 13:28:24.185318 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-nfs-2-0"]
Jan 26 13:28:24 crc kubenswrapper[4881]: I0126 13:28:24.192683 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/7fe1aa7f-b1bd-4777-934a-76e8ba531b1b-etc-nvme\") pod \"cinder-volume-nfs-0\" (UID: \"7fe1aa7f-b1bd-4777-934a-76e8ba531b1b\") " pod="openstack/cinder-volume-nfs-0"
Jan 26 13:28:24 crc kubenswrapper[4881]: I0126 13:28:24.193998 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fe1aa7f-b1bd-4777-934a-76e8ba531b1b-config-data\") pod \"cinder-volume-nfs-0\" (UID: \"7fe1aa7f-b1bd-4777-934a-76e8ba531b1b\") " pod="openstack/cinder-volume-nfs-0"
Jan 26 13:28:24 crc kubenswrapper[4881]: I0126 13:28:24.194351 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7fe1aa7f-b1bd-4777-934a-76e8ba531b1b-scripts\") pod \"cinder-volume-nfs-0\" (UID: \"7fe1aa7f-b1bd-4777-934a-76e8ba531b1b\") " pod="openstack/cinder-volume-nfs-0"
Jan 26 13:28:24 crc kubenswrapper[4881]: I0126 13:28:24.196064 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fe1aa7f-b1bd-4777-934a-76e8ba531b1b-combined-ca-bundle\") pod \"cinder-volume-nfs-0\" (UID: \"7fe1aa7f-b1bd-4777-934a-76e8ba531b1b\") " pod="openstack/cinder-volume-nfs-0"
Jan 26 13:28:24 crc kubenswrapper[4881]: I0126 13:28:24.196402 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7fe1aa7f-b1bd-4777-934a-76e8ba531b1b-config-data-custom\") pod \"cinder-volume-nfs-0\" (UID: \"7fe1aa7f-b1bd-4777-934a-76e8ba531b1b\") " pod="openstack/cinder-volume-nfs-0"
Jan 26 13:28:24 crc kubenswrapper[4881]: I0126 13:28:24.198012 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-backup-0"]
Jan 26 13:28:24 crc kubenswrapper[4881]: I0126 13:28:24.200487 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-backup-0"
Jan 26 13:28:24 crc kubenswrapper[4881]: I0126 13:28:24.207598 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"]
Jan 26 13:28:24 crc kubenswrapper[4881]: I0126 13:28:24.222906 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-backup-config-data"
Jan 26 13:28:24 crc kubenswrapper[4881]: I0126 13:28:24.255793 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjzrm\" (UniqueName: \"kubernetes.io/projected/7fe1aa7f-b1bd-4777-934a-76e8ba531b1b-kube-api-access-kjzrm\") pod \"cinder-volume-nfs-0\" (UID: \"7fe1aa7f-b1bd-4777-934a-76e8ba531b1b\") " pod="openstack/cinder-volume-nfs-0"
Jan 26 13:28:24 crc kubenswrapper[4881]: I0126 13:28:24.272251 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77b81e80-c2b2-418e-b722-2f6ffa1b7103-scripts\") pod \"cinder-volume-nfs-2-0\" (UID: \"77b81e80-c2b2-418e-b722-2f6ffa1b7103\") " pod="openstack/cinder-volume-nfs-2-0"
Jan 26 13:28:24 crc kubenswrapper[4881]: I0126 13:28:24.272288 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/77b81e80-c2b2-418e-b722-2f6ffa1b7103-run\") pod \"cinder-volume-nfs-2-0\" (UID: \"77b81e80-c2b2-418e-b722-2f6ffa1b7103\") " pod="openstack/cinder-volume-nfs-2-0"
Jan 26 13:28:24 crc kubenswrapper[4881]: I0126 13:28:24.272306 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/77b81e80-c2b2-418e-b722-2f6ffa1b7103-lib-modules\") pod \"cinder-volume-nfs-2-0\" (UID: \"77b81e80-c2b2-418e-b722-2f6ffa1b7103\") " pod="openstack/cinder-volume-nfs-2-0"
Jan 26 13:28:24 crc kubenswrapper[4881]: I0126 13:28:24.272355 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/77b81e80-c2b2-418e-b722-2f6ffa1b7103-sys\") pod \"cinder-volume-nfs-2-0\" (UID: \"77b81e80-c2b2-418e-b722-2f6ffa1b7103\") " pod="openstack/cinder-volume-nfs-2-0"
Jan 26 13:28:24 crc kubenswrapper[4881]: I0126 13:28:24.272403 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tbxdz\" (UniqueName: \"kubernetes.io/projected/77b81e80-c2b2-418e-b722-2f6ffa1b7103-kube-api-access-tbxdz\") pod \"cinder-volume-nfs-2-0\" (UID: \"77b81e80-c2b2-418e-b722-2f6ffa1b7103\") " pod="openstack/cinder-volume-nfs-2-0"
Jan 26 13:28:24 crc kubenswrapper[4881]: I0126 13:28:24.272420 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77b81e80-c2b2-418e-b722-2f6ffa1b7103-combined-ca-bundle\") pod \"cinder-volume-nfs-2-0\" (UID: \"77b81e80-c2b2-418e-b722-2f6ffa1b7103\") " pod="openstack/cinder-volume-nfs-2-0"
Jan 26 13:28:24 crc kubenswrapper[4881]: I0126 13:28:24.272447 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/77b81e80-c2b2-418e-b722-2f6ffa1b7103-var-locks-brick\") pod \"cinder-volume-nfs-2-0\" (UID: \"77b81e80-c2b2-418e-b722-2f6ffa1b7103\") " pod="openstack/cinder-volume-nfs-2-0"
Jan 26 13:28:24 crc kubenswrapper[4881]: I0126 13:28:24.272486 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/77b81e80-c2b2-418e-b722-2f6ffa1b7103-etc-machine-id\") pod \"cinder-volume-nfs-2-0\" (UID: \"77b81e80-c2b2-418e-b722-2f6ffa1b7103\") " pod="openstack/cinder-volume-nfs-2-0"
Jan 26 13:28:24 crc kubenswrapper[4881]: I0126 13:28:24.272501 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/77b81e80-c2b2-418e-b722-2f6ffa1b7103-config-data-custom\") pod \"cinder-volume-nfs-2-0\" (UID: \"77b81e80-c2b2-418e-b722-2f6ffa1b7103\") " pod="openstack/cinder-volume-nfs-2-0"
Jan 26 13:28:24 crc kubenswrapper[4881]: I0126 13:28:24.272535 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/77b81e80-c2b2-418e-b722-2f6ffa1b7103-var-lib-cinder\") pod \"cinder-volume-nfs-2-0\" (UID: \"77b81e80-c2b2-418e-b722-2f6ffa1b7103\") " pod="openstack/cinder-volume-nfs-2-0"
Jan 26 13:28:24 crc kubenswrapper[4881]: I0126 13:28:24.272559 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77b81e80-c2b2-418e-b722-2f6ffa1b7103-config-data\") pod \"cinder-volume-nfs-2-0\" (UID: \"77b81e80-c2b2-418e-b722-2f6ffa1b7103\") " pod="openstack/cinder-volume-nfs-2-0"
Jan 26 13:28:24 crc kubenswrapper[4881]: I0126 13:28:24.272586 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/77b81e80-c2b2-418e-b722-2f6ffa1b7103-etc-iscsi\") pod \"cinder-volume-nfs-2-0\" (UID: \"77b81e80-c2b2-418e-b722-2f6ffa1b7103\") " pod="openstack/cinder-volume-nfs-2-0"
Jan 26 13:28:24 crc kubenswrapper[4881]: I0126 13:28:24.272604 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/77b81e80-c2b2-418e-b722-2f6ffa1b7103-var-locks-cinder\") pod \"cinder-volume-nfs-2-0\" (UID: \"77b81e80-c2b2-418e-b722-2f6ffa1b7103\") " pod="openstack/cinder-volume-nfs-2-0"
Jan 26 13:28:24 crc kubenswrapper[4881]: I0126 13:28:24.272634 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/77b81e80-c2b2-418e-b722-2f6ffa1b7103-etc-nvme\") pod \"cinder-volume-nfs-2-0\" (UID: \"77b81e80-c2b2-418e-b722-2f6ffa1b7103\") " pod="openstack/cinder-volume-nfs-2-0"
Jan 26 13:28:24 crc kubenswrapper[4881]: I0126 13:28:24.272658 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/77b81e80-c2b2-418e-b722-2f6ffa1b7103-dev\") pod \"cinder-volume-nfs-2-0\" (UID: \"77b81e80-c2b2-418e-b722-2f6ffa1b7103\") " pod="openstack/cinder-volume-nfs-2-0"
Jan 26 13:28:24 crc kubenswrapper[4881]: I0126 13:28:24.274025 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/77b81e80-c2b2-418e-b722-2f6ffa1b7103-etc-machine-id\") pod \"cinder-volume-nfs-2-0\" (UID: \"77b81e80-c2b2-418e-b722-2f6ffa1b7103\") " pod="openstack/cinder-volume-nfs-2-0"
Jan 26 13:28:24 crc kubenswrapper[4881]: I0126 13:28:24.274070 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/77b81e80-c2b2-418e-b722-2f6ffa1b7103-run\") pod \"cinder-volume-nfs-2-0\" (UID: \"77b81e80-c2b2-418e-b722-2f6ffa1b7103\") " pod="openstack/cinder-volume-nfs-2-0"
Jan 26 13:28:24 crc kubenswrapper[4881]: I0126 13:28:24.274092 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/77b81e80-c2b2-418e-b722-2f6ffa1b7103-lib-modules\") pod \"cinder-volume-nfs-2-0\" (UID: \"77b81e80-c2b2-418e-b722-2f6ffa1b7103\") " pod="openstack/cinder-volume-nfs-2-0"
Jan 26 13:28:24 crc kubenswrapper[4881]: I0126 13:28:24.274113 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/77b81e80-c2b2-418e-b722-2f6ffa1b7103-sys\") pod \"cinder-volume-nfs-2-0\" (UID: \"77b81e80-c2b2-418e-b722-2f6ffa1b7103\") " pod="openstack/cinder-volume-nfs-2-0"
Jan 26 13:28:24 crc kubenswrapper[4881]: I0126 13:28:24.275625 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/77b81e80-c2b2-418e-b722-2f6ffa1b7103-etc-iscsi\") pod \"cinder-volume-nfs-2-0\" (UID: \"77b81e80-c2b2-418e-b722-2f6ffa1b7103\") " pod="openstack/cinder-volume-nfs-2-0"
Jan 26 13:28:24 crc kubenswrapper[4881]: I0126 13:28:24.277594 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/77b81e80-c2b2-418e-b722-2f6ffa1b7103-etc-nvme\") pod \"cinder-volume-nfs-2-0\" (UID: \"77b81e80-c2b2-418e-b722-2f6ffa1b7103\") " pod="openstack/cinder-volume-nfs-2-0"
Jan 26 13:28:24 crc kubenswrapper[4881]: I0126 13:28:24.277650 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/77b81e80-c2b2-418e-b722-2f6ffa1b7103-var-locks-cinder\") pod \"cinder-volume-nfs-2-0\" (UID: \"77b81e80-c2b2-418e-b722-2f6ffa1b7103\") " pod="openstack/cinder-volume-nfs-2-0"
Jan 26 13:28:24 crc kubenswrapper[4881]: I0126 13:28:24.277679 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/77b81e80-c2b2-418e-b722-2f6ffa1b7103-var-locks-brick\") pod \"cinder-volume-nfs-2-0\" (UID: \"77b81e80-c2b2-418e-b722-2f6ffa1b7103\") " pod="openstack/cinder-volume-nfs-2-0"
Jan 26 13:28:24 crc kubenswrapper[4881]: I0126 13:28:24.277709 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/77b81e80-c2b2-418e-b722-2f6ffa1b7103-var-lib-cinder\") pod \"cinder-volume-nfs-2-0\" (UID: \"77b81e80-c2b2-418e-b722-2f6ffa1b7103\") " pod="openstack/cinder-volume-nfs-2-0"
Jan 26 13:28:24 crc kubenswrapper[4881]: I0126 13:28:24.277732 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/77b81e80-c2b2-418e-b722-2f6ffa1b7103-dev\") pod \"cinder-volume-nfs-2-0\" (UID: \"77b81e80-c2b2-418e-b722-2f6ffa1b7103\") " pod="openstack/cinder-volume-nfs-2-0"
Jan 26 13:28:24 crc kubenswrapper[4881]: I0126 13:28:24.306628 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77b81e80-c2b2-418e-b722-2f6ffa1b7103-combined-ca-bundle\") pod \"cinder-volume-nfs-2-0\" (UID: \"77b81e80-c2b2-418e-b722-2f6ffa1b7103\") " pod="openstack/cinder-volume-nfs-2-0"
Jan 26 13:28:24 crc kubenswrapper[4881]: I0126 13:28:24.307180 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/77b81e80-c2b2-418e-b722-2f6ffa1b7103-config-data-custom\") pod \"cinder-volume-nfs-2-0\" (UID: \"77b81e80-c2b2-418e-b722-2f6ffa1b7103\") " pod="openstack/cinder-volume-nfs-2-0"
Jan 26 13:28:24 crc kubenswrapper[4881]: I0126 13:28:24.307184 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77b81e80-c2b2-418e-b722-2f6ffa1b7103-scripts\") pod \"cinder-volume-nfs-2-0\" (UID: \"77b81e80-c2b2-418e-b722-2f6ffa1b7103\") " pod="openstack/cinder-volume-nfs-2-0"
Jan 26 13:28:24 crc kubenswrapper[4881]: I0126 13:28:24.308754 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77b81e80-c2b2-418e-b722-2f6ffa1b7103-config-data\") pod \"cinder-volume-nfs-2-0\" (UID: \"77b81e80-c2b2-418e-b722-2f6ffa1b7103\") " pod="openstack/cinder-volume-nfs-2-0"
Jan 26 13:28:24 crc kubenswrapper[4881]: I0126 13:28:24.310281 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tbxdz\" (UniqueName: \"kubernetes.io/projected/77b81e80-c2b2-418e-b722-2f6ffa1b7103-kube-api-access-tbxdz\") pod \"cinder-volume-nfs-2-0\" (UID: \"77b81e80-c2b2-418e-b722-2f6ffa1b7103\") " pod="openstack/cinder-volume-nfs-2-0"
Jan 26 13:28:24 crc kubenswrapper[4881]: I0126 13:28:24.367996 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-nfs-0"
Jan 26 13:28:24 crc kubenswrapper[4881]: I0126 13:28:24.375816 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/4b9b969a-e1a3-4253-bffc-34fe8db0a2ff-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"4b9b969a-e1a3-4253-bffc-34fe8db0a2ff\") " pod="openstack/cinder-backup-0"
Jan 26 13:28:24 crc kubenswrapper[4881]: I0126 13:28:24.375882 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/4b9b969a-e1a3-4253-bffc-34fe8db0a2ff-dev\") pod \"cinder-backup-0\" (UID: \"4b9b969a-e1a3-4253-bffc-34fe8db0a2ff\") " pod="openstack/cinder-backup-0"
Jan 26 13:28:24 crc kubenswrapper[4881]: I0126 13:28:24.375941 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4b9b969a-e1a3-4253-bffc-34fe8db0a2ff-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"4b9b969a-e1a3-4253-bffc-34fe8db0a2ff\") " pod="openstack/cinder-backup-0"
Jan 26 13:28:24 crc kubenswrapper[4881]: I0126 13:28:24.375982 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b9b969a-e1a3-4253-bffc-34fe8db0a2ff-config-data\") pod \"cinder-backup-0\" (UID: \"4b9b969a-e1a3-4253-bffc-34fe8db0a2ff\") " pod="openstack/cinder-backup-0"
Jan 26 13:28:24 crc kubenswrapper[4881]: I0126 13:28:24.376005 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/4b9b969a-e1a3-4253-bffc-34fe8db0a2ff-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"4b9b969a-e1a3-4253-bffc-34fe8db0a2ff\") " pod="openstack/cinder-backup-0"
Jan 26 13:28:24 crc kubenswrapper[4881]: I0126 13:28:24.376028 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/4b9b969a-e1a3-4253-bffc-34fe8db0a2ff-sys\") pod \"cinder-backup-0\" (UID: \"4b9b969a-e1a3-4253-bffc-34fe8db0a2ff\") " pod="openstack/cinder-backup-0"
Jan 26 13:28:24 crc kubenswrapper[4881]: I0126 13:28:24.376055 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/4b9b969a-e1a3-4253-bffc-34fe8db0a2ff-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"4b9b969a-e1a3-4253-bffc-34fe8db0a2ff\") " pod="openstack/cinder-backup-0"
Jan 26 13:28:24 crc kubenswrapper[4881]: I0126 13:28:24.376074 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b9b969a-e1a3-4253-bffc-34fe8db0a2ff-scripts\") pod \"cinder-backup-0\" (UID: \"4b9b969a-e1a3-4253-bffc-34fe8db0a2ff\") " pod="openstack/cinder-backup-0"
Jan 26 13:28:24 crc kubenswrapper[4881]: I0126 13:28:24.376104 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4b9b969a-e1a3-4253-bffc-34fe8db0a2ff-lib-modules\") pod \"cinder-backup-0\" (UID: \"4b9b969a-e1a3-4253-bffc-34fe8db0a2ff\") " pod="openstack/cinder-backup-0"
Jan 26 13:28:24 crc kubenswrapper[4881]: I0126 13:28:24.376136 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/4b9b969a-e1a3-4253-bffc-34fe8db0a2ff-run\") pod \"cinder-backup-0\" (UID: \"4b9b969a-e1a3-4253-bffc-34fe8db0a2ff\") " pod="openstack/cinder-backup-0"
Jan 26 13:28:24 crc kubenswrapper[4881]: I0126 13:28:24.376173 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/4b9b969a-e1a3-4253-bffc-34fe8db0a2ff-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"4b9b969a-e1a3-4253-bffc-34fe8db0a2ff\") " pod="openstack/cinder-backup-0"
Jan 26 13:28:24 crc kubenswrapper[4881]: I0126 13:28:24.376202 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wzj6\" (UniqueName: \"kubernetes.io/projected/4b9b969a-e1a3-4253-bffc-34fe8db0a2ff-kube-api-access-4wzj6\") pod \"cinder-backup-0\" (UID: \"4b9b969a-e1a3-4253-bffc-34fe8db0a2ff\") " pod="openstack/cinder-backup-0"
Jan 26 13:28:24 crc kubenswrapper[4881]: I0126 13:28:24.376246 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4b9b969a-e1a3-4253-bffc-34fe8db0a2ff-config-data-custom\") pod \"cinder-backup-0\" (UID: \"4b9b969a-e1a3-4253-bffc-34fe8db0a2ff\") " pod="openstack/cinder-backup-0"
Jan 26 13:28:24 crc kubenswrapper[4881]: I0126 13:28:24.376264 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/4b9b969a-e1a3-4253-bffc-34fe8db0a2ff-etc-nvme\") pod \"cinder-backup-0\" (UID: \"4b9b969a-e1a3-4253-bffc-34fe8db0a2ff\") " pod="openstack/cinder-backup-0"
Jan 26 13:28:24 crc kubenswrapper[4881]: I0126 13:28:24.376286 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b9b969a-e1a3-4253-bffc-34fe8db0a2ff-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"4b9b969a-e1a3-4253-bffc-34fe8db0a2ff\") " pod="openstack/cinder-backup-0"
Jan 26 13:28:24 crc kubenswrapper[4881]: I0126 13:28:24.488799 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/4b9b969a-e1a3-4253-bffc-34fe8db0a2ff-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"4b9b969a-e1a3-4253-bffc-34fe8db0a2ff\") " pod="openstack/cinder-backup-0"
Jan 26 13:28:24 crc kubenswrapper[4881]: I0126 13:28:24.488889 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4wzj6\" (UniqueName: \"kubernetes.io/projected/4b9b969a-e1a3-4253-bffc-34fe8db0a2ff-kube-api-access-4wzj6\") pod \"cinder-backup-0\" (UID: \"4b9b969a-e1a3-4253-bffc-34fe8db0a2ff\") " pod="openstack/cinder-backup-0"
Jan 26 13:28:24 crc kubenswrapper[4881]: I0126 13:28:24.488947 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4b9b969a-e1a3-4253-bffc-34fe8db0a2ff-config-data-custom\") pod \"cinder-backup-0\" (UID: \"4b9b969a-e1a3-4253-bffc-34fe8db0a2ff\") " pod="openstack/cinder-backup-0"
Jan 26 13:28:24 crc kubenswrapper[4881]: I0126 13:28:24.488969 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/4b9b969a-e1a3-4253-bffc-34fe8db0a2ff-etc-nvme\") pod \"cinder-backup-0\" (UID: \"4b9b969a-e1a3-4253-bffc-34fe8db0a2ff\") " pod="openstack/cinder-backup-0"
Jan 26 13:28:24 crc kubenswrapper[4881]: I0126 13:28:24.488997 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b9b969a-e1a3-4253-bffc-34fe8db0a2ff-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"4b9b969a-e1a3-4253-bffc-34fe8db0a2ff\") " pod="openstack/cinder-backup-0"
Jan 26 13:28:24 crc kubenswrapper[4881]: I0126 13:28:24.489032 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/4b9b969a-e1a3-4253-bffc-34fe8db0a2ff-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"4b9b969a-e1a3-4253-bffc-34fe8db0a2ff\") " pod="openstack/cinder-backup-0"
Jan 26 13:28:24 crc kubenswrapper[4881]: I0126 13:28:24.489063 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/4b9b969a-e1a3-4253-bffc-34fe8db0a2ff-dev\") pod \"cinder-backup-0\" (UID: \"4b9b969a-e1a3-4253-bffc-34fe8db0a2ff\") " pod="openstack/cinder-backup-0"
Jan 26 13:28:24 crc kubenswrapper[4881]: I0126 13:28:24.489109 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4b9b969a-e1a3-4253-bffc-34fe8db0a2ff-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"4b9b969a-e1a3-4253-bffc-34fe8db0a2ff\") " pod="openstack/cinder-backup-0"
Jan 26 13:28:24 crc kubenswrapper[4881]: I0126 13:28:24.489143 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b9b969a-e1a3-4253-bffc-34fe8db0a2ff-config-data\") pod \"cinder-backup-0\" (UID: \"4b9b969a-e1a3-4253-bffc-34fe8db0a2ff\") " pod="openstack/cinder-backup-0"
Jan 26 13:28:24 crc kubenswrapper[4881]: I0126 13:28:24.489167 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/4b9b969a-e1a3-4253-bffc-34fe8db0a2ff-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"4b9b969a-e1a3-4253-bffc-34fe8db0a2ff\") " pod="openstack/cinder-backup-0"
Jan 26 13:28:24 crc kubenswrapper[4881]: I0126 13:28:24.489197 4881 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/4b9b969a-e1a3-4253-bffc-34fe8db0a2ff-sys\") pod \"cinder-backup-0\" (UID: \"4b9b969a-e1a3-4253-bffc-34fe8db0a2ff\") " pod="openstack/cinder-backup-0" Jan 26 13:28:24 crc kubenswrapper[4881]: I0126 13:28:24.489225 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/4b9b969a-e1a3-4253-bffc-34fe8db0a2ff-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"4b9b969a-e1a3-4253-bffc-34fe8db0a2ff\") " pod="openstack/cinder-backup-0" Jan 26 13:28:24 crc kubenswrapper[4881]: I0126 13:28:24.489247 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b9b969a-e1a3-4253-bffc-34fe8db0a2ff-scripts\") pod \"cinder-backup-0\" (UID: \"4b9b969a-e1a3-4253-bffc-34fe8db0a2ff\") " pod="openstack/cinder-backup-0" Jan 26 13:28:24 crc kubenswrapper[4881]: I0126 13:28:24.489273 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4b9b969a-e1a3-4253-bffc-34fe8db0a2ff-lib-modules\") pod \"cinder-backup-0\" (UID: \"4b9b969a-e1a3-4253-bffc-34fe8db0a2ff\") " pod="openstack/cinder-backup-0" Jan 26 13:28:24 crc kubenswrapper[4881]: I0126 13:28:24.489308 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/4b9b969a-e1a3-4253-bffc-34fe8db0a2ff-run\") pod \"cinder-backup-0\" (UID: \"4b9b969a-e1a3-4253-bffc-34fe8db0a2ff\") " pod="openstack/cinder-backup-0" Jan 26 13:28:24 crc kubenswrapper[4881]: I0126 13:28:24.489432 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/4b9b969a-e1a3-4253-bffc-34fe8db0a2ff-run\") pod \"cinder-backup-0\" (UID: \"4b9b969a-e1a3-4253-bffc-34fe8db0a2ff\") " pod="openstack/cinder-backup-0" Jan 26 13:28:24 crc kubenswrapper[4881]: I0126 13:28:24.489498 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/4b9b969a-e1a3-4253-bffc-34fe8db0a2ff-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"4b9b969a-e1a3-4253-bffc-34fe8db0a2ff\") " pod="openstack/cinder-backup-0" Jan 26 13:28:24 crc kubenswrapper[4881]: I0126 13:28:24.490424 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-volume-nfs-2-0" Jan 26 13:28:24 crc kubenswrapper[4881]: I0126 13:28:24.491333 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/4b9b969a-e1a3-4253-bffc-34fe8db0a2ff-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"4b9b969a-e1a3-4253-bffc-34fe8db0a2ff\") " pod="openstack/cinder-backup-0" Jan 26 13:28:24 crc kubenswrapper[4881]: I0126 13:28:24.491447 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/4b9b969a-e1a3-4253-bffc-34fe8db0a2ff-etc-nvme\") pod \"cinder-backup-0\" (UID: \"4b9b969a-e1a3-4253-bffc-34fe8db0a2ff\") " pod="openstack/cinder-backup-0" Jan 26 13:28:24 crc kubenswrapper[4881]: I0126 13:28:24.491658 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/4b9b969a-e1a3-4253-bffc-34fe8db0a2ff-dev\") pod \"cinder-backup-0\" (UID: \"4b9b969a-e1a3-4253-bffc-34fe8db0a2ff\") " pod="openstack/cinder-backup-0" Jan 26 13:28:24 crc kubenswrapper[4881]: I0126 13:28:24.491752 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/4b9b969a-e1a3-4253-bffc-34fe8db0a2ff-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"4b9b969a-e1a3-4253-bffc-34fe8db0a2ff\") " pod="openstack/cinder-backup-0" Jan 26 13:28:24 crc kubenswrapper[4881]: I0126 13:28:24.491810 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/4b9b969a-e1a3-4253-bffc-34fe8db0a2ff-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"4b9b969a-e1a3-4253-bffc-34fe8db0a2ff\") " pod="openstack/cinder-backup-0" Jan 26 13:28:24 crc kubenswrapper[4881]: I0126 13:28:24.491835 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/4b9b969a-e1a3-4253-bffc-34fe8db0a2ff-sys\") pod \"cinder-backup-0\" (UID: \"4b9b969a-e1a3-4253-bffc-34fe8db0a2ff\") " pod="openstack/cinder-backup-0" Jan 26 13:28:24 crc kubenswrapper[4881]: I0126 13:28:24.491859 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4b9b969a-e1a3-4253-bffc-34fe8db0a2ff-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"4b9b969a-e1a3-4253-bffc-34fe8db0a2ff\") " pod="openstack/cinder-backup-0" Jan 26 13:28:24 crc kubenswrapper[4881]: I0126 13:28:24.501444 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4b9b969a-e1a3-4253-bffc-34fe8db0a2ff-lib-modules\") pod \"cinder-backup-0\" (UID: \"4b9b969a-e1a3-4253-bffc-34fe8db0a2ff\") " pod="openstack/cinder-backup-0" Jan 26 13:28:24 crc kubenswrapper[4881]: I0126 13:28:24.508036 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4b9b969a-e1a3-4253-bffc-34fe8db0a2ff-config-data-custom\") pod \"cinder-backup-0\" (UID: \"4b9b969a-e1a3-4253-bffc-34fe8db0a2ff\") " pod="openstack/cinder-backup-0" Jan 26 13:28:24 crc kubenswrapper[4881]: I0126 13:28:24.508244 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b9b969a-e1a3-4253-bffc-34fe8db0a2ff-config-data\") pod \"cinder-backup-0\" (UID: \"4b9b969a-e1a3-4253-bffc-34fe8db0a2ff\") " pod="openstack/cinder-backup-0" 
Jan 26 13:28:24 crc kubenswrapper[4881]: I0126 13:28:24.510966 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b9b969a-e1a3-4253-bffc-34fe8db0a2ff-scripts\") pod \"cinder-backup-0\" (UID: \"4b9b969a-e1a3-4253-bffc-34fe8db0a2ff\") " pod="openstack/cinder-backup-0" Jan 26 13:28:24 crc kubenswrapper[4881]: I0126 13:28:24.513336 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b9b969a-e1a3-4253-bffc-34fe8db0a2ff-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"4b9b969a-e1a3-4253-bffc-34fe8db0a2ff\") " pod="openstack/cinder-backup-0" Jan 26 13:28:24 crc kubenswrapper[4881]: I0126 13:28:24.534122 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4wzj6\" (UniqueName: \"kubernetes.io/projected/4b9b969a-e1a3-4253-bffc-34fe8db0a2ff-kube-api-access-4wzj6\") pod \"cinder-backup-0\" (UID: \"4b9b969a-e1a3-4253-bffc-34fe8db0a2ff\") " pod="openstack/cinder-backup-0" Jan 26 13:28:24 crc kubenswrapper[4881]: I0126 13:28:24.761430 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-backup-0" Jan 26 13:28:25 crc kubenswrapper[4881]: I0126 13:28:25.163836 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-nfs-0"] Jan 26 13:28:25 crc kubenswrapper[4881]: I0126 13:28:25.174880 4881 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 13:28:25 crc kubenswrapper[4881]: I0126 13:28:25.417992 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Jan 26 13:28:25 crc kubenswrapper[4881]: I0126 13:28:25.483154 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-nfs-0" event={"ID":"7fe1aa7f-b1bd-4777-934a-76e8ba531b1b","Type":"ContainerStarted","Data":"bb6cc47584155a264bbc486d11e4c441bbe0f08edbb01fe221e7232a68bde446"} Jan 26 13:28:25 crc kubenswrapper[4881]: I0126 13:28:25.893038 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-nfs-2-0"] Jan 26 13:28:26 crc kubenswrapper[4881]: I0126 13:28:26.503682 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-nfs-2-0" event={"ID":"77b81e80-c2b2-418e-b722-2f6ffa1b7103","Type":"ContainerStarted","Data":"7542db198d28852f8c496383e186d41ba078bb7d8304be6a66b68ba6b288d01a"} Jan 26 13:28:26 crc kubenswrapper[4881]: I0126 13:28:26.524918 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"4b9b969a-e1a3-4253-bffc-34fe8db0a2ff","Type":"ContainerStarted","Data":"728266efbb4378eff6923fc865bf7273e46a324f86408dd5c17834e6fa844a3a"} Jan 26 13:28:27 crc kubenswrapper[4881]: I0126 13:28:27.534201 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-nfs-0" event={"ID":"7fe1aa7f-b1bd-4777-934a-76e8ba531b1b","Type":"ContainerStarted","Data":"7b2ed7f35f613128e4fe0b312209496d08856bb457a4cafe8f84bfde82a13a6e"} Jan 26 13:28:27 crc kubenswrapper[4881]: I0126 13:28:27.536625 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-nfs-0" event={"ID":"7fe1aa7f-b1bd-4777-934a-76e8ba531b1b","Type":"ContainerStarted","Data":"17af9805e65b0c0bf742ae7c9b7586b5769480a56037c1651936de00efc4cad6"} Jan 26 13:28:27 crc kubenswrapper[4881]: I0126 13:28:27.537669 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" 
event={"ID":"4b9b969a-e1a3-4253-bffc-34fe8db0a2ff","Type":"ContainerStarted","Data":"f62b71e6c404f81741455fa9d75fa49724bb9d618084562f43b15c6c012f91ed"} Jan 26 13:28:27 crc kubenswrapper[4881]: I0126 13:28:27.537694 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"4b9b969a-e1a3-4253-bffc-34fe8db0a2ff","Type":"ContainerStarted","Data":"e00e4e0afc51750d8ffff90c5dec0bf2dcd902b8b5859c3188177c1d8cdbbc80"} Jan 26 13:28:27 crc kubenswrapper[4881]: I0126 13:28:27.540501 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-nfs-2-0" event={"ID":"77b81e80-c2b2-418e-b722-2f6ffa1b7103","Type":"ContainerStarted","Data":"42d43fc13b0b9f65708a8dc70e513489c7047fc010d5127b01a3ee655de8fe94"} Jan 26 13:28:27 crc kubenswrapper[4881]: I0126 13:28:27.540573 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-nfs-2-0" event={"ID":"77b81e80-c2b2-418e-b722-2f6ffa1b7103","Type":"ContainerStarted","Data":"f4c1fe7ff16658a90f96c55067cd3a136d529beb4dcb72af0cbf2eef6e03a6df"} Jan 26 13:28:27 crc kubenswrapper[4881]: I0126 13:28:27.577545 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-volume-nfs-0" podStartSLOduration=2.66249086 podStartE2EDuration="3.57752777s" podCreationTimestamp="2026-01-26 13:28:24 +0000 UTC" firstStartedPulling="2026-01-26 13:28:25.174685172 +0000 UTC m=+3177.653995198" lastFinishedPulling="2026-01-26 13:28:26.089722082 +0000 UTC m=+3178.569032108" observedRunningTime="2026-01-26 13:28:27.560751276 +0000 UTC m=+3180.040061302" watchObservedRunningTime="2026-01-26 13:28:27.57752777 +0000 UTC m=+3180.056837796" Jan 26 13:28:27 crc kubenswrapper[4881]: I0126 13:28:27.591943 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-volume-nfs-2-0" podStartSLOduration=3.591922586 podStartE2EDuration="3.591922586s" podCreationTimestamp="2026-01-26 13:28:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 13:28:27.590327958 +0000 UTC m=+3180.069638004" watchObservedRunningTime="2026-01-26 13:28:27.591922586 +0000 UTC m=+3180.071232612" Jan 26 13:28:27 crc kubenswrapper[4881]: I0126 13:28:27.612291 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-backup-0" podStartSLOduration=2.992911166 podStartE2EDuration="3.612271266s" podCreationTimestamp="2026-01-26 13:28:24 +0000 UTC" firstStartedPulling="2026-01-26 13:28:25.521508402 +0000 UTC m=+3178.000818428" lastFinishedPulling="2026-01-26 13:28:26.140868502 +0000 UTC m=+3178.620178528" observedRunningTime="2026-01-26 13:28:27.610387931 +0000 UTC m=+3180.089697957" watchObservedRunningTime="2026-01-26 13:28:27.612271266 +0000 UTC m=+3180.091581292" Jan 26 13:28:29 crc kubenswrapper[4881]: I0126 13:28:29.368726 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-volume-nfs-0" Jan 26 13:28:29 crc kubenswrapper[4881]: I0126 13:28:29.491643 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-volume-nfs-2-0" Jan 26 13:28:29 crc kubenswrapper[4881]: I0126 13:28:29.762594 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-backup-0" Jan 26 13:28:34 crc kubenswrapper[4881]: I0126 13:28:34.625877 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack/cinder-volume-nfs-0" Jan 26 13:28:34 crc kubenswrapper[4881]: I0126 13:28:34.682470 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-volume-nfs-2-0" Jan 26 13:28:34 crc kubenswrapper[4881]: I0126 13:28:34.928218 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-backup-0" Jan 26 13:28:36 crc kubenswrapper[4881]: I0126 13:28:36.090771 4881 scope.go:117] "RemoveContainer" containerID="742ede60393d7ca8a4393cea6341038ffa8eebf9a1b4a11fff9f1b373a984190" Jan 26 13:28:36 crc kubenswrapper[4881]: E0126 13:28:36.091531 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" Jan 26 13:28:39 crc kubenswrapper[4881]: I0126 13:28:39.742778 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-thrmb"] Jan 26 13:28:39 crc kubenswrapper[4881]: I0126 13:28:39.746009 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-thrmb" Jan 26 13:28:39 crc kubenswrapper[4881]: I0126 13:28:39.754743 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-thrmb"] Jan 26 13:28:39 crc kubenswrapper[4881]: I0126 13:28:39.837899 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/258a44fa-0886-4ab1-a3fd-89c0f2857fe2-utilities\") pod \"redhat-operators-thrmb\" (UID: \"258a44fa-0886-4ab1-a3fd-89c0f2857fe2\") " pod="openshift-marketplace/redhat-operators-thrmb" Jan 26 13:28:39 crc kubenswrapper[4881]: I0126 13:28:39.837989 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tj8sq\" (UniqueName: \"kubernetes.io/projected/258a44fa-0886-4ab1-a3fd-89c0f2857fe2-kube-api-access-tj8sq\") pod \"redhat-operators-thrmb\" (UID: \"258a44fa-0886-4ab1-a3fd-89c0f2857fe2\") " pod="openshift-marketplace/redhat-operators-thrmb" Jan 26 13:28:39 crc kubenswrapper[4881]: I0126 13:28:39.838115 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/258a44fa-0886-4ab1-a3fd-89c0f2857fe2-catalog-content\") pod \"redhat-operators-thrmb\" (UID: \"258a44fa-0886-4ab1-a3fd-89c0f2857fe2\") " pod="openshift-marketplace/redhat-operators-thrmb" Jan 26 13:28:39 crc kubenswrapper[4881]: I0126 13:28:39.940577 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/258a44fa-0886-4ab1-a3fd-89c0f2857fe2-catalog-content\") pod \"redhat-operators-thrmb\" (UID: \"258a44fa-0886-4ab1-a3fd-89c0f2857fe2\") " pod="openshift-marketplace/redhat-operators-thrmb" Jan 26 13:28:39 crc kubenswrapper[4881]: I0126 13:28:39.940667 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/258a44fa-0886-4ab1-a3fd-89c0f2857fe2-utilities\") pod \"redhat-operators-thrmb\" (UID: \"258a44fa-0886-4ab1-a3fd-89c0f2857fe2\") " 
pod="openshift-marketplace/redhat-operators-thrmb" Jan 26 13:28:39 crc kubenswrapper[4881]: I0126 13:28:39.940767 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tj8sq\" (UniqueName: \"kubernetes.io/projected/258a44fa-0886-4ab1-a3fd-89c0f2857fe2-kube-api-access-tj8sq\") pod \"redhat-operators-thrmb\" (UID: \"258a44fa-0886-4ab1-a3fd-89c0f2857fe2\") " pod="openshift-marketplace/redhat-operators-thrmb" Jan 26 13:28:39 crc kubenswrapper[4881]: I0126 13:28:39.941090 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/258a44fa-0886-4ab1-a3fd-89c0f2857fe2-catalog-content\") pod \"redhat-operators-thrmb\" (UID: \"258a44fa-0886-4ab1-a3fd-89c0f2857fe2\") " pod="openshift-marketplace/redhat-operators-thrmb" Jan 26 13:28:39 crc kubenswrapper[4881]: I0126 13:28:39.941343 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/258a44fa-0886-4ab1-a3fd-89c0f2857fe2-utilities\") pod \"redhat-operators-thrmb\" (UID: \"258a44fa-0886-4ab1-a3fd-89c0f2857fe2\") " pod="openshift-marketplace/redhat-operators-thrmb" Jan 26 13:28:39 crc kubenswrapper[4881]: I0126 13:28:39.972270 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tj8sq\" (UniqueName: \"kubernetes.io/projected/258a44fa-0886-4ab1-a3fd-89c0f2857fe2-kube-api-access-tj8sq\") pod \"redhat-operators-thrmb\" (UID: \"258a44fa-0886-4ab1-a3fd-89c0f2857fe2\") " pod="openshift-marketplace/redhat-operators-thrmb" Jan 26 13:28:40 crc kubenswrapper[4881]: I0126 13:28:40.077736 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-thrmb" Jan 26 13:28:40 crc kubenswrapper[4881]: I0126 13:28:40.581012 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-thrmb"] Jan 26 13:28:40 crc kubenswrapper[4881]: I0126 13:28:40.711046 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-thrmb" event={"ID":"258a44fa-0886-4ab1-a3fd-89c0f2857fe2","Type":"ContainerStarted","Data":"aea543b2a6c5f3808cf2cd75c59b9f179258307a3fe1314ae20299f1e2f76317"} Jan 26 13:28:41 crc kubenswrapper[4881]: I0126 13:28:41.723771 4881 generic.go:334] "Generic (PLEG): container finished" podID="258a44fa-0886-4ab1-a3fd-89c0f2857fe2" containerID="e5de264a5bc15476638f03e73fefc11d7997dece79a5bbca2a3e20aef8f7dd67" exitCode=0 Jan 26 13:28:41 crc kubenswrapper[4881]: I0126 13:28:41.723857 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-thrmb" event={"ID":"258a44fa-0886-4ab1-a3fd-89c0f2857fe2","Type":"ContainerDied","Data":"e5de264a5bc15476638f03e73fefc11d7997dece79a5bbca2a3e20aef8f7dd67"} Jan 26 13:28:42 crc kubenswrapper[4881]: I0126 13:28:42.735092 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-thrmb" event={"ID":"258a44fa-0886-4ab1-a3fd-89c0f2857fe2","Type":"ContainerStarted","Data":"d98db7068f1a611c43f2067575904bfe964490bd59c4ecd0174ecc9b96a3b4b9"} Jan 26 13:28:46 crc kubenswrapper[4881]: E0126 13:28:46.223930 4881 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod258a44fa_0886_4ab1_a3fd_89c0f2857fe2.slice/crio-d98db7068f1a611c43f2067575904bfe964490bd59c4ecd0174ecc9b96a3b4b9.scope\": RecentStats: 
unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod258a44fa_0886_4ab1_a3fd_89c0f2857fe2.slice/crio-conmon-d98db7068f1a611c43f2067575904bfe964490bd59c4ecd0174ecc9b96a3b4b9.scope\": RecentStats: unable to find data in memory cache]" Jan 26 13:28:46 crc kubenswrapper[4881]: I0126 13:28:46.780636 4881 generic.go:334] "Generic (PLEG): container finished" podID="258a44fa-0886-4ab1-a3fd-89c0f2857fe2" containerID="d98db7068f1a611c43f2067575904bfe964490bd59c4ecd0174ecc9b96a3b4b9" exitCode=0 Jan 26 13:28:46 crc kubenswrapper[4881]: I0126 13:28:46.780699 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-thrmb" event={"ID":"258a44fa-0886-4ab1-a3fd-89c0f2857fe2","Type":"ContainerDied","Data":"d98db7068f1a611c43f2067575904bfe964490bd59c4ecd0174ecc9b96a3b4b9"} Jan 26 13:28:47 crc kubenswrapper[4881]: I0126 13:28:47.091897 4881 scope.go:117] "RemoveContainer" containerID="742ede60393d7ca8a4393cea6341038ffa8eebf9a1b4a11fff9f1b373a984190" Jan 26 13:28:47 crc kubenswrapper[4881]: E0126 13:28:47.092110 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" Jan 26 13:28:48 crc kubenswrapper[4881]: I0126 13:28:48.804174 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-thrmb" event={"ID":"258a44fa-0886-4ab1-a3fd-89c0f2857fe2","Type":"ContainerStarted","Data":"b996bfe7c1ae0efdbef14fb8407170dd0198925da1ae1c323ae378d691f38c5d"} Jan 26 13:28:50 crc kubenswrapper[4881]: I0126 13:28:50.078497 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-thrmb" Jan 26 13:28:50 crc kubenswrapper[4881]: I0126 13:28:50.078566 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-thrmb" Jan 26 13:28:51 crc kubenswrapper[4881]: I0126 13:28:51.123554 4881 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-thrmb" podUID="258a44fa-0886-4ab1-a3fd-89c0f2857fe2" containerName="registry-server" probeResult="failure" output=< Jan 26 13:28:51 crc kubenswrapper[4881]: timeout: failed to connect service ":50051" within 1s Jan 26 13:28:51 crc kubenswrapper[4881]: > Jan 26 13:28:59 crc kubenswrapper[4881]: I0126 13:28:59.082775 4881 scope.go:117] "RemoveContainer" containerID="742ede60393d7ca8a4393cea6341038ffa8eebf9a1b4a11fff9f1b373a984190" Jan 26 13:28:59 crc kubenswrapper[4881]: E0126 13:28:59.083533 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" Jan 26 13:29:00 crc kubenswrapper[4881]: I0126 13:29:00.140054 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-thrmb"
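
At 13:28:51 the registry-server's startup probe fails with "timeout: failed to connect service \":50051\" within 1s" — by the look of the output, an OLM grpc_health_probe-style exec check against the catalog's gRPC port (an inference from the message, not something the log states). The kubelet keeps re-probing until the check passes, which is the startup status="started" flip recorded just above. A rough stand-in for the connection step only, using a plain TCP dial with the same one-second budget (the real probe speaks the gRPC health-checking protocol; this sketch does not):

```go
// probecheck.go - approximate the failing connect: a non-zero exit within
// 1s marks the probe attempt as a failure, a zero exit as a pass.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	addr := ":50051" // an empty host means the local system, as in the probe output
	if len(os.Args) > 1 {
		addr = os.Args[1]
	}
	conn, err := net.DialTimeout("tcp", addr, time.Second)
	if err != nil {
		fmt.Printf("timeout: failed to connect service %q within 1s (%v)\n", addr, err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("connected:", addr)
}
```
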
Jan 26 13:29:00 crc kubenswrapper[4881]: I0126 13:29:00.180559 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-thrmb" podStartSLOduration=15.739609378 podStartE2EDuration="21.180510074s" podCreationTimestamp="2026-01-26 13:28:39 +0000 UTC" firstStartedPulling="2026-01-26 13:28:41.727212783 +0000 UTC m=+3194.206522819" lastFinishedPulling="2026-01-26 13:28:47.168113479 +0000 UTC m=+3199.647423515" observedRunningTime="2026-01-26 13:28:48.83754005 +0000 UTC m=+3201.316850106" watchObservedRunningTime="2026-01-26 13:29:00.180510074 +0000 UTC m=+3212.659820120" Jan 26 13:29:00 crc kubenswrapper[4881]: I0126 13:29:00.215916 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-thrmb" Jan 26 13:29:00 crc kubenswrapper[4881]: I0126 13:29:00.384586 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-thrmb"] Jan 26 13:29:01 crc kubenswrapper[4881]: I0126 13:29:01.952159 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-thrmb" podUID="258a44fa-0886-4ab1-a3fd-89c0f2857fe2" containerName="registry-server" containerID="cri-o://b996bfe7c1ae0efdbef14fb8407170dd0198925da1ae1c323ae378d691f38c5d" gracePeriod=2 Jan 26 13:29:02 crc kubenswrapper[4881]: I0126 13:29:02.460508 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-thrmb" Jan 26 13:29:02 crc kubenswrapper[4881]: I0126 13:29:02.581787 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/258a44fa-0886-4ab1-a3fd-89c0f2857fe2-utilities\") pod \"258a44fa-0886-4ab1-a3fd-89c0f2857fe2\" (UID: \"258a44fa-0886-4ab1-a3fd-89c0f2857fe2\") " Jan 26 13:29:02 crc kubenswrapper[4881]: I0126 13:29:02.581867 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tj8sq\" (UniqueName: \"kubernetes.io/projected/258a44fa-0886-4ab1-a3fd-89c0f2857fe2-kube-api-access-tj8sq\") pod \"258a44fa-0886-4ab1-a3fd-89c0f2857fe2\" (UID: \"258a44fa-0886-4ab1-a3fd-89c0f2857fe2\") " Jan 26 13:29:02 crc kubenswrapper[4881]: I0126 13:29:02.581900 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/258a44fa-0886-4ab1-a3fd-89c0f2857fe2-catalog-content\") pod \"258a44fa-0886-4ab1-a3fd-89c0f2857fe2\" (UID: \"258a44fa-0886-4ab1-a3fd-89c0f2857fe2\") " Jan 26 13:29:02 crc kubenswrapper[4881]: I0126 13:29:02.582430 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/258a44fa-0886-4ab1-a3fd-89c0f2857fe2-utilities" (OuterVolumeSpecName: "utilities") pod "258a44fa-0886-4ab1-a3fd-89c0f2857fe2" (UID: "258a44fa-0886-4ab1-a3fd-89c0f2857fe2"). InnerVolumeSpecName "utilities".
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:29:02 crc kubenswrapper[4881]: I0126 13:29:02.582651 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/258a44fa-0886-4ab1-a3fd-89c0f2857fe2-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 13:29:02 crc kubenswrapper[4881]: I0126 13:29:02.590810 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/258a44fa-0886-4ab1-a3fd-89c0f2857fe2-kube-api-access-tj8sq" (OuterVolumeSpecName: "kube-api-access-tj8sq") pod "258a44fa-0886-4ab1-a3fd-89c0f2857fe2" (UID: "258a44fa-0886-4ab1-a3fd-89c0f2857fe2"). InnerVolumeSpecName "kube-api-access-tj8sq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:29:02 crc kubenswrapper[4881]: I0126 13:29:02.684210 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tj8sq\" (UniqueName: \"kubernetes.io/projected/258a44fa-0886-4ab1-a3fd-89c0f2857fe2-kube-api-access-tj8sq\") on node \"crc\" DevicePath \"\"" Jan 26 13:29:02 crc kubenswrapper[4881]: I0126 13:29:02.702503 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/258a44fa-0886-4ab1-a3fd-89c0f2857fe2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "258a44fa-0886-4ab1-a3fd-89c0f2857fe2" (UID: "258a44fa-0886-4ab1-a3fd-89c0f2857fe2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:29:02 crc kubenswrapper[4881]: I0126 13:29:02.785858 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/258a44fa-0886-4ab1-a3fd-89c0f2857fe2-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 13:29:02 crc kubenswrapper[4881]: I0126 13:29:02.964862 4881 generic.go:334] "Generic (PLEG): container finished" podID="258a44fa-0886-4ab1-a3fd-89c0f2857fe2" containerID="b996bfe7c1ae0efdbef14fb8407170dd0198925da1ae1c323ae378d691f38c5d" exitCode=0 Jan 26 13:29:02 crc kubenswrapper[4881]: I0126 13:29:02.964953 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-thrmb" event={"ID":"258a44fa-0886-4ab1-a3fd-89c0f2857fe2","Type":"ContainerDied","Data":"b996bfe7c1ae0efdbef14fb8407170dd0198925da1ae1c323ae378d691f38c5d"} Jan 26 13:29:02 crc kubenswrapper[4881]: I0126 13:29:02.964983 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-thrmb" Jan 26 13:29:02 crc kubenswrapper[4881]: I0126 13:29:02.965006 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-thrmb" event={"ID":"258a44fa-0886-4ab1-a3fd-89c0f2857fe2","Type":"ContainerDied","Data":"aea543b2a6c5f3808cf2cd75c59b9f179258307a3fe1314ae20299f1e2f76317"} Jan 26 13:29:02 crc kubenswrapper[4881]: I0126 13:29:02.965037 4881 scope.go:117] "RemoveContainer" containerID="b996bfe7c1ae0efdbef14fb8407170dd0198925da1ae1c323ae378d691f38c5d" Jan 26 13:29:03 crc kubenswrapper[4881]: I0126 13:29:03.005324 4881 scope.go:117] "RemoveContainer" containerID="d98db7068f1a611c43f2067575904bfe964490bd59c4ecd0174ecc9b96a3b4b9" Jan 26 13:29:03 crc kubenswrapper[4881]: I0126 13:29:03.016020 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-thrmb"] Jan 26 13:29:03 crc kubenswrapper[4881]: I0126 13:29:03.031132 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-thrmb"] Jan 26 13:29:03 crc kubenswrapper[4881]: I0126 13:29:03.047274 4881 scope.go:117] "RemoveContainer" containerID="e5de264a5bc15476638f03e73fefc11d7997dece79a5bbca2a3e20aef8f7dd67" Jan 26 13:29:03 crc kubenswrapper[4881]: I0126 13:29:03.114551 4881 scope.go:117] "RemoveContainer" containerID="b996bfe7c1ae0efdbef14fb8407170dd0198925da1ae1c323ae378d691f38c5d" Jan 26 13:29:03 crc kubenswrapper[4881]: E0126 13:29:03.115705 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b996bfe7c1ae0efdbef14fb8407170dd0198925da1ae1c323ae378d691f38c5d\": container with ID starting with b996bfe7c1ae0efdbef14fb8407170dd0198925da1ae1c323ae378d691f38c5d not found: ID does not exist" containerID="b996bfe7c1ae0efdbef14fb8407170dd0198925da1ae1c323ae378d691f38c5d" Jan 26 13:29:03 crc kubenswrapper[4881]: I0126 13:29:03.115764 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b996bfe7c1ae0efdbef14fb8407170dd0198925da1ae1c323ae378d691f38c5d"} err="failed to get container status \"b996bfe7c1ae0efdbef14fb8407170dd0198925da1ae1c323ae378d691f38c5d\": rpc error: code = NotFound desc = could not find container \"b996bfe7c1ae0efdbef14fb8407170dd0198925da1ae1c323ae378d691f38c5d\": container with ID starting with b996bfe7c1ae0efdbef14fb8407170dd0198925da1ae1c323ae378d691f38c5d not found: ID does not exist" Jan 26 13:29:03 crc kubenswrapper[4881]: I0126 13:29:03.115796 4881 scope.go:117] "RemoveContainer" containerID="d98db7068f1a611c43f2067575904bfe964490bd59c4ecd0174ecc9b96a3b4b9" Jan 26 13:29:03 crc kubenswrapper[4881]: E0126 13:29:03.116404 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d98db7068f1a611c43f2067575904bfe964490bd59c4ecd0174ecc9b96a3b4b9\": container with ID starting with d98db7068f1a611c43f2067575904bfe964490bd59c4ecd0174ecc9b96a3b4b9 not found: ID does not exist" containerID="d98db7068f1a611c43f2067575904bfe964490bd59c4ecd0174ecc9b96a3b4b9" Jan 26 13:29:03 crc kubenswrapper[4881]: I0126 13:29:03.116454 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d98db7068f1a611c43f2067575904bfe964490bd59c4ecd0174ecc9b96a3b4b9"} err="failed to get container status \"d98db7068f1a611c43f2067575904bfe964490bd59c4ecd0174ecc9b96a3b4b9\": rpc error: code = NotFound desc = could not find container 
\"d98db7068f1a611c43f2067575904bfe964490bd59c4ecd0174ecc9b96a3b4b9\": container with ID starting with d98db7068f1a611c43f2067575904bfe964490bd59c4ecd0174ecc9b96a3b4b9 not found: ID does not exist" Jan 26 13:29:03 crc kubenswrapper[4881]: I0126 13:29:03.116489 4881 scope.go:117] "RemoveContainer" containerID="e5de264a5bc15476638f03e73fefc11d7997dece79a5bbca2a3e20aef8f7dd67" Jan 26 13:29:03 crc kubenswrapper[4881]: E0126 13:29:03.117000 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e5de264a5bc15476638f03e73fefc11d7997dece79a5bbca2a3e20aef8f7dd67\": container with ID starting with e5de264a5bc15476638f03e73fefc11d7997dece79a5bbca2a3e20aef8f7dd67 not found: ID does not exist" containerID="e5de264a5bc15476638f03e73fefc11d7997dece79a5bbca2a3e20aef8f7dd67" Jan 26 13:29:03 crc kubenswrapper[4881]: I0126 13:29:03.117054 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e5de264a5bc15476638f03e73fefc11d7997dece79a5bbca2a3e20aef8f7dd67"} err="failed to get container status \"e5de264a5bc15476638f03e73fefc11d7997dece79a5bbca2a3e20aef8f7dd67\": rpc error: code = NotFound desc = could not find container \"e5de264a5bc15476638f03e73fefc11d7997dece79a5bbca2a3e20aef8f7dd67\": container with ID starting with e5de264a5bc15476638f03e73fefc11d7997dece79a5bbca2a3e20aef8f7dd67 not found: ID does not exist" Jan 26 13:29:04 crc kubenswrapper[4881]: I0126 13:29:04.094662 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="258a44fa-0886-4ab1-a3fd-89c0f2857fe2" path="/var/lib/kubelet/pods/258a44fa-0886-4ab1-a3fd-89c0f2857fe2/volumes" Jan 26 13:29:12 crc kubenswrapper[4881]: I0126 13:29:12.082641 4881 scope.go:117] "RemoveContainer" containerID="742ede60393d7ca8a4393cea6341038ffa8eebf9a1b4a11fff9f1b373a984190" Jan 26 13:29:12 crc kubenswrapper[4881]: E0126 13:29:12.083983 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" Jan 26 13:29:22 crc kubenswrapper[4881]: I0126 13:29:22.984527 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-6g8vf"] Jan 26 13:29:22 crc kubenswrapper[4881]: E0126 13:29:22.985358 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="258a44fa-0886-4ab1-a3fd-89c0f2857fe2" containerName="extract-content" Jan 26 13:29:22 crc kubenswrapper[4881]: I0126 13:29:22.985370 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="258a44fa-0886-4ab1-a3fd-89c0f2857fe2" containerName="extract-content" Jan 26 13:29:22 crc kubenswrapper[4881]: E0126 13:29:22.985387 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="258a44fa-0886-4ab1-a3fd-89c0f2857fe2" containerName="registry-server" Jan 26 13:29:22 crc kubenswrapper[4881]: I0126 13:29:22.985394 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="258a44fa-0886-4ab1-a3fd-89c0f2857fe2" containerName="registry-server" Jan 26 13:29:22 crc kubenswrapper[4881]: E0126 13:29:22.985419 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="258a44fa-0886-4ab1-a3fd-89c0f2857fe2" containerName="extract-utilities" Jan 26 13:29:22 crc 
Jan 26 13:29:22 crc kubenswrapper[4881]: I0126 13:29:22.985426 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="258a44fa-0886-4ab1-a3fd-89c0f2857fe2" containerName="extract-utilities" Jan 26 13:29:22 crc kubenswrapper[4881]: I0126 13:29:22.985623 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="258a44fa-0886-4ab1-a3fd-89c0f2857fe2" containerName="registry-server" Jan 26 13:29:22 crc kubenswrapper[4881]: I0126 13:29:22.987284 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6g8vf" Jan 26 13:29:23 crc kubenswrapper[4881]: I0126 13:29:23.005251 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6g8vf"] Jan 26 13:29:23 crc kubenswrapper[4881]: I0126 13:29:23.037490 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21be046f-b853-414b-97cb-f7056629ce28-utilities\") pod \"community-operators-6g8vf\" (UID: \"21be046f-b853-414b-97cb-f7056629ce28\") " pod="openshift-marketplace/community-operators-6g8vf" Jan 26 13:29:23 crc kubenswrapper[4881]: I0126 13:29:23.037985 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lf2qj\" (UniqueName: \"kubernetes.io/projected/21be046f-b853-414b-97cb-f7056629ce28-kube-api-access-lf2qj\") pod \"community-operators-6g8vf\" (UID: \"21be046f-b853-414b-97cb-f7056629ce28\") " pod="openshift-marketplace/community-operators-6g8vf" Jan 26 13:29:23 crc kubenswrapper[4881]: I0126 13:29:23.038028 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21be046f-b853-414b-97cb-f7056629ce28-catalog-content\") pod \"community-operators-6g8vf\" (UID: \"21be046f-b853-414b-97cb-f7056629ce28\") " pod="openshift-marketplace/community-operators-6g8vf" Jan 26 13:29:23 crc kubenswrapper[4881]: I0126 13:29:23.082572 4881 scope.go:117] "RemoveContainer" containerID="742ede60393d7ca8a4393cea6341038ffa8eebf9a1b4a11fff9f1b373a984190" Jan 26 13:29:23 crc kubenswrapper[4881]: E0126 13:29:23.082845 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" Jan 26 13:29:23 crc kubenswrapper[4881]: I0126 13:29:23.139409 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21be046f-b853-414b-97cb-f7056629ce28-utilities\") pod \"community-operators-6g8vf\" (UID: \"21be046f-b853-414b-97cb-f7056629ce28\") " pod="openshift-marketplace/community-operators-6g8vf" Jan 26 13:29:23 crc kubenswrapper[4881]: I0126 13:29:23.140065 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21be046f-b853-414b-97cb-f7056629ce28-utilities\") pod \"community-operators-6g8vf\" (UID: \"21be046f-b853-414b-97cb-f7056629ce28\") " pod="openshift-marketplace/community-operators-6g8vf" Jan 26 13:29:23 crc kubenswrapper[4881]: I0126 13:29:23.140560 4881 reconciler_common.go:218] "operationExecutor.MountVolume
started for volume \"kube-api-access-lf2qj\" (UniqueName: \"kubernetes.io/projected/21be046f-b853-414b-97cb-f7056629ce28-kube-api-access-lf2qj\") pod \"community-operators-6g8vf\" (UID: \"21be046f-b853-414b-97cb-f7056629ce28\") " pod="openshift-marketplace/community-operators-6g8vf" Jan 26 13:29:23 crc kubenswrapper[4881]: I0126 13:29:23.140648 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21be046f-b853-414b-97cb-f7056629ce28-catalog-content\") pod \"community-operators-6g8vf\" (UID: \"21be046f-b853-414b-97cb-f7056629ce28\") " pod="openshift-marketplace/community-operators-6g8vf" Jan 26 13:29:23 crc kubenswrapper[4881]: I0126 13:29:23.141148 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21be046f-b853-414b-97cb-f7056629ce28-catalog-content\") pod \"community-operators-6g8vf\" (UID: \"21be046f-b853-414b-97cb-f7056629ce28\") " pod="openshift-marketplace/community-operators-6g8vf" Jan 26 13:29:23 crc kubenswrapper[4881]: I0126 13:29:23.165910 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lf2qj\" (UniqueName: \"kubernetes.io/projected/21be046f-b853-414b-97cb-f7056629ce28-kube-api-access-lf2qj\") pod \"community-operators-6g8vf\" (UID: \"21be046f-b853-414b-97cb-f7056629ce28\") " pod="openshift-marketplace/community-operators-6g8vf" Jan 26 13:29:23 crc kubenswrapper[4881]: I0126 13:29:23.312769 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6g8vf" Jan 26 13:29:23 crc kubenswrapper[4881]: I0126 13:29:23.865401 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6g8vf"] Jan 26 13:29:24 crc kubenswrapper[4881]: I0126 13:29:24.201715 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6g8vf" event={"ID":"21be046f-b853-414b-97cb-f7056629ce28","Type":"ContainerStarted","Data":"84f69d6e728ba3ff3f32e5549537722143d042b40db3fd657f9d613f0424c588"} Jan 26 13:29:25 crc kubenswrapper[4881]: I0126 13:29:25.232058 4881 generic.go:334] "Generic (PLEG): container finished" podID="21be046f-b853-414b-97cb-f7056629ce28" containerID="3b81ebd697afa9bb9fbea2e2fbc0995bed3c01d13978f85cf5c09bbb77fa1e70" exitCode=0 Jan 26 13:29:25 crc kubenswrapper[4881]: I0126 13:29:25.232114 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6g8vf" event={"ID":"21be046f-b853-414b-97cb-f7056629ce28","Type":"ContainerDied","Data":"3b81ebd697afa9bb9fbea2e2fbc0995bed3c01d13978f85cf5c09bbb77fa1e70"} Jan 26 13:29:26 crc kubenswrapper[4881]: I0126 13:29:26.244153 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6g8vf" event={"ID":"21be046f-b853-414b-97cb-f7056629ce28","Type":"ContainerStarted","Data":"d285d3fd8d47be47132b7062a55cd77994e28e15e6271c7db334d4048136b5ee"} Jan 26 13:29:27 crc kubenswrapper[4881]: I0126 13:29:27.086718 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 26 13:29:27 crc kubenswrapper[4881]: I0126 13:29:27.087271 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="a0783093-5301-4381-adfe-dc3d027975f8" containerName="thanos-sidecar" 
containerID="cri-o://2e601063a7495673a44565484da3d0b6b733e98eca89ce5b9ad794c280a61cef" gracePeriod=600 Jan 26 13:29:27 crc kubenswrapper[4881]: I0126 13:29:27.087409 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="a0783093-5301-4381-adfe-dc3d027975f8" containerName="config-reloader" containerID="cri-o://58d4f36705576e7f002be4b5d469f91d7432b10a722f86b1944ac7faa35e3f59" gracePeriod=600 Jan 26 13:29:27 crc kubenswrapper[4881]: I0126 13:29:27.087876 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="a0783093-5301-4381-adfe-dc3d027975f8" containerName="prometheus" containerID="cri-o://50f4e67013c626fec742620472893ecab907d7e235f63d643837dc222dc3c284" gracePeriod=600 Jan 26 13:29:27 crc kubenswrapper[4881]: I0126 13:29:27.258169 4881 generic.go:334] "Generic (PLEG): container finished" podID="21be046f-b853-414b-97cb-f7056629ce28" containerID="d285d3fd8d47be47132b7062a55cd77994e28e15e6271c7db334d4048136b5ee" exitCode=0 Jan 26 13:29:27 crc kubenswrapper[4881]: I0126 13:29:27.258237 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6g8vf" event={"ID":"21be046f-b853-414b-97cb-f7056629ce28","Type":"ContainerDied","Data":"d285d3fd8d47be47132b7062a55cd77994e28e15e6271c7db334d4048136b5ee"} Jan 26 13:29:27 crc kubenswrapper[4881]: E0126 13:29:27.303480 4881 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda0783093_5301_4381_adfe_dc3d027975f8.slice/crio-2e601063a7495673a44565484da3d0b6b733e98eca89ce5b9ad794c280a61cef.scope\": RecentStats: unable to find data in memory cache]" Jan 26 13:29:28 crc kubenswrapper[4881]: I0126 13:29:28.160677 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 26 13:29:28 crc kubenswrapper[4881]: I0126 13:29:28.259580 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/a0783093-5301-4381-adfe-dc3d027975f8-prometheus-metric-storage-rulefiles-1\") pod \"a0783093-5301-4381-adfe-dc3d027975f8\" (UID: \"a0783093-5301-4381-adfe-dc3d027975f8\") " Jan 26 13:29:28 crc kubenswrapper[4881]: I0126 13:29:28.259850 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/a0783093-5301-4381-adfe-dc3d027975f8-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"a0783093-5301-4381-adfe-dc3d027975f8\" (UID: \"a0783093-5301-4381-adfe-dc3d027975f8\") " Jan 26 13:29:28 crc kubenswrapper[4881]: I0126 13:29:28.259869 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/a0783093-5301-4381-adfe-dc3d027975f8-thanos-prometheus-http-client-file\") pod \"a0783093-5301-4381-adfe-dc3d027975f8\" (UID: \"a0783093-5301-4381-adfe-dc3d027975f8\") " Jan 26 13:29:28 crc kubenswrapper[4881]: I0126 13:29:28.259904 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/a0783093-5301-4381-adfe-dc3d027975f8-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"a0783093-5301-4381-adfe-dc3d027975f8\" (UID: \"a0783093-5301-4381-adfe-dc3d027975f8\") " Jan 26 13:29:28 crc kubenswrapper[4881]: I0126 13:29:28.259932 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/a0783093-5301-4381-adfe-dc3d027975f8-config-out\") pod \"a0783093-5301-4381-adfe-dc3d027975f8\" (UID: \"a0783093-5301-4381-adfe-dc3d027975f8\") " Jan 26 13:29:28 crc kubenswrapper[4881]: I0126 13:29:28.259970 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0783093-5301-4381-adfe-dc3d027975f8-secret-combined-ca-bundle\") pod \"a0783093-5301-4381-adfe-dc3d027975f8\" (UID: \"a0783093-5301-4381-adfe-dc3d027975f8\") " Jan 26 13:29:28 crc kubenswrapper[4881]: I0126 13:29:28.259992 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8j8rj\" (UniqueName: \"kubernetes.io/projected/a0783093-5301-4381-adfe-dc3d027975f8-kube-api-access-8j8rj\") pod \"a0783093-5301-4381-adfe-dc3d027975f8\" (UID: \"a0783093-5301-4381-adfe-dc3d027975f8\") " Jan 26 13:29:28 crc kubenswrapper[4881]: I0126 13:29:28.260024 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/a0783093-5301-4381-adfe-dc3d027975f8-prometheus-metric-storage-rulefiles-2\") pod \"a0783093-5301-4381-adfe-dc3d027975f8\" (UID: \"a0783093-5301-4381-adfe-dc3d027975f8\") " Jan 26 13:29:28 crc kubenswrapper[4881]: I0126 13:29:28.260123 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a0783093-5301-4381-adfe-dc3d027975f8-config\") pod \"a0783093-5301-4381-adfe-dc3d027975f8\" (UID: 
\"a0783093-5301-4381-adfe-dc3d027975f8\") " Jan 26 13:29:28 crc kubenswrapper[4881]: I0126 13:29:28.260147 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/a0783093-5301-4381-adfe-dc3d027975f8-web-config\") pod \"a0783093-5301-4381-adfe-dc3d027975f8\" (UID: \"a0783093-5301-4381-adfe-dc3d027975f8\") " Jan 26 13:29:28 crc kubenswrapper[4881]: I0126 13:29:28.260502 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0783093-5301-4381-adfe-dc3d027975f8-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "a0783093-5301-4381-adfe-dc3d027975f8" (UID: "a0783093-5301-4381-adfe-dc3d027975f8"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:29:28 crc kubenswrapper[4881]: I0126 13:29:28.260844 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dce83a91-fa62-48d4-9e19-6ff80a628aa4\") pod \"a0783093-5301-4381-adfe-dc3d027975f8\" (UID: \"a0783093-5301-4381-adfe-dc3d027975f8\") " Jan 26 13:29:28 crc kubenswrapper[4881]: I0126 13:29:28.260908 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/a0783093-5301-4381-adfe-dc3d027975f8-prometheus-metric-storage-rulefiles-0\") pod \"a0783093-5301-4381-adfe-dc3d027975f8\" (UID: \"a0783093-5301-4381-adfe-dc3d027975f8\") " Jan 26 13:29:28 crc kubenswrapper[4881]: I0126 13:29:28.261015 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/a0783093-5301-4381-adfe-dc3d027975f8-tls-assets\") pod \"a0783093-5301-4381-adfe-dc3d027975f8\" (UID: \"a0783093-5301-4381-adfe-dc3d027975f8\") " Jan 26 13:29:28 crc kubenswrapper[4881]: I0126 13:29:28.261312 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0783093-5301-4381-adfe-dc3d027975f8-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "a0783093-5301-4381-adfe-dc3d027975f8" (UID: "a0783093-5301-4381-adfe-dc3d027975f8"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:29:28 crc kubenswrapper[4881]: I0126 13:29:28.261882 4881 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/a0783093-5301-4381-adfe-dc3d027975f8-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Jan 26 13:29:28 crc kubenswrapper[4881]: I0126 13:29:28.261903 4881 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/a0783093-5301-4381-adfe-dc3d027975f8-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Jan 26 13:29:28 crc kubenswrapper[4881]: I0126 13:29:28.262123 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0783093-5301-4381-adfe-dc3d027975f8-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "a0783093-5301-4381-adfe-dc3d027975f8" (UID: "a0783093-5301-4381-adfe-dc3d027975f8"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:29:28 crc kubenswrapper[4881]: I0126 13:29:28.266690 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0783093-5301-4381-adfe-dc3d027975f8-kube-api-access-8j8rj" (OuterVolumeSpecName: "kube-api-access-8j8rj") pod "a0783093-5301-4381-adfe-dc3d027975f8" (UID: "a0783093-5301-4381-adfe-dc3d027975f8"). InnerVolumeSpecName "kube-api-access-8j8rj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:29:28 crc kubenswrapper[4881]: I0126 13:29:28.268066 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0783093-5301-4381-adfe-dc3d027975f8-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "a0783093-5301-4381-adfe-dc3d027975f8" (UID: "a0783093-5301-4381-adfe-dc3d027975f8"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:29:28 crc kubenswrapper[4881]: I0126 13:29:28.269332 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0783093-5301-4381-adfe-dc3d027975f8-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "a0783093-5301-4381-adfe-dc3d027975f8" (UID: "a0783093-5301-4381-adfe-dc3d027975f8"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:29:28 crc kubenswrapper[4881]: I0126 13:29:28.269375 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0783093-5301-4381-adfe-dc3d027975f8-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d" (OuterVolumeSpecName: "web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d") pod "a0783093-5301-4381-adfe-dc3d027975f8" (UID: "a0783093-5301-4381-adfe-dc3d027975f8"). InnerVolumeSpecName "web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:29:28 crc kubenswrapper[4881]: I0126 13:29:28.270137 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0783093-5301-4381-adfe-dc3d027975f8-secret-combined-ca-bundle" (OuterVolumeSpecName: "secret-combined-ca-bundle") pod "a0783093-5301-4381-adfe-dc3d027975f8" (UID: "a0783093-5301-4381-adfe-dc3d027975f8"). 
InnerVolumeSpecName "secret-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:29:28 crc kubenswrapper[4881]: I0126 13:29:28.270718 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0783093-5301-4381-adfe-dc3d027975f8-config" (OuterVolumeSpecName: "config") pod "a0783093-5301-4381-adfe-dc3d027975f8" (UID: "a0783093-5301-4381-adfe-dc3d027975f8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:29:28 crc kubenswrapper[4881]: I0126 13:29:28.274712 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a0783093-5301-4381-adfe-dc3d027975f8-config-out" (OuterVolumeSpecName: "config-out") pod "a0783093-5301-4381-adfe-dc3d027975f8" (UID: "a0783093-5301-4381-adfe-dc3d027975f8"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:29:28 crc kubenswrapper[4881]: I0126 13:29:28.282766 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0783093-5301-4381-adfe-dc3d027975f8-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d" (OuterVolumeSpecName: "web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d") pod "a0783093-5301-4381-adfe-dc3d027975f8" (UID: "a0783093-5301-4381-adfe-dc3d027975f8"). InnerVolumeSpecName "web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:29:28 crc kubenswrapper[4881]: I0126 13:29:28.289639 4881 generic.go:334] "Generic (PLEG): container finished" podID="a0783093-5301-4381-adfe-dc3d027975f8" containerID="2e601063a7495673a44565484da3d0b6b733e98eca89ce5b9ad794c280a61cef" exitCode=0 Jan 26 13:29:28 crc kubenswrapper[4881]: I0126 13:29:28.289667 4881 generic.go:334] "Generic (PLEG): container finished" podID="a0783093-5301-4381-adfe-dc3d027975f8" containerID="58d4f36705576e7f002be4b5d469f91d7432b10a722f86b1944ac7faa35e3f59" exitCode=0 Jan 26 13:29:28 crc kubenswrapper[4881]: I0126 13:29:28.289675 4881 generic.go:334] "Generic (PLEG): container finished" podID="a0783093-5301-4381-adfe-dc3d027975f8" containerID="50f4e67013c626fec742620472893ecab907d7e235f63d643837dc222dc3c284" exitCode=0 Jan 26 13:29:28 crc kubenswrapper[4881]: I0126 13:29:28.289733 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"a0783093-5301-4381-adfe-dc3d027975f8","Type":"ContainerDied","Data":"2e601063a7495673a44565484da3d0b6b733e98eca89ce5b9ad794c280a61cef"} Jan 26 13:29:28 crc kubenswrapper[4881]: I0126 13:29:28.289762 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"a0783093-5301-4381-adfe-dc3d027975f8","Type":"ContainerDied","Data":"58d4f36705576e7f002be4b5d469f91d7432b10a722f86b1944ac7faa35e3f59"} Jan 26 13:29:28 crc kubenswrapper[4881]: I0126 13:29:28.289794 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"a0783093-5301-4381-adfe-dc3d027975f8","Type":"ContainerDied","Data":"50f4e67013c626fec742620472893ecab907d7e235f63d643837dc222dc3c284"} Jan 26 13:29:28 crc kubenswrapper[4881]: I0126 13:29:28.289804 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"a0783093-5301-4381-adfe-dc3d027975f8","Type":"ContainerDied","Data":"01217105e6b821489d53710013d1ec00a6322097c722af2e992c2855af9435dc"} Jan 26 13:29:28 
crc kubenswrapper[4881]: I0126 13:29:28.289820 4881 scope.go:117] "RemoveContainer" containerID="2e601063a7495673a44565484da3d0b6b733e98eca89ce5b9ad794c280a61cef" Jan 26 13:29:28 crc kubenswrapper[4881]: I0126 13:29:28.290002 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 26 13:29:28 crc kubenswrapper[4881]: I0126 13:29:28.295472 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6g8vf" event={"ID":"21be046f-b853-414b-97cb-f7056629ce28","Type":"ContainerStarted","Data":"f98f8752e5ae7926f7400b555500e7b043f992a852ddc6f1d6fa2aec4b4ab74b"} Jan 26 13:29:28 crc kubenswrapper[4881]: I0126 13:29:28.305428 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dce83a91-fa62-48d4-9e19-6ff80a628aa4" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "a0783093-5301-4381-adfe-dc3d027975f8" (UID: "a0783093-5301-4381-adfe-dc3d027975f8"). InnerVolumeSpecName "pvc-dce83a91-fa62-48d4-9e19-6ff80a628aa4". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 26 13:29:28 crc kubenswrapper[4881]: I0126 13:29:28.328792 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-6g8vf" podStartSLOduration=3.612057661 podStartE2EDuration="6.328771204s" podCreationTimestamp="2026-01-26 13:29:22 +0000 UTC" firstStartedPulling="2026-01-26 13:29:25.238694202 +0000 UTC m=+3237.718004228" lastFinishedPulling="2026-01-26 13:29:27.955407745 +0000 UTC m=+3240.434717771" observedRunningTime="2026-01-26 13:29:28.315762131 +0000 UTC m=+3240.795072157" watchObservedRunningTime="2026-01-26 13:29:28.328771204 +0000 UTC m=+3240.808081230" Jan 26 13:29:28 crc kubenswrapper[4881]: I0126 13:29:28.352455 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0783093-5301-4381-adfe-dc3d027975f8-web-config" (OuterVolumeSpecName: "web-config") pod "a0783093-5301-4381-adfe-dc3d027975f8" (UID: "a0783093-5301-4381-adfe-dc3d027975f8"). InnerVolumeSpecName "web-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:29:28 crc kubenswrapper[4881]: I0126 13:29:28.364115 4881 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/a0783093-5301-4381-adfe-dc3d027975f8-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Jan 26 13:29:28 crc kubenswrapper[4881]: I0126 13:29:28.364147 4881 reconciler_common.go:293] "Volume detached for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/a0783093-5301-4381-adfe-dc3d027975f8-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") on node \"crc\" DevicePath \"\"" Jan 26 13:29:28 crc kubenswrapper[4881]: I0126 13:29:28.364159 4881 reconciler_common.go:293] "Volume detached for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/a0783093-5301-4381-adfe-dc3d027975f8-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") on node \"crc\" DevicePath \"\"" Jan 26 13:29:28 crc kubenswrapper[4881]: I0126 13:29:28.364168 4881 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/a0783093-5301-4381-adfe-dc3d027975f8-config-out\") on node \"crc\" DevicePath \"\"" Jan 26 13:29:28 crc kubenswrapper[4881]: I0126 13:29:28.364180 4881 reconciler_common.go:293] "Volume detached for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0783093-5301-4381-adfe-dc3d027975f8-secret-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 13:29:28 crc kubenswrapper[4881]: I0126 13:29:28.364191 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8j8rj\" (UniqueName: \"kubernetes.io/projected/a0783093-5301-4381-adfe-dc3d027975f8-kube-api-access-8j8rj\") on node \"crc\" DevicePath \"\"" Jan 26 13:29:28 crc kubenswrapper[4881]: I0126 13:29:28.364200 4881 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/a0783093-5301-4381-adfe-dc3d027975f8-config\") on node \"crc\" DevicePath \"\"" Jan 26 13:29:28 crc kubenswrapper[4881]: I0126 13:29:28.364208 4881 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/a0783093-5301-4381-adfe-dc3d027975f8-web-config\") on node \"crc\" DevicePath \"\"" Jan 26 13:29:28 crc kubenswrapper[4881]: I0126 13:29:28.364241 4881 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-dce83a91-fa62-48d4-9e19-6ff80a628aa4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dce83a91-fa62-48d4-9e19-6ff80a628aa4\") on node \"crc\" " Jan 26 13:29:28 crc kubenswrapper[4881]: I0126 13:29:28.364250 4881 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/a0783093-5301-4381-adfe-dc3d027975f8-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Jan 26 13:29:28 crc kubenswrapper[4881]: I0126 13:29:28.364259 4881 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/a0783093-5301-4381-adfe-dc3d027975f8-tls-assets\") on node \"crc\" DevicePath \"\"" Jan 26 13:29:28 crc kubenswrapper[4881]: I0126 13:29:28.387382 4881 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Jan 26 13:29:28 crc kubenswrapper[4881]: I0126 13:29:28.387694 4881 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-dce83a91-fa62-48d4-9e19-6ff80a628aa4" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dce83a91-fa62-48d4-9e19-6ff80a628aa4") on node "crc" Jan 26 13:29:28 crc kubenswrapper[4881]: I0126 13:29:28.452448 4881 scope.go:117] "RemoveContainer" containerID="58d4f36705576e7f002be4b5d469f91d7432b10a722f86b1944ac7faa35e3f59" Jan 26 13:29:28 crc kubenswrapper[4881]: I0126 13:29:28.465850 4881 reconciler_common.go:293] "Volume detached for volume \"pvc-dce83a91-fa62-48d4-9e19-6ff80a628aa4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dce83a91-fa62-48d4-9e19-6ff80a628aa4\") on node \"crc\" DevicePath \"\"" Jan 26 13:29:28 crc kubenswrapper[4881]: I0126 13:29:28.473844 4881 scope.go:117] "RemoveContainer" containerID="50f4e67013c626fec742620472893ecab907d7e235f63d643837dc222dc3c284" Jan 26 13:29:28 crc kubenswrapper[4881]: I0126 13:29:28.495189 4881 scope.go:117] "RemoveContainer" containerID="0190dba6e2a4d9a1f0d0775cb8abefab1bbdad04d46cd971afc6c8bbeedce461" Jan 26 13:29:28 crc kubenswrapper[4881]: I0126 13:29:28.518043 4881 scope.go:117] "RemoveContainer" containerID="2e601063a7495673a44565484da3d0b6b733e98eca89ce5b9ad794c280a61cef" Jan 26 13:29:28 crc kubenswrapper[4881]: E0126 13:29:28.520141 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e601063a7495673a44565484da3d0b6b733e98eca89ce5b9ad794c280a61cef\": container with ID starting with 2e601063a7495673a44565484da3d0b6b733e98eca89ce5b9ad794c280a61cef not found: ID does not exist" containerID="2e601063a7495673a44565484da3d0b6b733e98eca89ce5b9ad794c280a61cef" Jan 26 13:29:28 crc kubenswrapper[4881]: I0126 13:29:28.520181 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e601063a7495673a44565484da3d0b6b733e98eca89ce5b9ad794c280a61cef"} err="failed to get container status \"2e601063a7495673a44565484da3d0b6b733e98eca89ce5b9ad794c280a61cef\": rpc error: code = NotFound desc = could not find container \"2e601063a7495673a44565484da3d0b6b733e98eca89ce5b9ad794c280a61cef\": container with ID starting with 2e601063a7495673a44565484da3d0b6b733e98eca89ce5b9ad794c280a61cef not found: ID does not exist" Jan 26 13:29:28 crc kubenswrapper[4881]: I0126 13:29:28.520231 4881 scope.go:117] "RemoveContainer" containerID="58d4f36705576e7f002be4b5d469f91d7432b10a722f86b1944ac7faa35e3f59" Jan 26 13:29:28 crc kubenswrapper[4881]: E0126 13:29:28.520581 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58d4f36705576e7f002be4b5d469f91d7432b10a722f86b1944ac7faa35e3f59\": container with ID starting with 58d4f36705576e7f002be4b5d469f91d7432b10a722f86b1944ac7faa35e3f59 not found: ID does not exist" containerID="58d4f36705576e7f002be4b5d469f91d7432b10a722f86b1944ac7faa35e3f59" Jan 26 13:29:28 crc kubenswrapper[4881]: I0126 13:29:28.520616 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58d4f36705576e7f002be4b5d469f91d7432b10a722f86b1944ac7faa35e3f59"} err="failed to get container status \"58d4f36705576e7f002be4b5d469f91d7432b10a722f86b1944ac7faa35e3f59\": rpc error: code = NotFound desc = could not find container \"58d4f36705576e7f002be4b5d469f91d7432b10a722f86b1944ac7faa35e3f59\": container with ID starting with 
58d4f36705576e7f002be4b5d469f91d7432b10a722f86b1944ac7faa35e3f59 not found: ID does not exist" Jan 26 13:29:28 crc kubenswrapper[4881]: I0126 13:29:28.520636 4881 scope.go:117] "RemoveContainer" containerID="50f4e67013c626fec742620472893ecab907d7e235f63d643837dc222dc3c284" Jan 26 13:29:28 crc kubenswrapper[4881]: E0126 13:29:28.531866 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"50f4e67013c626fec742620472893ecab907d7e235f63d643837dc222dc3c284\": container with ID starting with 50f4e67013c626fec742620472893ecab907d7e235f63d643837dc222dc3c284 not found: ID does not exist" containerID="50f4e67013c626fec742620472893ecab907d7e235f63d643837dc222dc3c284" Jan 26 13:29:28 crc kubenswrapper[4881]: I0126 13:29:28.531917 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50f4e67013c626fec742620472893ecab907d7e235f63d643837dc222dc3c284"} err="failed to get container status \"50f4e67013c626fec742620472893ecab907d7e235f63d643837dc222dc3c284\": rpc error: code = NotFound desc = could not find container \"50f4e67013c626fec742620472893ecab907d7e235f63d643837dc222dc3c284\": container with ID starting with 50f4e67013c626fec742620472893ecab907d7e235f63d643837dc222dc3c284 not found: ID does not exist" Jan 26 13:29:28 crc kubenswrapper[4881]: I0126 13:29:28.531953 4881 scope.go:117] "RemoveContainer" containerID="0190dba6e2a4d9a1f0d0775cb8abefab1bbdad04d46cd971afc6c8bbeedce461" Jan 26 13:29:28 crc kubenswrapper[4881]: E0126 13:29:28.534591 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0190dba6e2a4d9a1f0d0775cb8abefab1bbdad04d46cd971afc6c8bbeedce461\": container with ID starting with 0190dba6e2a4d9a1f0d0775cb8abefab1bbdad04d46cd971afc6c8bbeedce461 not found: ID does not exist" containerID="0190dba6e2a4d9a1f0d0775cb8abefab1bbdad04d46cd971afc6c8bbeedce461" Jan 26 13:29:28 crc kubenswrapper[4881]: I0126 13:29:28.534628 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0190dba6e2a4d9a1f0d0775cb8abefab1bbdad04d46cd971afc6c8bbeedce461"} err="failed to get container status \"0190dba6e2a4d9a1f0d0775cb8abefab1bbdad04d46cd971afc6c8bbeedce461\": rpc error: code = NotFound desc = could not find container \"0190dba6e2a4d9a1f0d0775cb8abefab1bbdad04d46cd971afc6c8bbeedce461\": container with ID starting with 0190dba6e2a4d9a1f0d0775cb8abefab1bbdad04d46cd971afc6c8bbeedce461 not found: ID does not exist" Jan 26 13:29:28 crc kubenswrapper[4881]: I0126 13:29:28.534650 4881 scope.go:117] "RemoveContainer" containerID="2e601063a7495673a44565484da3d0b6b733e98eca89ce5b9ad794c280a61cef" Jan 26 13:29:28 crc kubenswrapper[4881]: I0126 13:29:28.535109 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e601063a7495673a44565484da3d0b6b733e98eca89ce5b9ad794c280a61cef"} err="failed to get container status \"2e601063a7495673a44565484da3d0b6b733e98eca89ce5b9ad794c280a61cef\": rpc error: code = NotFound desc = could not find container \"2e601063a7495673a44565484da3d0b6b733e98eca89ce5b9ad794c280a61cef\": container with ID starting with 2e601063a7495673a44565484da3d0b6b733e98eca89ce5b9ad794c280a61cef not found: ID does not exist" Jan 26 13:29:28 crc kubenswrapper[4881]: I0126 13:29:28.535161 4881 scope.go:117] "RemoveContainer" containerID="58d4f36705576e7f002be4b5d469f91d7432b10a722f86b1944ac7faa35e3f59" Jan 26 13:29:28 crc 
kubenswrapper[4881]: I0126 13:29:28.535662 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58d4f36705576e7f002be4b5d469f91d7432b10a722f86b1944ac7faa35e3f59"} err="failed to get container status \"58d4f36705576e7f002be4b5d469f91d7432b10a722f86b1944ac7faa35e3f59\": rpc error: code = NotFound desc = could not find container \"58d4f36705576e7f002be4b5d469f91d7432b10a722f86b1944ac7faa35e3f59\": container with ID starting with 58d4f36705576e7f002be4b5d469f91d7432b10a722f86b1944ac7faa35e3f59 not found: ID does not exist" Jan 26 13:29:28 crc kubenswrapper[4881]: I0126 13:29:28.535688 4881 scope.go:117] "RemoveContainer" containerID="50f4e67013c626fec742620472893ecab907d7e235f63d643837dc222dc3c284" Jan 26 13:29:28 crc kubenswrapper[4881]: I0126 13:29:28.536380 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50f4e67013c626fec742620472893ecab907d7e235f63d643837dc222dc3c284"} err="failed to get container status \"50f4e67013c626fec742620472893ecab907d7e235f63d643837dc222dc3c284\": rpc error: code = NotFound desc = could not find container \"50f4e67013c626fec742620472893ecab907d7e235f63d643837dc222dc3c284\": container with ID starting with 50f4e67013c626fec742620472893ecab907d7e235f63d643837dc222dc3c284 not found: ID does not exist" Jan 26 13:29:28 crc kubenswrapper[4881]: I0126 13:29:28.536414 4881 scope.go:117] "RemoveContainer" containerID="0190dba6e2a4d9a1f0d0775cb8abefab1bbdad04d46cd971afc6c8bbeedce461" Jan 26 13:29:28 crc kubenswrapper[4881]: I0126 13:29:28.537049 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0190dba6e2a4d9a1f0d0775cb8abefab1bbdad04d46cd971afc6c8bbeedce461"} err="failed to get container status \"0190dba6e2a4d9a1f0d0775cb8abefab1bbdad04d46cd971afc6c8bbeedce461\": rpc error: code = NotFound desc = could not find container \"0190dba6e2a4d9a1f0d0775cb8abefab1bbdad04d46cd971afc6c8bbeedce461\": container with ID starting with 0190dba6e2a4d9a1f0d0775cb8abefab1bbdad04d46cd971afc6c8bbeedce461 not found: ID does not exist" Jan 26 13:29:28 crc kubenswrapper[4881]: I0126 13:29:28.537076 4881 scope.go:117] "RemoveContainer" containerID="2e601063a7495673a44565484da3d0b6b733e98eca89ce5b9ad794c280a61cef" Jan 26 13:29:28 crc kubenswrapper[4881]: I0126 13:29:28.540715 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e601063a7495673a44565484da3d0b6b733e98eca89ce5b9ad794c280a61cef"} err="failed to get container status \"2e601063a7495673a44565484da3d0b6b733e98eca89ce5b9ad794c280a61cef\": rpc error: code = NotFound desc = could not find container \"2e601063a7495673a44565484da3d0b6b733e98eca89ce5b9ad794c280a61cef\": container with ID starting with 2e601063a7495673a44565484da3d0b6b733e98eca89ce5b9ad794c280a61cef not found: ID does not exist" Jan 26 13:29:28 crc kubenswrapper[4881]: I0126 13:29:28.540773 4881 scope.go:117] "RemoveContainer" containerID="58d4f36705576e7f002be4b5d469f91d7432b10a722f86b1944ac7faa35e3f59" Jan 26 13:29:28 crc kubenswrapper[4881]: I0126 13:29:28.541545 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58d4f36705576e7f002be4b5d469f91d7432b10a722f86b1944ac7faa35e3f59"} err="failed to get container status \"58d4f36705576e7f002be4b5d469f91d7432b10a722f86b1944ac7faa35e3f59\": rpc error: code = NotFound desc = could not find container \"58d4f36705576e7f002be4b5d469f91d7432b10a722f86b1944ac7faa35e3f59\": container with ID 
starting with 58d4f36705576e7f002be4b5d469f91d7432b10a722f86b1944ac7faa35e3f59 not found: ID does not exist" Jan 26 13:29:28 crc kubenswrapper[4881]: I0126 13:29:28.541572 4881 scope.go:117] "RemoveContainer" containerID="50f4e67013c626fec742620472893ecab907d7e235f63d643837dc222dc3c284" Jan 26 13:29:28 crc kubenswrapper[4881]: I0126 13:29:28.541954 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50f4e67013c626fec742620472893ecab907d7e235f63d643837dc222dc3c284"} err="failed to get container status \"50f4e67013c626fec742620472893ecab907d7e235f63d643837dc222dc3c284\": rpc error: code = NotFound desc = could not find container \"50f4e67013c626fec742620472893ecab907d7e235f63d643837dc222dc3c284\": container with ID starting with 50f4e67013c626fec742620472893ecab907d7e235f63d643837dc222dc3c284 not found: ID does not exist" Jan 26 13:29:28 crc kubenswrapper[4881]: I0126 13:29:28.541979 4881 scope.go:117] "RemoveContainer" containerID="0190dba6e2a4d9a1f0d0775cb8abefab1bbdad04d46cd971afc6c8bbeedce461" Jan 26 13:29:28 crc kubenswrapper[4881]: I0126 13:29:28.542638 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0190dba6e2a4d9a1f0d0775cb8abefab1bbdad04d46cd971afc6c8bbeedce461"} err="failed to get container status \"0190dba6e2a4d9a1f0d0775cb8abefab1bbdad04d46cd971afc6c8bbeedce461\": rpc error: code = NotFound desc = could not find container \"0190dba6e2a4d9a1f0d0775cb8abefab1bbdad04d46cd971afc6c8bbeedce461\": container with ID starting with 0190dba6e2a4d9a1f0d0775cb8abefab1bbdad04d46cd971afc6c8bbeedce461 not found: ID does not exist" Jan 26 13:29:28 crc kubenswrapper[4881]: I0126 13:29:28.625146 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 26 13:29:28 crc kubenswrapper[4881]: I0126 13:29:28.636286 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 26 13:29:28 crc kubenswrapper[4881]: I0126 13:29:28.668799 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 26 13:29:28 crc kubenswrapper[4881]: E0126 13:29:28.669227 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0783093-5301-4381-adfe-dc3d027975f8" containerName="prometheus" Jan 26 13:29:28 crc kubenswrapper[4881]: I0126 13:29:28.669240 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0783093-5301-4381-adfe-dc3d027975f8" containerName="prometheus" Jan 26 13:29:28 crc kubenswrapper[4881]: E0126 13:29:28.669261 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0783093-5301-4381-adfe-dc3d027975f8" containerName="config-reloader" Jan 26 13:29:28 crc kubenswrapper[4881]: I0126 13:29:28.669267 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0783093-5301-4381-adfe-dc3d027975f8" containerName="config-reloader" Jan 26 13:29:28 crc kubenswrapper[4881]: E0126 13:29:28.669287 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0783093-5301-4381-adfe-dc3d027975f8" containerName="init-config-reloader" Jan 26 13:29:28 crc kubenswrapper[4881]: I0126 13:29:28.669293 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0783093-5301-4381-adfe-dc3d027975f8" containerName="init-config-reloader" Jan 26 13:29:28 crc kubenswrapper[4881]: E0126 13:29:28.669313 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0783093-5301-4381-adfe-dc3d027975f8" containerName="thanos-sidecar" Jan 26 13:29:28 crc 
kubenswrapper[4881]: I0126 13:29:28.669320 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0783093-5301-4381-adfe-dc3d027975f8" containerName="thanos-sidecar" Jan 26 13:29:28 crc kubenswrapper[4881]: I0126 13:29:28.669548 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0783093-5301-4381-adfe-dc3d027975f8" containerName="config-reloader" Jan 26 13:29:28 crc kubenswrapper[4881]: I0126 13:29:28.669577 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0783093-5301-4381-adfe-dc3d027975f8" containerName="thanos-sidecar" Jan 26 13:29:28 crc kubenswrapper[4881]: I0126 13:29:28.669603 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0783093-5301-4381-adfe-dc3d027975f8" containerName="prometheus" Jan 26 13:29:28 crc kubenswrapper[4881]: I0126 13:29:28.671975 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 26 13:29:28 crc kubenswrapper[4881]: I0126 13:29:28.679010 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Jan 26 13:29:28 crc kubenswrapper[4881]: I0126 13:29:28.679236 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-2qlwl" Jan 26 13:29:28 crc kubenswrapper[4881]: I0126 13:29:28.679392 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Jan 26 13:29:28 crc kubenswrapper[4881]: I0126 13:29:28.679557 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Jan 26 13:29:28 crc kubenswrapper[4881]: I0126 13:29:28.680651 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Jan 26 13:29:28 crc kubenswrapper[4881]: I0126 13:29:28.680787 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Jan 26 13:29:28 crc kubenswrapper[4881]: I0126 13:29:28.682913 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Jan 26 13:29:28 crc kubenswrapper[4881]: I0126 13:29:28.705675 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Jan 26 13:29:28 crc kubenswrapper[4881]: I0126 13:29:28.707247 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 26 13:29:28 crc kubenswrapper[4881]: I0126 13:29:28.774049 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/adce7384-2dc6-4e86-af0f-fb3b38627515-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"adce7384-2dc6-4e86-af0f-fb3b38627515\") " pod="openstack/prometheus-metric-storage-0" Jan 26 13:29:28 crc kubenswrapper[4881]: I0126 13:29:28.774100 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/adce7384-2dc6-4e86-af0f-fb3b38627515-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"adce7384-2dc6-4e86-af0f-fb3b38627515\") " pod="openstack/prometheus-metric-storage-0" Jan 26 13:29:28 crc kubenswrapper[4881]: I0126 13:29:28.774158 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/adce7384-2dc6-4e86-af0f-fb3b38627515-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"adce7384-2dc6-4e86-af0f-fb3b38627515\") " pod="openstack/prometheus-metric-storage-0" Jan 26 13:29:28 crc kubenswrapper[4881]: I0126 13:29:28.774330 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/adce7384-2dc6-4e86-af0f-fb3b38627515-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"adce7384-2dc6-4e86-af0f-fb3b38627515\") " pod="openstack/prometheus-metric-storage-0" Jan 26 13:29:28 crc kubenswrapper[4881]: I0126 13:29:28.774379 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62r9f\" (UniqueName: \"kubernetes.io/projected/adce7384-2dc6-4e86-af0f-fb3b38627515-kube-api-access-62r9f\") pod \"prometheus-metric-storage-0\" (UID: \"adce7384-2dc6-4e86-af0f-fb3b38627515\") " pod="openstack/prometheus-metric-storage-0" Jan 26 13:29:28 crc kubenswrapper[4881]: I0126 13:29:28.774431 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/adce7384-2dc6-4e86-af0f-fb3b38627515-config\") pod \"prometheus-metric-storage-0\" (UID: \"adce7384-2dc6-4e86-af0f-fb3b38627515\") " pod="openstack/prometheus-metric-storage-0" Jan 26 13:29:28 crc kubenswrapper[4881]: I0126 13:29:28.774609 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/adce7384-2dc6-4e86-af0f-fb3b38627515-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"adce7384-2dc6-4e86-af0f-fb3b38627515\") " pod="openstack/prometheus-metric-storage-0" Jan 26 13:29:28 crc kubenswrapper[4881]: I0126 13:29:28.774646 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/adce7384-2dc6-4e86-af0f-fb3b38627515-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"adce7384-2dc6-4e86-af0f-fb3b38627515\") " pod="openstack/prometheus-metric-storage-0" Jan 26 13:29:28 crc kubenswrapper[4881]: I0126 13:29:28.774710 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/adce7384-2dc6-4e86-af0f-fb3b38627515-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"adce7384-2dc6-4e86-af0f-fb3b38627515\") " pod="openstack/prometheus-metric-storage-0" Jan 26 13:29:28 crc kubenswrapper[4881]: I0126 13:29:28.774860 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/adce7384-2dc6-4e86-af0f-fb3b38627515-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"adce7384-2dc6-4e86-af0f-fb3b38627515\") " pod="openstack/prometheus-metric-storage-0" Jan 26 13:29:28 crc kubenswrapper[4881]: I0126 13:29:28.774892 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-dce83a91-fa62-48d4-9e19-6ff80a628aa4\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dce83a91-fa62-48d4-9e19-6ff80a628aa4\") pod \"prometheus-metric-storage-0\" (UID: \"adce7384-2dc6-4e86-af0f-fb3b38627515\") " pod="openstack/prometheus-metric-storage-0" Jan 26 13:29:28 crc kubenswrapper[4881]: I0126 13:29:28.775049 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/adce7384-2dc6-4e86-af0f-fb3b38627515-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"adce7384-2dc6-4e86-af0f-fb3b38627515\") " pod="openstack/prometheus-metric-storage-0" Jan 26 13:29:28 crc kubenswrapper[4881]: I0126 13:29:28.775111 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/adce7384-2dc6-4e86-af0f-fb3b38627515-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"adce7384-2dc6-4e86-af0f-fb3b38627515\") " pod="openstack/prometheus-metric-storage-0" Jan 26 13:29:28 crc kubenswrapper[4881]: I0126 13:29:28.877576 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/adce7384-2dc6-4e86-af0f-fb3b38627515-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"adce7384-2dc6-4e86-af0f-fb3b38627515\") " pod="openstack/prometheus-metric-storage-0" Jan 26 13:29:28 crc kubenswrapper[4881]: I0126 13:29:28.878033 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/adce7384-2dc6-4e86-af0f-fb3b38627515-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"adce7384-2dc6-4e86-af0f-fb3b38627515\") " pod="openstack/prometheus-metric-storage-0" Jan 26 13:29:28 crc kubenswrapper[4881]: I0126 13:29:28.878097 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/adce7384-2dc6-4e86-af0f-fb3b38627515-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"adce7384-2dc6-4e86-af0f-fb3b38627515\") " pod="openstack/prometheus-metric-storage-0" Jan 26 13:29:28 crc kubenswrapper[4881]: I0126 13:29:28.878136 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/adce7384-2dc6-4e86-af0f-fb3b38627515-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"adce7384-2dc6-4e86-af0f-fb3b38627515\") " pod="openstack/prometheus-metric-storage-0" Jan 26 13:29:28 crc kubenswrapper[4881]: I0126 13:29:28.878165 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62r9f\" (UniqueName: \"kubernetes.io/projected/adce7384-2dc6-4e86-af0f-fb3b38627515-kube-api-access-62r9f\") pod \"prometheus-metric-storage-0\" (UID: \"adce7384-2dc6-4e86-af0f-fb3b38627515\") " pod="openstack/prometheus-metric-storage-0" Jan 26 13:29:28 crc kubenswrapper[4881]: I0126 13:29:28.878190 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/adce7384-2dc6-4e86-af0f-fb3b38627515-config\") pod \"prometheus-metric-storage-0\" (UID: \"adce7384-2dc6-4e86-af0f-fb3b38627515\") " 
pod="openstack/prometheus-metric-storage-0" Jan 26 13:29:28 crc kubenswrapper[4881]: I0126 13:29:28.878413 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/adce7384-2dc6-4e86-af0f-fb3b38627515-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"adce7384-2dc6-4e86-af0f-fb3b38627515\") " pod="openstack/prometheus-metric-storage-0" Jan 26 13:29:28 crc kubenswrapper[4881]: I0126 13:29:28.878446 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/adce7384-2dc6-4e86-af0f-fb3b38627515-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"adce7384-2dc6-4e86-af0f-fb3b38627515\") " pod="openstack/prometheus-metric-storage-0" Jan 26 13:29:28 crc kubenswrapper[4881]: I0126 13:29:28.878483 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/adce7384-2dc6-4e86-af0f-fb3b38627515-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"adce7384-2dc6-4e86-af0f-fb3b38627515\") " pod="openstack/prometheus-metric-storage-0" Jan 26 13:29:28 crc kubenswrapper[4881]: I0126 13:29:28.878567 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/adce7384-2dc6-4e86-af0f-fb3b38627515-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"adce7384-2dc6-4e86-af0f-fb3b38627515\") " pod="openstack/prometheus-metric-storage-0" Jan 26 13:29:28 crc kubenswrapper[4881]: I0126 13:29:28.878602 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-dce83a91-fa62-48d4-9e19-6ff80a628aa4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dce83a91-fa62-48d4-9e19-6ff80a628aa4\") pod \"prometheus-metric-storage-0\" (UID: \"adce7384-2dc6-4e86-af0f-fb3b38627515\") " pod="openstack/prometheus-metric-storage-0" Jan 26 13:29:28 crc kubenswrapper[4881]: I0126 13:29:28.878670 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/adce7384-2dc6-4e86-af0f-fb3b38627515-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"adce7384-2dc6-4e86-af0f-fb3b38627515\") " pod="openstack/prometheus-metric-storage-0" Jan 26 13:29:28 crc kubenswrapper[4881]: I0126 13:29:28.878709 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/adce7384-2dc6-4e86-af0f-fb3b38627515-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"adce7384-2dc6-4e86-af0f-fb3b38627515\") " pod="openstack/prometheus-metric-storage-0" Jan 26 13:29:28 crc kubenswrapper[4881]: I0126 13:29:28.878894 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/adce7384-2dc6-4e86-af0f-fb3b38627515-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"adce7384-2dc6-4e86-af0f-fb3b38627515\") " 
pod="openstack/prometheus-metric-storage-0" Jan 26 13:29:28 crc kubenswrapper[4881]: I0126 13:29:28.879342 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/adce7384-2dc6-4e86-af0f-fb3b38627515-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"adce7384-2dc6-4e86-af0f-fb3b38627515\") " pod="openstack/prometheus-metric-storage-0" Jan 26 13:29:28 crc kubenswrapper[4881]: I0126 13:29:28.880194 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/adce7384-2dc6-4e86-af0f-fb3b38627515-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"adce7384-2dc6-4e86-af0f-fb3b38627515\") " pod="openstack/prometheus-metric-storage-0" Jan 26 13:29:28 crc kubenswrapper[4881]: I0126 13:29:28.881664 4881 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 26 13:29:28 crc kubenswrapper[4881]: I0126 13:29:28.881941 4881 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-dce83a91-fa62-48d4-9e19-6ff80a628aa4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dce83a91-fa62-48d4-9e19-6ff80a628aa4\") pod \"prometheus-metric-storage-0\" (UID: \"adce7384-2dc6-4e86-af0f-fb3b38627515\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/68d2c87ef14797ce11fba4e65263a740afb8b7e8fd7775f7168ab753beb0af09/globalmount\"" pod="openstack/prometheus-metric-storage-0" Jan 26 13:29:28 crc kubenswrapper[4881]: I0126 13:29:28.883292 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/adce7384-2dc6-4e86-af0f-fb3b38627515-config\") pod \"prometheus-metric-storage-0\" (UID: \"adce7384-2dc6-4e86-af0f-fb3b38627515\") " pod="openstack/prometheus-metric-storage-0" Jan 26 13:29:28 crc kubenswrapper[4881]: I0126 13:29:28.884767 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/adce7384-2dc6-4e86-af0f-fb3b38627515-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"adce7384-2dc6-4e86-af0f-fb3b38627515\") " pod="openstack/prometheus-metric-storage-0" Jan 26 13:29:28 crc kubenswrapper[4881]: I0126 13:29:28.885545 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/adce7384-2dc6-4e86-af0f-fb3b38627515-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"adce7384-2dc6-4e86-af0f-fb3b38627515\") " pod="openstack/prometheus-metric-storage-0" Jan 26 13:29:28 crc kubenswrapper[4881]: I0126 13:29:28.886260 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/adce7384-2dc6-4e86-af0f-fb3b38627515-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"adce7384-2dc6-4e86-af0f-fb3b38627515\") " pod="openstack/prometheus-metric-storage-0" Jan 26 13:29:28 crc kubenswrapper[4881]: I0126 13:29:28.890025 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: 
\"kubernetes.io/secret/adce7384-2dc6-4e86-af0f-fb3b38627515-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"adce7384-2dc6-4e86-af0f-fb3b38627515\") " pod="openstack/prometheus-metric-storage-0" Jan 26 13:29:28 crc kubenswrapper[4881]: I0126 13:29:28.894662 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/adce7384-2dc6-4e86-af0f-fb3b38627515-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"adce7384-2dc6-4e86-af0f-fb3b38627515\") " pod="openstack/prometheus-metric-storage-0" Jan 26 13:29:28 crc kubenswrapper[4881]: I0126 13:29:28.899741 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62r9f\" (UniqueName: \"kubernetes.io/projected/adce7384-2dc6-4e86-af0f-fb3b38627515-kube-api-access-62r9f\") pod \"prometheus-metric-storage-0\" (UID: \"adce7384-2dc6-4e86-af0f-fb3b38627515\") " pod="openstack/prometheus-metric-storage-0" Jan 26 13:29:28 crc kubenswrapper[4881]: I0126 13:29:28.900988 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/adce7384-2dc6-4e86-af0f-fb3b38627515-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"adce7384-2dc6-4e86-af0f-fb3b38627515\") " pod="openstack/prometheus-metric-storage-0" Jan 26 13:29:28 crc kubenswrapper[4881]: I0126 13:29:28.911260 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/adce7384-2dc6-4e86-af0f-fb3b38627515-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"adce7384-2dc6-4e86-af0f-fb3b38627515\") " pod="openstack/prometheus-metric-storage-0" Jan 26 13:29:28 crc kubenswrapper[4881]: I0126 13:29:28.976146 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-dce83a91-fa62-48d4-9e19-6ff80a628aa4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dce83a91-fa62-48d4-9e19-6ff80a628aa4\") pod \"prometheus-metric-storage-0\" (UID: \"adce7384-2dc6-4e86-af0f-fb3b38627515\") " pod="openstack/prometheus-metric-storage-0" Jan 26 13:29:29 crc kubenswrapper[4881]: I0126 13:29:29.023146 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 26 13:29:29 crc kubenswrapper[4881]: I0126 13:29:29.193286 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4xdpt"] Jan 26 13:29:29 crc kubenswrapper[4881]: I0126 13:29:29.196363 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4xdpt" Jan 26 13:29:29 crc kubenswrapper[4881]: I0126 13:29:29.225871 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4xdpt"] Jan 26 13:29:29 crc kubenswrapper[4881]: I0126 13:29:29.301158 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gxhk\" (UniqueName: \"kubernetes.io/projected/fe7af6b1-84bd-4aea-8c48-f6aefadd89a0-kube-api-access-4gxhk\") pod \"redhat-marketplace-4xdpt\" (UID: \"fe7af6b1-84bd-4aea-8c48-f6aefadd89a0\") " pod="openshift-marketplace/redhat-marketplace-4xdpt" Jan 26 13:29:29 crc kubenswrapper[4881]: I0126 13:29:29.301351 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe7af6b1-84bd-4aea-8c48-f6aefadd89a0-utilities\") pod \"redhat-marketplace-4xdpt\" (UID: \"fe7af6b1-84bd-4aea-8c48-f6aefadd89a0\") " pod="openshift-marketplace/redhat-marketplace-4xdpt" Jan 26 13:29:29 crc kubenswrapper[4881]: I0126 13:29:29.301380 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe7af6b1-84bd-4aea-8c48-f6aefadd89a0-catalog-content\") pod \"redhat-marketplace-4xdpt\" (UID: \"fe7af6b1-84bd-4aea-8c48-f6aefadd89a0\") " pod="openshift-marketplace/redhat-marketplace-4xdpt" Jan 26 13:29:29 crc kubenswrapper[4881]: I0126 13:29:29.404133 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe7af6b1-84bd-4aea-8c48-f6aefadd89a0-utilities\") pod \"redhat-marketplace-4xdpt\" (UID: \"fe7af6b1-84bd-4aea-8c48-f6aefadd89a0\") " pod="openshift-marketplace/redhat-marketplace-4xdpt" Jan 26 13:29:29 crc kubenswrapper[4881]: I0126 13:29:29.404183 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe7af6b1-84bd-4aea-8c48-f6aefadd89a0-catalog-content\") pod \"redhat-marketplace-4xdpt\" (UID: \"fe7af6b1-84bd-4aea-8c48-f6aefadd89a0\") " pod="openshift-marketplace/redhat-marketplace-4xdpt" Jan 26 13:29:29 crc kubenswrapper[4881]: I0126 13:29:29.404243 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4gxhk\" (UniqueName: \"kubernetes.io/projected/fe7af6b1-84bd-4aea-8c48-f6aefadd89a0-kube-api-access-4gxhk\") pod \"redhat-marketplace-4xdpt\" (UID: \"fe7af6b1-84bd-4aea-8c48-f6aefadd89a0\") " pod="openshift-marketplace/redhat-marketplace-4xdpt" Jan 26 13:29:29 crc kubenswrapper[4881]: I0126 13:29:29.404782 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe7af6b1-84bd-4aea-8c48-f6aefadd89a0-utilities\") pod \"redhat-marketplace-4xdpt\" (UID: \"fe7af6b1-84bd-4aea-8c48-f6aefadd89a0\") " pod="openshift-marketplace/redhat-marketplace-4xdpt" Jan 26 13:29:29 crc kubenswrapper[4881]: I0126 13:29:29.404902 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe7af6b1-84bd-4aea-8c48-f6aefadd89a0-catalog-content\") pod \"redhat-marketplace-4xdpt\" (UID: \"fe7af6b1-84bd-4aea-8c48-f6aefadd89a0\") " pod="openshift-marketplace/redhat-marketplace-4xdpt" Jan 26 13:29:29 crc kubenswrapper[4881]: I0126 13:29:29.446379 4881 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-4gxhk\" (UniqueName: \"kubernetes.io/projected/fe7af6b1-84bd-4aea-8c48-f6aefadd89a0-kube-api-access-4gxhk\") pod \"redhat-marketplace-4xdpt\" (UID: \"fe7af6b1-84bd-4aea-8c48-f6aefadd89a0\") " pod="openshift-marketplace/redhat-marketplace-4xdpt"
Jan 26 13:29:29 crc kubenswrapper[4881]: I0126 13:29:29.517470 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4xdpt"
Jan 26 13:29:29 crc kubenswrapper[4881]: W0126 13:29:29.603010 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podadce7384_2dc6_4e86_af0f_fb3b38627515.slice/crio-a7e172d240b89b2729e118ab814f7d3603a1536bacd798a233266bb41b488772 WatchSource:0}: Error finding container a7e172d240b89b2729e118ab814f7d3603a1536bacd798a233266bb41b488772: Status 404 returned error can't find the container with id a7e172d240b89b2729e118ab814f7d3603a1536bacd798a233266bb41b488772
Jan 26 13:29:29 crc kubenswrapper[4881]: I0126 13:29:29.613224 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Jan 26 13:29:30 crc kubenswrapper[4881]: I0126 13:29:30.098477 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0783093-5301-4381-adfe-dc3d027975f8" path="/var/lib/kubelet/pods/a0783093-5301-4381-adfe-dc3d027975f8/volumes"
Jan 26 13:29:30 crc kubenswrapper[4881]: I0126 13:29:30.229672 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4xdpt"]
Jan 26 13:29:30 crc kubenswrapper[4881]: I0126 13:29:30.361211 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"adce7384-2dc6-4e86-af0f-fb3b38627515","Type":"ContainerStarted","Data":"a7e172d240b89b2729e118ab814f7d3603a1536bacd798a233266bb41b488772"}
Jan 26 13:29:30 crc kubenswrapper[4881]: I0126 13:29:30.362366 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4xdpt" event={"ID":"fe7af6b1-84bd-4aea-8c48-f6aefadd89a0","Type":"ContainerStarted","Data":"6c475335ae30e2c197de1a46b16751278178a97c487e756fd6981d91d97b2e30"}
Jan 26 13:29:31 crc kubenswrapper[4881]: I0126 13:29:31.375106 4881 generic.go:334] "Generic (PLEG): container finished" podID="fe7af6b1-84bd-4aea-8c48-f6aefadd89a0" containerID="b0e2dec2f2b7d0d401d53e1202a42dfca6af030acee5b9578a65d8b5f9295ec7" exitCode=0
Jan 26 13:29:31 crc kubenswrapper[4881]: I0126 13:29:31.375286 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4xdpt" event={"ID":"fe7af6b1-84bd-4aea-8c48-f6aefadd89a0","Type":"ContainerDied","Data":"b0e2dec2f2b7d0d401d53e1202a42dfca6af030acee5b9578a65d8b5f9295ec7"}
Jan 26 13:29:33 crc kubenswrapper[4881]: I0126 13:29:33.313215 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-6g8vf"
Jan 26 13:29:33 crc kubenswrapper[4881]: I0126 13:29:33.313653 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-6g8vf"
Jan 26 13:29:33 crc kubenswrapper[4881]: I0126 13:29:33.408467 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-6g8vf"
Jan 26 13:29:34 crc kubenswrapper[4881]: I0126 13:29:34.503932 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-6g8vf"
Jan 26 13:29:35 crc kubenswrapper[4881]: I0126 13:29:35.083174 4881 scope.go:117] "RemoveContainer" containerID="742ede60393d7ca8a4393cea6341038ffa8eebf9a1b4a11fff9f1b373a984190"
Jan 26 13:29:35 crc kubenswrapper[4881]: E0126 13:29:35.083597 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19"
Jan 26 13:29:35 crc kubenswrapper[4881]: I0126 13:29:35.579085 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6g8vf"]
Jan 26 13:29:36 crc kubenswrapper[4881]: I0126 13:29:36.431527 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"adce7384-2dc6-4e86-af0f-fb3b38627515","Type":"ContainerStarted","Data":"425ce4fbf630d9068bd51c91930d546d8b7cee6a66683072789701d64f503b69"}
Jan 26 13:29:36 crc kubenswrapper[4881]: I0126 13:29:36.433635 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4xdpt" event={"ID":"fe7af6b1-84bd-4aea-8c48-f6aefadd89a0","Type":"ContainerStarted","Data":"f57b07832dd16cd951c9f4990dae00230076fb09cda1522c661485f63884a4c0"}
Jan 26 13:29:36 crc kubenswrapper[4881]: I0126 13:29:36.433977 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-6g8vf" podUID="21be046f-b853-414b-97cb-f7056629ce28" containerName="registry-server" containerID="cri-o://f98f8752e5ae7926f7400b555500e7b043f992a852ddc6f1d6fa2aec4b4ab74b" gracePeriod=2
Jan 26 13:29:36 crc kubenswrapper[4881]: I0126 13:29:36.865510 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6g8vf"
Jan 26 13:29:36 crc kubenswrapper[4881]: I0126 13:29:36.878401 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21be046f-b853-414b-97cb-f7056629ce28-utilities\") pod \"21be046f-b853-414b-97cb-f7056629ce28\" (UID: \"21be046f-b853-414b-97cb-f7056629ce28\") "
Jan 26 13:29:36 crc kubenswrapper[4881]: I0126 13:29:36.878449 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lf2qj\" (UniqueName: \"kubernetes.io/projected/21be046f-b853-414b-97cb-f7056629ce28-kube-api-access-lf2qj\") pod \"21be046f-b853-414b-97cb-f7056629ce28\" (UID: \"21be046f-b853-414b-97cb-f7056629ce28\") "
Jan 26 13:29:36 crc kubenswrapper[4881]: I0126 13:29:36.878470 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21be046f-b853-414b-97cb-f7056629ce28-catalog-content\") pod \"21be046f-b853-414b-97cb-f7056629ce28\" (UID: \"21be046f-b853-414b-97cb-f7056629ce28\") "
Jan 26 13:29:36 crc kubenswrapper[4881]: I0126 13:29:36.879112 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/21be046f-b853-414b-97cb-f7056629ce28-utilities" (OuterVolumeSpecName: "utilities") pod "21be046f-b853-414b-97cb-f7056629ce28" (UID: "21be046f-b853-414b-97cb-f7056629ce28"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 13:29:36 crc kubenswrapper[4881]: I0126 13:29:36.884800 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21be046f-b853-414b-97cb-f7056629ce28-kube-api-access-lf2qj" (OuterVolumeSpecName: "kube-api-access-lf2qj") pod "21be046f-b853-414b-97cb-f7056629ce28" (UID: "21be046f-b853-414b-97cb-f7056629ce28"). InnerVolumeSpecName "kube-api-access-lf2qj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 13:29:36 crc kubenswrapper[4881]: I0126 13:29:36.940948 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/21be046f-b853-414b-97cb-f7056629ce28-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "21be046f-b853-414b-97cb-f7056629ce28" (UID: "21be046f-b853-414b-97cb-f7056629ce28"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 13:29:36 crc kubenswrapper[4881]: I0126 13:29:36.980615 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21be046f-b853-414b-97cb-f7056629ce28-utilities\") on node \"crc\" DevicePath \"\""
Jan 26 13:29:36 crc kubenswrapper[4881]: I0126 13:29:36.980923 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lf2qj\" (UniqueName: \"kubernetes.io/projected/21be046f-b853-414b-97cb-f7056629ce28-kube-api-access-lf2qj\") on node \"crc\" DevicePath \"\""
Jan 26 13:29:36 crc kubenswrapper[4881]: I0126 13:29:36.981015 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21be046f-b853-414b-97cb-f7056629ce28-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 26 13:29:37 crc kubenswrapper[4881]: I0126 13:29:37.443216 4881 generic.go:334] "Generic (PLEG): container finished" podID="21be046f-b853-414b-97cb-f7056629ce28" containerID="f98f8752e5ae7926f7400b555500e7b043f992a852ddc6f1d6fa2aec4b4ab74b" exitCode=0
Jan 26 13:29:37 crc kubenswrapper[4881]: I0126 13:29:37.443276 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6g8vf" event={"ID":"21be046f-b853-414b-97cb-f7056629ce28","Type":"ContainerDied","Data":"f98f8752e5ae7926f7400b555500e7b043f992a852ddc6f1d6fa2aec4b4ab74b"}
Jan 26 13:29:37 crc kubenswrapper[4881]: I0126 13:29:37.443306 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6g8vf" event={"ID":"21be046f-b853-414b-97cb-f7056629ce28","Type":"ContainerDied","Data":"84f69d6e728ba3ff3f32e5549537722143d042b40db3fd657f9d613f0424c588"}
Jan 26 13:29:37 crc kubenswrapper[4881]: I0126 13:29:37.443323 4881 scope.go:117] "RemoveContainer" containerID="f98f8752e5ae7926f7400b555500e7b043f992a852ddc6f1d6fa2aec4b4ab74b"
Jan 26 13:29:37 crc kubenswrapper[4881]: I0126 13:29:37.443465 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6g8vf"
Need to start a new one" pod="openshift-marketplace/community-operators-6g8vf" Jan 26 13:29:37 crc kubenswrapper[4881]: I0126 13:29:37.446313 4881 generic.go:334] "Generic (PLEG): container finished" podID="fe7af6b1-84bd-4aea-8c48-f6aefadd89a0" containerID="f57b07832dd16cd951c9f4990dae00230076fb09cda1522c661485f63884a4c0" exitCode=0 Jan 26 13:29:37 crc kubenswrapper[4881]: I0126 13:29:37.446829 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4xdpt" event={"ID":"fe7af6b1-84bd-4aea-8c48-f6aefadd89a0","Type":"ContainerDied","Data":"f57b07832dd16cd951c9f4990dae00230076fb09cda1522c661485f63884a4c0"} Jan 26 13:29:37 crc kubenswrapper[4881]: I0126 13:29:37.477240 4881 scope.go:117] "RemoveContainer" containerID="d285d3fd8d47be47132b7062a55cd77994e28e15e6271c7db334d4048136b5ee" Jan 26 13:29:37 crc kubenswrapper[4881]: I0126 13:29:37.519699 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6g8vf"] Jan 26 13:29:37 crc kubenswrapper[4881]: I0126 13:29:37.524224 4881 scope.go:117] "RemoveContainer" containerID="3b81ebd697afa9bb9fbea2e2fbc0995bed3c01d13978f85cf5c09bbb77fa1e70" Jan 26 13:29:37 crc kubenswrapper[4881]: I0126 13:29:37.530687 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-6g8vf"] Jan 26 13:29:37 crc kubenswrapper[4881]: I0126 13:29:37.559202 4881 scope.go:117] "RemoveContainer" containerID="f98f8752e5ae7926f7400b555500e7b043f992a852ddc6f1d6fa2aec4b4ab74b" Jan 26 13:29:37 crc kubenswrapper[4881]: E0126 13:29:37.559707 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f98f8752e5ae7926f7400b555500e7b043f992a852ddc6f1d6fa2aec4b4ab74b\": container with ID starting with f98f8752e5ae7926f7400b555500e7b043f992a852ddc6f1d6fa2aec4b4ab74b not found: ID does not exist" containerID="f98f8752e5ae7926f7400b555500e7b043f992a852ddc6f1d6fa2aec4b4ab74b" Jan 26 13:29:37 crc kubenswrapper[4881]: I0126 13:29:37.559744 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f98f8752e5ae7926f7400b555500e7b043f992a852ddc6f1d6fa2aec4b4ab74b"} err="failed to get container status \"f98f8752e5ae7926f7400b555500e7b043f992a852ddc6f1d6fa2aec4b4ab74b\": rpc error: code = NotFound desc = could not find container \"f98f8752e5ae7926f7400b555500e7b043f992a852ddc6f1d6fa2aec4b4ab74b\": container with ID starting with f98f8752e5ae7926f7400b555500e7b043f992a852ddc6f1d6fa2aec4b4ab74b not found: ID does not exist" Jan 26 13:29:37 crc kubenswrapper[4881]: I0126 13:29:37.559763 4881 scope.go:117] "RemoveContainer" containerID="d285d3fd8d47be47132b7062a55cd77994e28e15e6271c7db334d4048136b5ee" Jan 26 13:29:37 crc kubenswrapper[4881]: E0126 13:29:37.560109 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d285d3fd8d47be47132b7062a55cd77994e28e15e6271c7db334d4048136b5ee\": container with ID starting with d285d3fd8d47be47132b7062a55cd77994e28e15e6271c7db334d4048136b5ee not found: ID does not exist" containerID="d285d3fd8d47be47132b7062a55cd77994e28e15e6271c7db334d4048136b5ee" Jan 26 13:29:37 crc kubenswrapper[4881]: I0126 13:29:37.560132 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d285d3fd8d47be47132b7062a55cd77994e28e15e6271c7db334d4048136b5ee"} err="failed to get container status 
\"d285d3fd8d47be47132b7062a55cd77994e28e15e6271c7db334d4048136b5ee\": rpc error: code = NotFound desc = could not find container \"d285d3fd8d47be47132b7062a55cd77994e28e15e6271c7db334d4048136b5ee\": container with ID starting with d285d3fd8d47be47132b7062a55cd77994e28e15e6271c7db334d4048136b5ee not found: ID does not exist" Jan 26 13:29:37 crc kubenswrapper[4881]: I0126 13:29:37.560157 4881 scope.go:117] "RemoveContainer" containerID="3b81ebd697afa9bb9fbea2e2fbc0995bed3c01d13978f85cf5c09bbb77fa1e70" Jan 26 13:29:37 crc kubenswrapper[4881]: E0126 13:29:37.560696 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3b81ebd697afa9bb9fbea2e2fbc0995bed3c01d13978f85cf5c09bbb77fa1e70\": container with ID starting with 3b81ebd697afa9bb9fbea2e2fbc0995bed3c01d13978f85cf5c09bbb77fa1e70 not found: ID does not exist" containerID="3b81ebd697afa9bb9fbea2e2fbc0995bed3c01d13978f85cf5c09bbb77fa1e70" Jan 26 13:29:37 crc kubenswrapper[4881]: I0126 13:29:37.560774 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b81ebd697afa9bb9fbea2e2fbc0995bed3c01d13978f85cf5c09bbb77fa1e70"} err="failed to get container status \"3b81ebd697afa9bb9fbea2e2fbc0995bed3c01d13978f85cf5c09bbb77fa1e70\": rpc error: code = NotFound desc = could not find container \"3b81ebd697afa9bb9fbea2e2fbc0995bed3c01d13978f85cf5c09bbb77fa1e70\": container with ID starting with 3b81ebd697afa9bb9fbea2e2fbc0995bed3c01d13978f85cf5c09bbb77fa1e70 not found: ID does not exist" Jan 26 13:29:37 crc kubenswrapper[4881]: E0126 13:29:37.578558 4881 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod21be046f_b853_414b_97cb_f7056629ce28.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod21be046f_b853_414b_97cb_f7056629ce28.slice/crio-84f69d6e728ba3ff3f32e5549537722143d042b40db3fd657f9d613f0424c588\": RecentStats: unable to find data in memory cache]" Jan 26 13:29:38 crc kubenswrapper[4881]: I0126 13:29:38.099007 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21be046f-b853-414b-97cb-f7056629ce28" path="/var/lib/kubelet/pods/21be046f-b853-414b-97cb-f7056629ce28/volumes" Jan 26 13:29:38 crc kubenswrapper[4881]: I0126 13:29:38.464022 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4xdpt" event={"ID":"fe7af6b1-84bd-4aea-8c48-f6aefadd89a0","Type":"ContainerStarted","Data":"3234307f9dea024e249fbbe9b31eddb11a1843e89fcbfdc837e7344164d7edf4"} Jan 26 13:29:38 crc kubenswrapper[4881]: I0126 13:29:38.493647 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4xdpt" podStartSLOduration=3.051178679 podStartE2EDuration="9.493623576s" podCreationTimestamp="2026-01-26 13:29:29 +0000 UTC" firstStartedPulling="2026-01-26 13:29:31.382616313 +0000 UTC m=+3243.861926379" lastFinishedPulling="2026-01-26 13:29:37.82506124 +0000 UTC m=+3250.304371276" observedRunningTime="2026-01-26 13:29:38.485893699 +0000 UTC m=+3250.965203725" watchObservedRunningTime="2026-01-26 13:29:38.493623576 +0000 UTC m=+3250.972933622" Jan 26 13:29:39 crc kubenswrapper[4881]: I0126 13:29:39.518748 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4xdpt" Jan 26 13:29:39 crc 
Jan 26 13:29:40 crc kubenswrapper[4881]: I0126 13:29:40.565697 4881 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-4xdpt" podUID="fe7af6b1-84bd-4aea-8c48-f6aefadd89a0" containerName="registry-server" probeResult="failure" output=<
Jan 26 13:29:40 crc kubenswrapper[4881]: timeout: failed to connect service ":50051" within 1s
Jan 26 13:29:40 crc kubenswrapper[4881]: >
Jan 26 13:29:45 crc kubenswrapper[4881]: I0126 13:29:45.553455 4881 generic.go:334] "Generic (PLEG): container finished" podID="adce7384-2dc6-4e86-af0f-fb3b38627515" containerID="425ce4fbf630d9068bd51c91930d546d8b7cee6a66683072789701d64f503b69" exitCode=0
Jan 26 13:29:45 crc kubenswrapper[4881]: I0126 13:29:45.553775 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"adce7384-2dc6-4e86-af0f-fb3b38627515","Type":"ContainerDied","Data":"425ce4fbf630d9068bd51c91930d546d8b7cee6a66683072789701d64f503b69"}
Jan 26 13:29:46 crc kubenswrapper[4881]: I0126 13:29:46.082949 4881 scope.go:117] "RemoveContainer" containerID="742ede60393d7ca8a4393cea6341038ffa8eebf9a1b4a11fff9f1b373a984190"
Jan 26 13:29:46 crc kubenswrapper[4881]: E0126 13:29:46.083498 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19"
Jan 26 13:29:46 crc kubenswrapper[4881]: I0126 13:29:46.582108 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"adce7384-2dc6-4e86-af0f-fb3b38627515","Type":"ContainerStarted","Data":"9fec5e5f3437e63724df5a3bc14300ac0c90e0318628b958aeb0348daed1852a"}
Jan 26 13:29:49 crc kubenswrapper[4881]: I0126 13:29:49.654395 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4xdpt"
Jan 26 13:29:49 crc kubenswrapper[4881]: I0126 13:29:49.701047 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4xdpt"
Jan 26 13:29:49 crc kubenswrapper[4881]: I0126 13:29:49.906163 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4xdpt"]
Jan 26 13:29:51 crc kubenswrapper[4881]: I0126 13:29:51.645552 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"adce7384-2dc6-4e86-af0f-fb3b38627515","Type":"ContainerStarted","Data":"23601cfe193e87faae43bcdb4f8211099e14191c903753df2b4e4368a66562a9"}
Jan 26 13:29:51 crc kubenswrapper[4881]: I0126 13:29:51.645863 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"adce7384-2dc6-4e86-af0f-fb3b38627515","Type":"ContainerStarted","Data":"9caa0741804fdd99df9efbac37b81a99c13a611a6f5934958cf6f155284cda2f"}
Jan 26 13:29:51 crc kubenswrapper[4881]: I0126 13:29:51.645712 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-4xdpt" podUID="fe7af6b1-84bd-4aea-8c48-f6aefadd89a0" containerName="registry-server" containerID="cri-o://3234307f9dea024e249fbbe9b31eddb11a1843e89fcbfdc837e7344164d7edf4" gracePeriod=2
Jan 26 13:29:51 crc kubenswrapper[4881]: I0126 13:29:51.673688 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=23.673664806 podStartE2EDuration="23.673664806s" podCreationTimestamp="2026-01-26 13:29:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 13:29:51.671946575 +0000 UTC m=+3264.151256611" watchObservedRunningTime="2026-01-26 13:29:51.673664806 +0000 UTC m=+3264.152974832"
Jan 26 13:29:52 crc kubenswrapper[4881]: I0126 13:29:52.146383 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4xdpt"
Jan 26 13:29:52 crc kubenswrapper[4881]: I0126 13:29:52.341836 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe7af6b1-84bd-4aea-8c48-f6aefadd89a0-utilities\") pod \"fe7af6b1-84bd-4aea-8c48-f6aefadd89a0\" (UID: \"fe7af6b1-84bd-4aea-8c48-f6aefadd89a0\") "
Jan 26 13:29:52 crc kubenswrapper[4881]: I0126 13:29:52.341979 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe7af6b1-84bd-4aea-8c48-f6aefadd89a0-catalog-content\") pod \"fe7af6b1-84bd-4aea-8c48-f6aefadd89a0\" (UID: \"fe7af6b1-84bd-4aea-8c48-f6aefadd89a0\") "
Jan 26 13:29:52 crc kubenswrapper[4881]: I0126 13:29:52.342055 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4gxhk\" (UniqueName: \"kubernetes.io/projected/fe7af6b1-84bd-4aea-8c48-f6aefadd89a0-kube-api-access-4gxhk\") pod \"fe7af6b1-84bd-4aea-8c48-f6aefadd89a0\" (UID: \"fe7af6b1-84bd-4aea-8c48-f6aefadd89a0\") "
Jan 26 13:29:52 crc kubenswrapper[4881]: I0126 13:29:52.342668 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe7af6b1-84bd-4aea-8c48-f6aefadd89a0-utilities" (OuterVolumeSpecName: "utilities") pod "fe7af6b1-84bd-4aea-8c48-f6aefadd89a0" (UID: "fe7af6b1-84bd-4aea-8c48-f6aefadd89a0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 13:29:52 crc kubenswrapper[4881]: I0126 13:29:52.365821 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe7af6b1-84bd-4aea-8c48-f6aefadd89a0-kube-api-access-4gxhk" (OuterVolumeSpecName: "kube-api-access-4gxhk") pod "fe7af6b1-84bd-4aea-8c48-f6aefadd89a0" (UID: "fe7af6b1-84bd-4aea-8c48-f6aefadd89a0"). InnerVolumeSpecName "kube-api-access-4gxhk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 13:29:52 crc kubenswrapper[4881]: I0126 13:29:52.373300 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe7af6b1-84bd-4aea-8c48-f6aefadd89a0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fe7af6b1-84bd-4aea-8c48-f6aefadd89a0" (UID: "fe7af6b1-84bd-4aea-8c48-f6aefadd89a0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:29:52 crc kubenswrapper[4881]: I0126 13:29:52.444493 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4gxhk\" (UniqueName: \"kubernetes.io/projected/fe7af6b1-84bd-4aea-8c48-f6aefadd89a0-kube-api-access-4gxhk\") on node \"crc\" DevicePath \"\"" Jan 26 13:29:52 crc kubenswrapper[4881]: I0126 13:29:52.444552 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe7af6b1-84bd-4aea-8c48-f6aefadd89a0-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 13:29:52 crc kubenswrapper[4881]: I0126 13:29:52.444564 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe7af6b1-84bd-4aea-8c48-f6aefadd89a0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 13:29:52 crc kubenswrapper[4881]: I0126 13:29:52.656873 4881 generic.go:334] "Generic (PLEG): container finished" podID="fe7af6b1-84bd-4aea-8c48-f6aefadd89a0" containerID="3234307f9dea024e249fbbe9b31eddb11a1843e89fcbfdc837e7344164d7edf4" exitCode=0 Jan 26 13:29:52 crc kubenswrapper[4881]: I0126 13:29:52.656931 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4xdpt" event={"ID":"fe7af6b1-84bd-4aea-8c48-f6aefadd89a0","Type":"ContainerDied","Data":"3234307f9dea024e249fbbe9b31eddb11a1843e89fcbfdc837e7344164d7edf4"} Jan 26 13:29:52 crc kubenswrapper[4881]: I0126 13:29:52.657035 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4xdpt" event={"ID":"fe7af6b1-84bd-4aea-8c48-f6aefadd89a0","Type":"ContainerDied","Data":"6c475335ae30e2c197de1a46b16751278178a97c487e756fd6981d91d97b2e30"} Jan 26 13:29:52 crc kubenswrapper[4881]: I0126 13:29:52.657032 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4xdpt" Jan 26 13:29:52 crc kubenswrapper[4881]: I0126 13:29:52.657064 4881 scope.go:117] "RemoveContainer" containerID="3234307f9dea024e249fbbe9b31eddb11a1843e89fcbfdc837e7344164d7edf4" Jan 26 13:29:52 crc kubenswrapper[4881]: I0126 13:29:52.683766 4881 scope.go:117] "RemoveContainer" containerID="f57b07832dd16cd951c9f4990dae00230076fb09cda1522c661485f63884a4c0" Jan 26 13:29:52 crc kubenswrapper[4881]: I0126 13:29:52.702428 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4xdpt"] Jan 26 13:29:52 crc kubenswrapper[4881]: I0126 13:29:52.713925 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-4xdpt"] Jan 26 13:29:52 crc kubenswrapper[4881]: I0126 13:29:52.726038 4881 scope.go:117] "RemoveContainer" containerID="b0e2dec2f2b7d0d401d53e1202a42dfca6af030acee5b9578a65d8b5f9295ec7" Jan 26 13:29:52 crc kubenswrapper[4881]: I0126 13:29:52.775938 4881 scope.go:117] "RemoveContainer" containerID="3234307f9dea024e249fbbe9b31eddb11a1843e89fcbfdc837e7344164d7edf4" Jan 26 13:29:52 crc kubenswrapper[4881]: E0126 13:29:52.776640 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3234307f9dea024e249fbbe9b31eddb11a1843e89fcbfdc837e7344164d7edf4\": container with ID starting with 3234307f9dea024e249fbbe9b31eddb11a1843e89fcbfdc837e7344164d7edf4 not found: ID does not exist" containerID="3234307f9dea024e249fbbe9b31eddb11a1843e89fcbfdc837e7344164d7edf4" Jan 26 13:29:52 crc kubenswrapper[4881]: I0126 13:29:52.776748 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3234307f9dea024e249fbbe9b31eddb11a1843e89fcbfdc837e7344164d7edf4"} err="failed to get container status \"3234307f9dea024e249fbbe9b31eddb11a1843e89fcbfdc837e7344164d7edf4\": rpc error: code = NotFound desc = could not find container \"3234307f9dea024e249fbbe9b31eddb11a1843e89fcbfdc837e7344164d7edf4\": container with ID starting with 3234307f9dea024e249fbbe9b31eddb11a1843e89fcbfdc837e7344164d7edf4 not found: ID does not exist" Jan 26 13:29:52 crc kubenswrapper[4881]: I0126 13:29:52.776830 4881 scope.go:117] "RemoveContainer" containerID="f57b07832dd16cd951c9f4990dae00230076fb09cda1522c661485f63884a4c0" Jan 26 13:29:52 crc kubenswrapper[4881]: E0126 13:29:52.777649 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f57b07832dd16cd951c9f4990dae00230076fb09cda1522c661485f63884a4c0\": container with ID starting with f57b07832dd16cd951c9f4990dae00230076fb09cda1522c661485f63884a4c0 not found: ID does not exist" containerID="f57b07832dd16cd951c9f4990dae00230076fb09cda1522c661485f63884a4c0" Jan 26 13:29:52 crc kubenswrapper[4881]: I0126 13:29:52.777698 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f57b07832dd16cd951c9f4990dae00230076fb09cda1522c661485f63884a4c0"} err="failed to get container status \"f57b07832dd16cd951c9f4990dae00230076fb09cda1522c661485f63884a4c0\": rpc error: code = NotFound desc = could not find container \"f57b07832dd16cd951c9f4990dae00230076fb09cda1522c661485f63884a4c0\": container with ID starting with f57b07832dd16cd951c9f4990dae00230076fb09cda1522c661485f63884a4c0 not found: ID does not exist" Jan 26 13:29:52 crc kubenswrapper[4881]: I0126 13:29:52.777730 4881 scope.go:117] "RemoveContainer" 
containerID="b0e2dec2f2b7d0d401d53e1202a42dfca6af030acee5b9578a65d8b5f9295ec7" Jan 26 13:29:52 crc kubenswrapper[4881]: E0126 13:29:52.778373 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b0e2dec2f2b7d0d401d53e1202a42dfca6af030acee5b9578a65d8b5f9295ec7\": container with ID starting with b0e2dec2f2b7d0d401d53e1202a42dfca6af030acee5b9578a65d8b5f9295ec7 not found: ID does not exist" containerID="b0e2dec2f2b7d0d401d53e1202a42dfca6af030acee5b9578a65d8b5f9295ec7" Jan 26 13:29:52 crc kubenswrapper[4881]: I0126 13:29:52.778416 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0e2dec2f2b7d0d401d53e1202a42dfca6af030acee5b9578a65d8b5f9295ec7"} err="failed to get container status \"b0e2dec2f2b7d0d401d53e1202a42dfca6af030acee5b9578a65d8b5f9295ec7\": rpc error: code = NotFound desc = could not find container \"b0e2dec2f2b7d0d401d53e1202a42dfca6af030acee5b9578a65d8b5f9295ec7\": container with ID starting with b0e2dec2f2b7d0d401d53e1202a42dfca6af030acee5b9578a65d8b5f9295ec7 not found: ID does not exist" Jan 26 13:29:54 crc kubenswrapper[4881]: I0126 13:29:54.024782 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Jan 26 13:29:54 crc kubenswrapper[4881]: I0126 13:29:54.102830 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe7af6b1-84bd-4aea-8c48-f6aefadd89a0" path="/var/lib/kubelet/pods/fe7af6b1-84bd-4aea-8c48-f6aefadd89a0/volumes" Jan 26 13:29:59 crc kubenswrapper[4881]: I0126 13:29:59.025075 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Jan 26 13:29:59 crc kubenswrapper[4881]: I0126 13:29:59.042406 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Jan 26 13:29:59 crc kubenswrapper[4881]: I0126 13:29:59.745841 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Jan 26 13:30:00 crc kubenswrapper[4881]: I0126 13:30:00.088176 4881 scope.go:117] "RemoveContainer" containerID="742ede60393d7ca8a4393cea6341038ffa8eebf9a1b4a11fff9f1b373a984190" Jan 26 13:30:00 crc kubenswrapper[4881]: E0126 13:30:00.088383 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" Jan 26 13:30:00 crc kubenswrapper[4881]: I0126 13:30:00.147047 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490570-pjxtj"] Jan 26 13:30:00 crc kubenswrapper[4881]: E0126 13:30:00.147670 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21be046f-b853-414b-97cb-f7056629ce28" containerName="extract-utilities" Jan 26 13:30:00 crc kubenswrapper[4881]: I0126 13:30:00.147715 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="21be046f-b853-414b-97cb-f7056629ce28" containerName="extract-utilities" Jan 26 13:30:00 crc kubenswrapper[4881]: E0126 13:30:00.147734 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21be046f-b853-414b-97cb-f7056629ce28" 
containerName="extract-content" Jan 26 13:30:00 crc kubenswrapper[4881]: I0126 13:30:00.147745 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="21be046f-b853-414b-97cb-f7056629ce28" containerName="extract-content" Jan 26 13:30:00 crc kubenswrapper[4881]: E0126 13:30:00.147773 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21be046f-b853-414b-97cb-f7056629ce28" containerName="registry-server" Jan 26 13:30:00 crc kubenswrapper[4881]: I0126 13:30:00.147793 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="21be046f-b853-414b-97cb-f7056629ce28" containerName="registry-server" Jan 26 13:30:00 crc kubenswrapper[4881]: E0126 13:30:00.147819 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe7af6b1-84bd-4aea-8c48-f6aefadd89a0" containerName="extract-utilities" Jan 26 13:30:00 crc kubenswrapper[4881]: I0126 13:30:00.147827 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe7af6b1-84bd-4aea-8c48-f6aefadd89a0" containerName="extract-utilities" Jan 26 13:30:00 crc kubenswrapper[4881]: E0126 13:30:00.147845 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe7af6b1-84bd-4aea-8c48-f6aefadd89a0" containerName="extract-content" Jan 26 13:30:00 crc kubenswrapper[4881]: I0126 13:30:00.147854 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe7af6b1-84bd-4aea-8c48-f6aefadd89a0" containerName="extract-content" Jan 26 13:30:00 crc kubenswrapper[4881]: E0126 13:30:00.147868 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe7af6b1-84bd-4aea-8c48-f6aefadd89a0" containerName="registry-server" Jan 26 13:30:00 crc kubenswrapper[4881]: I0126 13:30:00.147883 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe7af6b1-84bd-4aea-8c48-f6aefadd89a0" containerName="registry-server" Jan 26 13:30:00 crc kubenswrapper[4881]: I0126 13:30:00.148199 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe7af6b1-84bd-4aea-8c48-f6aefadd89a0" containerName="registry-server" Jan 26 13:30:00 crc kubenswrapper[4881]: I0126 13:30:00.148229 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="21be046f-b853-414b-97cb-f7056629ce28" containerName="registry-server" Jan 26 13:30:00 crc kubenswrapper[4881]: I0126 13:30:00.149227 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490570-pjxtj" Jan 26 13:30:00 crc kubenswrapper[4881]: I0126 13:30:00.153826 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 26 13:30:00 crc kubenswrapper[4881]: I0126 13:30:00.154025 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 26 13:30:00 crc kubenswrapper[4881]: I0126 13:30:00.156926 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490570-pjxtj"] Jan 26 13:30:00 crc kubenswrapper[4881]: I0126 13:30:00.226557 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3196a145-c310-4e39-ab2e-e6e44993f4e9-secret-volume\") pod \"collect-profiles-29490570-pjxtj\" (UID: \"3196a145-c310-4e39-ab2e-e6e44993f4e9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490570-pjxtj" Jan 26 13:30:00 crc kubenswrapper[4881]: I0126 13:30:00.226725 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzhpw\" (UniqueName: \"kubernetes.io/projected/3196a145-c310-4e39-ab2e-e6e44993f4e9-kube-api-access-xzhpw\") pod \"collect-profiles-29490570-pjxtj\" (UID: \"3196a145-c310-4e39-ab2e-e6e44993f4e9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490570-pjxtj" Jan 26 13:30:00 crc kubenswrapper[4881]: I0126 13:30:00.226754 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3196a145-c310-4e39-ab2e-e6e44993f4e9-config-volume\") pod \"collect-profiles-29490570-pjxtj\" (UID: \"3196a145-c310-4e39-ab2e-e6e44993f4e9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490570-pjxtj" Jan 26 13:30:00 crc kubenswrapper[4881]: I0126 13:30:00.328486 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3196a145-c310-4e39-ab2e-e6e44993f4e9-secret-volume\") pod \"collect-profiles-29490570-pjxtj\" (UID: \"3196a145-c310-4e39-ab2e-e6e44993f4e9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490570-pjxtj" Jan 26 13:30:00 crc kubenswrapper[4881]: I0126 13:30:00.328685 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xzhpw\" (UniqueName: \"kubernetes.io/projected/3196a145-c310-4e39-ab2e-e6e44993f4e9-kube-api-access-xzhpw\") pod \"collect-profiles-29490570-pjxtj\" (UID: \"3196a145-c310-4e39-ab2e-e6e44993f4e9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490570-pjxtj" Jan 26 13:30:00 crc kubenswrapper[4881]: I0126 13:30:00.328738 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3196a145-c310-4e39-ab2e-e6e44993f4e9-config-volume\") pod \"collect-profiles-29490570-pjxtj\" (UID: \"3196a145-c310-4e39-ab2e-e6e44993f4e9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490570-pjxtj" Jan 26 13:30:00 crc kubenswrapper[4881]: I0126 13:30:00.330197 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3196a145-c310-4e39-ab2e-e6e44993f4e9-config-volume\") pod 
\"collect-profiles-29490570-pjxtj\" (UID: \"3196a145-c310-4e39-ab2e-e6e44993f4e9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490570-pjxtj" Jan 26 13:30:00 crc kubenswrapper[4881]: I0126 13:30:00.345364 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3196a145-c310-4e39-ab2e-e6e44993f4e9-secret-volume\") pod \"collect-profiles-29490570-pjxtj\" (UID: \"3196a145-c310-4e39-ab2e-e6e44993f4e9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490570-pjxtj" Jan 26 13:30:00 crc kubenswrapper[4881]: I0126 13:30:00.349889 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xzhpw\" (UniqueName: \"kubernetes.io/projected/3196a145-c310-4e39-ab2e-e6e44993f4e9-kube-api-access-xzhpw\") pod \"collect-profiles-29490570-pjxtj\" (UID: \"3196a145-c310-4e39-ab2e-e6e44993f4e9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490570-pjxtj" Jan 26 13:30:00 crc kubenswrapper[4881]: I0126 13:30:00.482274 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490570-pjxtj" Jan 26 13:30:01 crc kubenswrapper[4881]: I0126 13:30:01.017855 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490570-pjxtj"] Jan 26 13:30:01 crc kubenswrapper[4881]: I0126 13:30:01.765997 4881 generic.go:334] "Generic (PLEG): container finished" podID="3196a145-c310-4e39-ab2e-e6e44993f4e9" containerID="1289fff4699c84f890d7b124be30444b4be3aa90591968acd4f1960df88f3624" exitCode=0 Jan 26 13:30:01 crc kubenswrapper[4881]: I0126 13:30:01.766053 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490570-pjxtj" event={"ID":"3196a145-c310-4e39-ab2e-e6e44993f4e9","Type":"ContainerDied","Data":"1289fff4699c84f890d7b124be30444b4be3aa90591968acd4f1960df88f3624"} Jan 26 13:30:01 crc kubenswrapper[4881]: I0126 13:30:01.766323 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490570-pjxtj" event={"ID":"3196a145-c310-4e39-ab2e-e6e44993f4e9","Type":"ContainerStarted","Data":"db6439542edf58a1f5d4537b6beb996a2ebbc0071559787ffe991c65b921cf30"} Jan 26 13:30:03 crc kubenswrapper[4881]: I0126 13:30:03.352649 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490570-pjxtj" Jan 26 13:30:03 crc kubenswrapper[4881]: I0126 13:30:03.406966 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3196a145-c310-4e39-ab2e-e6e44993f4e9-config-volume\") pod \"3196a145-c310-4e39-ab2e-e6e44993f4e9\" (UID: \"3196a145-c310-4e39-ab2e-e6e44993f4e9\") " Jan 26 13:30:03 crc kubenswrapper[4881]: I0126 13:30:03.407357 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xzhpw\" (UniqueName: \"kubernetes.io/projected/3196a145-c310-4e39-ab2e-e6e44993f4e9-kube-api-access-xzhpw\") pod \"3196a145-c310-4e39-ab2e-e6e44993f4e9\" (UID: \"3196a145-c310-4e39-ab2e-e6e44993f4e9\") " Jan 26 13:30:03 crc kubenswrapper[4881]: I0126 13:30:03.407401 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3196a145-c310-4e39-ab2e-e6e44993f4e9-secret-volume\") pod \"3196a145-c310-4e39-ab2e-e6e44993f4e9\" (UID: \"3196a145-c310-4e39-ab2e-e6e44993f4e9\") " Jan 26 13:30:03 crc kubenswrapper[4881]: I0126 13:30:03.408125 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3196a145-c310-4e39-ab2e-e6e44993f4e9-config-volume" (OuterVolumeSpecName: "config-volume") pod "3196a145-c310-4e39-ab2e-e6e44993f4e9" (UID: "3196a145-c310-4e39-ab2e-e6e44993f4e9"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:30:03 crc kubenswrapper[4881]: I0126 13:30:03.414445 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3196a145-c310-4e39-ab2e-e6e44993f4e9-kube-api-access-xzhpw" (OuterVolumeSpecName: "kube-api-access-xzhpw") pod "3196a145-c310-4e39-ab2e-e6e44993f4e9" (UID: "3196a145-c310-4e39-ab2e-e6e44993f4e9"). InnerVolumeSpecName "kube-api-access-xzhpw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:30:03 crc kubenswrapper[4881]: I0126 13:30:03.419232 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3196a145-c310-4e39-ab2e-e6e44993f4e9-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "3196a145-c310-4e39-ab2e-e6e44993f4e9" (UID: "3196a145-c310-4e39-ab2e-e6e44993f4e9"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:30:03 crc kubenswrapper[4881]: I0126 13:30:03.509801 4881 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3196a145-c310-4e39-ab2e-e6e44993f4e9-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 13:30:03 crc kubenswrapper[4881]: I0126 13:30:03.509831 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xzhpw\" (UniqueName: \"kubernetes.io/projected/3196a145-c310-4e39-ab2e-e6e44993f4e9-kube-api-access-xzhpw\") on node \"crc\" DevicePath \"\"" Jan 26 13:30:03 crc kubenswrapper[4881]: I0126 13:30:03.509843 4881 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3196a145-c310-4e39-ab2e-e6e44993f4e9-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 26 13:30:03 crc kubenswrapper[4881]: I0126 13:30:03.814571 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490570-pjxtj" event={"ID":"3196a145-c310-4e39-ab2e-e6e44993f4e9","Type":"ContainerDied","Data":"db6439542edf58a1f5d4537b6beb996a2ebbc0071559787ffe991c65b921cf30"} Jan 26 13:30:03 crc kubenswrapper[4881]: I0126 13:30:03.814730 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="db6439542edf58a1f5d4537b6beb996a2ebbc0071559787ffe991c65b921cf30" Jan 26 13:30:03 crc kubenswrapper[4881]: I0126 13:30:03.814736 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490570-pjxtj" Jan 26 13:30:04 crc kubenswrapper[4881]: I0126 13:30:04.437817 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490525-5chxq"] Jan 26 13:30:04 crc kubenswrapper[4881]: I0126 13:30:04.451893 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490525-5chxq"] Jan 26 13:30:06 crc kubenswrapper[4881]: I0126 13:30:06.100112 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43796436-22b5-498f-8446-8f08d2f82305" path="/var/lib/kubelet/pods/43796436-22b5-498f-8446-8f08d2f82305/volumes" Jan 26 13:30:12 crc kubenswrapper[4881]: I0126 13:30:12.003742 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Jan 26 13:30:12 crc kubenswrapper[4881]: E0126 13:30:12.004742 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3196a145-c310-4e39-ab2e-e6e44993f4e9" containerName="collect-profiles" Jan 26 13:30:12 crc kubenswrapper[4881]: I0126 13:30:12.004759 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="3196a145-c310-4e39-ab2e-e6e44993f4e9" containerName="collect-profiles" Jan 26 13:30:12 crc kubenswrapper[4881]: I0126 13:30:12.005042 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="3196a145-c310-4e39-ab2e-e6e44993f4e9" containerName="collect-profiles" Jan 26 13:30:12 crc kubenswrapper[4881]: I0126 13:30:12.005924 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 26 13:30:12 crc kubenswrapper[4881]: I0126 13:30:12.042260 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-j8j29" Jan 26 13:30:12 crc kubenswrapper[4881]: I0126 13:30:12.042490 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Jan 26 13:30:12 crc kubenswrapper[4881]: I0126 13:30:12.042714 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Jan 26 13:30:12 crc kubenswrapper[4881]: I0126 13:30:12.042852 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Jan 26 13:30:12 crc kubenswrapper[4881]: I0126 13:30:12.042740 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Jan 26 13:30:12 crc kubenswrapper[4881]: I0126 13:30:12.083211 4881 scope.go:117] "RemoveContainer" containerID="742ede60393d7ca8a4393cea6341038ffa8eebf9a1b4a11fff9f1b373a984190" Jan 26 13:30:12 crc kubenswrapper[4881]: E0126 13:30:12.083618 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" Jan 26 13:30:12 crc kubenswrapper[4881]: I0126 13:30:12.094649 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/fb8ddd97-c952-48e2-b3df-f594646b4377-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"fb8ddd97-c952-48e2-b3df-f594646b4377\") " pod="openstack/tempest-tests-tempest" Jan 26 13:30:12 crc kubenswrapper[4881]: I0126 13:30:12.094698 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/fb8ddd97-c952-48e2-b3df-f594646b4377-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"fb8ddd97-c952-48e2-b3df-f594646b4377\") " pod="openstack/tempest-tests-tempest" Jan 26 13:30:12 crc kubenswrapper[4881]: I0126 13:30:12.094974 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/fb8ddd97-c952-48e2-b3df-f594646b4377-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"fb8ddd97-c952-48e2-b3df-f594646b4377\") " pod="openstack/tempest-tests-tempest" Jan 26 13:30:12 crc kubenswrapper[4881]: I0126 13:30:12.095147 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fb8ddd97-c952-48e2-b3df-f594646b4377-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"fb8ddd97-c952-48e2-b3df-f594646b4377\") " pod="openstack/tempest-tests-tempest" Jan 26 13:30:12 crc kubenswrapper[4881]: I0126 13:30:12.095303 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fb8ddd97-c952-48e2-b3df-f594646b4377-config-data\") pod \"tempest-tests-tempest\" (UID: 
\"fb8ddd97-c952-48e2-b3df-f594646b4377\") " pod="openstack/tempest-tests-tempest" Jan 26 13:30:12 crc kubenswrapper[4881]: I0126 13:30:12.095430 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5rkf\" (UniqueName: \"kubernetes.io/projected/fb8ddd97-c952-48e2-b3df-f594646b4377-kube-api-access-c5rkf\") pod \"tempest-tests-tempest\" (UID: \"fb8ddd97-c952-48e2-b3df-f594646b4377\") " pod="openstack/tempest-tests-tempest" Jan 26 13:30:12 crc kubenswrapper[4881]: I0126 13:30:12.095782 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/fb8ddd97-c952-48e2-b3df-f594646b4377-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"fb8ddd97-c952-48e2-b3df-f594646b4377\") " pod="openstack/tempest-tests-tempest" Jan 26 13:30:12 crc kubenswrapper[4881]: I0126 13:30:12.095969 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"tempest-tests-tempest\" (UID: \"fb8ddd97-c952-48e2-b3df-f594646b4377\") " pod="openstack/tempest-tests-tempest" Jan 26 13:30:12 crc kubenswrapper[4881]: I0126 13:30:12.096045 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/fb8ddd97-c952-48e2-b3df-f594646b4377-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"fb8ddd97-c952-48e2-b3df-f594646b4377\") " pod="openstack/tempest-tests-tempest" Jan 26 13:30:12 crc kubenswrapper[4881]: I0126 13:30:12.197659 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/fb8ddd97-c952-48e2-b3df-f594646b4377-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"fb8ddd97-c952-48e2-b3df-f594646b4377\") " pod="openstack/tempest-tests-tempest" Jan 26 13:30:12 crc kubenswrapper[4881]: I0126 13:30:12.197756 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"tempest-tests-tempest\" (UID: \"fb8ddd97-c952-48e2-b3df-f594646b4377\") " pod="openstack/tempest-tests-tempest" Jan 26 13:30:12 crc kubenswrapper[4881]: I0126 13:30:12.197807 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/fb8ddd97-c952-48e2-b3df-f594646b4377-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"fb8ddd97-c952-48e2-b3df-f594646b4377\") " pod="openstack/tempest-tests-tempest" Jan 26 13:30:12 crc kubenswrapper[4881]: I0126 13:30:12.197846 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/fb8ddd97-c952-48e2-b3df-f594646b4377-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"fb8ddd97-c952-48e2-b3df-f594646b4377\") " pod="openstack/tempest-tests-tempest" Jan 26 13:30:12 crc kubenswrapper[4881]: I0126 13:30:12.197878 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/fb8ddd97-c952-48e2-b3df-f594646b4377-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"fb8ddd97-c952-48e2-b3df-f594646b4377\") " pod="openstack/tempest-tests-tempest" Jan 26 
13:30:12 crc kubenswrapper[4881]: I0126 13:30:12.197974 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/fb8ddd97-c952-48e2-b3df-f594646b4377-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"fb8ddd97-c952-48e2-b3df-f594646b4377\") " pod="openstack/tempest-tests-tempest" Jan 26 13:30:12 crc kubenswrapper[4881]: I0126 13:30:12.198022 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fb8ddd97-c952-48e2-b3df-f594646b4377-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"fb8ddd97-c952-48e2-b3df-f594646b4377\") " pod="openstack/tempest-tests-tempest" Jan 26 13:30:12 crc kubenswrapper[4881]: I0126 13:30:12.198068 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fb8ddd97-c952-48e2-b3df-f594646b4377-config-data\") pod \"tempest-tests-tempest\" (UID: \"fb8ddd97-c952-48e2-b3df-f594646b4377\") " pod="openstack/tempest-tests-tempest" Jan 26 13:30:12 crc kubenswrapper[4881]: I0126 13:30:12.198118 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5rkf\" (UniqueName: \"kubernetes.io/projected/fb8ddd97-c952-48e2-b3df-f594646b4377-kube-api-access-c5rkf\") pod \"tempest-tests-tempest\" (UID: \"fb8ddd97-c952-48e2-b3df-f594646b4377\") " pod="openstack/tempest-tests-tempest" Jan 26 13:30:12 crc kubenswrapper[4881]: I0126 13:30:12.198199 4881 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"tempest-tests-tempest\" (UID: \"fb8ddd97-c952-48e2-b3df-f594646b4377\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/tempest-tests-tempest" Jan 26 13:30:12 crc kubenswrapper[4881]: I0126 13:30:12.199343 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/fb8ddd97-c952-48e2-b3df-f594646b4377-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"fb8ddd97-c952-48e2-b3df-f594646b4377\") " pod="openstack/tempest-tests-tempest" Jan 26 13:30:12 crc kubenswrapper[4881]: I0126 13:30:12.199700 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/fb8ddd97-c952-48e2-b3df-f594646b4377-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"fb8ddd97-c952-48e2-b3df-f594646b4377\") " pod="openstack/tempest-tests-tempest" Jan 26 13:30:12 crc kubenswrapper[4881]: I0126 13:30:12.199956 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/fb8ddd97-c952-48e2-b3df-f594646b4377-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"fb8ddd97-c952-48e2-b3df-f594646b4377\") " pod="openstack/tempest-tests-tempest" Jan 26 13:30:12 crc kubenswrapper[4881]: I0126 13:30:12.200363 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fb8ddd97-c952-48e2-b3df-f594646b4377-config-data\") pod \"tempest-tests-tempest\" (UID: \"fb8ddd97-c952-48e2-b3df-f594646b4377\") " pod="openstack/tempest-tests-tempest" Jan 26 13:30:12 crc kubenswrapper[4881]: I0126 13:30:12.204451 4881 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/fb8ddd97-c952-48e2-b3df-f594646b4377-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"fb8ddd97-c952-48e2-b3df-f594646b4377\") " pod="openstack/tempest-tests-tempest" Jan 26 13:30:12 crc kubenswrapper[4881]: I0126 13:30:12.205512 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/fb8ddd97-c952-48e2-b3df-f594646b4377-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"fb8ddd97-c952-48e2-b3df-f594646b4377\") " pod="openstack/tempest-tests-tempest" Jan 26 13:30:12 crc kubenswrapper[4881]: I0126 13:30:12.218580 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fb8ddd97-c952-48e2-b3df-f594646b4377-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"fb8ddd97-c952-48e2-b3df-f594646b4377\") " pod="openstack/tempest-tests-tempest" Jan 26 13:30:12 crc kubenswrapper[4881]: I0126 13:30:12.232356 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5rkf\" (UniqueName: \"kubernetes.io/projected/fb8ddd97-c952-48e2-b3df-f594646b4377-kube-api-access-c5rkf\") pod \"tempest-tests-tempest\" (UID: \"fb8ddd97-c952-48e2-b3df-f594646b4377\") " pod="openstack/tempest-tests-tempest" Jan 26 13:30:12 crc kubenswrapper[4881]: I0126 13:30:12.237209 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"tempest-tests-tempest\" (UID: \"fb8ddd97-c952-48e2-b3df-f594646b4377\") " pod="openstack/tempest-tests-tempest" Jan 26 13:30:12 crc kubenswrapper[4881]: I0126 13:30:12.368286 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 26 13:30:13 crc kubenswrapper[4881]: I0126 13:30:12.951975 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Jan 26 13:30:13 crc kubenswrapper[4881]: I0126 13:30:13.916675 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"fb8ddd97-c952-48e2-b3df-f594646b4377","Type":"ContainerStarted","Data":"219aa5b2d8ad736ed47da53afae42965ce126831691c9a26656b34955bcafb59"} Jan 26 13:30:24 crc kubenswrapper[4881]: I0126 13:30:24.083718 4881 scope.go:117] "RemoveContainer" containerID="742ede60393d7ca8a4393cea6341038ffa8eebf9a1b4a11fff9f1b373a984190" Jan 26 13:30:24 crc kubenswrapper[4881]: E0126 13:30:24.085023 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" Jan 26 13:30:28 crc kubenswrapper[4881]: I0126 13:30:28.058920 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"fb8ddd97-c952-48e2-b3df-f594646b4377","Type":"ContainerStarted","Data":"464f96f94fb33ebef6daa835a396c5f278ee1d77600993cf680dc17d5309f911"} Jan 26 13:30:28 crc kubenswrapper[4881]: I0126 13:30:28.110757 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=4.147344125 podStartE2EDuration="18.110701063s" podCreationTimestamp="2026-01-26 13:30:10 +0000 UTC" firstStartedPulling="2026-01-26 13:30:12.9595869 +0000 UTC m=+3285.438896926" lastFinishedPulling="2026-01-26 13:30:26.922943798 +0000 UTC m=+3299.402253864" observedRunningTime="2026-01-26 13:30:28.097282959 +0000 UTC m=+3300.576593005" watchObservedRunningTime="2026-01-26 13:30:28.110701063 +0000 UTC m=+3300.590011099" Jan 26 13:30:38 crc kubenswrapper[4881]: I0126 13:30:38.092830 4881 scope.go:117] "RemoveContainer" containerID="742ede60393d7ca8a4393cea6341038ffa8eebf9a1b4a11fff9f1b373a984190" Jan 26 13:30:38 crc kubenswrapper[4881]: E0126 13:30:38.093774 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" Jan 26 13:30:51 crc kubenswrapper[4881]: I0126 13:30:51.083068 4881 scope.go:117] "RemoveContainer" containerID="742ede60393d7ca8a4393cea6341038ffa8eebf9a1b4a11fff9f1b373a984190" Jan 26 13:30:51 crc kubenswrapper[4881]: E0126 13:30:51.084018 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" Jan 26 13:30:55 crc kubenswrapper[4881]: I0126 13:30:55.999202 4881 scope.go:117] 
"RemoveContainer" containerID="bd28aa9b377d6898a1ca53e5a69b248ef971bf424eb0f043e01fe7d042d1192f" Jan 26 13:31:05 crc kubenswrapper[4881]: I0126 13:31:05.083209 4881 scope.go:117] "RemoveContainer" containerID="742ede60393d7ca8a4393cea6341038ffa8eebf9a1b4a11fff9f1b373a984190" Jan 26 13:31:05 crc kubenswrapper[4881]: E0126 13:31:05.084477 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" Jan 26 13:31:18 crc kubenswrapper[4881]: I0126 13:31:18.093344 4881 scope.go:117] "RemoveContainer" containerID="742ede60393d7ca8a4393cea6341038ffa8eebf9a1b4a11fff9f1b373a984190" Jan 26 13:31:18 crc kubenswrapper[4881]: E0126 13:31:18.094079 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" Jan 26 13:31:33 crc kubenswrapper[4881]: I0126 13:31:33.083045 4881 scope.go:117] "RemoveContainer" containerID="742ede60393d7ca8a4393cea6341038ffa8eebf9a1b4a11fff9f1b373a984190" Jan 26 13:31:33 crc kubenswrapper[4881]: E0126 13:31:33.084269 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" Jan 26 13:31:41 crc kubenswrapper[4881]: I0126 13:31:41.790028 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-ll6dn"] Jan 26 13:31:41 crc kubenswrapper[4881]: I0126 13:31:41.794006 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ll6dn" Jan 26 13:31:41 crc kubenswrapper[4881]: I0126 13:31:41.817081 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ll6dn"] Jan 26 13:31:41 crc kubenswrapper[4881]: I0126 13:31:41.902031 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f40589a-4328-4dbb-8b83-9a1e34b51e39-catalog-content\") pod \"certified-operators-ll6dn\" (UID: \"5f40589a-4328-4dbb-8b83-9a1e34b51e39\") " pod="openshift-marketplace/certified-operators-ll6dn" Jan 26 13:31:41 crc kubenswrapper[4881]: I0126 13:31:41.902120 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4sqlg\" (UniqueName: \"kubernetes.io/projected/5f40589a-4328-4dbb-8b83-9a1e34b51e39-kube-api-access-4sqlg\") pod \"certified-operators-ll6dn\" (UID: \"5f40589a-4328-4dbb-8b83-9a1e34b51e39\") " pod="openshift-marketplace/certified-operators-ll6dn" Jan 26 13:31:41 crc kubenswrapper[4881]: I0126 13:31:41.902500 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f40589a-4328-4dbb-8b83-9a1e34b51e39-utilities\") pod \"certified-operators-ll6dn\" (UID: \"5f40589a-4328-4dbb-8b83-9a1e34b51e39\") " pod="openshift-marketplace/certified-operators-ll6dn" Jan 26 13:31:42 crc kubenswrapper[4881]: I0126 13:31:42.004319 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f40589a-4328-4dbb-8b83-9a1e34b51e39-utilities\") pod \"certified-operators-ll6dn\" (UID: \"5f40589a-4328-4dbb-8b83-9a1e34b51e39\") " pod="openshift-marketplace/certified-operators-ll6dn" Jan 26 13:31:42 crc kubenswrapper[4881]: I0126 13:31:42.004395 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f40589a-4328-4dbb-8b83-9a1e34b51e39-catalog-content\") pod \"certified-operators-ll6dn\" (UID: \"5f40589a-4328-4dbb-8b83-9a1e34b51e39\") " pod="openshift-marketplace/certified-operators-ll6dn" Jan 26 13:31:42 crc kubenswrapper[4881]: I0126 13:31:42.004438 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4sqlg\" (UniqueName: \"kubernetes.io/projected/5f40589a-4328-4dbb-8b83-9a1e34b51e39-kube-api-access-4sqlg\") pod \"certified-operators-ll6dn\" (UID: \"5f40589a-4328-4dbb-8b83-9a1e34b51e39\") " pod="openshift-marketplace/certified-operators-ll6dn" Jan 26 13:31:42 crc kubenswrapper[4881]: I0126 13:31:42.004889 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f40589a-4328-4dbb-8b83-9a1e34b51e39-catalog-content\") pod \"certified-operators-ll6dn\" (UID: \"5f40589a-4328-4dbb-8b83-9a1e34b51e39\") " pod="openshift-marketplace/certified-operators-ll6dn" Jan 26 13:31:42 crc kubenswrapper[4881]: I0126 13:31:42.005010 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f40589a-4328-4dbb-8b83-9a1e34b51e39-utilities\") pod \"certified-operators-ll6dn\" (UID: \"5f40589a-4328-4dbb-8b83-9a1e34b51e39\") " pod="openshift-marketplace/certified-operators-ll6dn" Jan 26 13:31:42 crc kubenswrapper[4881]: I0126 13:31:42.042795 4881 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-4sqlg\" (UniqueName: \"kubernetes.io/projected/5f40589a-4328-4dbb-8b83-9a1e34b51e39-kube-api-access-4sqlg\") pod \"certified-operators-ll6dn\" (UID: \"5f40589a-4328-4dbb-8b83-9a1e34b51e39\") " pod="openshift-marketplace/certified-operators-ll6dn" Jan 26 13:31:42 crc kubenswrapper[4881]: I0126 13:31:42.114833 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ll6dn" Jan 26 13:31:42 crc kubenswrapper[4881]: I0126 13:31:42.672762 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ll6dn"] Jan 26 13:31:42 crc kubenswrapper[4881]: W0126 13:31:42.675647 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5f40589a_4328_4dbb_8b83_9a1e34b51e39.slice/crio-1bb7a8eae36b34c45a1f4bc77551dc951e01d7c2a5ee54edca8effaff2f093fa WatchSource:0}: Error finding container 1bb7a8eae36b34c45a1f4bc77551dc951e01d7c2a5ee54edca8effaff2f093fa: Status 404 returned error can't find the container with id 1bb7a8eae36b34c45a1f4bc77551dc951e01d7c2a5ee54edca8effaff2f093fa Jan 26 13:31:42 crc kubenswrapper[4881]: I0126 13:31:42.884213 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ll6dn" event={"ID":"5f40589a-4328-4dbb-8b83-9a1e34b51e39","Type":"ContainerStarted","Data":"1bb7a8eae36b34c45a1f4bc77551dc951e01d7c2a5ee54edca8effaff2f093fa"} Jan 26 13:31:43 crc kubenswrapper[4881]: I0126 13:31:43.901940 4881 generic.go:334] "Generic (PLEG): container finished" podID="5f40589a-4328-4dbb-8b83-9a1e34b51e39" containerID="7b5e73ddd4965cc6d71030a1a2f2a60a8d8997bb3db4bfa0046752098e0f4663" exitCode=0 Jan 26 13:31:43 crc kubenswrapper[4881]: I0126 13:31:43.902165 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ll6dn" event={"ID":"5f40589a-4328-4dbb-8b83-9a1e34b51e39","Type":"ContainerDied","Data":"7b5e73ddd4965cc6d71030a1a2f2a60a8d8997bb3db4bfa0046752098e0f4663"} Jan 26 13:31:44 crc kubenswrapper[4881]: I0126 13:31:44.083293 4881 scope.go:117] "RemoveContainer" containerID="742ede60393d7ca8a4393cea6341038ffa8eebf9a1b4a11fff9f1b373a984190" Jan 26 13:31:44 crc kubenswrapper[4881]: E0126 13:31:44.083739 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" Jan 26 13:31:44 crc kubenswrapper[4881]: I0126 13:31:44.914098 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ll6dn" event={"ID":"5f40589a-4328-4dbb-8b83-9a1e34b51e39","Type":"ContainerStarted","Data":"b7ad4ad5f3142932d3c8af7fe813e70a8a1c44e6859701e1da94b3d46b88f322"} Jan 26 13:31:45 crc kubenswrapper[4881]: I0126 13:31:45.931947 4881 generic.go:334] "Generic (PLEG): container finished" podID="5f40589a-4328-4dbb-8b83-9a1e34b51e39" containerID="b7ad4ad5f3142932d3c8af7fe813e70a8a1c44e6859701e1da94b3d46b88f322" exitCode=0 Jan 26 13:31:45 crc kubenswrapper[4881]: I0126 13:31:45.932092 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ll6dn" 
event={"ID":"5f40589a-4328-4dbb-8b83-9a1e34b51e39","Type":"ContainerDied","Data":"b7ad4ad5f3142932d3c8af7fe813e70a8a1c44e6859701e1da94b3d46b88f322"} Jan 26 13:31:46 crc kubenswrapper[4881]: I0126 13:31:46.943759 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ll6dn" event={"ID":"5f40589a-4328-4dbb-8b83-9a1e34b51e39","Type":"ContainerStarted","Data":"b2ca62233829285d9854bd4a2b46e41c4434191ea2ae131e6bb9059a97c81af7"} Jan 26 13:31:46 crc kubenswrapper[4881]: I0126 13:31:46.959900 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-ll6dn" podStartSLOduration=3.529527633 podStartE2EDuration="5.959880152s" podCreationTimestamp="2026-01-26 13:31:41 +0000 UTC" firstStartedPulling="2026-01-26 13:31:43.90467801 +0000 UTC m=+3376.383988036" lastFinishedPulling="2026-01-26 13:31:46.335030529 +0000 UTC m=+3378.814340555" observedRunningTime="2026-01-26 13:31:46.958986571 +0000 UTC m=+3379.438296607" watchObservedRunningTime="2026-01-26 13:31:46.959880152 +0000 UTC m=+3379.439190178" Jan 26 13:31:52 crc kubenswrapper[4881]: I0126 13:31:52.115043 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-ll6dn" Jan 26 13:31:52 crc kubenswrapper[4881]: I0126 13:31:52.115573 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-ll6dn" Jan 26 13:31:52 crc kubenswrapper[4881]: I0126 13:31:52.163103 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-ll6dn" Jan 26 13:31:53 crc kubenswrapper[4881]: I0126 13:31:53.062505 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-ll6dn" Jan 26 13:31:55 crc kubenswrapper[4881]: I0126 13:31:55.083109 4881 scope.go:117] "RemoveContainer" containerID="742ede60393d7ca8a4393cea6341038ffa8eebf9a1b4a11fff9f1b373a984190" Jan 26 13:31:55 crc kubenswrapper[4881]: E0126 13:31:55.083398 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" Jan 26 13:31:55 crc kubenswrapper[4881]: I0126 13:31:55.603922 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ll6dn"] Jan 26 13:31:55 crc kubenswrapper[4881]: I0126 13:31:55.604385 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-ll6dn" podUID="5f40589a-4328-4dbb-8b83-9a1e34b51e39" containerName="registry-server" containerID="cri-o://b2ca62233829285d9854bd4a2b46e41c4434191ea2ae131e6bb9059a97c81af7" gracePeriod=2 Jan 26 13:31:56 crc kubenswrapper[4881]: I0126 13:31:56.063558 4881 generic.go:334] "Generic (PLEG): container finished" podID="5f40589a-4328-4dbb-8b83-9a1e34b51e39" containerID="b2ca62233829285d9854bd4a2b46e41c4434191ea2ae131e6bb9059a97c81af7" exitCode=0 Jan 26 13:31:56 crc kubenswrapper[4881]: I0126 13:31:56.063886 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ll6dn" 
event={"ID":"5f40589a-4328-4dbb-8b83-9a1e34b51e39","Type":"ContainerDied","Data":"b2ca62233829285d9854bd4a2b46e41c4434191ea2ae131e6bb9059a97c81af7"} Jan 26 13:31:56 crc kubenswrapper[4881]: I0126 13:31:56.169917 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ll6dn" Jan 26 13:31:56 crc kubenswrapper[4881]: I0126 13:31:56.329574 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f40589a-4328-4dbb-8b83-9a1e34b51e39-catalog-content\") pod \"5f40589a-4328-4dbb-8b83-9a1e34b51e39\" (UID: \"5f40589a-4328-4dbb-8b83-9a1e34b51e39\") " Jan 26 13:31:56 crc kubenswrapper[4881]: I0126 13:31:56.329648 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f40589a-4328-4dbb-8b83-9a1e34b51e39-utilities\") pod \"5f40589a-4328-4dbb-8b83-9a1e34b51e39\" (UID: \"5f40589a-4328-4dbb-8b83-9a1e34b51e39\") " Jan 26 13:31:56 crc kubenswrapper[4881]: I0126 13:31:56.329840 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4sqlg\" (UniqueName: \"kubernetes.io/projected/5f40589a-4328-4dbb-8b83-9a1e34b51e39-kube-api-access-4sqlg\") pod \"5f40589a-4328-4dbb-8b83-9a1e34b51e39\" (UID: \"5f40589a-4328-4dbb-8b83-9a1e34b51e39\") " Jan 26 13:31:56 crc kubenswrapper[4881]: I0126 13:31:56.331382 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f40589a-4328-4dbb-8b83-9a1e34b51e39-utilities" (OuterVolumeSpecName: "utilities") pod "5f40589a-4328-4dbb-8b83-9a1e34b51e39" (UID: "5f40589a-4328-4dbb-8b83-9a1e34b51e39"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:31:56 crc kubenswrapper[4881]: I0126 13:31:56.342225 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f40589a-4328-4dbb-8b83-9a1e34b51e39-kube-api-access-4sqlg" (OuterVolumeSpecName: "kube-api-access-4sqlg") pod "5f40589a-4328-4dbb-8b83-9a1e34b51e39" (UID: "5f40589a-4328-4dbb-8b83-9a1e34b51e39"). InnerVolumeSpecName "kube-api-access-4sqlg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:31:56 crc kubenswrapper[4881]: I0126 13:31:56.389998 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f40589a-4328-4dbb-8b83-9a1e34b51e39-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5f40589a-4328-4dbb-8b83-9a1e34b51e39" (UID: "5f40589a-4328-4dbb-8b83-9a1e34b51e39"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:31:56 crc kubenswrapper[4881]: I0126 13:31:56.432367 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4sqlg\" (UniqueName: \"kubernetes.io/projected/5f40589a-4328-4dbb-8b83-9a1e34b51e39-kube-api-access-4sqlg\") on node \"crc\" DevicePath \"\"" Jan 26 13:31:56 crc kubenswrapper[4881]: I0126 13:31:56.432398 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f40589a-4328-4dbb-8b83-9a1e34b51e39-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 13:31:56 crc kubenswrapper[4881]: I0126 13:31:56.432412 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f40589a-4328-4dbb-8b83-9a1e34b51e39-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 13:31:57 crc kubenswrapper[4881]: I0126 13:31:57.077019 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ll6dn" event={"ID":"5f40589a-4328-4dbb-8b83-9a1e34b51e39","Type":"ContainerDied","Data":"1bb7a8eae36b34c45a1f4bc77551dc951e01d7c2a5ee54edca8effaff2f093fa"} Jan 26 13:31:57 crc kubenswrapper[4881]: I0126 13:31:57.077086 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ll6dn" Jan 26 13:31:57 crc kubenswrapper[4881]: I0126 13:31:57.077416 4881 scope.go:117] "RemoveContainer" containerID="b2ca62233829285d9854bd4a2b46e41c4434191ea2ae131e6bb9059a97c81af7" Jan 26 13:31:57 crc kubenswrapper[4881]: I0126 13:31:57.105258 4881 scope.go:117] "RemoveContainer" containerID="b7ad4ad5f3142932d3c8af7fe813e70a8a1c44e6859701e1da94b3d46b88f322" Jan 26 13:31:57 crc kubenswrapper[4881]: I0126 13:31:57.129357 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ll6dn"] Jan 26 13:31:57 crc kubenswrapper[4881]: I0126 13:31:57.143753 4881 scope.go:117] "RemoveContainer" containerID="7b5e73ddd4965cc6d71030a1a2f2a60a8d8997bb3db4bfa0046752098e0f4663" Jan 26 13:31:57 crc kubenswrapper[4881]: I0126 13:31:57.147848 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-ll6dn"] Jan 26 13:31:58 crc kubenswrapper[4881]: I0126 13:31:58.098095 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f40589a-4328-4dbb-8b83-9a1e34b51e39" path="/var/lib/kubelet/pods/5f40589a-4328-4dbb-8b83-9a1e34b51e39/volumes" Jan 26 13:32:08 crc kubenswrapper[4881]: I0126 13:32:08.101797 4881 scope.go:117] "RemoveContainer" containerID="742ede60393d7ca8a4393cea6341038ffa8eebf9a1b4a11fff9f1b373a984190" Jan 26 13:32:08 crc kubenswrapper[4881]: E0126 13:32:08.103175 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" Jan 26 13:32:21 crc kubenswrapper[4881]: I0126 13:32:21.082760 4881 scope.go:117] "RemoveContainer" containerID="742ede60393d7ca8a4393cea6341038ffa8eebf9a1b4a11fff9f1b373a984190" Jan 26 13:32:21 crc kubenswrapper[4881]: E0126 13:32:21.084710 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
Jan 26 13:32:33 crc kubenswrapper[4881]: I0126 13:32:33.082903 4881 scope.go:117] "RemoveContainer" containerID="742ede60393d7ca8a4393cea6341038ffa8eebf9a1b4a11fff9f1b373a984190"
Jan 26 13:32:33 crc kubenswrapper[4881]: E0126 13:32:33.083732 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19"
Jan 26 13:32:47 crc kubenswrapper[4881]: I0126 13:32:47.083284 4881 scope.go:117] "RemoveContainer" containerID="742ede60393d7ca8a4393cea6341038ffa8eebf9a1b4a11fff9f1b373a984190"
Jan 26 13:32:47 crc kubenswrapper[4881]: E0126 13:32:47.084332 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19"
Jan 26 13:32:58 crc kubenswrapper[4881]: I0126 13:32:58.082695 4881 scope.go:117] "RemoveContainer" containerID="742ede60393d7ca8a4393cea6341038ffa8eebf9a1b4a11fff9f1b373a984190"
Jan 26 13:32:59 crc kubenswrapper[4881]: I0126 13:32:59.014702 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" event={"ID":"ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19","Type":"ContainerStarted","Data":"c9e1d204d627a7aa343969c0d6bfc304bec28661675d10fb334138d7faa7131a"}
Jan 26 13:35:24 crc kubenswrapper[4881]: I0126 13:35:24.789494 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fwlbz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 13:35:24 crc kubenswrapper[4881]: I0126 13:35:24.790143 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 13:35:54 crc kubenswrapper[4881]: I0126 13:35:54.789685 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fwlbz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 13:35:54 crc kubenswrapper[4881]: I0126 13:35:54.790288 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 13:36:24 crc kubenswrapper[4881]: I0126 13:36:24.789973 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fwlbz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 13:36:24 crc kubenswrapper[4881]: I0126 13:36:24.790698 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 13:36:24 crc kubenswrapper[4881]: I0126 13:36:24.790762 4881 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" Jan 26 13:36:24 crc kubenswrapper[4881]: I0126 13:36:24.791851 4881 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c9e1d204d627a7aa343969c0d6bfc304bec28661675d10fb334138d7faa7131a"} pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 13:36:24 crc kubenswrapper[4881]: I0126 13:36:24.791933 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" containerName="machine-config-daemon" containerID="cri-o://c9e1d204d627a7aa343969c0d6bfc304bec28661675d10fb334138d7faa7131a" gracePeriod=600 Jan 26 13:36:25 crc kubenswrapper[4881]: I0126 13:36:25.236639 4881 generic.go:334] "Generic (PLEG): container finished" podID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" containerID="c9e1d204d627a7aa343969c0d6bfc304bec28661675d10fb334138d7faa7131a" exitCode=0 Jan 26 13:36:25 crc kubenswrapper[4881]: I0126 13:36:25.236694 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" event={"ID":"ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19","Type":"ContainerDied","Data":"c9e1d204d627a7aa343969c0d6bfc304bec28661675d10fb334138d7faa7131a"} Jan 26 13:36:25 crc kubenswrapper[4881]: I0126 13:36:25.237219 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" event={"ID":"ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19","Type":"ContainerStarted","Data":"f2d5a02d29708f3d9e2c7109a72699fcc0b90eeb26f42ca424e6402bc02e962c"} Jan 26 13:36:25 crc kubenswrapper[4881]: I0126 13:36:25.237241 4881 scope.go:117] "RemoveContainer" containerID="742ede60393d7ca8a4393cea6341038ffa8eebf9a1b4a11fff9f1b373a984190" Jan 26 13:38:54 crc kubenswrapper[4881]: I0126 13:38:54.789794 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fwlbz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 13:38:54 crc kubenswrapper[4881]: I0126 13:38:54.790650 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" 
podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 13:39:13 crc kubenswrapper[4881]: I0126 13:39:13.750336 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-7v9v2"] Jan 26 13:39:13 crc kubenswrapper[4881]: E0126 13:39:13.752011 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f40589a-4328-4dbb-8b83-9a1e34b51e39" containerName="extract-utilities" Jan 26 13:39:13 crc kubenswrapper[4881]: I0126 13:39:13.752048 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f40589a-4328-4dbb-8b83-9a1e34b51e39" containerName="extract-utilities" Jan 26 13:39:13 crc kubenswrapper[4881]: E0126 13:39:13.752073 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f40589a-4328-4dbb-8b83-9a1e34b51e39" containerName="registry-server" Jan 26 13:39:13 crc kubenswrapper[4881]: I0126 13:39:13.752090 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f40589a-4328-4dbb-8b83-9a1e34b51e39" containerName="registry-server" Jan 26 13:39:13 crc kubenswrapper[4881]: E0126 13:39:13.752163 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f40589a-4328-4dbb-8b83-9a1e34b51e39" containerName="extract-content" Jan 26 13:39:13 crc kubenswrapper[4881]: I0126 13:39:13.752181 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f40589a-4328-4dbb-8b83-9a1e34b51e39" containerName="extract-content" Jan 26 13:39:13 crc kubenswrapper[4881]: I0126 13:39:13.752705 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f40589a-4328-4dbb-8b83-9a1e34b51e39" containerName="registry-server" Jan 26 13:39:13 crc kubenswrapper[4881]: I0126 13:39:13.759427 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7v9v2" Jan 26 13:39:13 crc kubenswrapper[4881]: I0126 13:39:13.761788 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7v9v2"] Jan 26 13:39:13 crc kubenswrapper[4881]: I0126 13:39:13.872011 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nd5ww\" (UniqueName: \"kubernetes.io/projected/3d66b211-ef99-4e0e-b0a4-62edec791446-kube-api-access-nd5ww\") pod \"redhat-operators-7v9v2\" (UID: \"3d66b211-ef99-4e0e-b0a4-62edec791446\") " pod="openshift-marketplace/redhat-operators-7v9v2" Jan 26 13:39:13 crc kubenswrapper[4881]: I0126 13:39:13.872202 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d66b211-ef99-4e0e-b0a4-62edec791446-utilities\") pod \"redhat-operators-7v9v2\" (UID: \"3d66b211-ef99-4e0e-b0a4-62edec791446\") " pod="openshift-marketplace/redhat-operators-7v9v2" Jan 26 13:39:13 crc kubenswrapper[4881]: I0126 13:39:13.872277 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d66b211-ef99-4e0e-b0a4-62edec791446-catalog-content\") pod \"redhat-operators-7v9v2\" (UID: \"3d66b211-ef99-4e0e-b0a4-62edec791446\") " pod="openshift-marketplace/redhat-operators-7v9v2" Jan 26 13:39:13 crc kubenswrapper[4881]: I0126 13:39:13.974550 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d66b211-ef99-4e0e-b0a4-62edec791446-catalog-content\") pod \"redhat-operators-7v9v2\" (UID: \"3d66b211-ef99-4e0e-b0a4-62edec791446\") " pod="openshift-marketplace/redhat-operators-7v9v2" Jan 26 13:39:13 crc kubenswrapper[4881]: I0126 13:39:13.974755 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nd5ww\" (UniqueName: \"kubernetes.io/projected/3d66b211-ef99-4e0e-b0a4-62edec791446-kube-api-access-nd5ww\") pod \"redhat-operators-7v9v2\" (UID: \"3d66b211-ef99-4e0e-b0a4-62edec791446\") " pod="openshift-marketplace/redhat-operators-7v9v2" Jan 26 13:39:13 crc kubenswrapper[4881]: I0126 13:39:13.974905 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d66b211-ef99-4e0e-b0a4-62edec791446-utilities\") pod \"redhat-operators-7v9v2\" (UID: \"3d66b211-ef99-4e0e-b0a4-62edec791446\") " pod="openshift-marketplace/redhat-operators-7v9v2" Jan 26 13:39:13 crc kubenswrapper[4881]: I0126 13:39:13.975336 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d66b211-ef99-4e0e-b0a4-62edec791446-catalog-content\") pod \"redhat-operators-7v9v2\" (UID: \"3d66b211-ef99-4e0e-b0a4-62edec791446\") " pod="openshift-marketplace/redhat-operators-7v9v2" Jan 26 13:39:13 crc kubenswrapper[4881]: I0126 13:39:13.975370 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d66b211-ef99-4e0e-b0a4-62edec791446-utilities\") pod \"redhat-operators-7v9v2\" (UID: \"3d66b211-ef99-4e0e-b0a4-62edec791446\") " pod="openshift-marketplace/redhat-operators-7v9v2" Jan 26 13:39:14 crc kubenswrapper[4881]: I0126 13:39:14.113277 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-nd5ww\" (UniqueName: \"kubernetes.io/projected/3d66b211-ef99-4e0e-b0a4-62edec791446-kube-api-access-nd5ww\") pod \"redhat-operators-7v9v2\" (UID: \"3d66b211-ef99-4e0e-b0a4-62edec791446\") " pod="openshift-marketplace/redhat-operators-7v9v2" Jan 26 13:39:14 crc kubenswrapper[4881]: I0126 13:39:14.398103 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7v9v2" Jan 26 13:39:14 crc kubenswrapper[4881]: I0126 13:39:14.929478 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7v9v2"] Jan 26 13:39:15 crc kubenswrapper[4881]: I0126 13:39:15.111041 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7v9v2" event={"ID":"3d66b211-ef99-4e0e-b0a4-62edec791446","Type":"ContainerStarted","Data":"f9d047cda963f05b5b1409925ee2cde4a44c08a84e10a89859b0662b9c71abf9"} Jan 26 13:39:16 crc kubenswrapper[4881]: I0126 13:39:16.128938 4881 generic.go:334] "Generic (PLEG): container finished" podID="3d66b211-ef99-4e0e-b0a4-62edec791446" containerID="7064be5d5ac9280e9332ee6fac31318a8e46f995737b1355e6e4ca838c4cc2fe" exitCode=0 Jan 26 13:39:16 crc kubenswrapper[4881]: I0126 13:39:16.129286 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7v9v2" event={"ID":"3d66b211-ef99-4e0e-b0a4-62edec791446","Type":"ContainerDied","Data":"7064be5d5ac9280e9332ee6fac31318a8e46f995737b1355e6e4ca838c4cc2fe"} Jan 26 13:39:16 crc kubenswrapper[4881]: I0126 13:39:16.131737 4881 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 13:39:17 crc kubenswrapper[4881]: I0126 13:39:17.139935 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7v9v2" event={"ID":"3d66b211-ef99-4e0e-b0a4-62edec791446","Type":"ContainerStarted","Data":"bdd9748b7cd409c64a17f08a94e98535d5848270d6660bece0b4455e5ac0f2c8"} Jan 26 13:39:20 crc kubenswrapper[4881]: I0126 13:39:20.172484 4881 generic.go:334] "Generic (PLEG): container finished" podID="3d66b211-ef99-4e0e-b0a4-62edec791446" containerID="bdd9748b7cd409c64a17f08a94e98535d5848270d6660bece0b4455e5ac0f2c8" exitCode=0 Jan 26 13:39:20 crc kubenswrapper[4881]: I0126 13:39:20.172584 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7v9v2" event={"ID":"3d66b211-ef99-4e0e-b0a4-62edec791446","Type":"ContainerDied","Data":"bdd9748b7cd409c64a17f08a94e98535d5848270d6660bece0b4455e5ac0f2c8"} Jan 26 13:39:21 crc kubenswrapper[4881]: I0126 13:39:21.184758 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7v9v2" event={"ID":"3d66b211-ef99-4e0e-b0a4-62edec791446","Type":"ContainerStarted","Data":"e4b8c38ed21e0260c6f8dc6aa2c280e124627801de7a472474aedd955fff345b"} Jan 26 13:39:22 crc kubenswrapper[4881]: I0126 13:39:22.223136 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-7v9v2" podStartSLOduration=4.470530603 podStartE2EDuration="9.223113036s" podCreationTimestamp="2026-01-26 13:39:13 +0000 UTC" firstStartedPulling="2026-01-26 13:39:16.131264095 +0000 UTC m=+3828.610574141" lastFinishedPulling="2026-01-26 13:39:20.883846508 +0000 UTC m=+3833.363156574" observedRunningTime="2026-01-26 13:39:22.215503388 +0000 UTC m=+3834.694813414" watchObservedRunningTime="2026-01-26 13:39:22.223113036 +0000 UTC m=+3834.702423062" Jan 26 13:39:24 crc 
Jan 26 13:39:24 crc kubenswrapper[4881]: I0126 13:39:24.399182 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-7v9v2"
Jan 26 13:39:24 crc kubenswrapper[4881]: I0126 13:39:24.400597 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-7v9v2"
Jan 26 13:39:24 crc kubenswrapper[4881]: I0126 13:39:24.789252 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fwlbz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 13:39:24 crc kubenswrapper[4881]: I0126 13:39:24.789618 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 13:39:25 crc kubenswrapper[4881]: I0126 13:39:25.457031 4881 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-7v9v2" podUID="3d66b211-ef99-4e0e-b0a4-62edec791446" containerName="registry-server" probeResult="failure" output=<
Jan 26 13:39:25 crc kubenswrapper[4881]: timeout: failed to connect service ":50051" within 1s
Jan 26 13:39:25 crc kubenswrapper[4881]: >
Jan 26 13:39:34 crc kubenswrapper[4881]: I0126 13:39:34.457237 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-7v9v2"
Jan 26 13:39:34 crc kubenswrapper[4881]: I0126 13:39:34.523681 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-7v9v2"
Jan 26 13:39:34 crc kubenswrapper[4881]: I0126 13:39:34.696894 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7v9v2"]
Jan 26 13:39:36 crc kubenswrapper[4881]: I0126 13:39:36.356630 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-7v9v2" podUID="3d66b211-ef99-4e0e-b0a4-62edec791446" containerName="registry-server" containerID="cri-o://e4b8c38ed21e0260c6f8dc6aa2c280e124627801de7a472474aedd955fff345b" gracePeriod=2
Jan 26 13:39:37 crc kubenswrapper[4881]: I0126 13:39:37.023454 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7v9v2"
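Unlike the single-line HTTP liveness failures, this startup-probe failure carries a multi-line output=< … > block: a one-second connect timeout against :50051, the catalog registry's gRPC port. The probe state machine itself is easiest to read from the SyncLoop (probe) records, where startup goes unhealthy → started and readiness goes "" → ready within ten seconds. A sketch that replays those transitions (regex keyed to the records above; naming illustrative):

    import re

    PROBE = re.compile(
        r'"SyncLoop \(probe\)" probe="(?P<probe>\w+)" '
        r'status="(?P<status>[^"]*)" pod="(?P<pod>\S+)"')

    def probe_transitions(lines):
        """Record each change of probe status per (pod, probe)."""
        state, changes = {}, []
        for line in lines:
            if m := PROBE.search(line):
                key = (m.group('pod'), m.group('probe'))
                if state.get(key) != m.group('status'):
                    state[key] = m.group('status')
                    changes.append((key, m.group('status') or '<empty>'))
        return changes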
Need to start a new one" pod="openshift-marketplace/redhat-operators-7v9v2" Jan 26 13:39:37 crc kubenswrapper[4881]: I0126 13:39:37.212059 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d66b211-ef99-4e0e-b0a4-62edec791446-utilities\") pod \"3d66b211-ef99-4e0e-b0a4-62edec791446\" (UID: \"3d66b211-ef99-4e0e-b0a4-62edec791446\") " Jan 26 13:39:37 crc kubenswrapper[4881]: I0126 13:39:37.212331 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d66b211-ef99-4e0e-b0a4-62edec791446-catalog-content\") pod \"3d66b211-ef99-4e0e-b0a4-62edec791446\" (UID: \"3d66b211-ef99-4e0e-b0a4-62edec791446\") " Jan 26 13:39:37 crc kubenswrapper[4881]: I0126 13:39:37.212375 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nd5ww\" (UniqueName: \"kubernetes.io/projected/3d66b211-ef99-4e0e-b0a4-62edec791446-kube-api-access-nd5ww\") pod \"3d66b211-ef99-4e0e-b0a4-62edec791446\" (UID: \"3d66b211-ef99-4e0e-b0a4-62edec791446\") " Jan 26 13:39:37 crc kubenswrapper[4881]: I0126 13:39:37.215725 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d66b211-ef99-4e0e-b0a4-62edec791446-utilities" (OuterVolumeSpecName: "utilities") pod "3d66b211-ef99-4e0e-b0a4-62edec791446" (UID: "3d66b211-ef99-4e0e-b0a4-62edec791446"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:39:37 crc kubenswrapper[4881]: I0126 13:39:37.259775 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d66b211-ef99-4e0e-b0a4-62edec791446-kube-api-access-nd5ww" (OuterVolumeSpecName: "kube-api-access-nd5ww") pod "3d66b211-ef99-4e0e-b0a4-62edec791446" (UID: "3d66b211-ef99-4e0e-b0a4-62edec791446"). InnerVolumeSpecName "kube-api-access-nd5ww". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:39:37 crc kubenswrapper[4881]: I0126 13:39:37.315328 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nd5ww\" (UniqueName: \"kubernetes.io/projected/3d66b211-ef99-4e0e-b0a4-62edec791446-kube-api-access-nd5ww\") on node \"crc\" DevicePath \"\"" Jan 26 13:39:37 crc kubenswrapper[4881]: I0126 13:39:37.315576 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d66b211-ef99-4e0e-b0a4-62edec791446-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 13:39:37 crc kubenswrapper[4881]: I0126 13:39:37.366560 4881 generic.go:334] "Generic (PLEG): container finished" podID="3d66b211-ef99-4e0e-b0a4-62edec791446" containerID="e4b8c38ed21e0260c6f8dc6aa2c280e124627801de7a472474aedd955fff345b" exitCode=0 Jan 26 13:39:37 crc kubenswrapper[4881]: I0126 13:39:37.367893 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7v9v2" event={"ID":"3d66b211-ef99-4e0e-b0a4-62edec791446","Type":"ContainerDied","Data":"e4b8c38ed21e0260c6f8dc6aa2c280e124627801de7a472474aedd955fff345b"} Jan 26 13:39:37 crc kubenswrapper[4881]: I0126 13:39:37.368331 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7v9v2" event={"ID":"3d66b211-ef99-4e0e-b0a4-62edec791446","Type":"ContainerDied","Data":"f9d047cda963f05b5b1409925ee2cde4a44c08a84e10a89859b0662b9c71abf9"} Jan 26 13:39:37 crc kubenswrapper[4881]: I0126 13:39:37.367985 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7v9v2" Jan 26 13:39:37 crc kubenswrapper[4881]: I0126 13:39:37.368391 4881 scope.go:117] "RemoveContainer" containerID="e4b8c38ed21e0260c6f8dc6aa2c280e124627801de7a472474aedd955fff345b" Jan 26 13:39:37 crc kubenswrapper[4881]: I0126 13:39:37.388495 4881 scope.go:117] "RemoveContainer" containerID="bdd9748b7cd409c64a17f08a94e98535d5848270d6660bece0b4455e5ac0f2c8" Jan 26 13:39:37 crc kubenswrapper[4881]: I0126 13:39:37.396873 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d66b211-ef99-4e0e-b0a4-62edec791446-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3d66b211-ef99-4e0e-b0a4-62edec791446" (UID: "3d66b211-ef99-4e0e-b0a4-62edec791446"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:39:37 crc kubenswrapper[4881]: I0126 13:39:37.417906 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d66b211-ef99-4e0e-b0a4-62edec791446-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 13:39:37 crc kubenswrapper[4881]: I0126 13:39:37.419998 4881 scope.go:117] "RemoveContainer" containerID="7064be5d5ac9280e9332ee6fac31318a8e46f995737b1355e6e4ca838c4cc2fe" Jan 26 13:39:37 crc kubenswrapper[4881]: I0126 13:39:37.469050 4881 scope.go:117] "RemoveContainer" containerID="e4b8c38ed21e0260c6f8dc6aa2c280e124627801de7a472474aedd955fff345b" Jan 26 13:39:37 crc kubenswrapper[4881]: E0126 13:39:37.469940 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e4b8c38ed21e0260c6f8dc6aa2c280e124627801de7a472474aedd955fff345b\": container with ID starting with e4b8c38ed21e0260c6f8dc6aa2c280e124627801de7a472474aedd955fff345b not found: ID does not exist" containerID="e4b8c38ed21e0260c6f8dc6aa2c280e124627801de7a472474aedd955fff345b" Jan 26 13:39:37 crc kubenswrapper[4881]: I0126 13:39:37.470110 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4b8c38ed21e0260c6f8dc6aa2c280e124627801de7a472474aedd955fff345b"} err="failed to get container status \"e4b8c38ed21e0260c6f8dc6aa2c280e124627801de7a472474aedd955fff345b\": rpc error: code = NotFound desc = could not find container \"e4b8c38ed21e0260c6f8dc6aa2c280e124627801de7a472474aedd955fff345b\": container with ID starting with e4b8c38ed21e0260c6f8dc6aa2c280e124627801de7a472474aedd955fff345b not found: ID does not exist" Jan 26 13:39:37 crc kubenswrapper[4881]: I0126 13:39:37.470214 4881 scope.go:117] "RemoveContainer" containerID="bdd9748b7cd409c64a17f08a94e98535d5848270d6660bece0b4455e5ac0f2c8" Jan 26 13:39:37 crc kubenswrapper[4881]: E0126 13:39:37.471009 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bdd9748b7cd409c64a17f08a94e98535d5848270d6660bece0b4455e5ac0f2c8\": container with ID starting with bdd9748b7cd409c64a17f08a94e98535d5848270d6660bece0b4455e5ac0f2c8 not found: ID does not exist" containerID="bdd9748b7cd409c64a17f08a94e98535d5848270d6660bece0b4455e5ac0f2c8" Jan 26 13:39:37 crc kubenswrapper[4881]: I0126 13:39:37.471080 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bdd9748b7cd409c64a17f08a94e98535d5848270d6660bece0b4455e5ac0f2c8"} err="failed to get container status \"bdd9748b7cd409c64a17f08a94e98535d5848270d6660bece0b4455e5ac0f2c8\": rpc error: code = NotFound desc = could not find container \"bdd9748b7cd409c64a17f08a94e98535d5848270d6660bece0b4455e5ac0f2c8\": container with ID starting with bdd9748b7cd409c64a17f08a94e98535d5848270d6660bece0b4455e5ac0f2c8 not found: ID does not exist" Jan 26 13:39:37 crc kubenswrapper[4881]: I0126 13:39:37.471137 4881 scope.go:117] "RemoveContainer" containerID="7064be5d5ac9280e9332ee6fac31318a8e46f995737b1355e6e4ca838c4cc2fe" Jan 26 13:39:37 crc kubenswrapper[4881]: E0126 13:39:37.471654 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7064be5d5ac9280e9332ee6fac31318a8e46f995737b1355e6e4ca838c4cc2fe\": container with ID starting with 7064be5d5ac9280e9332ee6fac31318a8e46f995737b1355e6e4ca838c4cc2fe not found: ID does not exist" 
containerID="7064be5d5ac9280e9332ee6fac31318a8e46f995737b1355e6e4ca838c4cc2fe" Jan 26 13:39:37 crc kubenswrapper[4881]: I0126 13:39:37.471758 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7064be5d5ac9280e9332ee6fac31318a8e46f995737b1355e6e4ca838c4cc2fe"} err="failed to get container status \"7064be5d5ac9280e9332ee6fac31318a8e46f995737b1355e6e4ca838c4cc2fe\": rpc error: code = NotFound desc = could not find container \"7064be5d5ac9280e9332ee6fac31318a8e46f995737b1355e6e4ca838c4cc2fe\": container with ID starting with 7064be5d5ac9280e9332ee6fac31318a8e46f995737b1355e6e4ca838c4cc2fe not found: ID does not exist" Jan 26 13:39:37 crc kubenswrapper[4881]: I0126 13:39:37.711930 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7v9v2"] Jan 26 13:39:37 crc kubenswrapper[4881]: I0126 13:39:37.719809 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-7v9v2"] Jan 26 13:39:38 crc kubenswrapper[4881]: I0126 13:39:38.103062 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d66b211-ef99-4e0e-b0a4-62edec791446" path="/var/lib/kubelet/pods/3d66b211-ef99-4e0e-b0a4-62edec791446/volumes" Jan 26 13:39:54 crc kubenswrapper[4881]: I0126 13:39:54.789614 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fwlbz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 13:39:54 crc kubenswrapper[4881]: I0126 13:39:54.790207 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 13:39:54 crc kubenswrapper[4881]: I0126 13:39:54.790260 4881 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" Jan 26 13:39:54 crc kubenswrapper[4881]: I0126 13:39:54.791155 4881 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f2d5a02d29708f3d9e2c7109a72699fcc0b90eeb26f42ca424e6402bc02e962c"} pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 13:39:54 crc kubenswrapper[4881]: I0126 13:39:54.791223 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" containerName="machine-config-daemon" containerID="cri-o://f2d5a02d29708f3d9e2c7109a72699fcc0b90eeb26f42ca424e6402bc02e962c" gracePeriod=600 Jan 26 13:39:54 crc kubenswrapper[4881]: E0126 13:39:54.964766 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" Jan 26 13:39:55 
Jan 26 13:39:55 crc kubenswrapper[4881]: I0126 13:39:55.575786 4881 generic.go:334] "Generic (PLEG): container finished" podID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" containerID="f2d5a02d29708f3d9e2c7109a72699fcc0b90eeb26f42ca424e6402bc02e962c" exitCode=0
Jan 26 13:39:55 crc kubenswrapper[4881]: I0126 13:39:55.575830 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" event={"ID":"ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19","Type":"ContainerDied","Data":"f2d5a02d29708f3d9e2c7109a72699fcc0b90eeb26f42ca424e6402bc02e962c"}
Jan 26 13:39:55 crc kubenswrapper[4881]: I0126 13:39:55.575862 4881 scope.go:117] "RemoveContainer" containerID="c9e1d204d627a7aa343969c0d6bfc304bec28661675d10fb334138d7faa7131a"
Jan 26 13:39:55 crc kubenswrapper[4881]: I0126 13:39:55.576508 4881 scope.go:117] "RemoveContainer" containerID="f2d5a02d29708f3d9e2c7109a72699fcc0b90eeb26f42ca424e6402bc02e962c"
Jan 26 13:39:55 crc kubenswrapper[4881]: E0126 13:39:55.576959 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19"
Jan 26 13:40:07 crc kubenswrapper[4881]: I0126 13:40:07.083952 4881 scope.go:117] "RemoveContainer" containerID="f2d5a02d29708f3d9e2c7109a72699fcc0b90eeb26f42ca424e6402bc02e962c"
Jan 26 13:40:07 crc kubenswrapper[4881]: E0126 13:40:07.085361 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19"
Jan 26 13:40:13 crc kubenswrapper[4881]: I0126 13:40:13.059356 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-k47fh"]
Jan 26 13:40:13 crc kubenswrapper[4881]: E0126 13:40:13.061228 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d66b211-ef99-4e0e-b0a4-62edec791446" containerName="extract-content"
Jan 26 13:40:13 crc kubenswrapper[4881]: I0126 13:40:13.061259 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d66b211-ef99-4e0e-b0a4-62edec791446" containerName="extract-content"
Jan 26 13:40:13 crc kubenswrapper[4881]: E0126 13:40:13.061294 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d66b211-ef99-4e0e-b0a4-62edec791446" containerName="registry-server"
Jan 26 13:40:13 crc kubenswrapper[4881]: I0126 13:40:13.061312 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d66b211-ef99-4e0e-b0a4-62edec791446" containerName="registry-server"
Jan 26 13:40:13 crc kubenswrapper[4881]: E0126 13:40:13.061345 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d66b211-ef99-4e0e-b0a4-62edec791446" containerName="extract-utilities"
Jan 26 13:40:13 crc kubenswrapper[4881]: I0126 13:40:13.061363 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d66b211-ef99-4e0e-b0a4-62edec791446" containerName="extract-utilities"
Jan 26 13:40:13 crc kubenswrapper[4881]: I0126 13:40:13.061841 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d66b211-ef99-4e0e-b0a4-62edec791446" containerName="registry-server"
"RemoveStaleState removing state" podUID="3d66b211-ef99-4e0e-b0a4-62edec791446" containerName="registry-server" Jan 26 13:40:13 crc kubenswrapper[4881]: I0126 13:40:13.064950 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-k47fh" Jan 26 13:40:13 crc kubenswrapper[4881]: I0126 13:40:13.074770 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-k47fh"] Jan 26 13:40:13 crc kubenswrapper[4881]: I0126 13:40:13.110221 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnhbj\" (UniqueName: \"kubernetes.io/projected/f2ebdbef-8007-4d20-b1df-676b975b6583-kube-api-access-hnhbj\") pod \"community-operators-k47fh\" (UID: \"f2ebdbef-8007-4d20-b1df-676b975b6583\") " pod="openshift-marketplace/community-operators-k47fh" Jan 26 13:40:13 crc kubenswrapper[4881]: I0126 13:40:13.110393 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2ebdbef-8007-4d20-b1df-676b975b6583-catalog-content\") pod \"community-operators-k47fh\" (UID: \"f2ebdbef-8007-4d20-b1df-676b975b6583\") " pod="openshift-marketplace/community-operators-k47fh" Jan 26 13:40:13 crc kubenswrapper[4881]: I0126 13:40:13.110488 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2ebdbef-8007-4d20-b1df-676b975b6583-utilities\") pod \"community-operators-k47fh\" (UID: \"f2ebdbef-8007-4d20-b1df-676b975b6583\") " pod="openshift-marketplace/community-operators-k47fh" Jan 26 13:40:13 crc kubenswrapper[4881]: I0126 13:40:13.212719 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnhbj\" (UniqueName: \"kubernetes.io/projected/f2ebdbef-8007-4d20-b1df-676b975b6583-kube-api-access-hnhbj\") pod \"community-operators-k47fh\" (UID: \"f2ebdbef-8007-4d20-b1df-676b975b6583\") " pod="openshift-marketplace/community-operators-k47fh" Jan 26 13:40:13 crc kubenswrapper[4881]: I0126 13:40:13.212844 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2ebdbef-8007-4d20-b1df-676b975b6583-catalog-content\") pod \"community-operators-k47fh\" (UID: \"f2ebdbef-8007-4d20-b1df-676b975b6583\") " pod="openshift-marketplace/community-operators-k47fh" Jan 26 13:40:13 crc kubenswrapper[4881]: I0126 13:40:13.212923 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2ebdbef-8007-4d20-b1df-676b975b6583-utilities\") pod \"community-operators-k47fh\" (UID: \"f2ebdbef-8007-4d20-b1df-676b975b6583\") " pod="openshift-marketplace/community-operators-k47fh" Jan 26 13:40:13 crc kubenswrapper[4881]: I0126 13:40:13.213327 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2ebdbef-8007-4d20-b1df-676b975b6583-catalog-content\") pod \"community-operators-k47fh\" (UID: \"f2ebdbef-8007-4d20-b1df-676b975b6583\") " pod="openshift-marketplace/community-operators-k47fh" Jan 26 13:40:13 crc kubenswrapper[4881]: I0126 13:40:13.213731 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2ebdbef-8007-4d20-b1df-676b975b6583-utilities\") pod 
\"community-operators-k47fh\" (UID: \"f2ebdbef-8007-4d20-b1df-676b975b6583\") " pod="openshift-marketplace/community-operators-k47fh" Jan 26 13:40:13 crc kubenswrapper[4881]: I0126 13:40:13.235493 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hnhbj\" (UniqueName: \"kubernetes.io/projected/f2ebdbef-8007-4d20-b1df-676b975b6583-kube-api-access-hnhbj\") pod \"community-operators-k47fh\" (UID: \"f2ebdbef-8007-4d20-b1df-676b975b6583\") " pod="openshift-marketplace/community-operators-k47fh" Jan 26 13:40:13 crc kubenswrapper[4881]: I0126 13:40:13.386022 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-k47fh" Jan 26 13:40:13 crc kubenswrapper[4881]: I0126 13:40:13.918018 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-k47fh"] Jan 26 13:40:14 crc kubenswrapper[4881]: I0126 13:40:14.781191 4881 generic.go:334] "Generic (PLEG): container finished" podID="f2ebdbef-8007-4d20-b1df-676b975b6583" containerID="87a87decbff48655767c06c54a732ef39b6f32b56a6e74110827cc7de61ca254" exitCode=0 Jan 26 13:40:14 crc kubenswrapper[4881]: I0126 13:40:14.781255 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k47fh" event={"ID":"f2ebdbef-8007-4d20-b1df-676b975b6583","Type":"ContainerDied","Data":"87a87decbff48655767c06c54a732ef39b6f32b56a6e74110827cc7de61ca254"} Jan 26 13:40:14 crc kubenswrapper[4881]: I0126 13:40:14.781859 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k47fh" event={"ID":"f2ebdbef-8007-4d20-b1df-676b975b6583","Type":"ContainerStarted","Data":"9b7cfb9a0867bff00c0db9e5c304685fd051cbdbfdebe85d34d1458539a89c90"} Jan 26 13:40:16 crc kubenswrapper[4881]: I0126 13:40:16.805505 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k47fh" event={"ID":"f2ebdbef-8007-4d20-b1df-676b975b6583","Type":"ContainerStarted","Data":"8684eee56fbd96e5d2a2dcb2afef2a44e552f7ec25ee55170ff55436cc83571f"} Jan 26 13:40:17 crc kubenswrapper[4881]: I0126 13:40:17.819172 4881 generic.go:334] "Generic (PLEG): container finished" podID="f2ebdbef-8007-4d20-b1df-676b975b6583" containerID="8684eee56fbd96e5d2a2dcb2afef2a44e552f7ec25ee55170ff55436cc83571f" exitCode=0 Jan 26 13:40:17 crc kubenswrapper[4881]: I0126 13:40:17.819227 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k47fh" event={"ID":"f2ebdbef-8007-4d20-b1df-676b975b6583","Type":"ContainerDied","Data":"8684eee56fbd96e5d2a2dcb2afef2a44e552f7ec25ee55170ff55436cc83571f"} Jan 26 13:40:19 crc kubenswrapper[4881]: I0126 13:40:19.082457 4881 scope.go:117] "RemoveContainer" containerID="f2d5a02d29708f3d9e2c7109a72699fcc0b90eeb26f42ca424e6402bc02e962c" Jan 26 13:40:19 crc kubenswrapper[4881]: E0126 13:40:19.083170 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" Jan 26 13:40:19 crc kubenswrapper[4881]: I0126 13:40:19.849956 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k47fh" 
event={"ID":"f2ebdbef-8007-4d20-b1df-676b975b6583","Type":"ContainerStarted","Data":"5914a5575731fc8779aa564af38256b3dc50fea1536fcde86a4c7a4a8d847dae"} Jan 26 13:40:19 crc kubenswrapper[4881]: I0126 13:40:19.887007 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-k47fh" podStartSLOduration=2.896600272 podStartE2EDuration="6.886979611s" podCreationTimestamp="2026-01-26 13:40:13 +0000 UTC" firstStartedPulling="2026-01-26 13:40:14.784652544 +0000 UTC m=+3887.263962590" lastFinishedPulling="2026-01-26 13:40:18.775031903 +0000 UTC m=+3891.254341929" observedRunningTime="2026-01-26 13:40:19.876969106 +0000 UTC m=+3892.356279172" watchObservedRunningTime="2026-01-26 13:40:19.886979611 +0000 UTC m=+3892.366289677" Jan 26 13:40:23 crc kubenswrapper[4881]: I0126 13:40:23.387080 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-k47fh" Jan 26 13:40:23 crc kubenswrapper[4881]: I0126 13:40:23.387412 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-k47fh" Jan 26 13:40:23 crc kubenswrapper[4881]: I0126 13:40:23.486474 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-k47fh" Jan 26 13:40:23 crc kubenswrapper[4881]: I0126 13:40:23.949434 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-k47fh" Jan 26 13:40:24 crc kubenswrapper[4881]: I0126 13:40:24.014942 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-k47fh"] Jan 26 13:40:25 crc kubenswrapper[4881]: I0126 13:40:25.924995 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-k47fh" podUID="f2ebdbef-8007-4d20-b1df-676b975b6583" containerName="registry-server" containerID="cri-o://5914a5575731fc8779aa564af38256b3dc50fea1536fcde86a4c7a4a8d847dae" gracePeriod=2 Jan 26 13:40:26 crc kubenswrapper[4881]: I0126 13:40:26.493924 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-k47fh" Jan 26 13:40:26 crc kubenswrapper[4881]: I0126 13:40:26.545337 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2ebdbef-8007-4d20-b1df-676b975b6583-catalog-content\") pod \"f2ebdbef-8007-4d20-b1df-676b975b6583\" (UID: \"f2ebdbef-8007-4d20-b1df-676b975b6583\") " Jan 26 13:40:26 crc kubenswrapper[4881]: I0126 13:40:26.545508 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2ebdbef-8007-4d20-b1df-676b975b6583-utilities\") pod \"f2ebdbef-8007-4d20-b1df-676b975b6583\" (UID: \"f2ebdbef-8007-4d20-b1df-676b975b6583\") " Jan 26 13:40:26 crc kubenswrapper[4881]: I0126 13:40:26.545662 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hnhbj\" (UniqueName: \"kubernetes.io/projected/f2ebdbef-8007-4d20-b1df-676b975b6583-kube-api-access-hnhbj\") pod \"f2ebdbef-8007-4d20-b1df-676b975b6583\" (UID: \"f2ebdbef-8007-4d20-b1df-676b975b6583\") " Jan 26 13:40:26 crc kubenswrapper[4881]: I0126 13:40:26.547756 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f2ebdbef-8007-4d20-b1df-676b975b6583-utilities" (OuterVolumeSpecName: "utilities") pod "f2ebdbef-8007-4d20-b1df-676b975b6583" (UID: "f2ebdbef-8007-4d20-b1df-676b975b6583"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:40:26 crc kubenswrapper[4881]: I0126 13:40:26.568300 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2ebdbef-8007-4d20-b1df-676b975b6583-kube-api-access-hnhbj" (OuterVolumeSpecName: "kube-api-access-hnhbj") pod "f2ebdbef-8007-4d20-b1df-676b975b6583" (UID: "f2ebdbef-8007-4d20-b1df-676b975b6583"). InnerVolumeSpecName "kube-api-access-hnhbj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:40:26 crc kubenswrapper[4881]: I0126 13:40:26.610534 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f2ebdbef-8007-4d20-b1df-676b975b6583-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f2ebdbef-8007-4d20-b1df-676b975b6583" (UID: "f2ebdbef-8007-4d20-b1df-676b975b6583"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:40:26 crc kubenswrapper[4881]: I0126 13:40:26.649068 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hnhbj\" (UniqueName: \"kubernetes.io/projected/f2ebdbef-8007-4d20-b1df-676b975b6583-kube-api-access-hnhbj\") on node \"crc\" DevicePath \"\"" Jan 26 13:40:26 crc kubenswrapper[4881]: I0126 13:40:26.649106 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2ebdbef-8007-4d20-b1df-676b975b6583-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 13:40:26 crc kubenswrapper[4881]: I0126 13:40:26.649120 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2ebdbef-8007-4d20-b1df-676b975b6583-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 13:40:26 crc kubenswrapper[4881]: I0126 13:40:26.940857 4881 generic.go:334] "Generic (PLEG): container finished" podID="f2ebdbef-8007-4d20-b1df-676b975b6583" containerID="5914a5575731fc8779aa564af38256b3dc50fea1536fcde86a4c7a4a8d847dae" exitCode=0 Jan 26 13:40:26 crc kubenswrapper[4881]: I0126 13:40:26.940921 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k47fh" event={"ID":"f2ebdbef-8007-4d20-b1df-676b975b6583","Type":"ContainerDied","Data":"5914a5575731fc8779aa564af38256b3dc50fea1536fcde86a4c7a4a8d847dae"} Jan 26 13:40:26 crc kubenswrapper[4881]: I0126 13:40:26.940972 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k47fh" event={"ID":"f2ebdbef-8007-4d20-b1df-676b975b6583","Type":"ContainerDied","Data":"9b7cfb9a0867bff00c0db9e5c304685fd051cbdbfdebe85d34d1458539a89c90"} Jan 26 13:40:26 crc kubenswrapper[4881]: I0126 13:40:26.940974 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-k47fh" Jan 26 13:40:26 crc kubenswrapper[4881]: I0126 13:40:26.940997 4881 scope.go:117] "RemoveContainer" containerID="5914a5575731fc8779aa564af38256b3dc50fea1536fcde86a4c7a4a8d847dae" Jan 26 13:40:26 crc kubenswrapper[4881]: I0126 13:40:26.993745 4881 scope.go:117] "RemoveContainer" containerID="8684eee56fbd96e5d2a2dcb2afef2a44e552f7ec25ee55170ff55436cc83571f" Jan 26 13:40:27 crc kubenswrapper[4881]: I0126 13:40:27.003542 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-k47fh"] Jan 26 13:40:27 crc kubenswrapper[4881]: I0126 13:40:27.013784 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-k47fh"] Jan 26 13:40:27 crc kubenswrapper[4881]: I0126 13:40:27.022818 4881 scope.go:117] "RemoveContainer" containerID="87a87decbff48655767c06c54a732ef39b6f32b56a6e74110827cc7de61ca254" Jan 26 13:40:27 crc kubenswrapper[4881]: I0126 13:40:27.064928 4881 scope.go:117] "RemoveContainer" containerID="5914a5575731fc8779aa564af38256b3dc50fea1536fcde86a4c7a4a8d847dae" Jan 26 13:40:27 crc kubenswrapper[4881]: E0126 13:40:27.065655 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5914a5575731fc8779aa564af38256b3dc50fea1536fcde86a4c7a4a8d847dae\": container with ID starting with 5914a5575731fc8779aa564af38256b3dc50fea1536fcde86a4c7a4a8d847dae not found: ID does not exist" containerID="5914a5575731fc8779aa564af38256b3dc50fea1536fcde86a4c7a4a8d847dae" Jan 26 13:40:27 crc kubenswrapper[4881]: I0126 13:40:27.065765 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5914a5575731fc8779aa564af38256b3dc50fea1536fcde86a4c7a4a8d847dae"} err="failed to get container status \"5914a5575731fc8779aa564af38256b3dc50fea1536fcde86a4c7a4a8d847dae\": rpc error: code = NotFound desc = could not find container \"5914a5575731fc8779aa564af38256b3dc50fea1536fcde86a4c7a4a8d847dae\": container with ID starting with 5914a5575731fc8779aa564af38256b3dc50fea1536fcde86a4c7a4a8d847dae not found: ID does not exist" Jan 26 13:40:27 crc kubenswrapper[4881]: I0126 13:40:27.065841 4881 scope.go:117] "RemoveContainer" containerID="8684eee56fbd96e5d2a2dcb2afef2a44e552f7ec25ee55170ff55436cc83571f" Jan 26 13:40:27 crc kubenswrapper[4881]: E0126 13:40:27.066316 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8684eee56fbd96e5d2a2dcb2afef2a44e552f7ec25ee55170ff55436cc83571f\": container with ID starting with 8684eee56fbd96e5d2a2dcb2afef2a44e552f7ec25ee55170ff55436cc83571f not found: ID does not exist" containerID="8684eee56fbd96e5d2a2dcb2afef2a44e552f7ec25ee55170ff55436cc83571f" Jan 26 13:40:27 crc kubenswrapper[4881]: I0126 13:40:27.066399 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8684eee56fbd96e5d2a2dcb2afef2a44e552f7ec25ee55170ff55436cc83571f"} err="failed to get container status \"8684eee56fbd96e5d2a2dcb2afef2a44e552f7ec25ee55170ff55436cc83571f\": rpc error: code = NotFound desc = could not find container \"8684eee56fbd96e5d2a2dcb2afef2a44e552f7ec25ee55170ff55436cc83571f\": container with ID starting with 8684eee56fbd96e5d2a2dcb2afef2a44e552f7ec25ee55170ff55436cc83571f not found: ID does not exist" Jan 26 13:40:27 crc kubenswrapper[4881]: I0126 13:40:27.066470 4881 scope.go:117] "RemoveContainer" 
containerID="87a87decbff48655767c06c54a732ef39b6f32b56a6e74110827cc7de61ca254" Jan 26 13:40:27 crc kubenswrapper[4881]: E0126 13:40:27.067050 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"87a87decbff48655767c06c54a732ef39b6f32b56a6e74110827cc7de61ca254\": container with ID starting with 87a87decbff48655767c06c54a732ef39b6f32b56a6e74110827cc7de61ca254 not found: ID does not exist" containerID="87a87decbff48655767c06c54a732ef39b6f32b56a6e74110827cc7de61ca254" Jan 26 13:40:27 crc kubenswrapper[4881]: I0126 13:40:27.067105 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87a87decbff48655767c06c54a732ef39b6f32b56a6e74110827cc7de61ca254"} err="failed to get container status \"87a87decbff48655767c06c54a732ef39b6f32b56a6e74110827cc7de61ca254\": rpc error: code = NotFound desc = could not find container \"87a87decbff48655767c06c54a732ef39b6f32b56a6e74110827cc7de61ca254\": container with ID starting with 87a87decbff48655767c06c54a732ef39b6f32b56a6e74110827cc7de61ca254 not found: ID does not exist" Jan 26 13:40:28 crc kubenswrapper[4881]: I0126 13:40:28.096371 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2ebdbef-8007-4d20-b1df-676b975b6583" path="/var/lib/kubelet/pods/f2ebdbef-8007-4d20-b1df-676b975b6583/volumes" Jan 26 13:40:34 crc kubenswrapper[4881]: I0126 13:40:34.083686 4881 scope.go:117] "RemoveContainer" containerID="f2d5a02d29708f3d9e2c7109a72699fcc0b90eeb26f42ca424e6402bc02e962c" Jan 26 13:40:34 crc kubenswrapper[4881]: E0126 13:40:34.088734 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" Jan 26 13:40:47 crc kubenswrapper[4881]: I0126 13:40:47.082808 4881 scope.go:117] "RemoveContainer" containerID="f2d5a02d29708f3d9e2c7109a72699fcc0b90eeb26f42ca424e6402bc02e962c" Jan 26 13:40:47 crc kubenswrapper[4881]: E0126 13:40:47.083879 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" Jan 26 13:41:00 crc kubenswrapper[4881]: I0126 13:41:00.087210 4881 scope.go:117] "RemoveContainer" containerID="f2d5a02d29708f3d9e2c7109a72699fcc0b90eeb26f42ca424e6402bc02e962c" Jan 26 13:41:00 crc kubenswrapper[4881]: E0126 13:41:00.088551 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" Jan 26 13:41:13 crc kubenswrapper[4881]: I0126 13:41:13.082483 4881 scope.go:117] "RemoveContainer" 
containerID="f2d5a02d29708f3d9e2c7109a72699fcc0b90eeb26f42ca424e6402bc02e962c" Jan 26 13:41:13 crc kubenswrapper[4881]: E0126 13:41:13.083498 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" Jan 26 13:41:27 crc kubenswrapper[4881]: I0126 13:41:27.082693 4881 scope.go:117] "RemoveContainer" containerID="f2d5a02d29708f3d9e2c7109a72699fcc0b90eeb26f42ca424e6402bc02e962c" Jan 26 13:41:27 crc kubenswrapper[4881]: E0126 13:41:27.083502 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" Jan 26 13:41:42 crc kubenswrapper[4881]: I0126 13:41:42.084090 4881 scope.go:117] "RemoveContainer" containerID="f2d5a02d29708f3d9e2c7109a72699fcc0b90eeb26f42ca424e6402bc02e962c" Jan 26 13:41:42 crc kubenswrapper[4881]: E0126 13:41:42.085454 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" Jan 26 13:41:54 crc kubenswrapper[4881]: I0126 13:41:54.083181 4881 scope.go:117] "RemoveContainer" containerID="f2d5a02d29708f3d9e2c7109a72699fcc0b90eeb26f42ca424e6402bc02e962c" Jan 26 13:41:54 crc kubenswrapper[4881]: E0126 13:41:54.084205 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" Jan 26 13:42:06 crc kubenswrapper[4881]: I0126 13:42:06.082639 4881 scope.go:117] "RemoveContainer" containerID="f2d5a02d29708f3d9e2c7109a72699fcc0b90eeb26f42ca424e6402bc02e962c" Jan 26 13:42:06 crc kubenswrapper[4881]: E0126 13:42:06.083702 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" Jan 26 13:42:18 crc kubenswrapper[4881]: I0126 13:42:18.090034 4881 scope.go:117] "RemoveContainer" containerID="f2d5a02d29708f3d9e2c7109a72699fcc0b90eeb26f42ca424e6402bc02e962c" Jan 26 13:42:18 crc kubenswrapper[4881]: E0126 13:42:18.091307 4881 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" Jan 26 13:42:26 crc kubenswrapper[4881]: E0126 13:42:26.313937 4881 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.69:41108->38.102.83.69:37913: write tcp 38.102.83.69:41108->38.102.83.69:37913: write: broken pipe Jan 26 13:42:30 crc kubenswrapper[4881]: I0126 13:42:30.083636 4881 scope.go:117] "RemoveContainer" containerID="f2d5a02d29708f3d9e2c7109a72699fcc0b90eeb26f42ca424e6402bc02e962c" Jan 26 13:42:30 crc kubenswrapper[4881]: E0126 13:42:30.084922 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" Jan 26 13:42:39 crc kubenswrapper[4881]: I0126 13:42:39.535839 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-zvnmx"] Jan 26 13:42:39 crc kubenswrapper[4881]: E0126 13:42:39.538629 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2ebdbef-8007-4d20-b1df-676b975b6583" containerName="extract-content" Jan 26 13:42:39 crc kubenswrapper[4881]: I0126 13:42:39.538882 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2ebdbef-8007-4d20-b1df-676b975b6583" containerName="extract-content" Jan 26 13:42:39 crc kubenswrapper[4881]: E0126 13:42:39.538986 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2ebdbef-8007-4d20-b1df-676b975b6583" containerName="extract-utilities" Jan 26 13:42:39 crc kubenswrapper[4881]: I0126 13:42:39.539081 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2ebdbef-8007-4d20-b1df-676b975b6583" containerName="extract-utilities" Jan 26 13:42:39 crc kubenswrapper[4881]: E0126 13:42:39.539234 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2ebdbef-8007-4d20-b1df-676b975b6583" containerName="registry-server" Jan 26 13:42:39 crc kubenswrapper[4881]: I0126 13:42:39.539331 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2ebdbef-8007-4d20-b1df-676b975b6583" containerName="registry-server" Jan 26 13:42:39 crc kubenswrapper[4881]: I0126 13:42:39.540050 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2ebdbef-8007-4d20-b1df-676b975b6583" containerName="registry-server" Jan 26 13:42:39 crc kubenswrapper[4881]: I0126 13:42:39.543471 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zvnmx" Jan 26 13:42:39 crc kubenswrapper[4881]: I0126 13:42:39.554309 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zvnmx"] Jan 26 13:42:39 crc kubenswrapper[4881]: I0126 13:42:39.655641 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/427b554a-1f0c-421d-b642-d6ed18f449e5-catalog-content\") pod \"certified-operators-zvnmx\" (UID: \"427b554a-1f0c-421d-b642-d6ed18f449e5\") " pod="openshift-marketplace/certified-operators-zvnmx" Jan 26 13:42:39 crc kubenswrapper[4881]: I0126 13:42:39.655735 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/427b554a-1f0c-421d-b642-d6ed18f449e5-utilities\") pod \"certified-operators-zvnmx\" (UID: \"427b554a-1f0c-421d-b642-d6ed18f449e5\") " pod="openshift-marketplace/certified-operators-zvnmx" Jan 26 13:42:39 crc kubenswrapper[4881]: I0126 13:42:39.655790 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6j2h\" (UniqueName: \"kubernetes.io/projected/427b554a-1f0c-421d-b642-d6ed18f449e5-kube-api-access-x6j2h\") pod \"certified-operators-zvnmx\" (UID: \"427b554a-1f0c-421d-b642-d6ed18f449e5\") " pod="openshift-marketplace/certified-operators-zvnmx" Jan 26 13:42:39 crc kubenswrapper[4881]: I0126 13:42:39.757879 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/427b554a-1f0c-421d-b642-d6ed18f449e5-catalog-content\") pod \"certified-operators-zvnmx\" (UID: \"427b554a-1f0c-421d-b642-d6ed18f449e5\") " pod="openshift-marketplace/certified-operators-zvnmx" Jan 26 13:42:39 crc kubenswrapper[4881]: I0126 13:42:39.758243 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/427b554a-1f0c-421d-b642-d6ed18f449e5-utilities\") pod \"certified-operators-zvnmx\" (UID: \"427b554a-1f0c-421d-b642-d6ed18f449e5\") " pod="openshift-marketplace/certified-operators-zvnmx" Jan 26 13:42:39 crc kubenswrapper[4881]: I0126 13:42:39.758405 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6j2h\" (UniqueName: \"kubernetes.io/projected/427b554a-1f0c-421d-b642-d6ed18f449e5-kube-api-access-x6j2h\") pod \"certified-operators-zvnmx\" (UID: \"427b554a-1f0c-421d-b642-d6ed18f449e5\") " pod="openshift-marketplace/certified-operators-zvnmx" Jan 26 13:42:39 crc kubenswrapper[4881]: I0126 13:42:39.758501 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/427b554a-1f0c-421d-b642-d6ed18f449e5-catalog-content\") pod \"certified-operators-zvnmx\" (UID: \"427b554a-1f0c-421d-b642-d6ed18f449e5\") " pod="openshift-marketplace/certified-operators-zvnmx" Jan 26 13:42:39 crc kubenswrapper[4881]: I0126 13:42:39.758753 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/427b554a-1f0c-421d-b642-d6ed18f449e5-utilities\") pod \"certified-operators-zvnmx\" (UID: \"427b554a-1f0c-421d-b642-d6ed18f449e5\") " pod="openshift-marketplace/certified-operators-zvnmx" Jan 26 13:42:39 crc kubenswrapper[4881]: I0126 13:42:39.797972 4881 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-x6j2h\" (UniqueName: \"kubernetes.io/projected/427b554a-1f0c-421d-b642-d6ed18f449e5-kube-api-access-x6j2h\") pod \"certified-operators-zvnmx\" (UID: \"427b554a-1f0c-421d-b642-d6ed18f449e5\") " pod="openshift-marketplace/certified-operators-zvnmx" Jan 26 13:42:39 crc kubenswrapper[4881]: I0126 13:42:39.886236 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zvnmx" Jan 26 13:42:40 crc kubenswrapper[4881]: I0126 13:42:40.465239 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zvnmx"] Jan 26 13:42:41 crc kubenswrapper[4881]: I0126 13:42:41.431176 4881 generic.go:334] "Generic (PLEG): container finished" podID="427b554a-1f0c-421d-b642-d6ed18f449e5" containerID="2389c3840bf3706ed7959591cf6a3b8015d66195cfc26f101acbafce27be9912" exitCode=0 Jan 26 13:42:41 crc kubenswrapper[4881]: I0126 13:42:41.431327 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zvnmx" event={"ID":"427b554a-1f0c-421d-b642-d6ed18f449e5","Type":"ContainerDied","Data":"2389c3840bf3706ed7959591cf6a3b8015d66195cfc26f101acbafce27be9912"} Jan 26 13:42:41 crc kubenswrapper[4881]: I0126 13:42:41.431803 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zvnmx" event={"ID":"427b554a-1f0c-421d-b642-d6ed18f449e5","Type":"ContainerStarted","Data":"0ccea11f73f8c43f800d6ecd1341ce9308b2a295622db3a8c54a6e5f7c8da425"} Jan 26 13:42:42 crc kubenswrapper[4881]: I0126 13:42:42.085024 4881 scope.go:117] "RemoveContainer" containerID="f2d5a02d29708f3d9e2c7109a72699fcc0b90eeb26f42ca424e6402bc02e962c" Jan 26 13:42:42 crc kubenswrapper[4881]: E0126 13:42:42.086878 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" Jan 26 13:42:42 crc kubenswrapper[4881]: I0126 13:42:42.443452 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zvnmx" event={"ID":"427b554a-1f0c-421d-b642-d6ed18f449e5","Type":"ContainerStarted","Data":"1e7ac7eadbf37b1bd4db5de954f3a66224d419cfe0aeacd389bb0fefc12c95f8"} Jan 26 13:42:44 crc kubenswrapper[4881]: I0126 13:42:44.470181 4881 generic.go:334] "Generic (PLEG): container finished" podID="427b554a-1f0c-421d-b642-d6ed18f449e5" containerID="1e7ac7eadbf37b1bd4db5de954f3a66224d419cfe0aeacd389bb0fefc12c95f8" exitCode=0 Jan 26 13:42:44 crc kubenswrapper[4881]: I0126 13:42:44.470284 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zvnmx" event={"ID":"427b554a-1f0c-421d-b642-d6ed18f449e5","Type":"ContainerDied","Data":"1e7ac7eadbf37b1bd4db5de954f3a66224d419cfe0aeacd389bb0fefc12c95f8"} Jan 26 13:42:45 crc kubenswrapper[4881]: I0126 13:42:45.484755 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zvnmx" event={"ID":"427b554a-1f0c-421d-b642-d6ed18f449e5","Type":"ContainerStarted","Data":"58d21adb6e9a0c69a2ff8482b665c15514e4768257699c4a7932e819eb8be4de"} Jan 26 13:42:45 crc kubenswrapper[4881]: I0126 13:42:45.518853 4881 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-zvnmx" podStartSLOduration=3.05081392 podStartE2EDuration="6.518832347s" podCreationTimestamp="2026-01-26 13:42:39 +0000 UTC" firstStartedPulling="2026-01-26 13:42:41.433851421 +0000 UTC m=+4033.913161467" lastFinishedPulling="2026-01-26 13:42:44.901869858 +0000 UTC m=+4037.381179894" observedRunningTime="2026-01-26 13:42:45.506629198 +0000 UTC m=+4037.985939274" watchObservedRunningTime="2026-01-26 13:42:45.518832347 +0000 UTC m=+4037.998142383" Jan 26 13:42:49 crc kubenswrapper[4881]: I0126 13:42:49.886941 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-zvnmx" Jan 26 13:42:49 crc kubenswrapper[4881]: I0126 13:42:49.887759 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-zvnmx" Jan 26 13:42:50 crc kubenswrapper[4881]: I0126 13:42:50.115638 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-zvnmx" Jan 26 13:42:50 crc kubenswrapper[4881]: I0126 13:42:50.595366 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-zvnmx" Jan 26 13:42:50 crc kubenswrapper[4881]: I0126 13:42:50.662491 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zvnmx"] Jan 26 13:42:52 crc kubenswrapper[4881]: I0126 13:42:52.564770 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-zvnmx" podUID="427b554a-1f0c-421d-b642-d6ed18f449e5" containerName="registry-server" containerID="cri-o://58d21adb6e9a0c69a2ff8482b665c15514e4768257699c4a7932e819eb8be4de" gracePeriod=2 Jan 26 13:42:53 crc kubenswrapper[4881]: I0126 13:42:53.590667 4881 generic.go:334] "Generic (PLEG): container finished" podID="427b554a-1f0c-421d-b642-d6ed18f449e5" containerID="58d21adb6e9a0c69a2ff8482b665c15514e4768257699c4a7932e819eb8be4de" exitCode=0 Jan 26 13:42:53 crc kubenswrapper[4881]: I0126 13:42:53.590743 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zvnmx" event={"ID":"427b554a-1f0c-421d-b642-d6ed18f449e5","Type":"ContainerDied","Data":"58d21adb6e9a0c69a2ff8482b665c15514e4768257699c4a7932e819eb8be4de"} Jan 26 13:42:53 crc kubenswrapper[4881]: I0126 13:42:53.784409 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zvnmx" Jan 26 13:42:53 crc kubenswrapper[4881]: I0126 13:42:53.929206 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/427b554a-1f0c-421d-b642-d6ed18f449e5-utilities\") pod \"427b554a-1f0c-421d-b642-d6ed18f449e5\" (UID: \"427b554a-1f0c-421d-b642-d6ed18f449e5\") " Jan 26 13:42:53 crc kubenswrapper[4881]: I0126 13:42:53.929320 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/427b554a-1f0c-421d-b642-d6ed18f449e5-catalog-content\") pod \"427b554a-1f0c-421d-b642-d6ed18f449e5\" (UID: \"427b554a-1f0c-421d-b642-d6ed18f449e5\") " Jan 26 13:42:53 crc kubenswrapper[4881]: I0126 13:42:53.929375 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x6j2h\" (UniqueName: \"kubernetes.io/projected/427b554a-1f0c-421d-b642-d6ed18f449e5-kube-api-access-x6j2h\") pod \"427b554a-1f0c-421d-b642-d6ed18f449e5\" (UID: \"427b554a-1f0c-421d-b642-d6ed18f449e5\") " Jan 26 13:42:53 crc kubenswrapper[4881]: I0126 13:42:53.931148 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/427b554a-1f0c-421d-b642-d6ed18f449e5-utilities" (OuterVolumeSpecName: "utilities") pod "427b554a-1f0c-421d-b642-d6ed18f449e5" (UID: "427b554a-1f0c-421d-b642-d6ed18f449e5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:42:53 crc kubenswrapper[4881]: I0126 13:42:53.936703 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/427b554a-1f0c-421d-b642-d6ed18f449e5-kube-api-access-x6j2h" (OuterVolumeSpecName: "kube-api-access-x6j2h") pod "427b554a-1f0c-421d-b642-d6ed18f449e5" (UID: "427b554a-1f0c-421d-b642-d6ed18f449e5"). InnerVolumeSpecName "kube-api-access-x6j2h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:42:53 crc kubenswrapper[4881]: I0126 13:42:53.976039 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/427b554a-1f0c-421d-b642-d6ed18f449e5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "427b554a-1f0c-421d-b642-d6ed18f449e5" (UID: "427b554a-1f0c-421d-b642-d6ed18f449e5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:42:54 crc kubenswrapper[4881]: I0126 13:42:54.032989 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/427b554a-1f0c-421d-b642-d6ed18f449e5-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 13:42:54 crc kubenswrapper[4881]: I0126 13:42:54.033044 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/427b554a-1f0c-421d-b642-d6ed18f449e5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 13:42:54 crc kubenswrapper[4881]: I0126 13:42:54.033066 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x6j2h\" (UniqueName: \"kubernetes.io/projected/427b554a-1f0c-421d-b642-d6ed18f449e5-kube-api-access-x6j2h\") on node \"crc\" DevicePath \"\"" Jan 26 13:42:54 crc kubenswrapper[4881]: I0126 13:42:54.606250 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zvnmx" event={"ID":"427b554a-1f0c-421d-b642-d6ed18f449e5","Type":"ContainerDied","Data":"0ccea11f73f8c43f800d6ecd1341ce9308b2a295622db3a8c54a6e5f7c8da425"} Jan 26 13:42:54 crc kubenswrapper[4881]: I0126 13:42:54.606436 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zvnmx" Jan 26 13:42:54 crc kubenswrapper[4881]: I0126 13:42:54.607006 4881 scope.go:117] "RemoveContainer" containerID="58d21adb6e9a0c69a2ff8482b665c15514e4768257699c4a7932e819eb8be4de" Jan 26 13:42:54 crc kubenswrapper[4881]: I0126 13:42:54.653376 4881 scope.go:117] "RemoveContainer" containerID="1e7ac7eadbf37b1bd4db5de954f3a66224d419cfe0aeacd389bb0fefc12c95f8" Jan 26 13:42:54 crc kubenswrapper[4881]: I0126 13:42:54.657538 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zvnmx"] Jan 26 13:42:54 crc kubenswrapper[4881]: I0126 13:42:54.675616 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-zvnmx"] Jan 26 13:42:54 crc kubenswrapper[4881]: I0126 13:42:54.717258 4881 scope.go:117] "RemoveContainer" containerID="2389c3840bf3706ed7959591cf6a3b8015d66195cfc26f101acbafce27be9912" Jan 26 13:42:55 crc kubenswrapper[4881]: I0126 13:42:55.085988 4881 scope.go:117] "RemoveContainer" containerID="f2d5a02d29708f3d9e2c7109a72699fcc0b90eeb26f42ca424e6402bc02e962c" Jan 26 13:42:55 crc kubenswrapper[4881]: E0126 13:42:55.087255 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" Jan 26 13:42:56 crc kubenswrapper[4881]: I0126 13:42:56.098358 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="427b554a-1f0c-421d-b642-d6ed18f449e5" path="/var/lib/kubelet/pods/427b554a-1f0c-421d-b642-d6ed18f449e5/volumes" Jan 26 13:43:07 crc kubenswrapper[4881]: I0126 13:43:07.083205 4881 scope.go:117] "RemoveContainer" containerID="f2d5a02d29708f3d9e2c7109a72699fcc0b90eeb26f42ca424e6402bc02e962c" Jan 26 13:43:07 crc kubenswrapper[4881]: E0126 13:43:07.084130 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" Jan 26 13:43:16 crc kubenswrapper[4881]: I0126 13:43:16.554190 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-6pl2k"] Jan 26 13:43:16 crc kubenswrapper[4881]: E0126 13:43:16.556247 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="427b554a-1f0c-421d-b642-d6ed18f449e5" containerName="registry-server" Jan 26 13:43:16 crc kubenswrapper[4881]: I0126 13:43:16.556283 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="427b554a-1f0c-421d-b642-d6ed18f449e5" containerName="registry-server" Jan 26 13:43:16 crc kubenswrapper[4881]: E0126 13:43:16.556331 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="427b554a-1f0c-421d-b642-d6ed18f449e5" containerName="extract-utilities" Jan 26 13:43:16 crc kubenswrapper[4881]: I0126 13:43:16.556347 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="427b554a-1f0c-421d-b642-d6ed18f449e5" containerName="extract-utilities" Jan 26 13:43:16 crc kubenswrapper[4881]: E0126 13:43:16.556401 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="427b554a-1f0c-421d-b642-d6ed18f449e5" containerName="extract-content" Jan 26 13:43:16 crc kubenswrapper[4881]: I0126 13:43:16.556420 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="427b554a-1f0c-421d-b642-d6ed18f449e5" containerName="extract-content" Jan 26 13:43:16 crc kubenswrapper[4881]: I0126 13:43:16.556945 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="427b554a-1f0c-421d-b642-d6ed18f449e5" containerName="registry-server" Jan 26 13:43:16 crc kubenswrapper[4881]: I0126 13:43:16.559857 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6pl2k" Jan 26 13:43:16 crc kubenswrapper[4881]: I0126 13:43:16.582454 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6pl2k"] Jan 26 13:43:16 crc kubenswrapper[4881]: I0126 13:43:16.724928 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63297b73-d468-404f-8ee0-672bad8f88ea-catalog-content\") pod \"redhat-marketplace-6pl2k\" (UID: \"63297b73-d468-404f-8ee0-672bad8f88ea\") " pod="openshift-marketplace/redhat-marketplace-6pl2k" Jan 26 13:43:16 crc kubenswrapper[4881]: I0126 13:43:16.725126 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63297b73-d468-404f-8ee0-672bad8f88ea-utilities\") pod \"redhat-marketplace-6pl2k\" (UID: \"63297b73-d468-404f-8ee0-672bad8f88ea\") " pod="openshift-marketplace/redhat-marketplace-6pl2k" Jan 26 13:43:16 crc kubenswrapper[4881]: I0126 13:43:16.725200 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8z7hc\" (UniqueName: \"kubernetes.io/projected/63297b73-d468-404f-8ee0-672bad8f88ea-kube-api-access-8z7hc\") pod \"redhat-marketplace-6pl2k\" (UID: \"63297b73-d468-404f-8ee0-672bad8f88ea\") " pod="openshift-marketplace/redhat-marketplace-6pl2k" Jan 26 13:43:16 crc kubenswrapper[4881]: I0126 13:43:16.827689 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63297b73-d468-404f-8ee0-672bad8f88ea-utilities\") pod \"redhat-marketplace-6pl2k\" (UID: \"63297b73-d468-404f-8ee0-672bad8f88ea\") " pod="openshift-marketplace/redhat-marketplace-6pl2k" Jan 26 13:43:16 crc kubenswrapper[4881]: I0126 13:43:16.827782 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8z7hc\" (UniqueName: \"kubernetes.io/projected/63297b73-d468-404f-8ee0-672bad8f88ea-kube-api-access-8z7hc\") pod \"redhat-marketplace-6pl2k\" (UID: \"63297b73-d468-404f-8ee0-672bad8f88ea\") " pod="openshift-marketplace/redhat-marketplace-6pl2k" Jan 26 13:43:16 crc kubenswrapper[4881]: I0126 13:43:16.828181 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63297b73-d468-404f-8ee0-672bad8f88ea-catalog-content\") pod \"redhat-marketplace-6pl2k\" (UID: \"63297b73-d468-404f-8ee0-672bad8f88ea\") " pod="openshift-marketplace/redhat-marketplace-6pl2k" Jan 26 13:43:16 crc kubenswrapper[4881]: I0126 13:43:16.828214 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63297b73-d468-404f-8ee0-672bad8f88ea-utilities\") pod \"redhat-marketplace-6pl2k\" (UID: \"63297b73-d468-404f-8ee0-672bad8f88ea\") " pod="openshift-marketplace/redhat-marketplace-6pl2k" Jan 26 13:43:16 crc kubenswrapper[4881]: I0126 13:43:16.828492 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63297b73-d468-404f-8ee0-672bad8f88ea-catalog-content\") pod \"redhat-marketplace-6pl2k\" (UID: \"63297b73-d468-404f-8ee0-672bad8f88ea\") " pod="openshift-marketplace/redhat-marketplace-6pl2k" Jan 26 13:43:16 crc kubenswrapper[4881]: I0126 13:43:16.874317 4881 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-8z7hc\" (UniqueName: \"kubernetes.io/projected/63297b73-d468-404f-8ee0-672bad8f88ea-kube-api-access-8z7hc\") pod \"redhat-marketplace-6pl2k\" (UID: \"63297b73-d468-404f-8ee0-672bad8f88ea\") " pod="openshift-marketplace/redhat-marketplace-6pl2k" Jan 26 13:43:16 crc kubenswrapper[4881]: I0126 13:43:16.904637 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6pl2k" Jan 26 13:43:17 crc kubenswrapper[4881]: I0126 13:43:17.411867 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6pl2k"] Jan 26 13:43:17 crc kubenswrapper[4881]: I0126 13:43:17.914938 4881 generic.go:334] "Generic (PLEG): container finished" podID="63297b73-d468-404f-8ee0-672bad8f88ea" containerID="49db2e4a8298a9dba2f1272428c333035acc3b99735c87579f1351398f8e94ea" exitCode=0 Jan 26 13:43:17 crc kubenswrapper[4881]: I0126 13:43:17.915015 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6pl2k" event={"ID":"63297b73-d468-404f-8ee0-672bad8f88ea","Type":"ContainerDied","Data":"49db2e4a8298a9dba2f1272428c333035acc3b99735c87579f1351398f8e94ea"} Jan 26 13:43:17 crc kubenswrapper[4881]: I0126 13:43:17.915555 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6pl2k" event={"ID":"63297b73-d468-404f-8ee0-672bad8f88ea","Type":"ContainerStarted","Data":"f072cde21175a1212f9c51d5b3c009d8a72f297424ea28d87452697a1bb37b47"} Jan 26 13:43:18 crc kubenswrapper[4881]: I0126 13:43:18.928461 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6pl2k" event={"ID":"63297b73-d468-404f-8ee0-672bad8f88ea","Type":"ContainerStarted","Data":"63d94ded601da7e0ed58f47dfe9ffbbc27aa876a31f3d818cb5233ff4c8b4a7b"} Jan 26 13:43:19 crc kubenswrapper[4881]: I0126 13:43:19.941374 4881 generic.go:334] "Generic (PLEG): container finished" podID="63297b73-d468-404f-8ee0-672bad8f88ea" containerID="63d94ded601da7e0ed58f47dfe9ffbbc27aa876a31f3d818cb5233ff4c8b4a7b" exitCode=0 Jan 26 13:43:19 crc kubenswrapper[4881]: I0126 13:43:19.941640 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6pl2k" event={"ID":"63297b73-d468-404f-8ee0-672bad8f88ea","Type":"ContainerDied","Data":"63d94ded601da7e0ed58f47dfe9ffbbc27aa876a31f3d818cb5233ff4c8b4a7b"} Jan 26 13:43:20 crc kubenswrapper[4881]: I0126 13:43:20.954707 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6pl2k" event={"ID":"63297b73-d468-404f-8ee0-672bad8f88ea","Type":"ContainerStarted","Data":"a30652df843cc9b1dc00e495f07ea2c3129d97a5dd45d748c5afba145d7093db"} Jan 26 13:43:20 crc kubenswrapper[4881]: I0126 13:43:20.986081 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-6pl2k" podStartSLOduration=2.488188919 podStartE2EDuration="4.986058881s" podCreationTimestamp="2026-01-26 13:43:16 +0000 UTC" firstStartedPulling="2026-01-26 13:43:17.919411172 +0000 UTC m=+4070.398721228" lastFinishedPulling="2026-01-26 13:43:20.417281164 +0000 UTC m=+4072.896591190" observedRunningTime="2026-01-26 13:43:20.973442151 +0000 UTC m=+4073.452752227" watchObservedRunningTime="2026-01-26 13:43:20.986058881 +0000 UTC m=+4073.465368917" Jan 26 13:43:22 crc kubenswrapper[4881]: I0126 13:43:22.082453 4881 scope.go:117] "RemoveContainer" 
containerID="f2d5a02d29708f3d9e2c7109a72699fcc0b90eeb26f42ca424e6402bc02e962c" Jan 26 13:43:22 crc kubenswrapper[4881]: E0126 13:43:22.082981 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" Jan 26 13:43:26 crc kubenswrapper[4881]: I0126 13:43:26.905786 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-6pl2k" Jan 26 13:43:26 crc kubenswrapper[4881]: I0126 13:43:26.906438 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-6pl2k" Jan 26 13:43:26 crc kubenswrapper[4881]: I0126 13:43:26.976621 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-6pl2k" Jan 26 13:43:27 crc kubenswrapper[4881]: I0126 13:43:27.098162 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-6pl2k" Jan 26 13:43:27 crc kubenswrapper[4881]: I0126 13:43:27.217475 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6pl2k"] Jan 26 13:43:29 crc kubenswrapper[4881]: I0126 13:43:29.051817 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-6pl2k" podUID="63297b73-d468-404f-8ee0-672bad8f88ea" containerName="registry-server" containerID="cri-o://a30652df843cc9b1dc00e495f07ea2c3129d97a5dd45d748c5afba145d7093db" gracePeriod=2 Jan 26 13:43:29 crc kubenswrapper[4881]: I0126 13:43:29.636087 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6pl2k" Jan 26 13:43:29 crc kubenswrapper[4881]: I0126 13:43:29.738457 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8z7hc\" (UniqueName: \"kubernetes.io/projected/63297b73-d468-404f-8ee0-672bad8f88ea-kube-api-access-8z7hc\") pod \"63297b73-d468-404f-8ee0-672bad8f88ea\" (UID: \"63297b73-d468-404f-8ee0-672bad8f88ea\") " Jan 26 13:43:29 crc kubenswrapper[4881]: I0126 13:43:29.738660 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63297b73-d468-404f-8ee0-672bad8f88ea-catalog-content\") pod \"63297b73-d468-404f-8ee0-672bad8f88ea\" (UID: \"63297b73-d468-404f-8ee0-672bad8f88ea\") " Jan 26 13:43:29 crc kubenswrapper[4881]: I0126 13:43:29.738794 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63297b73-d468-404f-8ee0-672bad8f88ea-utilities\") pod \"63297b73-d468-404f-8ee0-672bad8f88ea\" (UID: \"63297b73-d468-404f-8ee0-672bad8f88ea\") " Jan 26 13:43:29 crc kubenswrapper[4881]: I0126 13:43:29.740067 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/63297b73-d468-404f-8ee0-672bad8f88ea-utilities" (OuterVolumeSpecName: "utilities") pod "63297b73-d468-404f-8ee0-672bad8f88ea" (UID: "63297b73-d468-404f-8ee0-672bad8f88ea"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:43:29 crc kubenswrapper[4881]: I0126 13:43:29.747308 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63297b73-d468-404f-8ee0-672bad8f88ea-kube-api-access-8z7hc" (OuterVolumeSpecName: "kube-api-access-8z7hc") pod "63297b73-d468-404f-8ee0-672bad8f88ea" (UID: "63297b73-d468-404f-8ee0-672bad8f88ea"). InnerVolumeSpecName "kube-api-access-8z7hc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:43:29 crc kubenswrapper[4881]: I0126 13:43:29.782457 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/63297b73-d468-404f-8ee0-672bad8f88ea-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "63297b73-d468-404f-8ee0-672bad8f88ea" (UID: "63297b73-d468-404f-8ee0-672bad8f88ea"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:43:29 crc kubenswrapper[4881]: I0126 13:43:29.842283 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8z7hc\" (UniqueName: \"kubernetes.io/projected/63297b73-d468-404f-8ee0-672bad8f88ea-kube-api-access-8z7hc\") on node \"crc\" DevicePath \"\"" Jan 26 13:43:29 crc kubenswrapper[4881]: I0126 13:43:29.842355 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63297b73-d468-404f-8ee0-672bad8f88ea-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 13:43:29 crc kubenswrapper[4881]: I0126 13:43:29.842382 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63297b73-d468-404f-8ee0-672bad8f88ea-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 13:43:30 crc kubenswrapper[4881]: I0126 13:43:30.068905 4881 generic.go:334] "Generic (PLEG): container finished" podID="63297b73-d468-404f-8ee0-672bad8f88ea" containerID="a30652df843cc9b1dc00e495f07ea2c3129d97a5dd45d748c5afba145d7093db" exitCode=0 Jan 26 13:43:30 crc kubenswrapper[4881]: I0126 13:43:30.068977 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6pl2k" event={"ID":"63297b73-d468-404f-8ee0-672bad8f88ea","Type":"ContainerDied","Data":"a30652df843cc9b1dc00e495f07ea2c3129d97a5dd45d748c5afba145d7093db"} Jan 26 13:43:30 crc kubenswrapper[4881]: I0126 13:43:30.069032 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6pl2k" event={"ID":"63297b73-d468-404f-8ee0-672bad8f88ea","Type":"ContainerDied","Data":"f072cde21175a1212f9c51d5b3c009d8a72f297424ea28d87452697a1bb37b47"} Jan 26 13:43:30 crc kubenswrapper[4881]: I0126 13:43:30.069073 4881 scope.go:117] "RemoveContainer" containerID="a30652df843cc9b1dc00e495f07ea2c3129d97a5dd45d748c5afba145d7093db" Jan 26 13:43:30 crc kubenswrapper[4881]: I0126 13:43:30.069378 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6pl2k" Jan 26 13:43:30 crc kubenswrapper[4881]: I0126 13:43:30.128319 4881 scope.go:117] "RemoveContainer" containerID="63d94ded601da7e0ed58f47dfe9ffbbc27aa876a31f3d818cb5233ff4c8b4a7b" Jan 26 13:43:30 crc kubenswrapper[4881]: I0126 13:43:30.130044 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6pl2k"] Jan 26 13:43:30 crc kubenswrapper[4881]: I0126 13:43:30.140890 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-6pl2k"] Jan 26 13:43:30 crc kubenswrapper[4881]: I0126 13:43:30.153101 4881 scope.go:117] "RemoveContainer" containerID="49db2e4a8298a9dba2f1272428c333035acc3b99735c87579f1351398f8e94ea" Jan 26 13:43:30 crc kubenswrapper[4881]: I0126 13:43:30.228395 4881 scope.go:117] "RemoveContainer" containerID="a30652df843cc9b1dc00e495f07ea2c3129d97a5dd45d748c5afba145d7093db" Jan 26 13:43:30 crc kubenswrapper[4881]: E0126 13:43:30.228881 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a30652df843cc9b1dc00e495f07ea2c3129d97a5dd45d748c5afba145d7093db\": container with ID starting with a30652df843cc9b1dc00e495f07ea2c3129d97a5dd45d748c5afba145d7093db not found: ID does not exist" containerID="a30652df843cc9b1dc00e495f07ea2c3129d97a5dd45d748c5afba145d7093db" Jan 26 13:43:30 crc kubenswrapper[4881]: I0126 13:43:30.228925 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a30652df843cc9b1dc00e495f07ea2c3129d97a5dd45d748c5afba145d7093db"} err="failed to get container status \"a30652df843cc9b1dc00e495f07ea2c3129d97a5dd45d748c5afba145d7093db\": rpc error: code = NotFound desc = could not find container \"a30652df843cc9b1dc00e495f07ea2c3129d97a5dd45d748c5afba145d7093db\": container with ID starting with a30652df843cc9b1dc00e495f07ea2c3129d97a5dd45d748c5afba145d7093db not found: ID does not exist" Jan 26 13:43:30 crc kubenswrapper[4881]: I0126 13:43:30.228956 4881 scope.go:117] "RemoveContainer" containerID="63d94ded601da7e0ed58f47dfe9ffbbc27aa876a31f3d818cb5233ff4c8b4a7b" Jan 26 13:43:30 crc kubenswrapper[4881]: E0126 13:43:30.229640 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"63d94ded601da7e0ed58f47dfe9ffbbc27aa876a31f3d818cb5233ff4c8b4a7b\": container with ID starting with 63d94ded601da7e0ed58f47dfe9ffbbc27aa876a31f3d818cb5233ff4c8b4a7b not found: ID does not exist" containerID="63d94ded601da7e0ed58f47dfe9ffbbc27aa876a31f3d818cb5233ff4c8b4a7b" Jan 26 13:43:30 crc kubenswrapper[4881]: I0126 13:43:30.229669 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"63d94ded601da7e0ed58f47dfe9ffbbc27aa876a31f3d818cb5233ff4c8b4a7b"} err="failed to get container status \"63d94ded601da7e0ed58f47dfe9ffbbc27aa876a31f3d818cb5233ff4c8b4a7b\": rpc error: code = NotFound desc = could not find container \"63d94ded601da7e0ed58f47dfe9ffbbc27aa876a31f3d818cb5233ff4c8b4a7b\": container with ID starting with 63d94ded601da7e0ed58f47dfe9ffbbc27aa876a31f3d818cb5233ff4c8b4a7b not found: ID does not exist" Jan 26 13:43:30 crc kubenswrapper[4881]: I0126 13:43:30.229700 4881 scope.go:117] "RemoveContainer" containerID="49db2e4a8298a9dba2f1272428c333035acc3b99735c87579f1351398f8e94ea" Jan 26 13:43:30 crc kubenswrapper[4881]: E0126 13:43:30.230302 4881 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"49db2e4a8298a9dba2f1272428c333035acc3b99735c87579f1351398f8e94ea\": container with ID starting with 49db2e4a8298a9dba2f1272428c333035acc3b99735c87579f1351398f8e94ea not found: ID does not exist" containerID="49db2e4a8298a9dba2f1272428c333035acc3b99735c87579f1351398f8e94ea" Jan 26 13:43:30 crc kubenswrapper[4881]: I0126 13:43:30.230327 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49db2e4a8298a9dba2f1272428c333035acc3b99735c87579f1351398f8e94ea"} err="failed to get container status \"49db2e4a8298a9dba2f1272428c333035acc3b99735c87579f1351398f8e94ea\": rpc error: code = NotFound desc = could not find container \"49db2e4a8298a9dba2f1272428c333035acc3b99735c87579f1351398f8e94ea\": container with ID starting with 49db2e4a8298a9dba2f1272428c333035acc3b99735c87579f1351398f8e94ea not found: ID does not exist" Jan 26 13:43:30 crc kubenswrapper[4881]: E0126 13:43:30.243080 4881 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod63297b73_d468_404f_8ee0_672bad8f88ea.slice/crio-f072cde21175a1212f9c51d5b3c009d8a72f297424ea28d87452697a1bb37b47\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod63297b73_d468_404f_8ee0_672bad8f88ea.slice\": RecentStats: unable to find data in memory cache]" Jan 26 13:43:32 crc kubenswrapper[4881]: I0126 13:43:32.095546 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63297b73-d468-404f-8ee0-672bad8f88ea" path="/var/lib/kubelet/pods/63297b73-d468-404f-8ee0-672bad8f88ea/volumes" Jan 26 13:43:37 crc kubenswrapper[4881]: I0126 13:43:37.082976 4881 scope.go:117] "RemoveContainer" containerID="f2d5a02d29708f3d9e2c7109a72699fcc0b90eeb26f42ca424e6402bc02e962c" Jan 26 13:43:37 crc kubenswrapper[4881]: E0126 13:43:37.084553 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" Jan 26 13:43:52 crc kubenswrapper[4881]: I0126 13:43:52.083385 4881 scope.go:117] "RemoveContainer" containerID="f2d5a02d29708f3d9e2c7109a72699fcc0b90eeb26f42ca424e6402bc02e962c" Jan 26 13:43:52 crc kubenswrapper[4881]: E0126 13:43:52.084498 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" Jan 26 13:44:05 crc kubenswrapper[4881]: I0126 13:44:05.083148 4881 scope.go:117] "RemoveContainer" containerID="f2d5a02d29708f3d9e2c7109a72699fcc0b90eeb26f42ca424e6402bc02e962c" Jan 26 13:44:05 crc kubenswrapper[4881]: E0126 13:44:05.084255 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" Jan 26 13:44:19 crc kubenswrapper[4881]: I0126 13:44:19.083014 4881 scope.go:117] "RemoveContainer" containerID="f2d5a02d29708f3d9e2c7109a72699fcc0b90eeb26f42ca424e6402bc02e962c" Jan 26 13:44:19 crc kubenswrapper[4881]: E0126 13:44:19.084369 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" Jan 26 13:44:34 crc kubenswrapper[4881]: I0126 13:44:34.083241 4881 scope.go:117] "RemoveContainer" containerID="f2d5a02d29708f3d9e2c7109a72699fcc0b90eeb26f42ca424e6402bc02e962c" Jan 26 13:44:34 crc kubenswrapper[4881]: E0126 13:44:34.084169 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" Jan 26 13:44:47 crc kubenswrapper[4881]: I0126 13:44:47.083714 4881 scope.go:117] "RemoveContainer" containerID="f2d5a02d29708f3d9e2c7109a72699fcc0b90eeb26f42ca424e6402bc02e962c" Jan 26 13:44:47 crc kubenswrapper[4881]: E0126 13:44:47.084836 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" Jan 26 13:45:00 crc kubenswrapper[4881]: I0126 13:45:00.212239 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490585-hrdd4"] Jan 26 13:45:00 crc kubenswrapper[4881]: E0126 13:45:00.213434 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63297b73-d468-404f-8ee0-672bad8f88ea" containerName="extract-content" Jan 26 13:45:00 crc kubenswrapper[4881]: I0126 13:45:00.213456 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="63297b73-d468-404f-8ee0-672bad8f88ea" containerName="extract-content" Jan 26 13:45:00 crc kubenswrapper[4881]: E0126 13:45:00.213499 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63297b73-d468-404f-8ee0-672bad8f88ea" containerName="registry-server" Jan 26 13:45:00 crc kubenswrapper[4881]: I0126 13:45:00.213511 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="63297b73-d468-404f-8ee0-672bad8f88ea" containerName="registry-server" Jan 26 13:45:00 crc kubenswrapper[4881]: E0126 13:45:00.213553 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63297b73-d468-404f-8ee0-672bad8f88ea" containerName="extract-utilities" Jan 26 13:45:00 crc kubenswrapper[4881]: I0126 13:45:00.213567 4881 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="63297b73-d468-404f-8ee0-672bad8f88ea" containerName="extract-utilities" Jan 26 13:45:00 crc kubenswrapper[4881]: I0126 13:45:00.213892 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="63297b73-d468-404f-8ee0-672bad8f88ea" containerName="registry-server" Jan 26 13:45:00 crc kubenswrapper[4881]: I0126 13:45:00.215076 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490585-hrdd4" Jan 26 13:45:00 crc kubenswrapper[4881]: I0126 13:45:00.220059 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 26 13:45:00 crc kubenswrapper[4881]: I0126 13:45:00.220158 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 26 13:45:00 crc kubenswrapper[4881]: I0126 13:45:00.248404 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490585-hrdd4"] Jan 26 13:45:00 crc kubenswrapper[4881]: I0126 13:45:00.360620 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9479d57c-d338-4b7c-aeb1-831795cd103d-secret-volume\") pod \"collect-profiles-29490585-hrdd4\" (UID: \"9479d57c-d338-4b7c-aeb1-831795cd103d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490585-hrdd4" Jan 26 13:45:00 crc kubenswrapper[4881]: I0126 13:45:00.360940 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9479d57c-d338-4b7c-aeb1-831795cd103d-config-volume\") pod \"collect-profiles-29490585-hrdd4\" (UID: \"9479d57c-d338-4b7c-aeb1-831795cd103d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490585-hrdd4" Jan 26 13:45:00 crc kubenswrapper[4881]: I0126 13:45:00.361301 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dksmf\" (UniqueName: \"kubernetes.io/projected/9479d57c-d338-4b7c-aeb1-831795cd103d-kube-api-access-dksmf\") pod \"collect-profiles-29490585-hrdd4\" (UID: \"9479d57c-d338-4b7c-aeb1-831795cd103d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490585-hrdd4" Jan 26 13:45:00 crc kubenswrapper[4881]: I0126 13:45:00.463442 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dksmf\" (UniqueName: \"kubernetes.io/projected/9479d57c-d338-4b7c-aeb1-831795cd103d-kube-api-access-dksmf\") pod \"collect-profiles-29490585-hrdd4\" (UID: \"9479d57c-d338-4b7c-aeb1-831795cd103d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490585-hrdd4" Jan 26 13:45:00 crc kubenswrapper[4881]: I0126 13:45:00.463506 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9479d57c-d338-4b7c-aeb1-831795cd103d-secret-volume\") pod \"collect-profiles-29490585-hrdd4\" (UID: \"9479d57c-d338-4b7c-aeb1-831795cd103d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490585-hrdd4" Jan 26 13:45:00 crc kubenswrapper[4881]: I0126 13:45:00.463598 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/9479d57c-d338-4b7c-aeb1-831795cd103d-config-volume\") pod \"collect-profiles-29490585-hrdd4\" (UID: \"9479d57c-d338-4b7c-aeb1-831795cd103d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490585-hrdd4" Jan 26 13:45:00 crc kubenswrapper[4881]: I0126 13:45:00.465049 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9479d57c-d338-4b7c-aeb1-831795cd103d-config-volume\") pod \"collect-profiles-29490585-hrdd4\" (UID: \"9479d57c-d338-4b7c-aeb1-831795cd103d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490585-hrdd4" Jan 26 13:45:00 crc kubenswrapper[4881]: I0126 13:45:00.471365 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9479d57c-d338-4b7c-aeb1-831795cd103d-secret-volume\") pod \"collect-profiles-29490585-hrdd4\" (UID: \"9479d57c-d338-4b7c-aeb1-831795cd103d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490585-hrdd4" Jan 26 13:45:00 crc kubenswrapper[4881]: I0126 13:45:00.484976 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dksmf\" (UniqueName: \"kubernetes.io/projected/9479d57c-d338-4b7c-aeb1-831795cd103d-kube-api-access-dksmf\") pod \"collect-profiles-29490585-hrdd4\" (UID: \"9479d57c-d338-4b7c-aeb1-831795cd103d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490585-hrdd4" Jan 26 13:45:00 crc kubenswrapper[4881]: I0126 13:45:00.551180 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490585-hrdd4" Jan 26 13:45:01 crc kubenswrapper[4881]: I0126 13:45:01.047548 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490585-hrdd4"] Jan 26 13:45:02 crc kubenswrapper[4881]: I0126 13:45:02.058502 4881 generic.go:334] "Generic (PLEG): container finished" podID="9479d57c-d338-4b7c-aeb1-831795cd103d" containerID="27bed579c2cfdc8541d6dfcbeaba9b1aa55176269933d5c1ad6ba4f2c9c8ae16" exitCode=0 Jan 26 13:45:02 crc kubenswrapper[4881]: I0126 13:45:02.058833 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490585-hrdd4" event={"ID":"9479d57c-d338-4b7c-aeb1-831795cd103d","Type":"ContainerDied","Data":"27bed579c2cfdc8541d6dfcbeaba9b1aa55176269933d5c1ad6ba4f2c9c8ae16"} Jan 26 13:45:02 crc kubenswrapper[4881]: I0126 13:45:02.058862 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490585-hrdd4" event={"ID":"9479d57c-d338-4b7c-aeb1-831795cd103d","Type":"ContainerStarted","Data":"aa2088d1e10a1ca9fc7589790b0331c2e0910b4d8eafe9118b4c9ad64fd4f402"} Jan 26 13:45:02 crc kubenswrapper[4881]: I0126 13:45:02.092259 4881 scope.go:117] "RemoveContainer" containerID="f2d5a02d29708f3d9e2c7109a72699fcc0b90eeb26f42ca424e6402bc02e962c" Jan 26 13:45:03 crc kubenswrapper[4881]: I0126 13:45:03.071850 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" event={"ID":"ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19","Type":"ContainerStarted","Data":"ef7fc11d2349d6a3d6f72cb133293d4fdef0970d03bbf001711847e881a41349"} Jan 26 13:45:03 crc kubenswrapper[4881]: I0126 13:45:03.528294 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490585-hrdd4" Jan 26 13:45:03 crc kubenswrapper[4881]: I0126 13:45:03.627354 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9479d57c-d338-4b7c-aeb1-831795cd103d-config-volume\") pod \"9479d57c-d338-4b7c-aeb1-831795cd103d\" (UID: \"9479d57c-d338-4b7c-aeb1-831795cd103d\") " Jan 26 13:45:03 crc kubenswrapper[4881]: I0126 13:45:03.627448 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9479d57c-d338-4b7c-aeb1-831795cd103d-secret-volume\") pod \"9479d57c-d338-4b7c-aeb1-831795cd103d\" (UID: \"9479d57c-d338-4b7c-aeb1-831795cd103d\") " Jan 26 13:45:03 crc kubenswrapper[4881]: I0126 13:45:03.627493 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dksmf\" (UniqueName: \"kubernetes.io/projected/9479d57c-d338-4b7c-aeb1-831795cd103d-kube-api-access-dksmf\") pod \"9479d57c-d338-4b7c-aeb1-831795cd103d\" (UID: \"9479d57c-d338-4b7c-aeb1-831795cd103d\") " Jan 26 13:45:03 crc kubenswrapper[4881]: I0126 13:45:03.628118 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9479d57c-d338-4b7c-aeb1-831795cd103d-config-volume" (OuterVolumeSpecName: "config-volume") pod "9479d57c-d338-4b7c-aeb1-831795cd103d" (UID: "9479d57c-d338-4b7c-aeb1-831795cd103d"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:45:03 crc kubenswrapper[4881]: I0126 13:45:03.633278 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9479d57c-d338-4b7c-aeb1-831795cd103d-kube-api-access-dksmf" (OuterVolumeSpecName: "kube-api-access-dksmf") pod "9479d57c-d338-4b7c-aeb1-831795cd103d" (UID: "9479d57c-d338-4b7c-aeb1-831795cd103d"). InnerVolumeSpecName "kube-api-access-dksmf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:45:03 crc kubenswrapper[4881]: I0126 13:45:03.635079 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9479d57c-d338-4b7c-aeb1-831795cd103d-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "9479d57c-d338-4b7c-aeb1-831795cd103d" (UID: "9479d57c-d338-4b7c-aeb1-831795cd103d"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:45:03 crc kubenswrapper[4881]: I0126 13:45:03.729977 4881 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9479d57c-d338-4b7c-aeb1-831795cd103d-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 13:45:03 crc kubenswrapper[4881]: I0126 13:45:03.730032 4881 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9479d57c-d338-4b7c-aeb1-831795cd103d-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 26 13:45:03 crc kubenswrapper[4881]: I0126 13:45:03.730050 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dksmf\" (UniqueName: \"kubernetes.io/projected/9479d57c-d338-4b7c-aeb1-831795cd103d-kube-api-access-dksmf\") on node \"crc\" DevicePath \"\"" Jan 26 13:45:04 crc kubenswrapper[4881]: I0126 13:45:04.084260 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490585-hrdd4" Jan 26 13:45:04 crc kubenswrapper[4881]: I0126 13:45:04.093256 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490585-hrdd4" event={"ID":"9479d57c-d338-4b7c-aeb1-831795cd103d","Type":"ContainerDied","Data":"aa2088d1e10a1ca9fc7589790b0331c2e0910b4d8eafe9118b4c9ad64fd4f402"} Jan 26 13:45:04 crc kubenswrapper[4881]: I0126 13:45:04.093297 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aa2088d1e10a1ca9fc7589790b0331c2e0910b4d8eafe9118b4c9ad64fd4f402" Jan 26 13:45:04 crc kubenswrapper[4881]: I0126 13:45:04.617273 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490540-xpj9f"] Jan 26 13:45:04 crc kubenswrapper[4881]: I0126 13:45:04.629718 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490540-xpj9f"] Jan 26 13:45:06 crc kubenswrapper[4881]: I0126 13:45:06.092388 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d409655-cc9c-41d5-81b5-c93d256f63a7" path="/var/lib/kubelet/pods/6d409655-cc9c-41d5-81b5-c93d256f63a7/volumes" Jan 26 13:45:19 crc kubenswrapper[4881]: I0126 13:45:19.710887 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="32ed51d8-b401-412f-925e-0cff27777e55" containerName="galera" probeResult="failure" output="command timed out" Jan 26 13:45:19 crc kubenswrapper[4881]: I0126 13:45:19.711384 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="32ed51d8-b401-412f-925e-0cff27777e55" containerName="galera" probeResult="failure" output="command timed out" Jan 26 13:45:56 crc kubenswrapper[4881]: I0126 13:45:56.543372 4881 scope.go:117] "RemoveContainer" containerID="b859685fedd089c047b2df9e5034658f73a5946765d70777f13a64d73a906bdf" Jan 26 13:47:24 crc kubenswrapper[4881]: I0126 13:47:24.789901 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fwlbz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 13:47:24 crc kubenswrapper[4881]: I0126 13:47:24.790464 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 13:47:29 crc kubenswrapper[4881]: I0126 13:47:29.710858 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="32ed51d8-b401-412f-925e-0cff27777e55" containerName="galera" probeResult="failure" output="command timed out" Jan 26 13:47:29 crc kubenswrapper[4881]: I0126 13:47:29.710928 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="32ed51d8-b401-412f-925e-0cff27777e55" containerName="galera" probeResult="failure" output="command timed out" Jan 26 13:47:54 crc kubenswrapper[4881]: I0126 13:47:54.790425 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fwlbz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 13:47:54 crc kubenswrapper[4881]: I0126 13:47:54.792241 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 13:47:59 crc kubenswrapper[4881]: I0126 13:47:59.717046 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="32ed51d8-b401-412f-925e-0cff27777e55" containerName="galera" probeResult="failure" output="command timed out" Jan 26 13:48:24 crc kubenswrapper[4881]: I0126 13:48:24.789577 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fwlbz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 13:48:24 crc kubenswrapper[4881]: I0126 13:48:24.790027 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 13:48:24 crc kubenswrapper[4881]: I0126 13:48:24.790077 4881 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" Jan 26 13:48:24 crc kubenswrapper[4881]: I0126 13:48:24.790945 4881 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ef7fc11d2349d6a3d6f72cb133293d4fdef0970d03bbf001711847e881a41349"} pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 13:48:24 crc kubenswrapper[4881]: I0126 13:48:24.791020 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" containerName="machine-config-daemon" containerID="cri-o://ef7fc11d2349d6a3d6f72cb133293d4fdef0970d03bbf001711847e881a41349" gracePeriod=600 Jan 26 13:48:25 crc kubenswrapper[4881]: I0126 13:48:25.696419 4881 generic.go:334] "Generic (PLEG): container finished" podID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" containerID="ef7fc11d2349d6a3d6f72cb133293d4fdef0970d03bbf001711847e881a41349" exitCode=0 Jan 26 13:48:25 crc kubenswrapper[4881]: I0126 13:48:25.696539 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" event={"ID":"ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19","Type":"ContainerDied","Data":"ef7fc11d2349d6a3d6f72cb133293d4fdef0970d03bbf001711847e881a41349"} Jan 26 13:48:25 crc kubenswrapper[4881]: I0126 13:48:25.697225 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" event={"ID":"ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19","Type":"ContainerStarted","Data":"e58488a7a4b2d3d87c3f5812b5df6527294e08d8b777f8290a767507da8c72e2"} Jan 26 13:48:25 crc kubenswrapper[4881]: I0126 13:48:25.697274 
4881 scope.go:117] "RemoveContainer" containerID="f2d5a02d29708f3d9e2c7109a72699fcc0b90eeb26f42ca424e6402bc02e962c" Jan 26 13:50:20 crc kubenswrapper[4881]: I0126 13:50:20.282269 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-vjl86"] Jan 26 13:50:20 crc kubenswrapper[4881]: E0126 13:50:20.283622 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9479d57c-d338-4b7c-aeb1-831795cd103d" containerName="collect-profiles" Jan 26 13:50:20 crc kubenswrapper[4881]: I0126 13:50:20.283647 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="9479d57c-d338-4b7c-aeb1-831795cd103d" containerName="collect-profiles" Jan 26 13:50:20 crc kubenswrapper[4881]: I0126 13:50:20.284017 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="9479d57c-d338-4b7c-aeb1-831795cd103d" containerName="collect-profiles" Jan 26 13:50:20 crc kubenswrapper[4881]: I0126 13:50:20.286565 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vjl86" Jan 26 13:50:20 crc kubenswrapper[4881]: I0126 13:50:20.310171 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vjl86"] Jan 26 13:50:20 crc kubenswrapper[4881]: I0126 13:50:20.348890 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwqmh\" (UniqueName: \"kubernetes.io/projected/ff99eb81-d4ad-4a91-a12f-168925f81b97-kube-api-access-cwqmh\") pod \"community-operators-vjl86\" (UID: \"ff99eb81-d4ad-4a91-a12f-168925f81b97\") " pod="openshift-marketplace/community-operators-vjl86" Jan 26 13:50:20 crc kubenswrapper[4881]: I0126 13:50:20.348928 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff99eb81-d4ad-4a91-a12f-168925f81b97-catalog-content\") pod \"community-operators-vjl86\" (UID: \"ff99eb81-d4ad-4a91-a12f-168925f81b97\") " pod="openshift-marketplace/community-operators-vjl86" Jan 26 13:50:20 crc kubenswrapper[4881]: I0126 13:50:20.349044 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff99eb81-d4ad-4a91-a12f-168925f81b97-utilities\") pod \"community-operators-vjl86\" (UID: \"ff99eb81-d4ad-4a91-a12f-168925f81b97\") " pod="openshift-marketplace/community-operators-vjl86" Jan 26 13:50:20 crc kubenswrapper[4881]: I0126 13:50:20.451091 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff99eb81-d4ad-4a91-a12f-168925f81b97-utilities\") pod \"community-operators-vjl86\" (UID: \"ff99eb81-d4ad-4a91-a12f-168925f81b97\") " pod="openshift-marketplace/community-operators-vjl86" Jan 26 13:50:20 crc kubenswrapper[4881]: I0126 13:50:20.451248 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cwqmh\" (UniqueName: \"kubernetes.io/projected/ff99eb81-d4ad-4a91-a12f-168925f81b97-kube-api-access-cwqmh\") pod \"community-operators-vjl86\" (UID: \"ff99eb81-d4ad-4a91-a12f-168925f81b97\") " pod="openshift-marketplace/community-operators-vjl86" Jan 26 13:50:20 crc kubenswrapper[4881]: I0126 13:50:20.451275 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/ff99eb81-d4ad-4a91-a12f-168925f81b97-catalog-content\") pod \"community-operators-vjl86\" (UID: \"ff99eb81-d4ad-4a91-a12f-168925f81b97\") " pod="openshift-marketplace/community-operators-vjl86" Jan 26 13:50:20 crc kubenswrapper[4881]: I0126 13:50:20.451876 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff99eb81-d4ad-4a91-a12f-168925f81b97-catalog-content\") pod \"community-operators-vjl86\" (UID: \"ff99eb81-d4ad-4a91-a12f-168925f81b97\") " pod="openshift-marketplace/community-operators-vjl86" Jan 26 13:50:20 crc kubenswrapper[4881]: I0126 13:50:20.452163 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff99eb81-d4ad-4a91-a12f-168925f81b97-utilities\") pod \"community-operators-vjl86\" (UID: \"ff99eb81-d4ad-4a91-a12f-168925f81b97\") " pod="openshift-marketplace/community-operators-vjl86" Jan 26 13:50:20 crc kubenswrapper[4881]: I0126 13:50:20.476473 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cwqmh\" (UniqueName: \"kubernetes.io/projected/ff99eb81-d4ad-4a91-a12f-168925f81b97-kube-api-access-cwqmh\") pod \"community-operators-vjl86\" (UID: \"ff99eb81-d4ad-4a91-a12f-168925f81b97\") " pod="openshift-marketplace/community-operators-vjl86" Jan 26 13:50:20 crc kubenswrapper[4881]: I0126 13:50:20.636976 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vjl86" Jan 26 13:50:21 crc kubenswrapper[4881]: I0126 13:50:21.181004 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vjl86"] Jan 26 13:50:21 crc kubenswrapper[4881]: W0126 13:50:21.184598 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podff99eb81_d4ad_4a91_a12f_168925f81b97.slice/crio-42cb060aabdc5bcf398c0d3fa80e4ea8309278200b8f2e8ec02c572ba7dc2094 WatchSource:0}: Error finding container 42cb060aabdc5bcf398c0d3fa80e4ea8309278200b8f2e8ec02c572ba7dc2094: Status 404 returned error can't find the container with id 42cb060aabdc5bcf398c0d3fa80e4ea8309278200b8f2e8ec02c572ba7dc2094 Jan 26 13:50:22 crc kubenswrapper[4881]: I0126 13:50:22.014467 4881 generic.go:334] "Generic (PLEG): container finished" podID="ff99eb81-d4ad-4a91-a12f-168925f81b97" containerID="7150ea78711cd892a29d540d6f28b9a3435ec01cbd845ce728a9d5acb8771c45" exitCode=0 Jan 26 13:50:22 crc kubenswrapper[4881]: I0126 13:50:22.014559 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vjl86" event={"ID":"ff99eb81-d4ad-4a91-a12f-168925f81b97","Type":"ContainerDied","Data":"7150ea78711cd892a29d540d6f28b9a3435ec01cbd845ce728a9d5acb8771c45"} Jan 26 13:50:22 crc kubenswrapper[4881]: I0126 13:50:22.015052 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vjl86" event={"ID":"ff99eb81-d4ad-4a91-a12f-168925f81b97","Type":"ContainerStarted","Data":"42cb060aabdc5bcf398c0d3fa80e4ea8309278200b8f2e8ec02c572ba7dc2094"} Jan 26 13:50:22 crc kubenswrapper[4881]: I0126 13:50:22.018131 4881 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 13:50:25 crc kubenswrapper[4881]: I0126 13:50:25.049261 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vjl86" 
event={"ID":"ff99eb81-d4ad-4a91-a12f-168925f81b97","Type":"ContainerStarted","Data":"2a5128ec458003135f8952adb2b6c36267174bfe3befa5d78ffa55005b5fdc5c"} Jan 26 13:50:27 crc kubenswrapper[4881]: I0126 13:50:27.073751 4881 generic.go:334] "Generic (PLEG): container finished" podID="ff99eb81-d4ad-4a91-a12f-168925f81b97" containerID="2a5128ec458003135f8952adb2b6c36267174bfe3befa5d78ffa55005b5fdc5c" exitCode=0 Jan 26 13:50:27 crc kubenswrapper[4881]: I0126 13:50:27.073849 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vjl86" event={"ID":"ff99eb81-d4ad-4a91-a12f-168925f81b97","Type":"ContainerDied","Data":"2a5128ec458003135f8952adb2b6c36267174bfe3befa5d78ffa55005b5fdc5c"} Jan 26 13:50:29 crc kubenswrapper[4881]: I0126 13:50:29.098583 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vjl86" event={"ID":"ff99eb81-d4ad-4a91-a12f-168925f81b97","Type":"ContainerStarted","Data":"27c4fa320f495e500fdbe3313368c62d2bf4e19eea4afd26a7afe76bc6775d21"} Jan 26 13:50:29 crc kubenswrapper[4881]: I0126 13:50:29.123454 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-vjl86" podStartSLOduration=3.638922761 podStartE2EDuration="9.123435248s" podCreationTimestamp="2026-01-26 13:50:20 +0000 UTC" firstStartedPulling="2026-01-26 13:50:22.017777251 +0000 UTC m=+4494.497087297" lastFinishedPulling="2026-01-26 13:50:27.502289758 +0000 UTC m=+4499.981599784" observedRunningTime="2026-01-26 13:50:29.113831834 +0000 UTC m=+4501.593141900" watchObservedRunningTime="2026-01-26 13:50:29.123435248 +0000 UTC m=+4501.602745274" Jan 26 13:50:30 crc kubenswrapper[4881]: I0126 13:50:30.637753 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-vjl86" Jan 26 13:50:30 crc kubenswrapper[4881]: I0126 13:50:30.638459 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-vjl86" Jan 26 13:50:31 crc kubenswrapper[4881]: I0126 13:50:31.689873 4881 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-vjl86" podUID="ff99eb81-d4ad-4a91-a12f-168925f81b97" containerName="registry-server" probeResult="failure" output=< Jan 26 13:50:31 crc kubenswrapper[4881]: timeout: failed to connect service ":50051" within 1s Jan 26 13:50:31 crc kubenswrapper[4881]: > Jan 26 13:50:40 crc kubenswrapper[4881]: I0126 13:50:40.724814 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-vjl86" Jan 26 13:50:40 crc kubenswrapper[4881]: I0126 13:50:40.798650 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-vjl86" Jan 26 13:50:40 crc kubenswrapper[4881]: I0126 13:50:40.968700 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vjl86"] Jan 26 13:50:42 crc kubenswrapper[4881]: I0126 13:50:42.271623 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-vjl86" podUID="ff99eb81-d4ad-4a91-a12f-168925f81b97" containerName="registry-server" containerID="cri-o://27c4fa320f495e500fdbe3313368c62d2bf4e19eea4afd26a7afe76bc6775d21" gracePeriod=2 Jan 26 13:50:42 crc kubenswrapper[4881]: I0126 13:50:42.835962 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vjl86" Jan 26 13:50:42 crc kubenswrapper[4881]: I0126 13:50:42.915425 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cwqmh\" (UniqueName: \"kubernetes.io/projected/ff99eb81-d4ad-4a91-a12f-168925f81b97-kube-api-access-cwqmh\") pod \"ff99eb81-d4ad-4a91-a12f-168925f81b97\" (UID: \"ff99eb81-d4ad-4a91-a12f-168925f81b97\") " Jan 26 13:50:42 crc kubenswrapper[4881]: I0126 13:50:42.915581 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff99eb81-d4ad-4a91-a12f-168925f81b97-utilities\") pod \"ff99eb81-d4ad-4a91-a12f-168925f81b97\" (UID: \"ff99eb81-d4ad-4a91-a12f-168925f81b97\") " Jan 26 13:50:42 crc kubenswrapper[4881]: I0126 13:50:42.915832 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff99eb81-d4ad-4a91-a12f-168925f81b97-catalog-content\") pod \"ff99eb81-d4ad-4a91-a12f-168925f81b97\" (UID: \"ff99eb81-d4ad-4a91-a12f-168925f81b97\") " Jan 26 13:50:42 crc kubenswrapper[4881]: I0126 13:50:42.916208 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff99eb81-d4ad-4a91-a12f-168925f81b97-utilities" (OuterVolumeSpecName: "utilities") pod "ff99eb81-d4ad-4a91-a12f-168925f81b97" (UID: "ff99eb81-d4ad-4a91-a12f-168925f81b97"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:50:42 crc kubenswrapper[4881]: I0126 13:50:42.916945 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff99eb81-d4ad-4a91-a12f-168925f81b97-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 13:50:42 crc kubenswrapper[4881]: I0126 13:50:42.921457 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff99eb81-d4ad-4a91-a12f-168925f81b97-kube-api-access-cwqmh" (OuterVolumeSpecName: "kube-api-access-cwqmh") pod "ff99eb81-d4ad-4a91-a12f-168925f81b97" (UID: "ff99eb81-d4ad-4a91-a12f-168925f81b97"). InnerVolumeSpecName "kube-api-access-cwqmh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:50:42 crc kubenswrapper[4881]: I0126 13:50:42.966729 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff99eb81-d4ad-4a91-a12f-168925f81b97-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ff99eb81-d4ad-4a91-a12f-168925f81b97" (UID: "ff99eb81-d4ad-4a91-a12f-168925f81b97"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:50:43 crc kubenswrapper[4881]: I0126 13:50:43.018804 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff99eb81-d4ad-4a91-a12f-168925f81b97-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 13:50:43 crc kubenswrapper[4881]: I0126 13:50:43.018839 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cwqmh\" (UniqueName: \"kubernetes.io/projected/ff99eb81-d4ad-4a91-a12f-168925f81b97-kube-api-access-cwqmh\") on node \"crc\" DevicePath \"\"" Jan 26 13:50:43 crc kubenswrapper[4881]: I0126 13:50:43.288664 4881 generic.go:334] "Generic (PLEG): container finished" podID="ff99eb81-d4ad-4a91-a12f-168925f81b97" containerID="27c4fa320f495e500fdbe3313368c62d2bf4e19eea4afd26a7afe76bc6775d21" exitCode=0 Jan 26 13:50:43 crc kubenswrapper[4881]: I0126 13:50:43.288750 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vjl86" event={"ID":"ff99eb81-d4ad-4a91-a12f-168925f81b97","Type":"ContainerDied","Data":"27c4fa320f495e500fdbe3313368c62d2bf4e19eea4afd26a7afe76bc6775d21"} Jan 26 13:50:43 crc kubenswrapper[4881]: I0126 13:50:43.288760 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vjl86" Jan 26 13:50:43 crc kubenswrapper[4881]: I0126 13:50:43.289254 4881 scope.go:117] "RemoveContainer" containerID="27c4fa320f495e500fdbe3313368c62d2bf4e19eea4afd26a7afe76bc6775d21" Jan 26 13:50:43 crc kubenswrapper[4881]: I0126 13:50:43.289049 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vjl86" event={"ID":"ff99eb81-d4ad-4a91-a12f-168925f81b97","Type":"ContainerDied","Data":"42cb060aabdc5bcf398c0d3fa80e4ea8309278200b8f2e8ec02c572ba7dc2094"} Jan 26 13:50:43 crc kubenswrapper[4881]: I0126 13:50:43.321910 4881 scope.go:117] "RemoveContainer" containerID="2a5128ec458003135f8952adb2b6c36267174bfe3befa5d78ffa55005b5fdc5c" Jan 26 13:50:43 crc kubenswrapper[4881]: I0126 13:50:43.357228 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vjl86"] Jan 26 13:50:43 crc kubenswrapper[4881]: I0126 13:50:43.371824 4881 scope.go:117] "RemoveContainer" containerID="7150ea78711cd892a29d540d6f28b9a3435ec01cbd845ce728a9d5acb8771c45" Jan 26 13:50:43 crc kubenswrapper[4881]: I0126 13:50:43.372682 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-vjl86"] Jan 26 13:50:43 crc kubenswrapper[4881]: I0126 13:50:43.420951 4881 scope.go:117] "RemoveContainer" containerID="27c4fa320f495e500fdbe3313368c62d2bf4e19eea4afd26a7afe76bc6775d21" Jan 26 13:50:43 crc kubenswrapper[4881]: E0126 13:50:43.421553 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"27c4fa320f495e500fdbe3313368c62d2bf4e19eea4afd26a7afe76bc6775d21\": container with ID starting with 27c4fa320f495e500fdbe3313368c62d2bf4e19eea4afd26a7afe76bc6775d21 not found: ID does not exist" containerID="27c4fa320f495e500fdbe3313368c62d2bf4e19eea4afd26a7afe76bc6775d21" Jan 26 13:50:43 crc kubenswrapper[4881]: I0126 13:50:43.421596 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27c4fa320f495e500fdbe3313368c62d2bf4e19eea4afd26a7afe76bc6775d21"} err="failed to get container status 
\"27c4fa320f495e500fdbe3313368c62d2bf4e19eea4afd26a7afe76bc6775d21\": rpc error: code = NotFound desc = could not find container \"27c4fa320f495e500fdbe3313368c62d2bf4e19eea4afd26a7afe76bc6775d21\": container with ID starting with 27c4fa320f495e500fdbe3313368c62d2bf4e19eea4afd26a7afe76bc6775d21 not found: ID does not exist" Jan 26 13:50:43 crc kubenswrapper[4881]: I0126 13:50:43.421629 4881 scope.go:117] "RemoveContainer" containerID="2a5128ec458003135f8952adb2b6c36267174bfe3befa5d78ffa55005b5fdc5c" Jan 26 13:50:43 crc kubenswrapper[4881]: E0126 13:50:43.422092 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a5128ec458003135f8952adb2b6c36267174bfe3befa5d78ffa55005b5fdc5c\": container with ID starting with 2a5128ec458003135f8952adb2b6c36267174bfe3befa5d78ffa55005b5fdc5c not found: ID does not exist" containerID="2a5128ec458003135f8952adb2b6c36267174bfe3befa5d78ffa55005b5fdc5c" Jan 26 13:50:43 crc kubenswrapper[4881]: I0126 13:50:43.422296 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a5128ec458003135f8952adb2b6c36267174bfe3befa5d78ffa55005b5fdc5c"} err="failed to get container status \"2a5128ec458003135f8952adb2b6c36267174bfe3befa5d78ffa55005b5fdc5c\": rpc error: code = NotFound desc = could not find container \"2a5128ec458003135f8952adb2b6c36267174bfe3befa5d78ffa55005b5fdc5c\": container with ID starting with 2a5128ec458003135f8952adb2b6c36267174bfe3befa5d78ffa55005b5fdc5c not found: ID does not exist" Jan 26 13:50:43 crc kubenswrapper[4881]: I0126 13:50:43.422486 4881 scope.go:117] "RemoveContainer" containerID="7150ea78711cd892a29d540d6f28b9a3435ec01cbd845ce728a9d5acb8771c45" Jan 26 13:50:43 crc kubenswrapper[4881]: E0126 13:50:43.423209 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7150ea78711cd892a29d540d6f28b9a3435ec01cbd845ce728a9d5acb8771c45\": container with ID starting with 7150ea78711cd892a29d540d6f28b9a3435ec01cbd845ce728a9d5acb8771c45 not found: ID does not exist" containerID="7150ea78711cd892a29d540d6f28b9a3435ec01cbd845ce728a9d5acb8771c45" Jan 26 13:50:43 crc kubenswrapper[4881]: I0126 13:50:43.423235 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7150ea78711cd892a29d540d6f28b9a3435ec01cbd845ce728a9d5acb8771c45"} err="failed to get container status \"7150ea78711cd892a29d540d6f28b9a3435ec01cbd845ce728a9d5acb8771c45\": rpc error: code = NotFound desc = could not find container \"7150ea78711cd892a29d540d6f28b9a3435ec01cbd845ce728a9d5acb8771c45\": container with ID starting with 7150ea78711cd892a29d540d6f28b9a3435ec01cbd845ce728a9d5acb8771c45 not found: ID does not exist" Jan 26 13:50:44 crc kubenswrapper[4881]: I0126 13:50:44.105630 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff99eb81-d4ad-4a91-a12f-168925f81b97" path="/var/lib/kubelet/pods/ff99eb81-d4ad-4a91-a12f-168925f81b97/volumes" Jan 26 13:50:54 crc kubenswrapper[4881]: I0126 13:50:54.789760 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fwlbz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 13:50:54 crc kubenswrapper[4881]: I0126 13:50:54.790354 4881 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 13:51:24 crc kubenswrapper[4881]: I0126 13:51:24.790228 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fwlbz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 13:51:24 crc kubenswrapper[4881]: I0126 13:51:24.791010 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 13:51:54 crc kubenswrapper[4881]: I0126 13:51:54.790048 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fwlbz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 13:51:54 crc kubenswrapper[4881]: I0126 13:51:54.791067 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 13:51:54 crc kubenswrapper[4881]: I0126 13:51:54.791143 4881 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" Jan 26 13:51:54 crc kubenswrapper[4881]: I0126 13:51:54.792217 4881 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e58488a7a4b2d3d87c3f5812b5df6527294e08d8b777f8290a767507da8c72e2"} pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 13:51:54 crc kubenswrapper[4881]: I0126 13:51:54.792278 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" containerName="machine-config-daemon" containerID="cri-o://e58488a7a4b2d3d87c3f5812b5df6527294e08d8b777f8290a767507da8c72e2" gracePeriod=600 Jan 26 13:51:54 crc kubenswrapper[4881]: E0126 13:51:54.925465 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" Jan 26 13:51:55 crc kubenswrapper[4881]: I0126 13:51:55.107428 4881 generic.go:334] "Generic (PLEG): container finished" podID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" containerID="e58488a7a4b2d3d87c3f5812b5df6527294e08d8b777f8290a767507da8c72e2" exitCode=0 Jan 
26 13:51:55 crc kubenswrapper[4881]: I0126 13:51:55.107685 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" event={"ID":"ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19","Type":"ContainerDied","Data":"e58488a7a4b2d3d87c3f5812b5df6527294e08d8b777f8290a767507da8c72e2"} Jan 26 13:51:55 crc kubenswrapper[4881]: I0126 13:51:55.107782 4881 scope.go:117] "RemoveContainer" containerID="ef7fc11d2349d6a3d6f72cb133293d4fdef0970d03bbf001711847e881a41349" Jan 26 13:51:55 crc kubenswrapper[4881]: I0126 13:51:55.108449 4881 scope.go:117] "RemoveContainer" containerID="e58488a7a4b2d3d87c3f5812b5df6527294e08d8b777f8290a767507da8c72e2" Jan 26 13:51:55 crc kubenswrapper[4881]: E0126 13:51:55.108902 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" Jan 26 13:52:07 crc kubenswrapper[4881]: I0126 13:52:07.083276 4881 scope.go:117] "RemoveContainer" containerID="e58488a7a4b2d3d87c3f5812b5df6527294e08d8b777f8290a767507da8c72e2" Jan 26 13:52:07 crc kubenswrapper[4881]: E0126 13:52:07.085717 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" Jan 26 13:52:22 crc kubenswrapper[4881]: I0126 13:52:22.082814 4881 scope.go:117] "RemoveContainer" containerID="e58488a7a4b2d3d87c3f5812b5df6527294e08d8b777f8290a767507da8c72e2" Jan 26 13:52:22 crc kubenswrapper[4881]: E0126 13:52:22.085406 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" Jan 26 13:52:37 crc kubenswrapper[4881]: I0126 13:52:37.083621 4881 scope.go:117] "RemoveContainer" containerID="e58488a7a4b2d3d87c3f5812b5df6527294e08d8b777f8290a767507da8c72e2" Jan 26 13:52:37 crc kubenswrapper[4881]: E0126 13:52:37.086086 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" Jan 26 13:52:52 crc kubenswrapper[4881]: I0126 13:52:52.082644 4881 scope.go:117] "RemoveContainer" containerID="e58488a7a4b2d3d87c3f5812b5df6527294e08d8b777f8290a767507da8c72e2" Jan 26 13:52:52 crc kubenswrapper[4881]: E0126 13:52:52.083471 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" Jan 26 13:53:05 crc kubenswrapper[4881]: I0126 13:53:05.083297 4881 scope.go:117] "RemoveContainer" containerID="e58488a7a4b2d3d87c3f5812b5df6527294e08d8b777f8290a767507da8c72e2" Jan 26 13:53:05 crc kubenswrapper[4881]: E0126 13:53:05.084480 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" Jan 26 13:53:16 crc kubenswrapper[4881]: I0126 13:53:16.082336 4881 scope.go:117] "RemoveContainer" containerID="e58488a7a4b2d3d87c3f5812b5df6527294e08d8b777f8290a767507da8c72e2" Jan 26 13:53:16 crc kubenswrapper[4881]: E0126 13:53:16.083170 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" Jan 26 13:53:26 crc kubenswrapper[4881]: I0126 13:53:26.636974 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-prfl2"] Jan 26 13:53:26 crc kubenswrapper[4881]: E0126 13:53:26.638001 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff99eb81-d4ad-4a91-a12f-168925f81b97" containerName="registry-server" Jan 26 13:53:26 crc kubenswrapper[4881]: I0126 13:53:26.638019 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff99eb81-d4ad-4a91-a12f-168925f81b97" containerName="registry-server" Jan 26 13:53:26 crc kubenswrapper[4881]: E0126 13:53:26.638032 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff99eb81-d4ad-4a91-a12f-168925f81b97" containerName="extract-utilities" Jan 26 13:53:26 crc kubenswrapper[4881]: I0126 13:53:26.638041 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff99eb81-d4ad-4a91-a12f-168925f81b97" containerName="extract-utilities" Jan 26 13:53:26 crc kubenswrapper[4881]: E0126 13:53:26.638078 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff99eb81-d4ad-4a91-a12f-168925f81b97" containerName="extract-content" Jan 26 13:53:26 crc kubenswrapper[4881]: I0126 13:53:26.638085 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff99eb81-d4ad-4a91-a12f-168925f81b97" containerName="extract-content" Jan 26 13:53:26 crc kubenswrapper[4881]: I0126 13:53:26.638326 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff99eb81-d4ad-4a91-a12f-168925f81b97" containerName="registry-server" Jan 26 13:53:26 crc kubenswrapper[4881]: I0126 13:53:26.640272 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-prfl2" Jan 26 13:53:26 crc kubenswrapper[4881]: I0126 13:53:26.666952 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-prfl2"] Jan 26 13:53:26 crc kubenswrapper[4881]: I0126 13:53:26.761879 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-str6p\" (UniqueName: \"kubernetes.io/projected/edcd433d-7c48-4d93-ad77-cc0254add7f5-kube-api-access-str6p\") pod \"certified-operators-prfl2\" (UID: \"edcd433d-7c48-4d93-ad77-cc0254add7f5\") " pod="openshift-marketplace/certified-operators-prfl2" Jan 26 13:53:26 crc kubenswrapper[4881]: I0126 13:53:26.761939 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/edcd433d-7c48-4d93-ad77-cc0254add7f5-utilities\") pod \"certified-operators-prfl2\" (UID: \"edcd433d-7c48-4d93-ad77-cc0254add7f5\") " pod="openshift-marketplace/certified-operators-prfl2" Jan 26 13:53:26 crc kubenswrapper[4881]: I0126 13:53:26.762116 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/edcd433d-7c48-4d93-ad77-cc0254add7f5-catalog-content\") pod \"certified-operators-prfl2\" (UID: \"edcd433d-7c48-4d93-ad77-cc0254add7f5\") " pod="openshift-marketplace/certified-operators-prfl2" Jan 26 13:53:26 crc kubenswrapper[4881]: I0126 13:53:26.864163 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-str6p\" (UniqueName: \"kubernetes.io/projected/edcd433d-7c48-4d93-ad77-cc0254add7f5-kube-api-access-str6p\") pod \"certified-operators-prfl2\" (UID: \"edcd433d-7c48-4d93-ad77-cc0254add7f5\") " pod="openshift-marketplace/certified-operators-prfl2" Jan 26 13:53:26 crc kubenswrapper[4881]: I0126 13:53:26.864250 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/edcd433d-7c48-4d93-ad77-cc0254add7f5-utilities\") pod \"certified-operators-prfl2\" (UID: \"edcd433d-7c48-4d93-ad77-cc0254add7f5\") " pod="openshift-marketplace/certified-operators-prfl2" Jan 26 13:53:26 crc kubenswrapper[4881]: I0126 13:53:26.864392 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/edcd433d-7c48-4d93-ad77-cc0254add7f5-catalog-content\") pod \"certified-operators-prfl2\" (UID: \"edcd433d-7c48-4d93-ad77-cc0254add7f5\") " pod="openshift-marketplace/certified-operators-prfl2" Jan 26 13:53:26 crc kubenswrapper[4881]: I0126 13:53:26.865429 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/edcd433d-7c48-4d93-ad77-cc0254add7f5-utilities\") pod \"certified-operators-prfl2\" (UID: \"edcd433d-7c48-4d93-ad77-cc0254add7f5\") " pod="openshift-marketplace/certified-operators-prfl2" Jan 26 13:53:26 crc kubenswrapper[4881]: I0126 13:53:26.865496 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/edcd433d-7c48-4d93-ad77-cc0254add7f5-catalog-content\") pod \"certified-operators-prfl2\" (UID: \"edcd433d-7c48-4d93-ad77-cc0254add7f5\") " pod="openshift-marketplace/certified-operators-prfl2" Jan 26 13:53:26 crc kubenswrapper[4881]: I0126 13:53:26.888499 4881 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-str6p\" (UniqueName: \"kubernetes.io/projected/edcd433d-7c48-4d93-ad77-cc0254add7f5-kube-api-access-str6p\") pod \"certified-operators-prfl2\" (UID: \"edcd433d-7c48-4d93-ad77-cc0254add7f5\") " pod="openshift-marketplace/certified-operators-prfl2" Jan 26 13:53:26 crc kubenswrapper[4881]: I0126 13:53:26.965606 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-prfl2" Jan 26 13:53:27 crc kubenswrapper[4881]: I0126 13:53:27.477279 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-prfl2"] Jan 26 13:53:28 crc kubenswrapper[4881]: I0126 13:53:28.144160 4881 generic.go:334] "Generic (PLEG): container finished" podID="edcd433d-7c48-4d93-ad77-cc0254add7f5" containerID="1f541eadfdfd3763c35c1787497e2e451181752e9e8d51d8764e86942670fde1" exitCode=0 Jan 26 13:53:28 crc kubenswrapper[4881]: I0126 13:53:28.144301 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-prfl2" event={"ID":"edcd433d-7c48-4d93-ad77-cc0254add7f5","Type":"ContainerDied","Data":"1f541eadfdfd3763c35c1787497e2e451181752e9e8d51d8764e86942670fde1"} Jan 26 13:53:28 crc kubenswrapper[4881]: I0126 13:53:28.144481 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-prfl2" event={"ID":"edcd433d-7c48-4d93-ad77-cc0254add7f5","Type":"ContainerStarted","Data":"38d0b6442fa914243c595f3d51da64662e42a317fa288786126d2e9aa2605646"} Jan 26 13:53:29 crc kubenswrapper[4881]: I0126 13:53:29.154828 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-prfl2" event={"ID":"edcd433d-7c48-4d93-ad77-cc0254add7f5","Type":"ContainerStarted","Data":"dde4d133c62b9333f28f2dd1e470f01b5be6f919a52ca6885ede8aea49e90c94"} Jan 26 13:53:30 crc kubenswrapper[4881]: I0126 13:53:30.088039 4881 scope.go:117] "RemoveContainer" containerID="e58488a7a4b2d3d87c3f5812b5df6527294e08d8b777f8290a767507da8c72e2" Jan 26 13:53:30 crc kubenswrapper[4881]: E0126 13:53:30.088853 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" Jan 26 13:53:31 crc kubenswrapper[4881]: I0126 13:53:31.184126 4881 generic.go:334] "Generic (PLEG): container finished" podID="edcd433d-7c48-4d93-ad77-cc0254add7f5" containerID="dde4d133c62b9333f28f2dd1e470f01b5be6f919a52ca6885ede8aea49e90c94" exitCode=0 Jan 26 13:53:31 crc kubenswrapper[4881]: I0126 13:53:31.184362 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-prfl2" event={"ID":"edcd433d-7c48-4d93-ad77-cc0254add7f5","Type":"ContainerDied","Data":"dde4d133c62b9333f28f2dd1e470f01b5be6f919a52ca6885ede8aea49e90c94"} Jan 26 13:53:33 crc kubenswrapper[4881]: I0126 13:53:33.208261 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-prfl2" event={"ID":"edcd433d-7c48-4d93-ad77-cc0254add7f5","Type":"ContainerStarted","Data":"e206048e361b5412009219da619f004cfa8b0160eb384534f5f8f7ec093d54f7"} Jan 26 13:53:33 crc kubenswrapper[4881]: I0126 13:53:33.242845 4881 
Jan 26 13:53:33 crc kubenswrapper[4881]: I0126 13:53:33.242845 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-prfl2" podStartSLOduration=3.305130521 podStartE2EDuration="7.24282193s" podCreationTimestamp="2026-01-26 13:53:26 +0000 UTC" firstStartedPulling="2026-01-26 13:53:28.146439762 +0000 UTC m=+4680.625749798" lastFinishedPulling="2026-01-26 13:53:32.084131161 +0000 UTC m=+4684.563441207" observedRunningTime="2026-01-26 13:53:33.239163421 +0000 UTC m=+4685.718473487" watchObservedRunningTime="2026-01-26 13:53:33.24282193 +0000 UTC m=+4685.722131996"
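
The "Observed pod startup duration" entry above is internally consistent: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that figure minus the image-pull window taken on the monotonic clock (the m=+... offsets). Checking with the entry's own numbers (Python):

    # Values copied from the latency entry above.
    e2e          = 7.24282193        # 13:53:33.24282193 - 13:53:26 (wall clock)
    first_pull_m = 4680.625749798    # firstStartedPulling, monotonic offset
    last_pull_m  = 4684.563441207    # lastFinishedPulling, monotonic offset

    slo = e2e - (last_pull_m - first_pull_m)
    print(f"podStartSLOduration={slo:.9f}")   # -> 3.305130521, as logged

The same identity holds for the redhat-operators-5vkvn and redhat-marketplace-s4hhf latency entries further down.
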
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:53:40 crc kubenswrapper[4881]: I0126 13:53:40.025632 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/edcd433d-7c48-4d93-ad77-cc0254add7f5-utilities" (OuterVolumeSpecName: "utilities") pod "edcd433d-7c48-4d93-ad77-cc0254add7f5" (UID: "edcd433d-7c48-4d93-ad77-cc0254add7f5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:53:40 crc kubenswrapper[4881]: I0126 13:53:40.051673 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/edcd433d-7c48-4d93-ad77-cc0254add7f5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "edcd433d-7c48-4d93-ad77-cc0254add7f5" (UID: "edcd433d-7c48-4d93-ad77-cc0254add7f5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:53:40 crc kubenswrapper[4881]: I0126 13:53:40.107648 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-str6p\" (UniqueName: \"kubernetes.io/projected/edcd433d-7c48-4d93-ad77-cc0254add7f5-kube-api-access-str6p\") on node \"crc\" DevicePath \"\"" Jan 26 13:53:40 crc kubenswrapper[4881]: I0126 13:53:40.107681 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/edcd433d-7c48-4d93-ad77-cc0254add7f5-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 13:53:40 crc kubenswrapper[4881]: I0126 13:53:40.107691 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/edcd433d-7c48-4d93-ad77-cc0254add7f5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 13:53:40 crc kubenswrapper[4881]: I0126 13:53:40.301693 4881 generic.go:334] "Generic (PLEG): container finished" podID="edcd433d-7c48-4d93-ad77-cc0254add7f5" containerID="e206048e361b5412009219da619f004cfa8b0160eb384534f5f8f7ec093d54f7" exitCode=0 Jan 26 13:53:40 crc kubenswrapper[4881]: I0126 13:53:40.301735 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-prfl2" event={"ID":"edcd433d-7c48-4d93-ad77-cc0254add7f5","Type":"ContainerDied","Data":"e206048e361b5412009219da619f004cfa8b0160eb384534f5f8f7ec093d54f7"} Jan 26 13:53:40 crc kubenswrapper[4881]: I0126 13:53:40.301802 4881 util.go:48] "No ready sandbox for pod can be found. 
Jan 26 13:53:40 crc kubenswrapper[4881]: I0126 13:53:40.301693 4881 generic.go:334] "Generic (PLEG): container finished" podID="edcd433d-7c48-4d93-ad77-cc0254add7f5" containerID="e206048e361b5412009219da619f004cfa8b0160eb384534f5f8f7ec093d54f7" exitCode=0
Jan 26 13:53:40 crc kubenswrapper[4881]: I0126 13:53:40.301735 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-prfl2" event={"ID":"edcd433d-7c48-4d93-ad77-cc0254add7f5","Type":"ContainerDied","Data":"e206048e361b5412009219da619f004cfa8b0160eb384534f5f8f7ec093d54f7"}
Jan 26 13:53:40 crc kubenswrapper[4881]: I0126 13:53:40.301802 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-prfl2"
Jan 26 13:53:40 crc kubenswrapper[4881]: I0126 13:53:40.301821 4881 scope.go:117] "RemoveContainer" containerID="e206048e361b5412009219da619f004cfa8b0160eb384534f5f8f7ec093d54f7"
Jan 26 13:53:40 crc kubenswrapper[4881]: I0126 13:53:40.301809 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-prfl2" event={"ID":"edcd433d-7c48-4d93-ad77-cc0254add7f5","Type":"ContainerDied","Data":"38d0b6442fa914243c595f3d51da64662e42a317fa288786126d2e9aa2605646"}
Jan 26 13:53:40 crc kubenswrapper[4881]: I0126 13:53:40.337619 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-prfl2"]
Jan 26 13:53:40 crc kubenswrapper[4881]: I0126 13:53:40.338597 4881 scope.go:117] "RemoveContainer" containerID="dde4d133c62b9333f28f2dd1e470f01b5be6f919a52ca6885ede8aea49e90c94"
Jan 26 13:53:40 crc kubenswrapper[4881]: I0126 13:53:40.345791 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-prfl2"]
container \"dde4d133c62b9333f28f2dd1e470f01b5be6f919a52ca6885ede8aea49e90c94\": container with ID starting with dde4d133c62b9333f28f2dd1e470f01b5be6f919a52ca6885ede8aea49e90c94 not found: ID does not exist" Jan 26 13:53:40 crc kubenswrapper[4881]: I0126 13:53:40.426874 4881 scope.go:117] "RemoveContainer" containerID="1f541eadfdfd3763c35c1787497e2e451181752e9e8d51d8764e86942670fde1" Jan 26 13:53:40 crc kubenswrapper[4881]: E0126 13:53:40.427201 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f541eadfdfd3763c35c1787497e2e451181752e9e8d51d8764e86942670fde1\": container with ID starting with 1f541eadfdfd3763c35c1787497e2e451181752e9e8d51d8764e86942670fde1 not found: ID does not exist" containerID="1f541eadfdfd3763c35c1787497e2e451181752e9e8d51d8764e86942670fde1" Jan 26 13:53:40 crc kubenswrapper[4881]: I0126 13:53:40.427242 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f541eadfdfd3763c35c1787497e2e451181752e9e8d51d8764e86942670fde1"} err="failed to get container status \"1f541eadfdfd3763c35c1787497e2e451181752e9e8d51d8764e86942670fde1\": rpc error: code = NotFound desc = could not find container \"1f541eadfdfd3763c35c1787497e2e451181752e9e8d51d8764e86942670fde1\": container with ID starting with 1f541eadfdfd3763c35c1787497e2e451181752e9e8d51d8764e86942670fde1 not found: ID does not exist" Jan 26 13:53:42 crc kubenswrapper[4881]: I0126 13:53:42.103621 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="edcd433d-7c48-4d93-ad77-cc0254add7f5" path="/var/lib/kubelet/pods/edcd433d-7c48-4d93-ad77-cc0254add7f5/volumes" Jan 26 13:53:43 crc kubenswrapper[4881]: I0126 13:53:43.082971 4881 scope.go:117] "RemoveContainer" containerID="e58488a7a4b2d3d87c3f5812b5df6527294e08d8b777f8290a767507da8c72e2" Jan 26 13:53:43 crc kubenswrapper[4881]: E0126 13:53:43.084338 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" Jan 26 13:53:43 crc kubenswrapper[4881]: I0126 13:53:43.500463 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-5vkvn"] Jan 26 13:53:43 crc kubenswrapper[4881]: E0126 13:53:43.501034 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edcd433d-7c48-4d93-ad77-cc0254add7f5" containerName="registry-server" Jan 26 13:53:43 crc kubenswrapper[4881]: I0126 13:53:43.501056 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="edcd433d-7c48-4d93-ad77-cc0254add7f5" containerName="registry-server" Jan 26 13:53:43 crc kubenswrapper[4881]: E0126 13:53:43.501099 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edcd433d-7c48-4d93-ad77-cc0254add7f5" containerName="extract-content" Jan 26 13:53:43 crc kubenswrapper[4881]: I0126 13:53:43.501113 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="edcd433d-7c48-4d93-ad77-cc0254add7f5" containerName="extract-content" Jan 26 13:53:43 crc kubenswrapper[4881]: E0126 13:53:43.501142 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edcd433d-7c48-4d93-ad77-cc0254add7f5" containerName="extract-utilities" Jan 26 13:53:43 crc 
Jan 26 13:53:42 crc kubenswrapper[4881]: I0126 13:53:42.103621 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="edcd433d-7c48-4d93-ad77-cc0254add7f5" path="/var/lib/kubelet/pods/edcd433d-7c48-4d93-ad77-cc0254add7f5/volumes"
Jan 26 13:53:43 crc kubenswrapper[4881]: I0126 13:53:43.082971 4881 scope.go:117] "RemoveContainer" containerID="e58488a7a4b2d3d87c3f5812b5df6527294e08d8b777f8290a767507da8c72e2"
Jan 26 13:53:43 crc kubenswrapper[4881]: E0126 13:53:43.084338 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19"
Jan 26 13:53:43 crc kubenswrapper[4881]: I0126 13:53:43.500463 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-5vkvn"]
Jan 26 13:53:43 crc kubenswrapper[4881]: E0126 13:53:43.501034 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edcd433d-7c48-4d93-ad77-cc0254add7f5" containerName="registry-server"
Jan 26 13:53:43 crc kubenswrapper[4881]: I0126 13:53:43.501056 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="edcd433d-7c48-4d93-ad77-cc0254add7f5" containerName="registry-server"
Jan 26 13:53:43 crc kubenswrapper[4881]: E0126 13:53:43.501099 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edcd433d-7c48-4d93-ad77-cc0254add7f5" containerName="extract-content"
Jan 26 13:53:43 crc kubenswrapper[4881]: I0126 13:53:43.501113 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="edcd433d-7c48-4d93-ad77-cc0254add7f5" containerName="extract-content"
Jan 26 13:53:43 crc kubenswrapper[4881]: E0126 13:53:43.501142 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edcd433d-7c48-4d93-ad77-cc0254add7f5" containerName="extract-utilities"
Jan 26 13:53:43 crc kubenswrapper[4881]: I0126 13:53:43.501155 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="edcd433d-7c48-4d93-ad77-cc0254add7f5" containerName="extract-utilities"
Jan 26 13:53:43 crc kubenswrapper[4881]: I0126 13:53:43.501502 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="edcd433d-7c48-4d93-ad77-cc0254add7f5" containerName="registry-server"
Jan 26 13:53:43 crc kubenswrapper[4881]: I0126 13:53:43.503652 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5vkvn"
Jan 26 13:53:43 crc kubenswrapper[4881]: I0126 13:53:43.538668 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5vkvn"]
Jan 26 13:53:43 crc kubenswrapper[4881]: I0126 13:53:43.586412 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6b1dd4a-a0b1-4a1a-a267-756dc968577d-catalog-content\") pod \"redhat-operators-5vkvn\" (UID: \"d6b1dd4a-a0b1-4a1a-a267-756dc968577d\") " pod="openshift-marketplace/redhat-operators-5vkvn"
Jan 26 13:53:43 crc kubenswrapper[4881]: I0126 13:53:43.586497 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqjp6\" (UniqueName: \"kubernetes.io/projected/d6b1dd4a-a0b1-4a1a-a267-756dc968577d-kube-api-access-pqjp6\") pod \"redhat-operators-5vkvn\" (UID: \"d6b1dd4a-a0b1-4a1a-a267-756dc968577d\") " pod="openshift-marketplace/redhat-operators-5vkvn"
Jan 26 13:53:43 crc kubenswrapper[4881]: I0126 13:53:43.586597 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6b1dd4a-a0b1-4a1a-a267-756dc968577d-utilities\") pod \"redhat-operators-5vkvn\" (UID: \"d6b1dd4a-a0b1-4a1a-a267-756dc968577d\") " pod="openshift-marketplace/redhat-operators-5vkvn"
Jan 26 13:53:43 crc kubenswrapper[4881]: I0126 13:53:43.688095 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6b1dd4a-a0b1-4a1a-a267-756dc968577d-catalog-content\") pod \"redhat-operators-5vkvn\" (UID: \"d6b1dd4a-a0b1-4a1a-a267-756dc968577d\") " pod="openshift-marketplace/redhat-operators-5vkvn"
Jan 26 13:53:43 crc kubenswrapper[4881]: I0126 13:53:43.688158 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pqjp6\" (UniqueName: \"kubernetes.io/projected/d6b1dd4a-a0b1-4a1a-a267-756dc968577d-kube-api-access-pqjp6\") pod \"redhat-operators-5vkvn\" (UID: \"d6b1dd4a-a0b1-4a1a-a267-756dc968577d\") " pod="openshift-marketplace/redhat-operators-5vkvn"
Jan 26 13:53:43 crc kubenswrapper[4881]: I0126 13:53:43.688218 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6b1dd4a-a0b1-4a1a-a267-756dc968577d-utilities\") pod \"redhat-operators-5vkvn\" (UID: \"d6b1dd4a-a0b1-4a1a-a267-756dc968577d\") " pod="openshift-marketplace/redhat-operators-5vkvn"
Jan 26 13:53:43 crc kubenswrapper[4881]: I0126 13:53:43.688661 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6b1dd4a-a0b1-4a1a-a267-756dc968577d-catalog-content\") pod \"redhat-operators-5vkvn\" (UID: \"d6b1dd4a-a0b1-4a1a-a267-756dc968577d\") " pod="openshift-marketplace/redhat-operators-5vkvn"
Jan 26 13:53:43 crc kubenswrapper[4881]: I0126 13:53:43.688701 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6b1dd4a-a0b1-4a1a-a267-756dc968577d-utilities\") pod \"redhat-operators-5vkvn\" (UID: \"d6b1dd4a-a0b1-4a1a-a267-756dc968577d\") " pod="openshift-marketplace/redhat-operators-5vkvn"
Jan 26 13:53:43 crc kubenswrapper[4881]: I0126 13:53:43.710554 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqjp6\" (UniqueName: \"kubernetes.io/projected/d6b1dd4a-a0b1-4a1a-a267-756dc968577d-kube-api-access-pqjp6\") pod \"redhat-operators-5vkvn\" (UID: \"d6b1dd4a-a0b1-4a1a-a267-756dc968577d\") " pod="openshift-marketplace/redhat-operators-5vkvn"
Jan 26 13:53:43 crc kubenswrapper[4881]: I0126 13:53:43.869721 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5vkvn"
Jan 26 13:53:44 crc kubenswrapper[4881]: I0126 13:53:44.351023 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5vkvn"]
Jan 26 13:53:45 crc kubenswrapper[4881]: I0126 13:53:45.363668 4881 generic.go:334] "Generic (PLEG): container finished" podID="d6b1dd4a-a0b1-4a1a-a267-756dc968577d" containerID="0393a6c521f31b1d9f4ba8b2b51506ab68f41e06022eb9d65f58b1fcb69e56ea" exitCode=0
Jan 26 13:53:45 crc kubenswrapper[4881]: I0126 13:53:45.363900 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5vkvn" event={"ID":"d6b1dd4a-a0b1-4a1a-a267-756dc968577d","Type":"ContainerDied","Data":"0393a6c521f31b1d9f4ba8b2b51506ab68f41e06022eb9d65f58b1fcb69e56ea"}
Jan 26 13:53:45 crc kubenswrapper[4881]: I0126 13:53:45.364064 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5vkvn" event={"ID":"d6b1dd4a-a0b1-4a1a-a267-756dc968577d","Type":"ContainerStarted","Data":"400e1bff75988ceba09f92a8594ce334bc5dfba5bbd7e3ea6f4616507e1dc4b3"}
Jan 26 13:53:45 crc kubenswrapper[4881]: I0126 13:53:45.909157 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-s4hhf"]
Jan 26 13:53:45 crc kubenswrapper[4881]: I0126 13:53:45.912817 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s4hhf"
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s4hhf" Jan 26 13:53:45 crc kubenswrapper[4881]: I0126 13:53:45.924026 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-s4hhf"] Jan 26 13:53:46 crc kubenswrapper[4881]: I0126 13:53:46.038485 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbh7g\" (UniqueName: \"kubernetes.io/projected/c8c8d6d7-4e96-491b-ac0a-245f23eef6ed-kube-api-access-tbh7g\") pod \"redhat-marketplace-s4hhf\" (UID: \"c8c8d6d7-4e96-491b-ac0a-245f23eef6ed\") " pod="openshift-marketplace/redhat-marketplace-s4hhf" Jan 26 13:53:46 crc kubenswrapper[4881]: I0126 13:53:46.038645 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8c8d6d7-4e96-491b-ac0a-245f23eef6ed-catalog-content\") pod \"redhat-marketplace-s4hhf\" (UID: \"c8c8d6d7-4e96-491b-ac0a-245f23eef6ed\") " pod="openshift-marketplace/redhat-marketplace-s4hhf" Jan 26 13:53:46 crc kubenswrapper[4881]: I0126 13:53:46.038698 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8c8d6d7-4e96-491b-ac0a-245f23eef6ed-utilities\") pod \"redhat-marketplace-s4hhf\" (UID: \"c8c8d6d7-4e96-491b-ac0a-245f23eef6ed\") " pod="openshift-marketplace/redhat-marketplace-s4hhf" Jan 26 13:53:46 crc kubenswrapper[4881]: I0126 13:53:46.140905 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tbh7g\" (UniqueName: \"kubernetes.io/projected/c8c8d6d7-4e96-491b-ac0a-245f23eef6ed-kube-api-access-tbh7g\") pod \"redhat-marketplace-s4hhf\" (UID: \"c8c8d6d7-4e96-491b-ac0a-245f23eef6ed\") " pod="openshift-marketplace/redhat-marketplace-s4hhf" Jan 26 13:53:46 crc kubenswrapper[4881]: I0126 13:53:46.141432 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8c8d6d7-4e96-491b-ac0a-245f23eef6ed-catalog-content\") pod \"redhat-marketplace-s4hhf\" (UID: \"c8c8d6d7-4e96-491b-ac0a-245f23eef6ed\") " pod="openshift-marketplace/redhat-marketplace-s4hhf" Jan 26 13:53:46 crc kubenswrapper[4881]: I0126 13:53:46.142391 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8c8d6d7-4e96-491b-ac0a-245f23eef6ed-utilities\") pod \"redhat-marketplace-s4hhf\" (UID: \"c8c8d6d7-4e96-491b-ac0a-245f23eef6ed\") " pod="openshift-marketplace/redhat-marketplace-s4hhf" Jan 26 13:53:46 crc kubenswrapper[4881]: I0126 13:53:46.142419 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8c8d6d7-4e96-491b-ac0a-245f23eef6ed-catalog-content\") pod \"redhat-marketplace-s4hhf\" (UID: \"c8c8d6d7-4e96-491b-ac0a-245f23eef6ed\") " pod="openshift-marketplace/redhat-marketplace-s4hhf" Jan 26 13:53:46 crc kubenswrapper[4881]: I0126 13:53:46.141464 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8c8d6d7-4e96-491b-ac0a-245f23eef6ed-utilities\") pod \"redhat-marketplace-s4hhf\" (UID: \"c8c8d6d7-4e96-491b-ac0a-245f23eef6ed\") " pod="openshift-marketplace/redhat-marketplace-s4hhf" Jan 26 13:53:46 crc kubenswrapper[4881]: I0126 13:53:46.166809 4881 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-tbh7g\" (UniqueName: \"kubernetes.io/projected/c8c8d6d7-4e96-491b-ac0a-245f23eef6ed-kube-api-access-tbh7g\") pod \"redhat-marketplace-s4hhf\" (UID: \"c8c8d6d7-4e96-491b-ac0a-245f23eef6ed\") " pod="openshift-marketplace/redhat-marketplace-s4hhf" Jan 26 13:53:46 crc kubenswrapper[4881]: I0126 13:53:46.246504 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s4hhf" Jan 26 13:53:46 crc kubenswrapper[4881]: I0126 13:53:46.800712 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-s4hhf"] Jan 26 13:53:46 crc kubenswrapper[4881]: W0126 13:53:46.806741 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc8c8d6d7_4e96_491b_ac0a_245f23eef6ed.slice/crio-fe82c858ab82ec2ad67222415ce5cb1fdc918b265dc46fbbf1a52e02fd1ecde6 WatchSource:0}: Error finding container fe82c858ab82ec2ad67222415ce5cb1fdc918b265dc46fbbf1a52e02fd1ecde6: Status 404 returned error can't find the container with id fe82c858ab82ec2ad67222415ce5cb1fdc918b265dc46fbbf1a52e02fd1ecde6 Jan 26 13:53:47 crc kubenswrapper[4881]: I0126 13:53:47.401564 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5vkvn" event={"ID":"d6b1dd4a-a0b1-4a1a-a267-756dc968577d","Type":"ContainerStarted","Data":"c83ab710b908bd34f532016110f27751582026a2cb585cc5d6522caeaed08e2f"} Jan 26 13:53:47 crc kubenswrapper[4881]: I0126 13:53:47.404072 4881 generic.go:334] "Generic (PLEG): container finished" podID="c8c8d6d7-4e96-491b-ac0a-245f23eef6ed" containerID="ab1e63f4bb2a31167f939826e20efc3784b877325d3e929a715792c6be9ff60c" exitCode=0 Jan 26 13:53:47 crc kubenswrapper[4881]: I0126 13:53:47.404106 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s4hhf" event={"ID":"c8c8d6d7-4e96-491b-ac0a-245f23eef6ed","Type":"ContainerDied","Data":"ab1e63f4bb2a31167f939826e20efc3784b877325d3e929a715792c6be9ff60c"} Jan 26 13:53:47 crc kubenswrapper[4881]: I0126 13:53:47.404125 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s4hhf" event={"ID":"c8c8d6d7-4e96-491b-ac0a-245f23eef6ed","Type":"ContainerStarted","Data":"fe82c858ab82ec2ad67222415ce5cb1fdc918b265dc46fbbf1a52e02fd1ecde6"} Jan 26 13:53:50 crc kubenswrapper[4881]: I0126 13:53:50.455771 4881 generic.go:334] "Generic (PLEG): container finished" podID="d6b1dd4a-a0b1-4a1a-a267-756dc968577d" containerID="c83ab710b908bd34f532016110f27751582026a2cb585cc5d6522caeaed08e2f" exitCode=0 Jan 26 13:53:50 crc kubenswrapper[4881]: I0126 13:53:50.455856 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5vkvn" event={"ID":"d6b1dd4a-a0b1-4a1a-a267-756dc968577d","Type":"ContainerDied","Data":"c83ab710b908bd34f532016110f27751582026a2cb585cc5d6522caeaed08e2f"} Jan 26 13:53:50 crc kubenswrapper[4881]: I0126 13:53:50.460704 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s4hhf" event={"ID":"c8c8d6d7-4e96-491b-ac0a-245f23eef6ed","Type":"ContainerStarted","Data":"e1af2ad1539e5ce919bf013cfaaec149b79190ef10493c6d964e0361da48f4ce"} Jan 26 13:53:51 crc kubenswrapper[4881]: I0126 13:53:51.474698 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5vkvn" 
event={"ID":"d6b1dd4a-a0b1-4a1a-a267-756dc968577d","Type":"ContainerStarted","Data":"ca26f1fa9cf8b7bc26376763ff1d7ab7a39c8d852d854b4636be11822dfd7f56"} Jan 26 13:53:51 crc kubenswrapper[4881]: I0126 13:53:51.477991 4881 generic.go:334] "Generic (PLEG): container finished" podID="c8c8d6d7-4e96-491b-ac0a-245f23eef6ed" containerID="e1af2ad1539e5ce919bf013cfaaec149b79190ef10493c6d964e0361da48f4ce" exitCode=0 Jan 26 13:53:51 crc kubenswrapper[4881]: I0126 13:53:51.478029 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s4hhf" event={"ID":"c8c8d6d7-4e96-491b-ac0a-245f23eef6ed","Type":"ContainerDied","Data":"e1af2ad1539e5ce919bf013cfaaec149b79190ef10493c6d964e0361da48f4ce"} Jan 26 13:53:51 crc kubenswrapper[4881]: I0126 13:53:51.501836 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-5vkvn" podStartSLOduration=2.956673219 podStartE2EDuration="8.501818564s" podCreationTimestamp="2026-01-26 13:53:43 +0000 UTC" firstStartedPulling="2026-01-26 13:53:45.367029458 +0000 UTC m=+4697.846339494" lastFinishedPulling="2026-01-26 13:53:50.912174763 +0000 UTC m=+4703.391484839" observedRunningTime="2026-01-26 13:53:51.492538128 +0000 UTC m=+4703.971848164" watchObservedRunningTime="2026-01-26 13:53:51.501818564 +0000 UTC m=+4703.981128590" Jan 26 13:53:52 crc kubenswrapper[4881]: I0126 13:53:52.489875 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s4hhf" event={"ID":"c8c8d6d7-4e96-491b-ac0a-245f23eef6ed","Type":"ContainerStarted","Data":"0733f296e758587cf6c30b2e596b06c1758e55a1143bfd3a39b996c266c11b93"} Jan 26 13:53:52 crc kubenswrapper[4881]: I0126 13:53:52.513447 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-s4hhf" podStartSLOduration=3.056563048 podStartE2EDuration="7.513430339s" podCreationTimestamp="2026-01-26 13:53:45 +0000 UTC" firstStartedPulling="2026-01-26 13:53:47.40754659 +0000 UTC m=+4699.886856616" lastFinishedPulling="2026-01-26 13:53:51.864413871 +0000 UTC m=+4704.343723907" observedRunningTime="2026-01-26 13:53:52.511047571 +0000 UTC m=+4704.990357617" watchObservedRunningTime="2026-01-26 13:53:52.513430339 +0000 UTC m=+4704.992740365" Jan 26 13:53:53 crc kubenswrapper[4881]: I0126 13:53:53.870110 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-5vkvn" Jan 26 13:53:53 crc kubenswrapper[4881]: I0126 13:53:53.872147 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-5vkvn" Jan 26 13:53:54 crc kubenswrapper[4881]: I0126 13:53:54.083143 4881 scope.go:117] "RemoveContainer" containerID="e58488a7a4b2d3d87c3f5812b5df6527294e08d8b777f8290a767507da8c72e2" Jan 26 13:53:54 crc kubenswrapper[4881]: E0126 13:53:54.083455 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" Jan 26 13:53:54 crc kubenswrapper[4881]: I0126 13:53:54.921653 4881 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-5vkvn" 
podUID="d6b1dd4a-a0b1-4a1a-a267-756dc968577d" containerName="registry-server" probeResult="failure" output=< Jan 26 13:53:54 crc kubenswrapper[4881]: timeout: failed to connect service ":50051" within 1s Jan 26 13:53:54 crc kubenswrapper[4881]: > Jan 26 13:53:56 crc kubenswrapper[4881]: I0126 13:53:56.247067 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-s4hhf" Jan 26 13:53:56 crc kubenswrapper[4881]: I0126 13:53:56.247388 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-s4hhf" Jan 26 13:53:56 crc kubenswrapper[4881]: I0126 13:53:56.310348 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-s4hhf" Jan 26 13:54:03 crc kubenswrapper[4881]: I0126 13:54:03.981013 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-5vkvn" Jan 26 13:54:04 crc kubenswrapper[4881]: I0126 13:54:04.044903 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-5vkvn" Jan 26 13:54:04 crc kubenswrapper[4881]: I0126 13:54:04.220631 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5vkvn"] Jan 26 13:54:05 crc kubenswrapper[4881]: I0126 13:54:05.634628 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-5vkvn" podUID="d6b1dd4a-a0b1-4a1a-a267-756dc968577d" containerName="registry-server" containerID="cri-o://ca26f1fa9cf8b7bc26376763ff1d7ab7a39c8d852d854b4636be11822dfd7f56" gracePeriod=2 Jan 26 13:54:06 crc kubenswrapper[4881]: I0126 13:54:06.217731 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5vkvn" Jan 26 13:54:06 crc kubenswrapper[4881]: I0126 13:54:06.300028 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-s4hhf" Jan 26 13:54:06 crc kubenswrapper[4881]: I0126 13:54:06.407497 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6b1dd4a-a0b1-4a1a-a267-756dc968577d-catalog-content\") pod \"d6b1dd4a-a0b1-4a1a-a267-756dc968577d\" (UID: \"d6b1dd4a-a0b1-4a1a-a267-756dc968577d\") " Jan 26 13:54:06 crc kubenswrapper[4881]: I0126 13:54:06.407647 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6b1dd4a-a0b1-4a1a-a267-756dc968577d-utilities\") pod \"d6b1dd4a-a0b1-4a1a-a267-756dc968577d\" (UID: \"d6b1dd4a-a0b1-4a1a-a267-756dc968577d\") " Jan 26 13:54:06 crc kubenswrapper[4881]: I0126 13:54:06.407781 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pqjp6\" (UniqueName: \"kubernetes.io/projected/d6b1dd4a-a0b1-4a1a-a267-756dc968577d-kube-api-access-pqjp6\") pod \"d6b1dd4a-a0b1-4a1a-a267-756dc968577d\" (UID: \"d6b1dd4a-a0b1-4a1a-a267-756dc968577d\") " Jan 26 13:54:06 crc kubenswrapper[4881]: I0126 13:54:06.409759 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d6b1dd4a-a0b1-4a1a-a267-756dc968577d-utilities" (OuterVolumeSpecName: "utilities") pod "d6b1dd4a-a0b1-4a1a-a267-756dc968577d" (UID: "d6b1dd4a-a0b1-4a1a-a267-756dc968577d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:54:06 crc kubenswrapper[4881]: I0126 13:54:06.422783 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6b1dd4a-a0b1-4a1a-a267-756dc968577d-kube-api-access-pqjp6" (OuterVolumeSpecName: "kube-api-access-pqjp6") pod "d6b1dd4a-a0b1-4a1a-a267-756dc968577d" (UID: "d6b1dd4a-a0b1-4a1a-a267-756dc968577d"). InnerVolumeSpecName "kube-api-access-pqjp6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:54:06 crc kubenswrapper[4881]: I0126 13:54:06.510275 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6b1dd4a-a0b1-4a1a-a267-756dc968577d-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 13:54:06 crc kubenswrapper[4881]: I0126 13:54:06.510335 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pqjp6\" (UniqueName: \"kubernetes.io/projected/d6b1dd4a-a0b1-4a1a-a267-756dc968577d-kube-api-access-pqjp6\") on node \"crc\" DevicePath \"\"" Jan 26 13:54:06 crc kubenswrapper[4881]: I0126 13:54:06.518252 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d6b1dd4a-a0b1-4a1a-a267-756dc968577d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d6b1dd4a-a0b1-4a1a-a267-756dc968577d" (UID: "d6b1dd4a-a0b1-4a1a-a267-756dc968577d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:54:06 crc kubenswrapper[4881]: I0126 13:54:06.611860 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6b1dd4a-a0b1-4a1a-a267-756dc968577d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 13:54:06 crc kubenswrapper[4881]: I0126 13:54:06.646671 4881 generic.go:334] "Generic (PLEG): container finished" podID="d6b1dd4a-a0b1-4a1a-a267-756dc968577d" containerID="ca26f1fa9cf8b7bc26376763ff1d7ab7a39c8d852d854b4636be11822dfd7f56" exitCode=0 Jan 26 13:54:06 crc kubenswrapper[4881]: I0126 13:54:06.646720 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5vkvn" event={"ID":"d6b1dd4a-a0b1-4a1a-a267-756dc968577d","Type":"ContainerDied","Data":"ca26f1fa9cf8b7bc26376763ff1d7ab7a39c8d852d854b4636be11822dfd7f56"} Jan 26 13:54:06 crc kubenswrapper[4881]: I0126 13:54:06.646753 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5vkvn" event={"ID":"d6b1dd4a-a0b1-4a1a-a267-756dc968577d","Type":"ContainerDied","Data":"400e1bff75988ceba09f92a8594ce334bc5dfba5bbd7e3ea6f4616507e1dc4b3"} Jan 26 13:54:06 crc kubenswrapper[4881]: I0126 13:54:06.646774 4881 scope.go:117] "RemoveContainer" containerID="ca26f1fa9cf8b7bc26376763ff1d7ab7a39c8d852d854b4636be11822dfd7f56" Jan 26 13:54:06 crc kubenswrapper[4881]: I0126 13:54:06.646770 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5vkvn" Jan 26 13:54:06 crc kubenswrapper[4881]: I0126 13:54:06.685327 4881 scope.go:117] "RemoveContainer" containerID="c83ab710b908bd34f532016110f27751582026a2cb585cc5d6522caeaed08e2f" Jan 26 13:54:06 crc kubenswrapper[4881]: I0126 13:54:06.690105 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5vkvn"] Jan 26 13:54:06 crc kubenswrapper[4881]: I0126 13:54:06.713216 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-5vkvn"] Jan 26 13:54:06 crc kubenswrapper[4881]: I0126 13:54:06.720973 4881 scope.go:117] "RemoveContainer" containerID="0393a6c521f31b1d9f4ba8b2b51506ab68f41e06022eb9d65f58b1fcb69e56ea" Jan 26 13:54:06 crc kubenswrapper[4881]: I0126 13:54:06.751588 4881 scope.go:117] "RemoveContainer" containerID="ca26f1fa9cf8b7bc26376763ff1d7ab7a39c8d852d854b4636be11822dfd7f56" Jan 26 13:54:06 crc kubenswrapper[4881]: E0126 13:54:06.752172 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca26f1fa9cf8b7bc26376763ff1d7ab7a39c8d852d854b4636be11822dfd7f56\": container with ID starting with ca26f1fa9cf8b7bc26376763ff1d7ab7a39c8d852d854b4636be11822dfd7f56 not found: ID does not exist" containerID="ca26f1fa9cf8b7bc26376763ff1d7ab7a39c8d852d854b4636be11822dfd7f56" Jan 26 13:54:06 crc kubenswrapper[4881]: I0126 13:54:06.752205 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca26f1fa9cf8b7bc26376763ff1d7ab7a39c8d852d854b4636be11822dfd7f56"} err="failed to get container status \"ca26f1fa9cf8b7bc26376763ff1d7ab7a39c8d852d854b4636be11822dfd7f56\": rpc error: code = NotFound desc = could not find container \"ca26f1fa9cf8b7bc26376763ff1d7ab7a39c8d852d854b4636be11822dfd7f56\": container with ID starting with ca26f1fa9cf8b7bc26376763ff1d7ab7a39c8d852d854b4636be11822dfd7f56 not found: ID does not exist" Jan 26 13:54:06 crc kubenswrapper[4881]: I0126 13:54:06.752229 4881 scope.go:117] "RemoveContainer" containerID="c83ab710b908bd34f532016110f27751582026a2cb585cc5d6522caeaed08e2f" Jan 26 13:54:06 crc kubenswrapper[4881]: E0126 13:54:06.752713 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c83ab710b908bd34f532016110f27751582026a2cb585cc5d6522caeaed08e2f\": container with ID starting with c83ab710b908bd34f532016110f27751582026a2cb585cc5d6522caeaed08e2f not found: ID does not exist" containerID="c83ab710b908bd34f532016110f27751582026a2cb585cc5d6522caeaed08e2f" Jan 26 13:54:06 crc kubenswrapper[4881]: I0126 13:54:06.752766 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c83ab710b908bd34f532016110f27751582026a2cb585cc5d6522caeaed08e2f"} err="failed to get container status \"c83ab710b908bd34f532016110f27751582026a2cb585cc5d6522caeaed08e2f\": rpc error: code = NotFound desc = could not find container \"c83ab710b908bd34f532016110f27751582026a2cb585cc5d6522caeaed08e2f\": container with ID starting with c83ab710b908bd34f532016110f27751582026a2cb585cc5d6522caeaed08e2f not found: ID does not exist" Jan 26 13:54:06 crc kubenswrapper[4881]: I0126 13:54:06.752799 4881 scope.go:117] "RemoveContainer" containerID="0393a6c521f31b1d9f4ba8b2b51506ab68f41e06022eb9d65f58b1fcb69e56ea" Jan 26 13:54:06 crc kubenswrapper[4881]: E0126 13:54:06.753258 4881 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"0393a6c521f31b1d9f4ba8b2b51506ab68f41e06022eb9d65f58b1fcb69e56ea\": container with ID starting with 0393a6c521f31b1d9f4ba8b2b51506ab68f41e06022eb9d65f58b1fcb69e56ea not found: ID does not exist" containerID="0393a6c521f31b1d9f4ba8b2b51506ab68f41e06022eb9d65f58b1fcb69e56ea" Jan 26 13:54:06 crc kubenswrapper[4881]: I0126 13:54:06.753285 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0393a6c521f31b1d9f4ba8b2b51506ab68f41e06022eb9d65f58b1fcb69e56ea"} err="failed to get container status \"0393a6c521f31b1d9f4ba8b2b51506ab68f41e06022eb9d65f58b1fcb69e56ea\": rpc error: code = NotFound desc = could not find container \"0393a6c521f31b1d9f4ba8b2b51506ab68f41e06022eb9d65f58b1fcb69e56ea\": container with ID starting with 0393a6c521f31b1d9f4ba8b2b51506ab68f41e06022eb9d65f58b1fcb69e56ea not found: ID does not exist" Jan 26 13:54:08 crc kubenswrapper[4881]: I0126 13:54:08.083772 4881 scope.go:117] "RemoveContainer" containerID="e58488a7a4b2d3d87c3f5812b5df6527294e08d8b777f8290a767507da8c72e2" Jan 26 13:54:08 crc kubenswrapper[4881]: E0126 13:54:08.084618 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" Jan 26 13:54:08 crc kubenswrapper[4881]: I0126 13:54:08.103499 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d6b1dd4a-a0b1-4a1a-a267-756dc968577d" path="/var/lib/kubelet/pods/d6b1dd4a-a0b1-4a1a-a267-756dc968577d/volumes" Jan 26 13:54:08 crc kubenswrapper[4881]: I0126 13:54:08.425468 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-s4hhf"] Jan 26 13:54:08 crc kubenswrapper[4881]: I0126 13:54:08.425793 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-s4hhf" podUID="c8c8d6d7-4e96-491b-ac0a-245f23eef6ed" containerName="registry-server" containerID="cri-o://0733f296e758587cf6c30b2e596b06c1758e55a1143bfd3a39b996c266c11b93" gracePeriod=2 Jan 26 13:54:08 crc kubenswrapper[4881]: I0126 13:54:08.682960 4881 generic.go:334] "Generic (PLEG): container finished" podID="c8c8d6d7-4e96-491b-ac0a-245f23eef6ed" containerID="0733f296e758587cf6c30b2e596b06c1758e55a1143bfd3a39b996c266c11b93" exitCode=0 Jan 26 13:54:08 crc kubenswrapper[4881]: I0126 13:54:08.683018 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s4hhf" event={"ID":"c8c8d6d7-4e96-491b-ac0a-245f23eef6ed","Type":"ContainerDied","Data":"0733f296e758587cf6c30b2e596b06c1758e55a1143bfd3a39b996c266c11b93"} Jan 26 13:54:08 crc kubenswrapper[4881]: I0126 13:54:08.915888 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s4hhf" Jan 26 13:54:09 crc kubenswrapper[4881]: I0126 13:54:09.069968 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8c8d6d7-4e96-491b-ac0a-245f23eef6ed-utilities\") pod \"c8c8d6d7-4e96-491b-ac0a-245f23eef6ed\" (UID: \"c8c8d6d7-4e96-491b-ac0a-245f23eef6ed\") " Jan 26 13:54:09 crc kubenswrapper[4881]: I0126 13:54:09.070430 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tbh7g\" (UniqueName: \"kubernetes.io/projected/c8c8d6d7-4e96-491b-ac0a-245f23eef6ed-kube-api-access-tbh7g\") pod \"c8c8d6d7-4e96-491b-ac0a-245f23eef6ed\" (UID: \"c8c8d6d7-4e96-491b-ac0a-245f23eef6ed\") " Jan 26 13:54:09 crc kubenswrapper[4881]: I0126 13:54:09.070504 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8c8d6d7-4e96-491b-ac0a-245f23eef6ed-catalog-content\") pod \"c8c8d6d7-4e96-491b-ac0a-245f23eef6ed\" (UID: \"c8c8d6d7-4e96-491b-ac0a-245f23eef6ed\") " Jan 26 13:54:09 crc kubenswrapper[4881]: I0126 13:54:09.071059 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c8c8d6d7-4e96-491b-ac0a-245f23eef6ed-utilities" (OuterVolumeSpecName: "utilities") pod "c8c8d6d7-4e96-491b-ac0a-245f23eef6ed" (UID: "c8c8d6d7-4e96-491b-ac0a-245f23eef6ed"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:54:09 crc kubenswrapper[4881]: I0126 13:54:09.071827 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8c8d6d7-4e96-491b-ac0a-245f23eef6ed-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 13:54:09 crc kubenswrapper[4881]: I0126 13:54:09.081697 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8c8d6d7-4e96-491b-ac0a-245f23eef6ed-kube-api-access-tbh7g" (OuterVolumeSpecName: "kube-api-access-tbh7g") pod "c8c8d6d7-4e96-491b-ac0a-245f23eef6ed" (UID: "c8c8d6d7-4e96-491b-ac0a-245f23eef6ed"). InnerVolumeSpecName "kube-api-access-tbh7g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:54:09 crc kubenswrapper[4881]: I0126 13:54:09.100571 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c8c8d6d7-4e96-491b-ac0a-245f23eef6ed-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c8c8d6d7-4e96-491b-ac0a-245f23eef6ed" (UID: "c8c8d6d7-4e96-491b-ac0a-245f23eef6ed"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:54:09 crc kubenswrapper[4881]: I0126 13:54:09.173960 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tbh7g\" (UniqueName: \"kubernetes.io/projected/c8c8d6d7-4e96-491b-ac0a-245f23eef6ed-kube-api-access-tbh7g\") on node \"crc\" DevicePath \"\"" Jan 26 13:54:09 crc kubenswrapper[4881]: I0126 13:54:09.173998 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8c8d6d7-4e96-491b-ac0a-245f23eef6ed-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 13:54:09 crc kubenswrapper[4881]: I0126 13:54:09.703974 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s4hhf" event={"ID":"c8c8d6d7-4e96-491b-ac0a-245f23eef6ed","Type":"ContainerDied","Data":"fe82c858ab82ec2ad67222415ce5cb1fdc918b265dc46fbbf1a52e02fd1ecde6"} Jan 26 13:54:09 crc kubenswrapper[4881]: I0126 13:54:09.704073 4881 scope.go:117] "RemoveContainer" containerID="0733f296e758587cf6c30b2e596b06c1758e55a1143bfd3a39b996c266c11b93" Jan 26 13:54:09 crc kubenswrapper[4881]: I0126 13:54:09.704109 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s4hhf" Jan 26 13:54:09 crc kubenswrapper[4881]: I0126 13:54:09.740985 4881 scope.go:117] "RemoveContainer" containerID="e1af2ad1539e5ce919bf013cfaaec149b79190ef10493c6d964e0361da48f4ce" Jan 26 13:54:09 crc kubenswrapper[4881]: I0126 13:54:09.778280 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-s4hhf"] Jan 26 13:54:09 crc kubenswrapper[4881]: I0126 13:54:09.795183 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-s4hhf"] Jan 26 13:54:10 crc kubenswrapper[4881]: I0126 13:54:10.116445 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8c8d6d7-4e96-491b-ac0a-245f23eef6ed" path="/var/lib/kubelet/pods/c8c8d6d7-4e96-491b-ac0a-245f23eef6ed/volumes" Jan 26 13:54:10 crc kubenswrapper[4881]: I0126 13:54:10.128609 4881 scope.go:117] "RemoveContainer" containerID="ab1e63f4bb2a31167f939826e20efc3784b877325d3e929a715792c6be9ff60c" Jan 26 13:54:20 crc kubenswrapper[4881]: I0126 13:54:20.083024 4881 scope.go:117] "RemoveContainer" containerID="e58488a7a4b2d3d87c3f5812b5df6527294e08d8b777f8290a767507da8c72e2" Jan 26 13:54:20 crc kubenswrapper[4881]: E0126 13:54:20.083879 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" Jan 26 13:54:27 crc kubenswrapper[4881]: E0126 13:54:27.542999 4881 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.69:46380->38.102.83.69:37913: write tcp 38.102.83.69:46380->38.102.83.69:37913: write: connection reset by peer Jan 26 13:54:29 crc kubenswrapper[4881]: E0126 13:54:29.531347 4881 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.102.83.69:46428->38.102.83.69:37913: read tcp 38.102.83.69:46428->38.102.83.69:37913: read: connection reset by peer Jan 26 13:54:32 crc kubenswrapper[4881]: I0126 13:54:32.083205 4881 scope.go:117] 
"RemoveContainer" containerID="e58488a7a4b2d3d87c3f5812b5df6527294e08d8b777f8290a767507da8c72e2" Jan 26 13:54:32 crc kubenswrapper[4881]: E0126 13:54:32.084488 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" Jan 26 13:54:46 crc kubenswrapper[4881]: I0126 13:54:46.082983 4881 scope.go:117] "RemoveContainer" containerID="e58488a7a4b2d3d87c3f5812b5df6527294e08d8b777f8290a767507da8c72e2" Jan 26 13:54:46 crc kubenswrapper[4881]: E0126 13:54:46.085961 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" Jan 26 13:55:00 crc kubenswrapper[4881]: I0126 13:55:00.083313 4881 scope.go:117] "RemoveContainer" containerID="e58488a7a4b2d3d87c3f5812b5df6527294e08d8b777f8290a767507da8c72e2" Jan 26 13:55:00 crc kubenswrapper[4881]: E0126 13:55:00.084430 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" Jan 26 13:55:13 crc kubenswrapper[4881]: I0126 13:55:13.083720 4881 scope.go:117] "RemoveContainer" containerID="e58488a7a4b2d3d87c3f5812b5df6527294e08d8b777f8290a767507da8c72e2" Jan 26 13:55:13 crc kubenswrapper[4881]: E0126 13:55:13.084558 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" Jan 26 13:55:27 crc kubenswrapper[4881]: I0126 13:55:27.082185 4881 scope.go:117] "RemoveContainer" containerID="e58488a7a4b2d3d87c3f5812b5df6527294e08d8b777f8290a767507da8c72e2" Jan 26 13:55:27 crc kubenswrapper[4881]: E0126 13:55:27.082961 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" Jan 26 13:55:42 crc kubenswrapper[4881]: I0126 13:55:42.084707 4881 scope.go:117] "RemoveContainer" containerID="e58488a7a4b2d3d87c3f5812b5df6527294e08d8b777f8290a767507da8c72e2" Jan 26 13:55:42 crc kubenswrapper[4881]: E0126 13:55:42.085723 4881 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" Jan 26 13:55:53 crc kubenswrapper[4881]: I0126 13:55:53.083381 4881 scope.go:117] "RemoveContainer" containerID="e58488a7a4b2d3d87c3f5812b5df6527294e08d8b777f8290a767507da8c72e2" Jan 26 13:55:53 crc kubenswrapper[4881]: E0126 13:55:53.084900 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" Jan 26 13:55:55 crc kubenswrapper[4881]: I0126 13:55:55.837118 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-55f986558f-qqwqs" podUID="4b3ea251-a4e4-4e4d-a21f-a239f80690e1" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 502" Jan 26 13:56:06 crc kubenswrapper[4881]: I0126 13:56:06.083270 4881 scope.go:117] "RemoveContainer" containerID="e58488a7a4b2d3d87c3f5812b5df6527294e08d8b777f8290a767507da8c72e2" Jan 26 13:56:06 crc kubenswrapper[4881]: E0126 13:56:06.084136 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" Jan 26 13:56:18 crc kubenswrapper[4881]: I0126 13:56:18.090015 4881 scope.go:117] "RemoveContainer" containerID="e58488a7a4b2d3d87c3f5812b5df6527294e08d8b777f8290a767507da8c72e2" Jan 26 13:56:18 crc kubenswrapper[4881]: E0126 13:56:18.090893 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" Jan 26 13:56:29 crc kubenswrapper[4881]: I0126 13:56:29.083689 4881 scope.go:117] "RemoveContainer" containerID="e58488a7a4b2d3d87c3f5812b5df6527294e08d8b777f8290a767507da8c72e2" Jan 26 13:56:29 crc kubenswrapper[4881]: E0126 13:56:29.084705 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" Jan 26 13:56:44 crc kubenswrapper[4881]: I0126 13:56:44.083566 4881 scope.go:117] "RemoveContainer" 
containerID="e58488a7a4b2d3d87c3f5812b5df6527294e08d8b777f8290a767507da8c72e2" Jan 26 13:56:44 crc kubenswrapper[4881]: E0126 13:56:44.084416 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" Jan 26 13:56:57 crc kubenswrapper[4881]: I0126 13:56:57.082760 4881 scope.go:117] "RemoveContainer" containerID="e58488a7a4b2d3d87c3f5812b5df6527294e08d8b777f8290a767507da8c72e2" Jan 26 13:56:57 crc kubenswrapper[4881]: I0126 13:56:57.488095 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" event={"ID":"ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19","Type":"ContainerStarted","Data":"9a6137bbf715363821ef9d1278543d89cfef3eeaba572a1b0586ed3e78dd1e5f"} Jan 26 13:57:19 crc kubenswrapper[4881]: I0126 13:57:19.711214 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="32ed51d8-b401-412f-925e-0cff27777e55" containerName="galera" probeResult="failure" output="command timed out" Jan 26 13:59:24 crc kubenswrapper[4881]: I0126 13:59:24.789861 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fwlbz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 13:59:24 crc kubenswrapper[4881]: I0126 13:59:24.790366 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 13:59:54 crc kubenswrapper[4881]: I0126 13:59:54.789245 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fwlbz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 13:59:54 crc kubenswrapper[4881]: I0126 13:59:54.790030 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 14:00:00 crc kubenswrapper[4881]: I0126 14:00:00.149797 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490600-kxw9z"] Jan 26 14:00:00 crc kubenswrapper[4881]: E0126 14:00:00.150505 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8c8d6d7-4e96-491b-ac0a-245f23eef6ed" containerName="extract-utilities" Jan 26 14:00:00 crc kubenswrapper[4881]: I0126 14:00:00.150533 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8c8d6d7-4e96-491b-ac0a-245f23eef6ed" containerName="extract-utilities" Jan 26 14:00:00 crc kubenswrapper[4881]: E0126 14:00:00.150550 4881 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="c8c8d6d7-4e96-491b-ac0a-245f23eef6ed" containerName="extract-content" Jan 26 14:00:00 crc kubenswrapper[4881]: I0126 14:00:00.150555 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8c8d6d7-4e96-491b-ac0a-245f23eef6ed" containerName="extract-content" Jan 26 14:00:00 crc kubenswrapper[4881]: E0126 14:00:00.150571 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6b1dd4a-a0b1-4a1a-a267-756dc968577d" containerName="extract-utilities" Jan 26 14:00:00 crc kubenswrapper[4881]: I0126 14:00:00.150577 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6b1dd4a-a0b1-4a1a-a267-756dc968577d" containerName="extract-utilities" Jan 26 14:00:00 crc kubenswrapper[4881]: E0126 14:00:00.150597 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6b1dd4a-a0b1-4a1a-a267-756dc968577d" containerName="extract-content" Jan 26 14:00:00 crc kubenswrapper[4881]: I0126 14:00:00.150604 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6b1dd4a-a0b1-4a1a-a267-756dc968577d" containerName="extract-content" Jan 26 14:00:00 crc kubenswrapper[4881]: E0126 14:00:00.150614 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8c8d6d7-4e96-491b-ac0a-245f23eef6ed" containerName="registry-server" Jan 26 14:00:00 crc kubenswrapper[4881]: I0126 14:00:00.150620 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8c8d6d7-4e96-491b-ac0a-245f23eef6ed" containerName="registry-server" Jan 26 14:00:00 crc kubenswrapper[4881]: E0126 14:00:00.150630 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6b1dd4a-a0b1-4a1a-a267-756dc968577d" containerName="registry-server" Jan 26 14:00:00 crc kubenswrapper[4881]: I0126 14:00:00.150635 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6b1dd4a-a0b1-4a1a-a267-756dc968577d" containerName="registry-server" Jan 26 14:00:00 crc kubenswrapper[4881]: I0126 14:00:00.150825 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8c8d6d7-4e96-491b-ac0a-245f23eef6ed" containerName="registry-server" Jan 26 14:00:00 crc kubenswrapper[4881]: I0126 14:00:00.150835 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6b1dd4a-a0b1-4a1a-a267-756dc968577d" containerName="registry-server" Jan 26 14:00:00 crc kubenswrapper[4881]: I0126 14:00:00.151452 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490600-kxw9z" Jan 26 14:00:00 crc kubenswrapper[4881]: I0126 14:00:00.153330 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 26 14:00:00 crc kubenswrapper[4881]: I0126 14:00:00.155230 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 26 14:00:00 crc kubenswrapper[4881]: I0126 14:00:00.164349 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490600-kxw9z"] Jan 26 14:00:00 crc kubenswrapper[4881]: I0126 14:00:00.284724 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7ffa5751-425d-4678-aa0d-279544271e61-config-volume\") pod \"collect-profiles-29490600-kxw9z\" (UID: \"7ffa5751-425d-4678-aa0d-279544271e61\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490600-kxw9z" Jan 26 14:00:00 crc kubenswrapper[4881]: I0126 14:00:00.284799 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7ffa5751-425d-4678-aa0d-279544271e61-secret-volume\") pod \"collect-profiles-29490600-kxw9z\" (UID: \"7ffa5751-425d-4678-aa0d-279544271e61\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490600-kxw9z" Jan 26 14:00:00 crc kubenswrapper[4881]: I0126 14:00:00.285026 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sb7jj\" (UniqueName: \"kubernetes.io/projected/7ffa5751-425d-4678-aa0d-279544271e61-kube-api-access-sb7jj\") pod \"collect-profiles-29490600-kxw9z\" (UID: \"7ffa5751-425d-4678-aa0d-279544271e61\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490600-kxw9z" Jan 26 14:00:00 crc kubenswrapper[4881]: I0126 14:00:00.387385 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sb7jj\" (UniqueName: \"kubernetes.io/projected/7ffa5751-425d-4678-aa0d-279544271e61-kube-api-access-sb7jj\") pod \"collect-profiles-29490600-kxw9z\" (UID: \"7ffa5751-425d-4678-aa0d-279544271e61\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490600-kxw9z" Jan 26 14:00:00 crc kubenswrapper[4881]: I0126 14:00:00.387455 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7ffa5751-425d-4678-aa0d-279544271e61-config-volume\") pod \"collect-profiles-29490600-kxw9z\" (UID: \"7ffa5751-425d-4678-aa0d-279544271e61\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490600-kxw9z" Jan 26 14:00:00 crc kubenswrapper[4881]: I0126 14:00:00.387539 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7ffa5751-425d-4678-aa0d-279544271e61-secret-volume\") pod \"collect-profiles-29490600-kxw9z\" (UID: \"7ffa5751-425d-4678-aa0d-279544271e61\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490600-kxw9z" Jan 26 14:00:00 crc kubenswrapper[4881]: I0126 14:00:00.388432 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7ffa5751-425d-4678-aa0d-279544271e61-config-volume\") pod 
\"collect-profiles-29490600-kxw9z\" (UID: \"7ffa5751-425d-4678-aa0d-279544271e61\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490600-kxw9z" Jan 26 14:00:00 crc kubenswrapper[4881]: I0126 14:00:00.393838 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7ffa5751-425d-4678-aa0d-279544271e61-secret-volume\") pod \"collect-profiles-29490600-kxw9z\" (UID: \"7ffa5751-425d-4678-aa0d-279544271e61\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490600-kxw9z" Jan 26 14:00:00 crc kubenswrapper[4881]: I0126 14:00:00.408552 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sb7jj\" (UniqueName: \"kubernetes.io/projected/7ffa5751-425d-4678-aa0d-279544271e61-kube-api-access-sb7jj\") pod \"collect-profiles-29490600-kxw9z\" (UID: \"7ffa5751-425d-4678-aa0d-279544271e61\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490600-kxw9z" Jan 26 14:00:00 crc kubenswrapper[4881]: I0126 14:00:00.482784 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490600-kxw9z" Jan 26 14:00:00 crc kubenswrapper[4881]: I0126 14:00:00.965643 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490600-kxw9z"] Jan 26 14:00:01 crc kubenswrapper[4881]: I0126 14:00:01.803619 4881 generic.go:334] "Generic (PLEG): container finished" podID="7ffa5751-425d-4678-aa0d-279544271e61" containerID="30f80f98a98ba37b62db0bd2d3d31682c3807cc611d51805b127e4748b9fa8e6" exitCode=0 Jan 26 14:00:01 crc kubenswrapper[4881]: I0126 14:00:01.803689 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490600-kxw9z" event={"ID":"7ffa5751-425d-4678-aa0d-279544271e61","Type":"ContainerDied","Data":"30f80f98a98ba37b62db0bd2d3d31682c3807cc611d51805b127e4748b9fa8e6"} Jan 26 14:00:01 crc kubenswrapper[4881]: I0126 14:00:01.805030 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490600-kxw9z" event={"ID":"7ffa5751-425d-4678-aa0d-279544271e61","Type":"ContainerStarted","Data":"8d5dbaa3b4b82c59f72b5c170e10a46edd1c3cd34407eabdc2d47a5e35deabd2"} Jan 26 14:00:03 crc kubenswrapper[4881]: I0126 14:00:03.167588 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490600-kxw9z" Jan 26 14:00:03 crc kubenswrapper[4881]: I0126 14:00:03.363496 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7ffa5751-425d-4678-aa0d-279544271e61-secret-volume\") pod \"7ffa5751-425d-4678-aa0d-279544271e61\" (UID: \"7ffa5751-425d-4678-aa0d-279544271e61\") " Jan 26 14:00:03 crc kubenswrapper[4881]: I0126 14:00:03.363776 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb7jj\" (UniqueName: \"kubernetes.io/projected/7ffa5751-425d-4678-aa0d-279544271e61-kube-api-access-sb7jj\") pod \"7ffa5751-425d-4678-aa0d-279544271e61\" (UID: \"7ffa5751-425d-4678-aa0d-279544271e61\") " Jan 26 14:00:03 crc kubenswrapper[4881]: I0126 14:00:03.363992 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7ffa5751-425d-4678-aa0d-279544271e61-config-volume\") pod \"7ffa5751-425d-4678-aa0d-279544271e61\" (UID: \"7ffa5751-425d-4678-aa0d-279544271e61\") " Jan 26 14:00:03 crc kubenswrapper[4881]: I0126 14:00:03.364671 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ffa5751-425d-4678-aa0d-279544271e61-config-volume" (OuterVolumeSpecName: "config-volume") pod "7ffa5751-425d-4678-aa0d-279544271e61" (UID: "7ffa5751-425d-4678-aa0d-279544271e61"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:00:03 crc kubenswrapper[4881]: I0126 14:00:03.364870 4881 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7ffa5751-425d-4678-aa0d-279544271e61-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 14:00:03 crc kubenswrapper[4881]: I0126 14:00:03.372831 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ffa5751-425d-4678-aa0d-279544271e61-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "7ffa5751-425d-4678-aa0d-279544271e61" (UID: "7ffa5751-425d-4678-aa0d-279544271e61"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:00:03 crc kubenswrapper[4881]: I0126 14:00:03.374740 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ffa5751-425d-4678-aa0d-279544271e61-kube-api-access-sb7jj" (OuterVolumeSpecName: "kube-api-access-sb7jj") pod "7ffa5751-425d-4678-aa0d-279544271e61" (UID: "7ffa5751-425d-4678-aa0d-279544271e61"). InnerVolumeSpecName "kube-api-access-sb7jj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:00:03 crc kubenswrapper[4881]: I0126 14:00:03.468009 4881 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7ffa5751-425d-4678-aa0d-279544271e61-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 26 14:00:03 crc kubenswrapper[4881]: I0126 14:00:03.468050 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb7jj\" (UniqueName: \"kubernetes.io/projected/7ffa5751-425d-4678-aa0d-279544271e61-kube-api-access-sb7jj\") on node \"crc\" DevicePath \"\"" Jan 26 14:00:03 crc kubenswrapper[4881]: I0126 14:00:03.823725 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490600-kxw9z" event={"ID":"7ffa5751-425d-4678-aa0d-279544271e61","Type":"ContainerDied","Data":"8d5dbaa3b4b82c59f72b5c170e10a46edd1c3cd34407eabdc2d47a5e35deabd2"} Jan 26 14:00:03 crc kubenswrapper[4881]: I0126 14:00:03.824081 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8d5dbaa3b4b82c59f72b5c170e10a46edd1c3cd34407eabdc2d47a5e35deabd2" Jan 26 14:00:03 crc kubenswrapper[4881]: I0126 14:00:03.823785 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490600-kxw9z" Jan 26 14:00:04 crc kubenswrapper[4881]: I0126 14:00:04.293602 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490555-vrkwp"] Jan 26 14:00:04 crc kubenswrapper[4881]: I0126 14:00:04.303703 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490555-vrkwp"] Jan 26 14:00:06 crc kubenswrapper[4881]: I0126 14:00:06.097089 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f415aaa-154c-4da9-8dd2-a95a009684f6" path="/var/lib/kubelet/pods/9f415aaa-154c-4da9-8dd2-a95a009684f6/volumes" Jan 26 14:00:24 crc kubenswrapper[4881]: I0126 14:00:24.789636 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fwlbz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 14:00:24 crc kubenswrapper[4881]: I0126 14:00:24.790612 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 14:00:24 crc kubenswrapper[4881]: I0126 14:00:24.790700 4881 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" Jan 26 14:00:24 crc kubenswrapper[4881]: I0126 14:00:24.791624 4881 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9a6137bbf715363821ef9d1278543d89cfef3eeaba572a1b0586ed3e78dd1e5f"} pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 14:00:24 crc kubenswrapper[4881]: I0126 14:00:24.791782 4881 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" containerName="machine-config-daemon" containerID="cri-o://9a6137bbf715363821ef9d1278543d89cfef3eeaba572a1b0586ed3e78dd1e5f" gracePeriod=600 Jan 26 14:00:25 crc kubenswrapper[4881]: I0126 14:00:25.108246 4881 generic.go:334] "Generic (PLEG): container finished" podID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" containerID="9a6137bbf715363821ef9d1278543d89cfef3eeaba572a1b0586ed3e78dd1e5f" exitCode=0 Jan 26 14:00:25 crc kubenswrapper[4881]: I0126 14:00:25.108330 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" event={"ID":"ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19","Type":"ContainerDied","Data":"9a6137bbf715363821ef9d1278543d89cfef3eeaba572a1b0586ed3e78dd1e5f"} Jan 26 14:00:25 crc kubenswrapper[4881]: I0126 14:00:25.108627 4881 scope.go:117] "RemoveContainer" containerID="e58488a7a4b2d3d87c3f5812b5df6527294e08d8b777f8290a767507da8c72e2" Jan 26 14:00:26 crc kubenswrapper[4881]: I0126 14:00:26.119465 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" event={"ID":"ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19","Type":"ContainerStarted","Data":"4cec94ff24a4209ec1e768ddfae44e3df9a99acb09eb578241facbe6c7c87609"} Jan 26 14:00:56 crc kubenswrapper[4881]: I0126 14:00:56.984197 4881 scope.go:117] "RemoveContainer" containerID="28ef0bef1ab055753a7041f45fd552de3dd883e17ca26808138dd379b43361a7" Jan 26 14:01:00 crc kubenswrapper[4881]: I0126 14:01:00.177393 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29490601-x4d85"] Jan 26 14:01:00 crc kubenswrapper[4881]: E0126 14:01:00.178866 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ffa5751-425d-4678-aa0d-279544271e61" containerName="collect-profiles" Jan 26 14:01:00 crc kubenswrapper[4881]: I0126 14:01:00.178892 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ffa5751-425d-4678-aa0d-279544271e61" containerName="collect-profiles" Jan 26 14:01:00 crc kubenswrapper[4881]: I0126 14:01:00.179254 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ffa5751-425d-4678-aa0d-279544271e61" containerName="collect-profiles" Jan 26 14:01:00 crc kubenswrapper[4881]: I0126 14:01:00.180354 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29490601-x4d85" Jan 26 14:01:00 crc kubenswrapper[4881]: I0126 14:01:00.196450 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29490601-x4d85"] Jan 26 14:01:00 crc kubenswrapper[4881]: I0126 14:01:00.221851 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce705aca-2bb5-4314-aa70-a71cc77303d8-combined-ca-bundle\") pod \"keystone-cron-29490601-x4d85\" (UID: \"ce705aca-2bb5-4314-aa70-a71cc77303d8\") " pod="openstack/keystone-cron-29490601-x4d85" Jan 26 14:01:00 crc kubenswrapper[4881]: I0126 14:01:00.222103 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce705aca-2bb5-4314-aa70-a71cc77303d8-config-data\") pod \"keystone-cron-29490601-x4d85\" (UID: \"ce705aca-2bb5-4314-aa70-a71cc77303d8\") " pod="openstack/keystone-cron-29490601-x4d85" Jan 26 14:01:00 crc kubenswrapper[4881]: I0126 14:01:00.222280 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdrfz\" (UniqueName: \"kubernetes.io/projected/ce705aca-2bb5-4314-aa70-a71cc77303d8-kube-api-access-bdrfz\") pod \"keystone-cron-29490601-x4d85\" (UID: \"ce705aca-2bb5-4314-aa70-a71cc77303d8\") " pod="openstack/keystone-cron-29490601-x4d85" Jan 26 14:01:00 crc kubenswrapper[4881]: I0126 14:01:00.222472 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ce705aca-2bb5-4314-aa70-a71cc77303d8-fernet-keys\") pod \"keystone-cron-29490601-x4d85\" (UID: \"ce705aca-2bb5-4314-aa70-a71cc77303d8\") " pod="openstack/keystone-cron-29490601-x4d85" Jan 26 14:01:00 crc kubenswrapper[4881]: I0126 14:01:00.324458 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce705aca-2bb5-4314-aa70-a71cc77303d8-combined-ca-bundle\") pod \"keystone-cron-29490601-x4d85\" (UID: \"ce705aca-2bb5-4314-aa70-a71cc77303d8\") " pod="openstack/keystone-cron-29490601-x4d85" Jan 26 14:01:00 crc kubenswrapper[4881]: I0126 14:01:00.324616 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce705aca-2bb5-4314-aa70-a71cc77303d8-config-data\") pod \"keystone-cron-29490601-x4d85\" (UID: \"ce705aca-2bb5-4314-aa70-a71cc77303d8\") " pod="openstack/keystone-cron-29490601-x4d85" Jan 26 14:01:00 crc kubenswrapper[4881]: I0126 14:01:00.324701 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bdrfz\" (UniqueName: \"kubernetes.io/projected/ce705aca-2bb5-4314-aa70-a71cc77303d8-kube-api-access-bdrfz\") pod \"keystone-cron-29490601-x4d85\" (UID: \"ce705aca-2bb5-4314-aa70-a71cc77303d8\") " pod="openstack/keystone-cron-29490601-x4d85" Jan 26 14:01:00 crc kubenswrapper[4881]: I0126 14:01:00.324750 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ce705aca-2bb5-4314-aa70-a71cc77303d8-fernet-keys\") pod \"keystone-cron-29490601-x4d85\" (UID: \"ce705aca-2bb5-4314-aa70-a71cc77303d8\") " pod="openstack/keystone-cron-29490601-x4d85" Jan 26 14:01:00 crc kubenswrapper[4881]: I0126 14:01:00.337680 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ce705aca-2bb5-4314-aa70-a71cc77303d8-fernet-keys\") pod \"keystone-cron-29490601-x4d85\" (UID: \"ce705aca-2bb5-4314-aa70-a71cc77303d8\") " pod="openstack/keystone-cron-29490601-x4d85" Jan 26 14:01:00 crc kubenswrapper[4881]: I0126 14:01:00.343182 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce705aca-2bb5-4314-aa70-a71cc77303d8-combined-ca-bundle\") pod \"keystone-cron-29490601-x4d85\" (UID: \"ce705aca-2bb5-4314-aa70-a71cc77303d8\") " pod="openstack/keystone-cron-29490601-x4d85" Jan 26 14:01:00 crc kubenswrapper[4881]: I0126 14:01:00.344465 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce705aca-2bb5-4314-aa70-a71cc77303d8-config-data\") pod \"keystone-cron-29490601-x4d85\" (UID: \"ce705aca-2bb5-4314-aa70-a71cc77303d8\") " pod="openstack/keystone-cron-29490601-x4d85" Jan 26 14:01:00 crc kubenswrapper[4881]: I0126 14:01:00.349564 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bdrfz\" (UniqueName: \"kubernetes.io/projected/ce705aca-2bb5-4314-aa70-a71cc77303d8-kube-api-access-bdrfz\") pod \"keystone-cron-29490601-x4d85\" (UID: \"ce705aca-2bb5-4314-aa70-a71cc77303d8\") " pod="openstack/keystone-cron-29490601-x4d85" Jan 26 14:01:00 crc kubenswrapper[4881]: I0126 14:01:00.511083 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29490601-x4d85" Jan 26 14:01:00 crc kubenswrapper[4881]: I0126 14:01:00.981776 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29490601-x4d85"] Jan 26 14:01:01 crc kubenswrapper[4881]: I0126 14:01:01.562258 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29490601-x4d85" event={"ID":"ce705aca-2bb5-4314-aa70-a71cc77303d8","Type":"ContainerStarted","Data":"7ead99ae5c75996ac65d04637818447ca5b4bd35bbd6d9af125b892a54018982"} Jan 26 14:01:01 crc kubenswrapper[4881]: I0126 14:01:01.562701 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29490601-x4d85" event={"ID":"ce705aca-2bb5-4314-aa70-a71cc77303d8","Type":"ContainerStarted","Data":"6c3e29883fdc4d535693edfee49fc66a1f6531f6be0afa28dfdacc1759985787"} Jan 26 14:01:01 crc kubenswrapper[4881]: I0126 14:01:01.583126 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29490601-x4d85" podStartSLOduration=1.583106693 podStartE2EDuration="1.583106693s" podCreationTimestamp="2026-01-26 14:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:01:01.579315052 +0000 UTC m=+5134.058625078" watchObservedRunningTime="2026-01-26 14:01:01.583106693 +0000 UTC m=+5134.062416719" Jan 26 14:01:05 crc kubenswrapper[4881]: I0126 14:01:05.627007 4881 generic.go:334] "Generic (PLEG): container finished" podID="ce705aca-2bb5-4314-aa70-a71cc77303d8" containerID="7ead99ae5c75996ac65d04637818447ca5b4bd35bbd6d9af125b892a54018982" exitCode=0 Jan 26 14:01:05 crc kubenswrapper[4881]: I0126 14:01:05.627113 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29490601-x4d85" event={"ID":"ce705aca-2bb5-4314-aa70-a71cc77303d8","Type":"ContainerDied","Data":"7ead99ae5c75996ac65d04637818447ca5b4bd35bbd6d9af125b892a54018982"} Jan 26 14:01:07 crc kubenswrapper[4881]: 
I0126 14:01:07.039084 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29490601-x4d85" Jan 26 14:01:07 crc kubenswrapper[4881]: I0126 14:01:07.060653 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce705aca-2bb5-4314-aa70-a71cc77303d8-config-data\") pod \"ce705aca-2bb5-4314-aa70-a71cc77303d8\" (UID: \"ce705aca-2bb5-4314-aa70-a71cc77303d8\") " Jan 26 14:01:07 crc kubenswrapper[4881]: I0126 14:01:07.060886 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce705aca-2bb5-4314-aa70-a71cc77303d8-combined-ca-bundle\") pod \"ce705aca-2bb5-4314-aa70-a71cc77303d8\" (UID: \"ce705aca-2bb5-4314-aa70-a71cc77303d8\") " Jan 26 14:01:07 crc kubenswrapper[4881]: I0126 14:01:07.060918 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bdrfz\" (UniqueName: \"kubernetes.io/projected/ce705aca-2bb5-4314-aa70-a71cc77303d8-kube-api-access-bdrfz\") pod \"ce705aca-2bb5-4314-aa70-a71cc77303d8\" (UID: \"ce705aca-2bb5-4314-aa70-a71cc77303d8\") " Jan 26 14:01:07 crc kubenswrapper[4881]: I0126 14:01:07.060977 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ce705aca-2bb5-4314-aa70-a71cc77303d8-fernet-keys\") pod \"ce705aca-2bb5-4314-aa70-a71cc77303d8\" (UID: \"ce705aca-2bb5-4314-aa70-a71cc77303d8\") " Jan 26 14:01:07 crc kubenswrapper[4881]: I0126 14:01:07.067074 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce705aca-2bb5-4314-aa70-a71cc77303d8-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "ce705aca-2bb5-4314-aa70-a71cc77303d8" (UID: "ce705aca-2bb5-4314-aa70-a71cc77303d8"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:01:07 crc kubenswrapper[4881]: I0126 14:01:07.067657 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce705aca-2bb5-4314-aa70-a71cc77303d8-kube-api-access-bdrfz" (OuterVolumeSpecName: "kube-api-access-bdrfz") pod "ce705aca-2bb5-4314-aa70-a71cc77303d8" (UID: "ce705aca-2bb5-4314-aa70-a71cc77303d8"). InnerVolumeSpecName "kube-api-access-bdrfz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:01:07 crc kubenswrapper[4881]: I0126 14:01:07.102552 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce705aca-2bb5-4314-aa70-a71cc77303d8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ce705aca-2bb5-4314-aa70-a71cc77303d8" (UID: "ce705aca-2bb5-4314-aa70-a71cc77303d8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:01:07 crc kubenswrapper[4881]: I0126 14:01:07.121590 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce705aca-2bb5-4314-aa70-a71cc77303d8-config-data" (OuterVolumeSpecName: "config-data") pod "ce705aca-2bb5-4314-aa70-a71cc77303d8" (UID: "ce705aca-2bb5-4314-aa70-a71cc77303d8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:01:07 crc kubenswrapper[4881]: I0126 14:01:07.162468 4881 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce705aca-2bb5-4314-aa70-a71cc77303d8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 14:01:07 crc kubenswrapper[4881]: I0126 14:01:07.162509 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bdrfz\" (UniqueName: \"kubernetes.io/projected/ce705aca-2bb5-4314-aa70-a71cc77303d8-kube-api-access-bdrfz\") on node \"crc\" DevicePath \"\"" Jan 26 14:01:07 crc kubenswrapper[4881]: I0126 14:01:07.162534 4881 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ce705aca-2bb5-4314-aa70-a71cc77303d8-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 26 14:01:07 crc kubenswrapper[4881]: I0126 14:01:07.162545 4881 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce705aca-2bb5-4314-aa70-a71cc77303d8-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 14:01:07 crc kubenswrapper[4881]: I0126 14:01:07.649245 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29490601-x4d85" event={"ID":"ce705aca-2bb5-4314-aa70-a71cc77303d8","Type":"ContainerDied","Data":"6c3e29883fdc4d535693edfee49fc66a1f6531f6be0afa28dfdacc1759985787"} Jan 26 14:01:07 crc kubenswrapper[4881]: I0126 14:01:07.649293 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6c3e29883fdc4d535693edfee49fc66a1f6531f6be0afa28dfdacc1759985787" Jan 26 14:01:07 crc kubenswrapper[4881]: I0126 14:01:07.649427 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29490601-x4d85" Jan 26 14:01:40 crc kubenswrapper[4881]: I0126 14:01:40.007843 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-2wmbk"] Jan 26 14:01:40 crc kubenswrapper[4881]: E0126 14:01:40.008943 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce705aca-2bb5-4314-aa70-a71cc77303d8" containerName="keystone-cron" Jan 26 14:01:40 crc kubenswrapper[4881]: I0126 14:01:40.008960 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce705aca-2bb5-4314-aa70-a71cc77303d8" containerName="keystone-cron" Jan 26 14:01:40 crc kubenswrapper[4881]: I0126 14:01:40.009151 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce705aca-2bb5-4314-aa70-a71cc77303d8" containerName="keystone-cron" Jan 26 14:01:40 crc kubenswrapper[4881]: I0126 14:01:40.010644 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2wmbk" Jan 26 14:01:40 crc kubenswrapper[4881]: I0126 14:01:40.030155 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2wmbk"] Jan 26 14:01:40 crc kubenswrapper[4881]: I0126 14:01:40.160882 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d511048-6fa5-4594-a9c8-bb0aa05b427c-utilities\") pod \"community-operators-2wmbk\" (UID: \"6d511048-6fa5-4594-a9c8-bb0aa05b427c\") " pod="openshift-marketplace/community-operators-2wmbk" Jan 26 14:01:40 crc kubenswrapper[4881]: I0126 14:01:40.161160 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d511048-6fa5-4594-a9c8-bb0aa05b427c-catalog-content\") pod \"community-operators-2wmbk\" (UID: \"6d511048-6fa5-4594-a9c8-bb0aa05b427c\") " pod="openshift-marketplace/community-operators-2wmbk" Jan 26 14:01:40 crc kubenswrapper[4881]: I0126 14:01:40.161209 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86hgw\" (UniqueName: \"kubernetes.io/projected/6d511048-6fa5-4594-a9c8-bb0aa05b427c-kube-api-access-86hgw\") pod \"community-operators-2wmbk\" (UID: \"6d511048-6fa5-4594-a9c8-bb0aa05b427c\") " pod="openshift-marketplace/community-operators-2wmbk" Jan 26 14:01:40 crc kubenswrapper[4881]: I0126 14:01:40.263740 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d511048-6fa5-4594-a9c8-bb0aa05b427c-utilities\") pod \"community-operators-2wmbk\" (UID: \"6d511048-6fa5-4594-a9c8-bb0aa05b427c\") " pod="openshift-marketplace/community-operators-2wmbk" Jan 26 14:01:40 crc kubenswrapper[4881]: I0126 14:01:40.263810 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d511048-6fa5-4594-a9c8-bb0aa05b427c-catalog-content\") pod \"community-operators-2wmbk\" (UID: \"6d511048-6fa5-4594-a9c8-bb0aa05b427c\") " pod="openshift-marketplace/community-operators-2wmbk" Jan 26 14:01:40 crc kubenswrapper[4881]: I0126 14:01:40.263863 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-86hgw\" (UniqueName: \"kubernetes.io/projected/6d511048-6fa5-4594-a9c8-bb0aa05b427c-kube-api-access-86hgw\") pod \"community-operators-2wmbk\" (UID: \"6d511048-6fa5-4594-a9c8-bb0aa05b427c\") " pod="openshift-marketplace/community-operators-2wmbk" Jan 26 14:01:40 crc kubenswrapper[4881]: I0126 14:01:40.264272 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d511048-6fa5-4594-a9c8-bb0aa05b427c-utilities\") pod \"community-operators-2wmbk\" (UID: \"6d511048-6fa5-4594-a9c8-bb0aa05b427c\") " pod="openshift-marketplace/community-operators-2wmbk" Jan 26 14:01:40 crc kubenswrapper[4881]: I0126 14:01:40.264294 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d511048-6fa5-4594-a9c8-bb0aa05b427c-catalog-content\") pod \"community-operators-2wmbk\" (UID: \"6d511048-6fa5-4594-a9c8-bb0aa05b427c\") " pod="openshift-marketplace/community-operators-2wmbk" Jan 26 14:01:40 crc kubenswrapper[4881]: I0126 14:01:40.297606 4881 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-86hgw\" (UniqueName: \"kubernetes.io/projected/6d511048-6fa5-4594-a9c8-bb0aa05b427c-kube-api-access-86hgw\") pod \"community-operators-2wmbk\" (UID: \"6d511048-6fa5-4594-a9c8-bb0aa05b427c\") " pod="openshift-marketplace/community-operators-2wmbk" Jan 26 14:01:40 crc kubenswrapper[4881]: I0126 14:01:40.337749 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2wmbk" Jan 26 14:01:40 crc kubenswrapper[4881]: I0126 14:01:40.912216 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2wmbk"] Jan 26 14:01:41 crc kubenswrapper[4881]: I0126 14:01:41.039830 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2wmbk" event={"ID":"6d511048-6fa5-4594-a9c8-bb0aa05b427c","Type":"ContainerStarted","Data":"46e4cca715b30f395b0e5239903162604cf9aa3172716d03ac90f25643afb911"} Jan 26 14:01:42 crc kubenswrapper[4881]: I0126 14:01:42.053325 4881 generic.go:334] "Generic (PLEG): container finished" podID="6d511048-6fa5-4594-a9c8-bb0aa05b427c" containerID="cba704beaee00211693b0846948e2373a9ebea88c7b68ce5d413531d7bf71aa9" exitCode=0 Jan 26 14:01:42 crc kubenswrapper[4881]: I0126 14:01:42.053369 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2wmbk" event={"ID":"6d511048-6fa5-4594-a9c8-bb0aa05b427c","Type":"ContainerDied","Data":"cba704beaee00211693b0846948e2373a9ebea88c7b68ce5d413531d7bf71aa9"} Jan 26 14:01:42 crc kubenswrapper[4881]: I0126 14:01:42.055691 4881 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 14:01:44 crc kubenswrapper[4881]: I0126 14:01:44.097933 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2wmbk" event={"ID":"6d511048-6fa5-4594-a9c8-bb0aa05b427c","Type":"ContainerStarted","Data":"2d11c524a4ef7c5a4a35fdea4baf8366730bea7c59e5442b7171910bca6d1bfa"} Jan 26 14:01:45 crc kubenswrapper[4881]: I0126 14:01:45.099887 4881 generic.go:334] "Generic (PLEG): container finished" podID="6d511048-6fa5-4594-a9c8-bb0aa05b427c" containerID="2d11c524a4ef7c5a4a35fdea4baf8366730bea7c59e5442b7171910bca6d1bfa" exitCode=0 Jan 26 14:01:45 crc kubenswrapper[4881]: I0126 14:01:45.099953 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2wmbk" event={"ID":"6d511048-6fa5-4594-a9c8-bb0aa05b427c","Type":"ContainerDied","Data":"2d11c524a4ef7c5a4a35fdea4baf8366730bea7c59e5442b7171910bca6d1bfa"} Jan 26 14:01:52 crc kubenswrapper[4881]: I0126 14:01:52.199782 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2wmbk" event={"ID":"6d511048-6fa5-4594-a9c8-bb0aa05b427c","Type":"ContainerStarted","Data":"cca23fc75c6aa53ee39905e25077c1a5ddc035f350ca24caa38b8d6a4975b5ac"} Jan 26 14:01:52 crc kubenswrapper[4881]: I0126 14:01:52.234142 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-2wmbk" podStartSLOduration=4.076801202 podStartE2EDuration="13.233733154s" podCreationTimestamp="2026-01-26 14:01:39 +0000 UTC" firstStartedPulling="2026-01-26 14:01:42.055442836 +0000 UTC m=+5174.534752862" lastFinishedPulling="2026-01-26 14:01:51.212374748 +0000 UTC m=+5183.691684814" observedRunningTime="2026-01-26 14:01:52.221404405 +0000 UTC m=+5184.700714431" watchObservedRunningTime="2026-01-26 
14:01:52.233733154 +0000 UTC m=+5184.713043220" Jan 26 14:02:00 crc kubenswrapper[4881]: I0126 14:02:00.337958 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-2wmbk" Jan 26 14:02:00 crc kubenswrapper[4881]: I0126 14:02:00.338361 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-2wmbk" Jan 26 14:02:00 crc kubenswrapper[4881]: I0126 14:02:00.391722 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-2wmbk" Jan 26 14:02:01 crc kubenswrapper[4881]: I0126 14:02:01.388105 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-2wmbk" Jan 26 14:02:01 crc kubenswrapper[4881]: I0126 14:02:01.471938 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2wmbk"] Jan 26 14:02:03 crc kubenswrapper[4881]: I0126 14:02:03.339651 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-2wmbk" podUID="6d511048-6fa5-4594-a9c8-bb0aa05b427c" containerName="registry-server" containerID="cri-o://cca23fc75c6aa53ee39905e25077c1a5ddc035f350ca24caa38b8d6a4975b5ac" gracePeriod=2 Jan 26 14:02:03 crc kubenswrapper[4881]: I0126 14:02:03.827762 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2wmbk" Jan 26 14:02:03 crc kubenswrapper[4881]: I0126 14:02:03.925148 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d511048-6fa5-4594-a9c8-bb0aa05b427c-utilities\") pod \"6d511048-6fa5-4594-a9c8-bb0aa05b427c\" (UID: \"6d511048-6fa5-4594-a9c8-bb0aa05b427c\") " Jan 26 14:02:03 crc kubenswrapper[4881]: I0126 14:02:03.925319 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d511048-6fa5-4594-a9c8-bb0aa05b427c-catalog-content\") pod \"6d511048-6fa5-4594-a9c8-bb0aa05b427c\" (UID: \"6d511048-6fa5-4594-a9c8-bb0aa05b427c\") " Jan 26 14:02:03 crc kubenswrapper[4881]: I0126 14:02:03.925416 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-86hgw\" (UniqueName: \"kubernetes.io/projected/6d511048-6fa5-4594-a9c8-bb0aa05b427c-kube-api-access-86hgw\") pod \"6d511048-6fa5-4594-a9c8-bb0aa05b427c\" (UID: \"6d511048-6fa5-4594-a9c8-bb0aa05b427c\") " Jan 26 14:02:03 crc kubenswrapper[4881]: I0126 14:02:03.926280 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6d511048-6fa5-4594-a9c8-bb0aa05b427c-utilities" (OuterVolumeSpecName: "utilities") pod "6d511048-6fa5-4594-a9c8-bb0aa05b427c" (UID: "6d511048-6fa5-4594-a9c8-bb0aa05b427c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:02:03 crc kubenswrapper[4881]: I0126 14:02:03.934292 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d511048-6fa5-4594-a9c8-bb0aa05b427c-kube-api-access-86hgw" (OuterVolumeSpecName: "kube-api-access-86hgw") pod "6d511048-6fa5-4594-a9c8-bb0aa05b427c" (UID: "6d511048-6fa5-4594-a9c8-bb0aa05b427c"). InnerVolumeSpecName "kube-api-access-86hgw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:02:03 crc kubenswrapper[4881]: I0126 14:02:03.983859 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6d511048-6fa5-4594-a9c8-bb0aa05b427c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6d511048-6fa5-4594-a9c8-bb0aa05b427c" (UID: "6d511048-6fa5-4594-a9c8-bb0aa05b427c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:02:04 crc kubenswrapper[4881]: I0126 14:02:04.031443 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-86hgw\" (UniqueName: \"kubernetes.io/projected/6d511048-6fa5-4594-a9c8-bb0aa05b427c-kube-api-access-86hgw\") on node \"crc\" DevicePath \"\"" Jan 26 14:02:04 crc kubenswrapper[4881]: I0126 14:02:04.031487 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d511048-6fa5-4594-a9c8-bb0aa05b427c-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 14:02:04 crc kubenswrapper[4881]: I0126 14:02:04.031512 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d511048-6fa5-4594-a9c8-bb0aa05b427c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 14:02:04 crc kubenswrapper[4881]: I0126 14:02:04.355679 4881 generic.go:334] "Generic (PLEG): container finished" podID="6d511048-6fa5-4594-a9c8-bb0aa05b427c" containerID="cca23fc75c6aa53ee39905e25077c1a5ddc035f350ca24caa38b8d6a4975b5ac" exitCode=0 Jan 26 14:02:04 crc kubenswrapper[4881]: I0126 14:02:04.355743 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2wmbk" event={"ID":"6d511048-6fa5-4594-a9c8-bb0aa05b427c","Type":"ContainerDied","Data":"cca23fc75c6aa53ee39905e25077c1a5ddc035f350ca24caa38b8d6a4975b5ac"} Jan 26 14:02:04 crc kubenswrapper[4881]: I0126 14:02:04.355782 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2wmbk" event={"ID":"6d511048-6fa5-4594-a9c8-bb0aa05b427c","Type":"ContainerDied","Data":"46e4cca715b30f395b0e5239903162604cf9aa3172716d03ac90f25643afb911"} Jan 26 14:02:04 crc kubenswrapper[4881]: I0126 14:02:04.355809 4881 scope.go:117] "RemoveContainer" containerID="cca23fc75c6aa53ee39905e25077c1a5ddc035f350ca24caa38b8d6a4975b5ac" Jan 26 14:02:04 crc kubenswrapper[4881]: I0126 14:02:04.356016 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2wmbk" Jan 26 14:02:04 crc kubenswrapper[4881]: I0126 14:02:04.386442 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2wmbk"] Jan 26 14:02:04 crc kubenswrapper[4881]: I0126 14:02:04.387964 4881 scope.go:117] "RemoveContainer" containerID="2d11c524a4ef7c5a4a35fdea4baf8366730bea7c59e5442b7171910bca6d1bfa" Jan 26 14:02:04 crc kubenswrapper[4881]: I0126 14:02:04.404022 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-2wmbk"] Jan 26 14:02:04 crc kubenswrapper[4881]: I0126 14:02:04.418852 4881 scope.go:117] "RemoveContainer" containerID="cba704beaee00211693b0846948e2373a9ebea88c7b68ce5d413531d7bf71aa9" Jan 26 14:02:04 crc kubenswrapper[4881]: I0126 14:02:04.472577 4881 scope.go:117] "RemoveContainer" containerID="cca23fc75c6aa53ee39905e25077c1a5ddc035f350ca24caa38b8d6a4975b5ac" Jan 26 14:02:04 crc kubenswrapper[4881]: E0126 14:02:04.474433 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cca23fc75c6aa53ee39905e25077c1a5ddc035f350ca24caa38b8d6a4975b5ac\": container with ID starting with cca23fc75c6aa53ee39905e25077c1a5ddc035f350ca24caa38b8d6a4975b5ac not found: ID does not exist" containerID="cca23fc75c6aa53ee39905e25077c1a5ddc035f350ca24caa38b8d6a4975b5ac" Jan 26 14:02:04 crc kubenswrapper[4881]: I0126 14:02:04.474662 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cca23fc75c6aa53ee39905e25077c1a5ddc035f350ca24caa38b8d6a4975b5ac"} err="failed to get container status \"cca23fc75c6aa53ee39905e25077c1a5ddc035f350ca24caa38b8d6a4975b5ac\": rpc error: code = NotFound desc = could not find container \"cca23fc75c6aa53ee39905e25077c1a5ddc035f350ca24caa38b8d6a4975b5ac\": container with ID starting with cca23fc75c6aa53ee39905e25077c1a5ddc035f350ca24caa38b8d6a4975b5ac not found: ID does not exist" Jan 26 14:02:04 crc kubenswrapper[4881]: I0126 14:02:04.474730 4881 scope.go:117] "RemoveContainer" containerID="2d11c524a4ef7c5a4a35fdea4baf8366730bea7c59e5442b7171910bca6d1bfa" Jan 26 14:02:04 crc kubenswrapper[4881]: E0126 14:02:04.475476 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d11c524a4ef7c5a4a35fdea4baf8366730bea7c59e5442b7171910bca6d1bfa\": container with ID starting with 2d11c524a4ef7c5a4a35fdea4baf8366730bea7c59e5442b7171910bca6d1bfa not found: ID does not exist" containerID="2d11c524a4ef7c5a4a35fdea4baf8366730bea7c59e5442b7171910bca6d1bfa" Jan 26 14:02:04 crc kubenswrapper[4881]: I0126 14:02:04.475531 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d11c524a4ef7c5a4a35fdea4baf8366730bea7c59e5442b7171910bca6d1bfa"} err="failed to get container status \"2d11c524a4ef7c5a4a35fdea4baf8366730bea7c59e5442b7171910bca6d1bfa\": rpc error: code = NotFound desc = could not find container \"2d11c524a4ef7c5a4a35fdea4baf8366730bea7c59e5442b7171910bca6d1bfa\": container with ID starting with 2d11c524a4ef7c5a4a35fdea4baf8366730bea7c59e5442b7171910bca6d1bfa not found: ID does not exist" Jan 26 14:02:04 crc kubenswrapper[4881]: I0126 14:02:04.475558 4881 scope.go:117] "RemoveContainer" containerID="cba704beaee00211693b0846948e2373a9ebea88c7b68ce5d413531d7bf71aa9" Jan 26 14:02:04 crc kubenswrapper[4881]: E0126 14:02:04.475948 4881 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"cba704beaee00211693b0846948e2373a9ebea88c7b68ce5d413531d7bf71aa9\": container with ID starting with cba704beaee00211693b0846948e2373a9ebea88c7b68ce5d413531d7bf71aa9 not found: ID does not exist" containerID="cba704beaee00211693b0846948e2373a9ebea88c7b68ce5d413531d7bf71aa9" Jan 26 14:02:04 crc kubenswrapper[4881]: I0126 14:02:04.475972 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cba704beaee00211693b0846948e2373a9ebea88c7b68ce5d413531d7bf71aa9"} err="failed to get container status \"cba704beaee00211693b0846948e2373a9ebea88c7b68ce5d413531d7bf71aa9\": rpc error: code = NotFound desc = could not find container \"cba704beaee00211693b0846948e2373a9ebea88c7b68ce5d413531d7bf71aa9\": container with ID starting with cba704beaee00211693b0846948e2373a9ebea88c7b68ce5d413531d7bf71aa9 not found: ID does not exist" Jan 26 14:02:06 crc kubenswrapper[4881]: I0126 14:02:06.097017 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d511048-6fa5-4594-a9c8-bb0aa05b427c" path="/var/lib/kubelet/pods/6d511048-6fa5-4594-a9c8-bb0aa05b427c/volumes" Jan 26 14:02:54 crc kubenswrapper[4881]: I0126 14:02:54.789845 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fwlbz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 14:02:54 crc kubenswrapper[4881]: I0126 14:02:54.790736 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 14:03:24 crc kubenswrapper[4881]: I0126 14:03:24.789805 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fwlbz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 14:03:24 crc kubenswrapper[4881]: I0126 14:03:24.790619 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 14:03:54 crc kubenswrapper[4881]: I0126 14:03:54.788989 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fwlbz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 14:03:54 crc kubenswrapper[4881]: I0126 14:03:54.789641 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 14:03:54 crc kubenswrapper[4881]: I0126 14:03:54.789702 4881 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" 
status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz"
Jan 26 14:03:54 crc kubenswrapper[4881]: I0126 14:03:54.790378 4881 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4cec94ff24a4209ec1e768ddfae44e3df9a99acb09eb578241facbe6c7c87609"} pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 26 14:03:54 crc kubenswrapper[4881]: I0126 14:03:54.790448 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" containerName="machine-config-daemon" containerID="cri-o://4cec94ff24a4209ec1e768ddfae44e3df9a99acb09eb578241facbe6c7c87609" gracePeriod=600
Jan 26 14:03:54 crc kubenswrapper[4881]: E0126 14:03:54.923742 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19"
Jan 26 14:03:55 crc kubenswrapper[4881]: I0126 14:03:55.557420 4881 generic.go:334] "Generic (PLEG): container finished" podID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" containerID="4cec94ff24a4209ec1e768ddfae44e3df9a99acb09eb578241facbe6c7c87609" exitCode=0
Jan 26 14:03:55 crc kubenswrapper[4881]: I0126 14:03:55.557462 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" event={"ID":"ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19","Type":"ContainerDied","Data":"4cec94ff24a4209ec1e768ddfae44e3df9a99acb09eb578241facbe6c7c87609"}
Jan 26 14:03:55 crc kubenswrapper[4881]: I0126 14:03:55.557493 4881 scope.go:117] "RemoveContainer" containerID="9a6137bbf715363821ef9d1278543d89cfef3eeaba572a1b0586ed3e78dd1e5f"
Jan 26 14:03:55 crc kubenswrapper[4881]: I0126 14:03:55.558145 4881 scope.go:117] "RemoveContainer" containerID="4cec94ff24a4209ec1e768ddfae44e3df9a99acb09eb578241facbe6c7c87609"
Jan 26 14:03:55 crc kubenswrapper[4881]: E0126 14:03:55.558439 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19"
Jan 26 14:04:04 crc kubenswrapper[4881]: I0126 14:04:04.407665 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-4ngsx"]
Jan 26 14:04:04 crc kubenswrapper[4881]: E0126 14:04:04.409330 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d511048-6fa5-4594-a9c8-bb0aa05b427c" containerName="extract-utilities"
Jan 26 14:04:04 crc kubenswrapper[4881]: I0126 14:04:04.409368 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d511048-6fa5-4594-a9c8-bb0aa05b427c" containerName="extract-utilities"
Jan 26 14:04:04 crc kubenswrapper[4881]: E0126 14:04:04.409426 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d511048-6fa5-4594-a9c8-bb0aa05b427c" containerName="registry-server"
Jan 26 14:04:04 crc kubenswrapper[4881]: I0126 14:04:04.409445 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d511048-6fa5-4594-a9c8-bb0aa05b427c" containerName="registry-server"
Jan 26 14:04:04 crc kubenswrapper[4881]: E0126 14:04:04.409498 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d511048-6fa5-4594-a9c8-bb0aa05b427c" containerName="extract-content"
Jan 26 14:04:04 crc kubenswrapper[4881]: I0126 14:04:04.409637 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d511048-6fa5-4594-a9c8-bb0aa05b427c" containerName="extract-content"
Jan 26 14:04:04 crc kubenswrapper[4881]: I0126 14:04:04.410243 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d511048-6fa5-4594-a9c8-bb0aa05b427c" containerName="registry-server"
Jan 26 14:04:04 crc kubenswrapper[4881]: I0126 14:04:04.413311 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4ngsx"
Jan 26 14:04:04 crc kubenswrapper[4881]: I0126 14:04:04.420340 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4ngsx"]
Jan 26 14:04:04 crc kubenswrapper[4881]: I0126 14:04:04.485389 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-745qw\" (UniqueName: \"kubernetes.io/projected/2c31fefa-8de2-4022-ba05-a3944510233e-kube-api-access-745qw\") pod \"redhat-operators-4ngsx\" (UID: \"2c31fefa-8de2-4022-ba05-a3944510233e\") " pod="openshift-marketplace/redhat-operators-4ngsx"
Jan 26 14:04:04 crc kubenswrapper[4881]: I0126 14:04:04.485616 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c31fefa-8de2-4022-ba05-a3944510233e-catalog-content\") pod \"redhat-operators-4ngsx\" (UID: \"2c31fefa-8de2-4022-ba05-a3944510233e\") " pod="openshift-marketplace/redhat-operators-4ngsx"
Jan 26 14:04:04 crc kubenswrapper[4881]: I0126 14:04:04.485799 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c31fefa-8de2-4022-ba05-a3944510233e-utilities\") pod \"redhat-operators-4ngsx\" (UID: \"2c31fefa-8de2-4022-ba05-a3944510233e\") " pod="openshift-marketplace/redhat-operators-4ngsx"
Jan 26 14:04:04 crc kubenswrapper[4881]: I0126 14:04:04.588215 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c31fefa-8de2-4022-ba05-a3944510233e-catalog-content\") pod \"redhat-operators-4ngsx\" (UID: \"2c31fefa-8de2-4022-ba05-a3944510233e\") " pod="openshift-marketplace/redhat-operators-4ngsx"
Jan 26 14:04:04 crc kubenswrapper[4881]: I0126 14:04:04.588277 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c31fefa-8de2-4022-ba05-a3944510233e-utilities\") pod \"redhat-operators-4ngsx\" (UID: \"2c31fefa-8de2-4022-ba05-a3944510233e\") " pod="openshift-marketplace/redhat-operators-4ngsx"
Jan 26 14:04:04 crc kubenswrapper[4881]: I0126 14:04:04.588365 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-745qw\" (UniqueName: \"kubernetes.io/projected/2c31fefa-8de2-4022-ba05-a3944510233e-kube-api-access-745qw\") pod \"redhat-operators-4ngsx\" (UID: \"2c31fefa-8de2-4022-ba05-a3944510233e\") " pod="openshift-marketplace/redhat-operators-4ngsx"
Jan 26 14:04:04 crc kubenswrapper[4881]: I0126 14:04:04.588951 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c31fefa-8de2-4022-ba05-a3944510233e-utilities\") pod \"redhat-operators-4ngsx\" (UID: \"2c31fefa-8de2-4022-ba05-a3944510233e\") " pod="openshift-marketplace/redhat-operators-4ngsx"
Jan 26 14:04:04 crc kubenswrapper[4881]: I0126 14:04:04.589062 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c31fefa-8de2-4022-ba05-a3944510233e-catalog-content\") pod \"redhat-operators-4ngsx\" (UID: \"2c31fefa-8de2-4022-ba05-a3944510233e\") " pod="openshift-marketplace/redhat-operators-4ngsx"
Jan 26 14:04:04 crc kubenswrapper[4881]: I0126 14:04:04.611995 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-745qw\" (UniqueName: \"kubernetes.io/projected/2c31fefa-8de2-4022-ba05-a3944510233e-kube-api-access-745qw\") pod \"redhat-operators-4ngsx\" (UID: \"2c31fefa-8de2-4022-ba05-a3944510233e\") " pod="openshift-marketplace/redhat-operators-4ngsx"
Jan 26 14:04:04 crc kubenswrapper[4881]: I0126 14:04:04.774170 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4ngsx"
Jan 26 14:04:05 crc kubenswrapper[4881]: I0126 14:04:05.248181 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4ngsx"]
Jan 26 14:04:05 crc kubenswrapper[4881]: I0126 14:04:05.662046 4881 generic.go:334] "Generic (PLEG): container finished" podID="2c31fefa-8de2-4022-ba05-a3944510233e" containerID="f13be20601d74b221308561fc5ad9cb391558c2e2f6f078e8ba6723c54327d72" exitCode=0
Jan 26 14:04:05 crc kubenswrapper[4881]: I0126 14:04:05.662139 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4ngsx" event={"ID":"2c31fefa-8de2-4022-ba05-a3944510233e","Type":"ContainerDied","Data":"f13be20601d74b221308561fc5ad9cb391558c2e2f6f078e8ba6723c54327d72"}
Jan 26 14:04:05 crc kubenswrapper[4881]: I0126 14:04:05.662367 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4ngsx" event={"ID":"2c31fefa-8de2-4022-ba05-a3944510233e","Type":"ContainerStarted","Data":"1854af419839474197ea6c7c96d015f8ad8ad361fb35c9ddc6b603afbbcae1e0"}
Jan 26 14:04:06 crc kubenswrapper[4881]: I0126 14:04:06.672107 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4ngsx" event={"ID":"2c31fefa-8de2-4022-ba05-a3944510233e","Type":"ContainerStarted","Data":"9ce07ab30230cc5ee514314cc532bc00028126d81eb90c2d167d85628ad67976"}
Jan 26 14:04:08 crc kubenswrapper[4881]: I0126 14:04:08.089797 4881 scope.go:117] "RemoveContainer" containerID="4cec94ff24a4209ec1e768ddfae44e3df9a99acb09eb578241facbe6c7c87609"
Jan 26 14:04:08 crc kubenswrapper[4881]: E0126 14:04:08.090430 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19"
Jan 26 14:04:10 crc kubenswrapper[4881]: I0126 14:04:10.719156 4881 generic.go:334] "Generic (PLEG): container finished" podID="2c31fefa-8de2-4022-ba05-a3944510233e" containerID="9ce07ab30230cc5ee514314cc532bc00028126d81eb90c2d167d85628ad67976" exitCode=0
Jan 26 14:04:10 crc kubenswrapper[4881]: I0126 14:04:10.719262 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4ngsx" event={"ID":"2c31fefa-8de2-4022-ba05-a3944510233e","Type":"ContainerDied","Data":"9ce07ab30230cc5ee514314cc532bc00028126d81eb90c2d167d85628ad67976"}
Jan 26 14:04:11 crc kubenswrapper[4881]: I0126 14:04:11.730757 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4ngsx" event={"ID":"2c31fefa-8de2-4022-ba05-a3944510233e","Type":"ContainerStarted","Data":"f6052b0f9a479ad8993ce501fd064e625b29a21dde535c1fdbbbc60fa0daad96"}
Jan 26 14:04:11 crc kubenswrapper[4881]: I0126 14:04:11.764970 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-4ngsx" podStartSLOduration=2.06772229 podStartE2EDuration="7.764947891s" podCreationTimestamp="2026-01-26 14:04:04 +0000 UTC" firstStartedPulling="2026-01-26 14:04:05.663677622 +0000 UTC m=+5318.142987648" lastFinishedPulling="2026-01-26 14:04:11.360903233 +0000 UTC m=+5323.840213249" observedRunningTime="2026-01-26 14:04:11.756000629 +0000 UTC m=+5324.235310695" watchObservedRunningTime="2026-01-26 14:04:11.764947891 +0000 UTC m=+5324.244257927"
Jan 26 14:04:14 crc kubenswrapper[4881]: I0126 14:04:14.775245 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-4ngsx"
Jan 26 14:04:14 crc kubenswrapper[4881]: I0126 14:04:14.775623 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-4ngsx"
Jan 26 14:04:15 crc kubenswrapper[4881]: I0126 14:04:15.839804 4881 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-4ngsx" podUID="2c31fefa-8de2-4022-ba05-a3944510233e" containerName="registry-server" probeResult="failure" output=<
Jan 26 14:04:15 crc kubenswrapper[4881]: timeout: failed to connect service ":50051" within 1s
Jan 26 14:04:15 crc kubenswrapper[4881]: >
Jan 26 14:04:20 crc kubenswrapper[4881]: I0126 14:04:20.082899 4881 scope.go:117] "RemoveContainer" containerID="4cec94ff24a4209ec1e768ddfae44e3df9a99acb09eb578241facbe6c7c87609"
Jan 26 14:04:20 crc kubenswrapper[4881]: E0126 14:04:20.083653 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19"
Jan 26 14:04:24 crc kubenswrapper[4881]: I0126 14:04:24.849072 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-4ngsx"
Jan 26 14:04:24 crc kubenswrapper[4881]: I0126 14:04:24.915076 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-4ngsx"
Jan 26 14:04:25 crc kubenswrapper[4881]: I0126 14:04:25.096067 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4ngsx"]
Jan 26 14:04:26 crc kubenswrapper[4881]: I0126 14:04:26.872508 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-4ngsx" podUID="2c31fefa-8de2-4022-ba05-a3944510233e" containerName="registry-server" containerID="cri-o://f6052b0f9a479ad8993ce501fd064e625b29a21dde535c1fdbbbc60fa0daad96" gracePeriod=2
Jan 26 14:04:27 crc kubenswrapper[4881]: I0126 14:04:27.431144 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4ngsx"
Jan 26 14:04:27 crc kubenswrapper[4881]: I0126 14:04:27.471965 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c31fefa-8de2-4022-ba05-a3944510233e-utilities\") pod \"2c31fefa-8de2-4022-ba05-a3944510233e\" (UID: \"2c31fefa-8de2-4022-ba05-a3944510233e\") "
Jan 26 14:04:27 crc kubenswrapper[4881]: I0126 14:04:27.472025 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c31fefa-8de2-4022-ba05-a3944510233e-catalog-content\") pod \"2c31fefa-8de2-4022-ba05-a3944510233e\" (UID: \"2c31fefa-8de2-4022-ba05-a3944510233e\") "
Jan 26 14:04:27 crc kubenswrapper[4881]: I0126 14:04:27.472166 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-745qw\" (UniqueName: \"kubernetes.io/projected/2c31fefa-8de2-4022-ba05-a3944510233e-kube-api-access-745qw\") pod \"2c31fefa-8de2-4022-ba05-a3944510233e\" (UID: \"2c31fefa-8de2-4022-ba05-a3944510233e\") "
Jan 26 14:04:27 crc kubenswrapper[4881]: I0126 14:04:27.472988 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2c31fefa-8de2-4022-ba05-a3944510233e-utilities" (OuterVolumeSpecName: "utilities") pod "2c31fefa-8de2-4022-ba05-a3944510233e" (UID: "2c31fefa-8de2-4022-ba05-a3944510233e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 14:04:27 crc kubenswrapper[4881]: I0126 14:04:27.482761 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c31fefa-8de2-4022-ba05-a3944510233e-kube-api-access-745qw" (OuterVolumeSpecName: "kube-api-access-745qw") pod "2c31fefa-8de2-4022-ba05-a3944510233e" (UID: "2c31fefa-8de2-4022-ba05-a3944510233e"). InnerVolumeSpecName "kube-api-access-745qw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 14:04:27 crc kubenswrapper[4881]: I0126 14:04:27.574768 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c31fefa-8de2-4022-ba05-a3944510233e-utilities\") on node \"crc\" DevicePath \"\""
Jan 26 14:04:27 crc kubenswrapper[4881]: I0126 14:04:27.575029 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-745qw\" (UniqueName: \"kubernetes.io/projected/2c31fefa-8de2-4022-ba05-a3944510233e-kube-api-access-745qw\") on node \"crc\" DevicePath \"\""
Jan 26 14:04:27 crc kubenswrapper[4881]: I0126 14:04:27.626730 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2c31fefa-8de2-4022-ba05-a3944510233e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2c31fefa-8de2-4022-ba05-a3944510233e" (UID: "2c31fefa-8de2-4022-ba05-a3944510233e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 14:04:27 crc kubenswrapper[4881]: I0126 14:04:27.677293 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c31fefa-8de2-4022-ba05-a3944510233e-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 26 14:04:27 crc kubenswrapper[4881]: I0126 14:04:27.886920 4881 generic.go:334] "Generic (PLEG): container finished" podID="2c31fefa-8de2-4022-ba05-a3944510233e" containerID="f6052b0f9a479ad8993ce501fd064e625b29a21dde535c1fdbbbc60fa0daad96" exitCode=0
Jan 26 14:04:27 crc kubenswrapper[4881]: I0126 14:04:27.886967 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4ngsx" event={"ID":"2c31fefa-8de2-4022-ba05-a3944510233e","Type":"ContainerDied","Data":"f6052b0f9a479ad8993ce501fd064e625b29a21dde535c1fdbbbc60fa0daad96"}
Jan 26 14:04:27 crc kubenswrapper[4881]: I0126 14:04:27.886995 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4ngsx" event={"ID":"2c31fefa-8de2-4022-ba05-a3944510233e","Type":"ContainerDied","Data":"1854af419839474197ea6c7c96d015f8ad8ad361fb35c9ddc6b603afbbcae1e0"}
Jan 26 14:04:27 crc kubenswrapper[4881]: I0126 14:04:27.887031 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4ngsx"
Jan 26 14:04:27 crc kubenswrapper[4881]: I0126 14:04:27.887032 4881 scope.go:117] "RemoveContainer" containerID="f6052b0f9a479ad8993ce501fd064e625b29a21dde535c1fdbbbc60fa0daad96"
Jan 26 14:04:27 crc kubenswrapper[4881]: I0126 14:04:27.921750 4881 scope.go:117] "RemoveContainer" containerID="9ce07ab30230cc5ee514314cc532bc00028126d81eb90c2d167d85628ad67976"
Jan 26 14:04:27 crc kubenswrapper[4881]: I0126 14:04:27.940815 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4ngsx"]
Jan 26 14:04:27 crc kubenswrapper[4881]: I0126 14:04:27.952745 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-4ngsx"]
Jan 26 14:04:27 crc kubenswrapper[4881]: I0126 14:04:27.955192 4881 scope.go:117] "RemoveContainer" containerID="f13be20601d74b221308561fc5ad9cb391558c2e2f6f078e8ba6723c54327d72"
Jan 26 14:04:28 crc kubenswrapper[4881]: I0126 14:04:28.022362 4881 scope.go:117] "RemoveContainer" containerID="f6052b0f9a479ad8993ce501fd064e625b29a21dde535c1fdbbbc60fa0daad96"
Jan 26 14:04:28 crc kubenswrapper[4881]: E0126 14:04:28.022887 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f6052b0f9a479ad8993ce501fd064e625b29a21dde535c1fdbbbc60fa0daad96\": container with ID starting with f6052b0f9a479ad8993ce501fd064e625b29a21dde535c1fdbbbc60fa0daad96 not found: ID does not exist" containerID="f6052b0f9a479ad8993ce501fd064e625b29a21dde535c1fdbbbc60fa0daad96"
Jan 26 14:04:28 crc kubenswrapper[4881]: I0126 14:04:28.022947 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6052b0f9a479ad8993ce501fd064e625b29a21dde535c1fdbbbc60fa0daad96"} err="failed to get container status \"f6052b0f9a479ad8993ce501fd064e625b29a21dde535c1fdbbbc60fa0daad96\": rpc error: code = NotFound desc = could not find container \"f6052b0f9a479ad8993ce501fd064e625b29a21dde535c1fdbbbc60fa0daad96\": container with ID starting with f6052b0f9a479ad8993ce501fd064e625b29a21dde535c1fdbbbc60fa0daad96 not found: ID does not exist"
Jan 26 14:04:28 crc kubenswrapper[4881]: I0126 14:04:28.022987 4881 scope.go:117] "RemoveContainer" containerID="9ce07ab30230cc5ee514314cc532bc00028126d81eb90c2d167d85628ad67976"
Jan 26 14:04:28 crc kubenswrapper[4881]: E0126 14:04:28.023439 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ce07ab30230cc5ee514314cc532bc00028126d81eb90c2d167d85628ad67976\": container with ID starting with 9ce07ab30230cc5ee514314cc532bc00028126d81eb90c2d167d85628ad67976 not found: ID does not exist" containerID="9ce07ab30230cc5ee514314cc532bc00028126d81eb90c2d167d85628ad67976"
Jan 26 14:04:28 crc kubenswrapper[4881]: I0126 14:04:28.023470 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ce07ab30230cc5ee514314cc532bc00028126d81eb90c2d167d85628ad67976"} err="failed to get container status \"9ce07ab30230cc5ee514314cc532bc00028126d81eb90c2d167d85628ad67976\": rpc error: code = NotFound desc = could not find container \"9ce07ab30230cc5ee514314cc532bc00028126d81eb90c2d167d85628ad67976\": container with ID starting with 9ce07ab30230cc5ee514314cc532bc00028126d81eb90c2d167d85628ad67976 not found: ID does not exist"
Jan 26 14:04:28 crc kubenswrapper[4881]: I0126 14:04:28.023493 4881 scope.go:117] "RemoveContainer" containerID="f13be20601d74b221308561fc5ad9cb391558c2e2f6f078e8ba6723c54327d72"
Jan 26 14:04:28 crc kubenswrapper[4881]: E0126 14:04:28.023899 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f13be20601d74b221308561fc5ad9cb391558c2e2f6f078e8ba6723c54327d72\": container with ID starting with f13be20601d74b221308561fc5ad9cb391558c2e2f6f078e8ba6723c54327d72 not found: ID does not exist" containerID="f13be20601d74b221308561fc5ad9cb391558c2e2f6f078e8ba6723c54327d72"
Jan 26 14:04:28 crc kubenswrapper[4881]: I0126 14:04:28.023941 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f13be20601d74b221308561fc5ad9cb391558c2e2f6f078e8ba6723c54327d72"} err="failed to get container status \"f13be20601d74b221308561fc5ad9cb391558c2e2f6f078e8ba6723c54327d72\": rpc error: code = NotFound desc = could not find container \"f13be20601d74b221308561fc5ad9cb391558c2e2f6f078e8ba6723c54327d72\": container with ID starting with f13be20601d74b221308561fc5ad9cb391558c2e2f6f078e8ba6723c54327d72 not found: ID does not exist"
Jan 26 14:04:28 crc kubenswrapper[4881]: I0126 14:04:28.095720 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c31fefa-8de2-4022-ba05-a3944510233e" path="/var/lib/kubelet/pods/2c31fefa-8de2-4022-ba05-a3944510233e/volumes"
Jan 26 14:04:30 crc kubenswrapper[4881]: I0126 14:04:30.508634 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-j49jb"]
Jan 26 14:04:30 crc kubenswrapper[4881]: E0126 14:04:30.509596 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c31fefa-8de2-4022-ba05-a3944510233e" containerName="registry-server"
Jan 26 14:04:30 crc kubenswrapper[4881]: I0126 14:04:30.509611 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c31fefa-8de2-4022-ba05-a3944510233e" containerName="registry-server"
Jan 26 14:04:30 crc kubenswrapper[4881]: E0126 14:04:30.509631 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c31fefa-8de2-4022-ba05-a3944510233e" containerName="extract-utilities"
Jan 26 14:04:30 crc kubenswrapper[4881]: I0126 14:04:30.509639 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c31fefa-8de2-4022-ba05-a3944510233e" containerName="extract-utilities"
Jan 26 14:04:30 crc kubenswrapper[4881]: E0126 14:04:30.509650 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c31fefa-8de2-4022-ba05-a3944510233e" containerName="extract-content"
Jan 26 14:04:30 crc kubenswrapper[4881]: I0126 14:04:30.509658 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c31fefa-8de2-4022-ba05-a3944510233e" containerName="extract-content"
Jan 26 14:04:30 crc kubenswrapper[4881]: I0126 14:04:30.509925 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c31fefa-8de2-4022-ba05-a3944510233e" containerName="registry-server"
Jan 26 14:04:30 crc kubenswrapper[4881]: I0126 14:04:30.511778 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-j49jb"
Jan 26 14:04:30 crc kubenswrapper[4881]: I0126 14:04:30.536987 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-j49jb"]
Jan 26 14:04:30 crc kubenswrapper[4881]: I0126 14:04:30.546138 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7c9k\" (UniqueName: \"kubernetes.io/projected/690bb4d8-50ad-4429-bbd6-26ec2f709ab0-kube-api-access-l7c9k\") pod \"certified-operators-j49jb\" (UID: \"690bb4d8-50ad-4429-bbd6-26ec2f709ab0\") " pod="openshift-marketplace/certified-operators-j49jb"
Jan 26 14:04:30 crc kubenswrapper[4881]: I0126 14:04:30.546383 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/690bb4d8-50ad-4429-bbd6-26ec2f709ab0-utilities\") pod \"certified-operators-j49jb\" (UID: \"690bb4d8-50ad-4429-bbd6-26ec2f709ab0\") " pod="openshift-marketplace/certified-operators-j49jb"
Jan 26 14:04:30 crc kubenswrapper[4881]: I0126 14:04:30.546472 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/690bb4d8-50ad-4429-bbd6-26ec2f709ab0-catalog-content\") pod \"certified-operators-j49jb\" (UID: \"690bb4d8-50ad-4429-bbd6-26ec2f709ab0\") " pod="openshift-marketplace/certified-operators-j49jb"
Jan 26 14:04:30 crc kubenswrapper[4881]: I0126 14:04:30.648225 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l7c9k\" (UniqueName: \"kubernetes.io/projected/690bb4d8-50ad-4429-bbd6-26ec2f709ab0-kube-api-access-l7c9k\") pod \"certified-operators-j49jb\" (UID: \"690bb4d8-50ad-4429-bbd6-26ec2f709ab0\") " pod="openshift-marketplace/certified-operators-j49jb"
Jan 26 14:04:30 crc kubenswrapper[4881]: I0126 14:04:30.648338 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/690bb4d8-50ad-4429-bbd6-26ec2f709ab0-utilities\") pod \"certified-operators-j49jb\" (UID: \"690bb4d8-50ad-4429-bbd6-26ec2f709ab0\") " pod="openshift-marketplace/certified-operators-j49jb"
Jan 26 14:04:30 crc kubenswrapper[4881]: I0126 14:04:30.648379 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/690bb4d8-50ad-4429-bbd6-26ec2f709ab0-catalog-content\") pod \"certified-operators-j49jb\" (UID: \"690bb4d8-50ad-4429-bbd6-26ec2f709ab0\") " pod="openshift-marketplace/certified-operators-j49jb"
Jan 26 14:04:30 crc kubenswrapper[4881]: I0126 14:04:30.648950 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/690bb4d8-50ad-4429-bbd6-26ec2f709ab0-utilities\") pod \"certified-operators-j49jb\" (UID: \"690bb4d8-50ad-4429-bbd6-26ec2f709ab0\") " pod="openshift-marketplace/certified-operators-j49jb"
Jan 26 14:04:30 crc kubenswrapper[4881]: I0126 14:04:30.648980 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/690bb4d8-50ad-4429-bbd6-26ec2f709ab0-catalog-content\") pod \"certified-operators-j49jb\" (UID: \"690bb4d8-50ad-4429-bbd6-26ec2f709ab0\") " pod="openshift-marketplace/certified-operators-j49jb"
Jan 26 14:04:30 crc kubenswrapper[4881]: I0126 14:04:30.672093 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7c9k\" (UniqueName: \"kubernetes.io/projected/690bb4d8-50ad-4429-bbd6-26ec2f709ab0-kube-api-access-l7c9k\") pod \"certified-operators-j49jb\" (UID: \"690bb4d8-50ad-4429-bbd6-26ec2f709ab0\") " pod="openshift-marketplace/certified-operators-j49jb"
Jan 26 14:04:30 crc kubenswrapper[4881]: I0126 14:04:30.843783 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-j49jb"
Jan 26 14:04:31 crc kubenswrapper[4881]: I0126 14:04:31.420920 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-j49jb"]
Jan 26 14:04:31 crc kubenswrapper[4881]: I0126 14:04:31.952655 4881 generic.go:334] "Generic (PLEG): container finished" podID="690bb4d8-50ad-4429-bbd6-26ec2f709ab0" containerID="015e5847ef2648a4d9eb4dac66c65509d1db65e9e2714584c604a5968ac92eb7" exitCode=0
Jan 26 14:04:31 crc kubenswrapper[4881]: I0126 14:04:31.952728 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j49jb" event={"ID":"690bb4d8-50ad-4429-bbd6-26ec2f709ab0","Type":"ContainerDied","Data":"015e5847ef2648a4d9eb4dac66c65509d1db65e9e2714584c604a5968ac92eb7"}
Jan 26 14:04:31 crc kubenswrapper[4881]: I0126 14:04:31.953155 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j49jb" event={"ID":"690bb4d8-50ad-4429-bbd6-26ec2f709ab0","Type":"ContainerStarted","Data":"bfd961139405b982ee44c8f0aef270b745849137452adbb6c93ab6b71e8dd2a0"}
Jan 26 14:04:32 crc kubenswrapper[4881]: I0126 14:04:32.083284 4881 scope.go:117] "RemoveContainer" containerID="4cec94ff24a4209ec1e768ddfae44e3df9a99acb09eb578241facbe6c7c87609"
Jan 26 14:04:32 crc kubenswrapper[4881]: E0126 14:04:32.083612 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19"
Jan 26 14:04:33 crc kubenswrapper[4881]: I0126 14:04:33.981927 4881 generic.go:334] "Generic (PLEG): container finished" podID="690bb4d8-50ad-4429-bbd6-26ec2f709ab0" containerID="08ffa665fde6f2e5384833ec7d0b58f1c2d04706120f5609fad4eace89dfe1c3" exitCode=0
Jan 26 14:04:33 crc kubenswrapper[4881]: I0126 14:04:33.982570 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j49jb" event={"ID":"690bb4d8-50ad-4429-bbd6-26ec2f709ab0","Type":"ContainerDied","Data":"08ffa665fde6f2e5384833ec7d0b58f1c2d04706120f5609fad4eace89dfe1c3"}
Jan 26 14:04:34 crc kubenswrapper[4881]: I0126 14:04:34.996624 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j49jb" event={"ID":"690bb4d8-50ad-4429-bbd6-26ec2f709ab0","Type":"ContainerStarted","Data":"13f0acfa475164f5551187108ccb3a082d3414cbd113e567dfb8ad0656f37b91"}
Jan 26 14:04:35 crc kubenswrapper[4881]: I0126 14:04:35.028821 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-j49jb" podStartSLOduration=2.60006935 podStartE2EDuration="5.028744632s" podCreationTimestamp="2026-01-26 14:04:30 +0000 UTC" firstStartedPulling="2026-01-26 14:04:31.956606248 +0000 UTC m=+5344.435916284" lastFinishedPulling="2026-01-26 14:04:34.38528154 +0000 UTC m=+5346.864591566" observedRunningTime="2026-01-26 14:04:35.018706864 +0000 UTC m=+5347.498016940" watchObservedRunningTime="2026-01-26 14:04:35.028744632 +0000 UTC m=+5347.508054668"
Jan 26 14:04:40 crc kubenswrapper[4881]: I0126 14:04:40.843900 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-j49jb"
Jan 26 14:04:40 crc kubenswrapper[4881]: I0126 14:04:40.844383 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-j49jb"
Jan 26 14:04:40 crc kubenswrapper[4881]: I0126 14:04:40.935850 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-j49jb"
Jan 26 14:04:41 crc kubenswrapper[4881]: I0126 14:04:41.116951 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-j49jb"
Jan 26 14:04:41 crc kubenswrapper[4881]: I0126 14:04:41.188946 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-j49jb"]
Jan 26 14:04:43 crc kubenswrapper[4881]: I0126 14:04:43.081081 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-j49jb" podUID="690bb4d8-50ad-4429-bbd6-26ec2f709ab0" containerName="registry-server" containerID="cri-o://13f0acfa475164f5551187108ccb3a082d3414cbd113e567dfb8ad0656f37b91" gracePeriod=2
Jan 26 14:04:43 crc kubenswrapper[4881]: I0126 14:04:43.674491 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-j49jb"
Jan 26 14:04:43 crc kubenswrapper[4881]: I0126 14:04:43.752604 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/690bb4d8-50ad-4429-bbd6-26ec2f709ab0-utilities\") pod \"690bb4d8-50ad-4429-bbd6-26ec2f709ab0\" (UID: \"690bb4d8-50ad-4429-bbd6-26ec2f709ab0\") "
Jan 26 14:04:43 crc kubenswrapper[4881]: I0126 14:04:43.752778 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/690bb4d8-50ad-4429-bbd6-26ec2f709ab0-catalog-content\") pod \"690bb4d8-50ad-4429-bbd6-26ec2f709ab0\" (UID: \"690bb4d8-50ad-4429-bbd6-26ec2f709ab0\") "
Jan 26 14:04:43 crc kubenswrapper[4881]: I0126 14:04:43.752944 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l7c9k\" (UniqueName: \"kubernetes.io/projected/690bb4d8-50ad-4429-bbd6-26ec2f709ab0-kube-api-access-l7c9k\") pod \"690bb4d8-50ad-4429-bbd6-26ec2f709ab0\" (UID: \"690bb4d8-50ad-4429-bbd6-26ec2f709ab0\") "
Jan 26 14:04:43 crc kubenswrapper[4881]: I0126 14:04:43.754784 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/690bb4d8-50ad-4429-bbd6-26ec2f709ab0-utilities" (OuterVolumeSpecName: "utilities") pod "690bb4d8-50ad-4429-bbd6-26ec2f709ab0" (UID: "690bb4d8-50ad-4429-bbd6-26ec2f709ab0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 14:04:43 crc kubenswrapper[4881]: I0126 14:04:43.762311 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/690bb4d8-50ad-4429-bbd6-26ec2f709ab0-kube-api-access-l7c9k" (OuterVolumeSpecName: "kube-api-access-l7c9k") pod "690bb4d8-50ad-4429-bbd6-26ec2f709ab0" (UID: "690bb4d8-50ad-4429-bbd6-26ec2f709ab0"). InnerVolumeSpecName "kube-api-access-l7c9k". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 14:04:43 crc kubenswrapper[4881]: I0126 14:04:43.813911 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/690bb4d8-50ad-4429-bbd6-26ec2f709ab0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "690bb4d8-50ad-4429-bbd6-26ec2f709ab0" (UID: "690bb4d8-50ad-4429-bbd6-26ec2f709ab0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 14:04:43 crc kubenswrapper[4881]: I0126 14:04:43.856208 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l7c9k\" (UniqueName: \"kubernetes.io/projected/690bb4d8-50ad-4429-bbd6-26ec2f709ab0-kube-api-access-l7c9k\") on node \"crc\" DevicePath \"\""
Jan 26 14:04:43 crc kubenswrapper[4881]: I0126 14:04:43.856247 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/690bb4d8-50ad-4429-bbd6-26ec2f709ab0-utilities\") on node \"crc\" DevicePath \"\""
Jan 26 14:04:43 crc kubenswrapper[4881]: I0126 14:04:43.856259 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/690bb4d8-50ad-4429-bbd6-26ec2f709ab0-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 26 14:04:44 crc kubenswrapper[4881]: I0126 14:04:44.098016 4881 generic.go:334] "Generic (PLEG): container finished" podID="690bb4d8-50ad-4429-bbd6-26ec2f709ab0" containerID="13f0acfa475164f5551187108ccb3a082d3414cbd113e567dfb8ad0656f37b91" exitCode=0
Jan 26 14:04:44 crc kubenswrapper[4881]: I0126 14:04:44.098236 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-j49jb"
Jan 26 14:04:44 crc kubenswrapper[4881]: I0126 14:04:44.103990 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j49jb" event={"ID":"690bb4d8-50ad-4429-bbd6-26ec2f709ab0","Type":"ContainerDied","Data":"13f0acfa475164f5551187108ccb3a082d3414cbd113e567dfb8ad0656f37b91"}
Jan 26 14:04:44 crc kubenswrapper[4881]: I0126 14:04:44.104060 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j49jb" event={"ID":"690bb4d8-50ad-4429-bbd6-26ec2f709ab0","Type":"ContainerDied","Data":"bfd961139405b982ee44c8f0aef270b745849137452adbb6c93ab6b71e8dd2a0"}
Jan 26 14:04:44 crc kubenswrapper[4881]: I0126 14:04:44.104085 4881 scope.go:117] "RemoveContainer" containerID="13f0acfa475164f5551187108ccb3a082d3414cbd113e567dfb8ad0656f37b91"
Jan 26 14:04:44 crc kubenswrapper[4881]: I0126 14:04:44.136841 4881 scope.go:117] "RemoveContainer" containerID="08ffa665fde6f2e5384833ec7d0b58f1c2d04706120f5609fad4eace89dfe1c3"
Jan 26 14:04:44 crc kubenswrapper[4881]: I0126 14:04:44.174107 4881 scope.go:117] "RemoveContainer" containerID="015e5847ef2648a4d9eb4dac66c65509d1db65e9e2714584c604a5968ac92eb7"
Jan 26 14:04:44 crc kubenswrapper[4881]: I0126 14:04:44.180989 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-j49jb"]
Jan 26 14:04:44 crc kubenswrapper[4881]: I0126 14:04:44.194475 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-j49jb"]
Jan 26 14:04:44 crc kubenswrapper[4881]: I0126 14:04:44.247073 4881 scope.go:117] "RemoveContainer" containerID="13f0acfa475164f5551187108ccb3a082d3414cbd113e567dfb8ad0656f37b91"
Jan 26 14:04:44 crc kubenswrapper[4881]: E0126 14:04:44.248065 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"13f0acfa475164f5551187108ccb3a082d3414cbd113e567dfb8ad0656f37b91\": container with ID starting with 13f0acfa475164f5551187108ccb3a082d3414cbd113e567dfb8ad0656f37b91 not found: ID does not exist" containerID="13f0acfa475164f5551187108ccb3a082d3414cbd113e567dfb8ad0656f37b91"
Jan 26 14:04:44 crc kubenswrapper[4881]: I0126 14:04:44.248143 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13f0acfa475164f5551187108ccb3a082d3414cbd113e567dfb8ad0656f37b91"} err="failed to get container status \"13f0acfa475164f5551187108ccb3a082d3414cbd113e567dfb8ad0656f37b91\": rpc error: code = NotFound desc = could not find container \"13f0acfa475164f5551187108ccb3a082d3414cbd113e567dfb8ad0656f37b91\": container with ID starting with 13f0acfa475164f5551187108ccb3a082d3414cbd113e567dfb8ad0656f37b91 not found: ID does not exist"
Jan 26 14:04:44 crc kubenswrapper[4881]: I0126 14:04:44.248197 4881 scope.go:117] "RemoveContainer" containerID="08ffa665fde6f2e5384833ec7d0b58f1c2d04706120f5609fad4eace89dfe1c3"
Jan 26 14:04:44 crc kubenswrapper[4881]: E0126 14:04:44.249252 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"08ffa665fde6f2e5384833ec7d0b58f1c2d04706120f5609fad4eace89dfe1c3\": container with ID starting with 08ffa665fde6f2e5384833ec7d0b58f1c2d04706120f5609fad4eace89dfe1c3 not found: ID does not exist" containerID="08ffa665fde6f2e5384833ec7d0b58f1c2d04706120f5609fad4eace89dfe1c3"
Jan 26 14:04:44 crc kubenswrapper[4881]: I0126 14:04:44.249344 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08ffa665fde6f2e5384833ec7d0b58f1c2d04706120f5609fad4eace89dfe1c3"} err="failed to get container status \"08ffa665fde6f2e5384833ec7d0b58f1c2d04706120f5609fad4eace89dfe1c3\": rpc error: code = NotFound desc = could not find container \"08ffa665fde6f2e5384833ec7d0b58f1c2d04706120f5609fad4eace89dfe1c3\": container with ID starting with 08ffa665fde6f2e5384833ec7d0b58f1c2d04706120f5609fad4eace89dfe1c3 not found: ID does not exist"
Jan 26 14:04:44 crc kubenswrapper[4881]: I0126 14:04:44.249383 4881 scope.go:117] "RemoveContainer" containerID="015e5847ef2648a4d9eb4dac66c65509d1db65e9e2714584c604a5968ac92eb7"
Jan 26 14:04:44 crc kubenswrapper[4881]: E0126 14:04:44.250086 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"015e5847ef2648a4d9eb4dac66c65509d1db65e9e2714584c604a5968ac92eb7\": container with ID starting with 015e5847ef2648a4d9eb4dac66c65509d1db65e9e2714584c604a5968ac92eb7 not found: ID does not exist" containerID="015e5847ef2648a4d9eb4dac66c65509d1db65e9e2714584c604a5968ac92eb7"
Jan 26 14:04:44 crc kubenswrapper[4881]: I0126 14:04:44.250142 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"015e5847ef2648a4d9eb4dac66c65509d1db65e9e2714584c604a5968ac92eb7"} err="failed to get container status \"015e5847ef2648a4d9eb4dac66c65509d1db65e9e2714584c604a5968ac92eb7\": rpc error: code = NotFound desc = could not find container \"015e5847ef2648a4d9eb4dac66c65509d1db65e9e2714584c604a5968ac92eb7\": container with ID starting with 015e5847ef2648a4d9eb4dac66c65509d1db65e9e2714584c604a5968ac92eb7 not found: ID does not exist"
Jan 26 14:04:46 crc kubenswrapper[4881]: I0126 14:04:46.104927 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="690bb4d8-50ad-4429-bbd6-26ec2f709ab0" path="/var/lib/kubelet/pods/690bb4d8-50ad-4429-bbd6-26ec2f709ab0/volumes"
Jan 26 14:04:47 crc kubenswrapper[4881]: I0126 14:04:47.082299 4881 scope.go:117] "RemoveContainer" containerID="4cec94ff24a4209ec1e768ddfae44e3df9a99acb09eb578241facbe6c7c87609"
Jan 26 14:04:47 crc kubenswrapper[4881]: E0126 14:04:47.083044 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19"
Jan 26 14:05:01 crc kubenswrapper[4881]: I0126 14:05:01.083021 4881 scope.go:117] "RemoveContainer" containerID="4cec94ff24a4209ec1e768ddfae44e3df9a99acb09eb578241facbe6c7c87609"
Jan 26 14:05:01 crc kubenswrapper[4881]: E0126 14:05:01.083831 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19"
Jan 26 14:05:06 crc kubenswrapper[4881]: I0126 14:05:06.869146 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rfcvs"]
Jan 26 14:05:06 crc kubenswrapper[4881]: E0126 14:05:06.871039 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="690bb4d8-50ad-4429-bbd6-26ec2f709ab0" containerName="extract-utilities"
Jan 26 14:05:06 crc kubenswrapper[4881]: I0126 14:05:06.871158 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="690bb4d8-50ad-4429-bbd6-26ec2f709ab0" containerName="extract-utilities"
Jan 26 14:05:06 crc kubenswrapper[4881]: E0126 14:05:06.871232 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="690bb4d8-50ad-4429-bbd6-26ec2f709ab0" containerName="extract-content"
Jan 26 14:05:06 crc kubenswrapper[4881]: I0126 14:05:06.871293 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="690bb4d8-50ad-4429-bbd6-26ec2f709ab0" containerName="extract-content"
Jan 26 14:05:06 crc kubenswrapper[4881]: E0126 14:05:06.871682 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="690bb4d8-50ad-4429-bbd6-26ec2f709ab0" containerName="registry-server"
Jan 26 14:05:06 crc kubenswrapper[4881]: I0126 14:05:06.871819 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="690bb4d8-50ad-4429-bbd6-26ec2f709ab0" containerName="registry-server"
Jan 26 14:05:06 crc kubenswrapper[4881]: I0126 14:05:06.872091 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="690bb4d8-50ad-4429-bbd6-26ec2f709ab0" containerName="registry-server"
Jan 26 14:05:06 crc kubenswrapper[4881]: I0126 14:05:06.873644 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rfcvs"
Jan 26 14:05:06 crc kubenswrapper[4881]: I0126 14:05:06.881177 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rfcvs"]
Jan 26 14:05:07 crc kubenswrapper[4881]: I0126 14:05:07.053410 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jp4p9\" (UniqueName: \"kubernetes.io/projected/55706d4b-1c8b-4499-ab1c-2e81988704c1-kube-api-access-jp4p9\") pod \"redhat-marketplace-rfcvs\" (UID: \"55706d4b-1c8b-4499-ab1c-2e81988704c1\") " pod="openshift-marketplace/redhat-marketplace-rfcvs"
Jan 26 14:05:07 crc kubenswrapper[4881]: I0126 14:05:07.053933 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55706d4b-1c8b-4499-ab1c-2e81988704c1-utilities\") pod \"redhat-marketplace-rfcvs\" (UID: \"55706d4b-1c8b-4499-ab1c-2e81988704c1\") " pod="openshift-marketplace/redhat-marketplace-rfcvs"
Jan 26 14:05:07 crc kubenswrapper[4881]: I0126 14:05:07.053994 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55706d4b-1c8b-4499-ab1c-2e81988704c1-catalog-content\") pod \"redhat-marketplace-rfcvs\" (UID: \"55706d4b-1c8b-4499-ab1c-2e81988704c1\") " pod="openshift-marketplace/redhat-marketplace-rfcvs"
Jan 26 14:05:07 crc kubenswrapper[4881]: I0126 14:05:07.157090 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55706d4b-1c8b-4499-ab1c-2e81988704c1-utilities\") pod \"redhat-marketplace-rfcvs\" (UID: \"55706d4b-1c8b-4499-ab1c-2e81988704c1\") " pod="openshift-marketplace/redhat-marketplace-rfcvs"
Jan 26 14:05:07 crc kubenswrapper[4881]: I0126 14:05:07.157209 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55706d4b-1c8b-4499-ab1c-2e81988704c1-catalog-content\") pod \"redhat-marketplace-rfcvs\" (UID: \"55706d4b-1c8b-4499-ab1c-2e81988704c1\") " pod="openshift-marketplace/redhat-marketplace-rfcvs"
Jan 26 14:05:07 crc kubenswrapper[4881]: I0126 14:05:07.157490 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jp4p9\" (UniqueName: \"kubernetes.io/projected/55706d4b-1c8b-4499-ab1c-2e81988704c1-kube-api-access-jp4p9\") pod \"redhat-marketplace-rfcvs\" (UID: \"55706d4b-1c8b-4499-ab1c-2e81988704c1\") " pod="openshift-marketplace/redhat-marketplace-rfcvs"
Jan 26 14:05:07 crc kubenswrapper[4881]: I0126 14:05:07.158068 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55706d4b-1c8b-4499-ab1c-2e81988704c1-catalog-content\") pod \"redhat-marketplace-rfcvs\" (UID: \"55706d4b-1c8b-4499-ab1c-2e81988704c1\") " pod="openshift-marketplace/redhat-marketplace-rfcvs"
Jan 26 14:05:07 crc kubenswrapper[4881]: I0126 14:05:07.158260 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55706d4b-1c8b-4499-ab1c-2e81988704c1-utilities\") pod \"redhat-marketplace-rfcvs\" (UID: \"55706d4b-1c8b-4499-ab1c-2e81988704c1\") " pod="openshift-marketplace/redhat-marketplace-rfcvs"
Jan 26 14:05:07 crc kubenswrapper[4881]: I0126 14:05:07.184385 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jp4p9\" (UniqueName: \"kubernetes.io/projected/55706d4b-1c8b-4499-ab1c-2e81988704c1-kube-api-access-jp4p9\") pod \"redhat-marketplace-rfcvs\" (UID: \"55706d4b-1c8b-4499-ab1c-2e81988704c1\") " pod="openshift-marketplace/redhat-marketplace-rfcvs"
Jan 26 14:05:07 crc kubenswrapper[4881]: I0126 14:05:07.195042 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rfcvs"
Jan 26 14:05:07 crc kubenswrapper[4881]: I0126 14:05:07.686168 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rfcvs"]
Jan 26 14:05:08 crc kubenswrapper[4881]: I0126 14:05:08.395816 4881 generic.go:334] "Generic (PLEG): container finished" podID="55706d4b-1c8b-4499-ab1c-2e81988704c1" containerID="d266c37712b50230d345ef20766e832feab7204e44864c5d17826caf62ea1638" exitCode=0
Jan 26 14:05:08 crc kubenswrapper[4881]: I0126 14:05:08.395997 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rfcvs" event={"ID":"55706d4b-1c8b-4499-ab1c-2e81988704c1","Type":"ContainerDied","Data":"d266c37712b50230d345ef20766e832feab7204e44864c5d17826caf62ea1638"}
Jan 26 14:05:08 crc kubenswrapper[4881]: I0126 14:05:08.396192 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rfcvs" event={"ID":"55706d4b-1c8b-4499-ab1c-2e81988704c1","Type":"ContainerStarted","Data":"2224ca57253e324ffcf532766e89670ba745407e80c855e9756d05ca0776e3c9"}
Jan 26 14:05:09 crc kubenswrapper[4881]: I0126 14:05:09.409256 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rfcvs" event={"ID":"55706d4b-1c8b-4499-ab1c-2e81988704c1","Type":"ContainerStarted","Data":"c69a544294bb9e152f3222551a5be32d0c1f8ec72ac5f017dbb995212c371a78"}
Jan 26 14:05:10 crc kubenswrapper[4881]: I0126 14:05:10.419945 4881 generic.go:334] "Generic (PLEG): container finished" podID="55706d4b-1c8b-4499-ab1c-2e81988704c1" containerID="c69a544294bb9e152f3222551a5be32d0c1f8ec72ac5f017dbb995212c371a78" exitCode=0
Jan 26 14:05:10 crc kubenswrapper[4881]: I0126 14:05:10.419996 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rfcvs" event={"ID":"55706d4b-1c8b-4499-ab1c-2e81988704c1","Type":"ContainerDied","Data":"c69a544294bb9e152f3222551a5be32d0c1f8ec72ac5f017dbb995212c371a78"}
Jan 26 14:05:13 crc kubenswrapper[4881]: I0126 14:05:13.461307 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rfcvs" event={"ID":"55706d4b-1c8b-4499-ab1c-2e81988704c1","Type":"ContainerStarted","Data":"44f554b9acff2f264b00a57a8284c2eed9b1a4fd4161d68c99933ace0dfab12b"}
Jan 26 14:05:13 crc kubenswrapper[4881]: I0126 14:05:13.497005 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rfcvs" podStartSLOduration=4.042705659 podStartE2EDuration="7.4969816s" podCreationTimestamp="2026-01-26 14:05:06 +0000 UTC" firstStartedPulling="2026-01-26 14:05:08.399344134 +0000 UTC m=+5380.878654200" lastFinishedPulling="2026-01-26 14:05:11.853620095 +0000 UTC m=+5384.332930141" observedRunningTime="2026-01-26 14:05:13.482824835 +0000 UTC m=+5385.962134851" watchObservedRunningTime="2026-01-26 14:05:13.4969816 +0000 UTC m=+5385.976291636"
Jan 26 14:05:14 crc kubenswrapper[4881]: I0126 14:05:14.082802 4881 scope.go:117] "RemoveContainer" containerID="4cec94ff24a4209ec1e768ddfae44e3df9a99acb09eb578241facbe6c7c87609"
Jan 26 14:05:14 crc kubenswrapper[4881]: E0126 14:05:14.083661 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19"
Jan 26 14:05:17 crc kubenswrapper[4881]: I0126 14:05:17.195905 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rfcvs"
Jan 26 14:05:17 crc kubenswrapper[4881]: I0126 14:05:17.196467 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rfcvs"
Jan 26 14:05:17 crc kubenswrapper[4881]: I0126 14:05:17.285778 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rfcvs"
Jan 26 14:05:17 crc kubenswrapper[4881]: I0126 14:05:17.556094 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rfcvs"
Jan 26 14:05:17 crc kubenswrapper[4881]: I0126 14:05:17.623347 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rfcvs"]
Jan 26 14:05:19 crc kubenswrapper[4881]: I0126 14:05:19.523241 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rfcvs" podUID="55706d4b-1c8b-4499-ab1c-2e81988704c1" containerName="registry-server" containerID="cri-o://44f554b9acff2f264b00a57a8284c2eed9b1a4fd4161d68c99933ace0dfab12b" gracePeriod=2
Jan 26 14:05:20 crc kubenswrapper[4881]: I0126 14:05:20.534420 4881 generic.go:334] "Generic (PLEG): container finished" podID="55706d4b-1c8b-4499-ab1c-2e81988704c1" containerID="44f554b9acff2f264b00a57a8284c2eed9b1a4fd4161d68c99933ace0dfab12b" exitCode=0
Jan 26 14:05:20 crc kubenswrapper[4881]: I0126 14:05:20.534726 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rfcvs" event={"ID":"55706d4b-1c8b-4499-ab1c-2e81988704c1","Type":"ContainerDied","Data":"44f554b9acff2f264b00a57a8284c2eed9b1a4fd4161d68c99933ace0dfab12b"}
Jan 26 14:05:20 crc kubenswrapper[4881]: I0126 14:05:20.534753 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rfcvs" event={"ID":"55706d4b-1c8b-4499-ab1c-2e81988704c1","Type":"ContainerDied","Data":"2224ca57253e324ffcf532766e89670ba745407e80c855e9756d05ca0776e3c9"}
Jan 26 14:05:20 crc kubenswrapper[4881]: I0126 14:05:20.534785 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2224ca57253e324ffcf532766e89670ba745407e80c855e9756d05ca0776e3c9"
Jan 26 14:05:20 crc kubenswrapper[4881]: I0126 14:05:20.585971 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rfcvs"
Jan 26 14:05:20 crc kubenswrapper[4881]: I0126 14:05:20.714431 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55706d4b-1c8b-4499-ab1c-2e81988704c1-utilities\") pod \"55706d4b-1c8b-4499-ab1c-2e81988704c1\" (UID: \"55706d4b-1c8b-4499-ab1c-2e81988704c1\") "
Jan 26 14:05:20 crc kubenswrapper[4881]: I0126 14:05:20.714922 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jp4p9\" (UniqueName: \"kubernetes.io/projected/55706d4b-1c8b-4499-ab1c-2e81988704c1-kube-api-access-jp4p9\") pod \"55706d4b-1c8b-4499-ab1c-2e81988704c1\" (UID: \"55706d4b-1c8b-4499-ab1c-2e81988704c1\") "
Jan 26 14:05:20 crc kubenswrapper[4881]: I0126 14:05:20.715058 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55706d4b-1c8b-4499-ab1c-2e81988704c1-catalog-content\") pod \"55706d4b-1c8b-4499-ab1c-2e81988704c1\" (UID: \"55706d4b-1c8b-4499-ab1c-2e81988704c1\") "
Jan 26 14:05:20 crc kubenswrapper[4881]: I0126 14:05:20.716113 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55706d4b-1c8b-4499-ab1c-2e81988704c1-utilities" (OuterVolumeSpecName: "utilities") pod "55706d4b-1c8b-4499-ab1c-2e81988704c1" (UID: "55706d4b-1c8b-4499-ab1c-2e81988704c1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 14:05:20 crc kubenswrapper[4881]: I0126 14:05:20.720546 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55706d4b-1c8b-4499-ab1c-2e81988704c1-kube-api-access-jp4p9" (OuterVolumeSpecName: "kube-api-access-jp4p9") pod "55706d4b-1c8b-4499-ab1c-2e81988704c1" (UID: "55706d4b-1c8b-4499-ab1c-2e81988704c1"). InnerVolumeSpecName "kube-api-access-jp4p9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 14:05:20 crc kubenswrapper[4881]: I0126 14:05:20.744348 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55706d4b-1c8b-4499-ab1c-2e81988704c1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "55706d4b-1c8b-4499-ab1c-2e81988704c1" (UID: "55706d4b-1c8b-4499-ab1c-2e81988704c1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 14:05:20 crc kubenswrapper[4881]: I0126 14:05:20.817906 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jp4p9\" (UniqueName: \"kubernetes.io/projected/55706d4b-1c8b-4499-ab1c-2e81988704c1-kube-api-access-jp4p9\") on node \"crc\" DevicePath \"\""
Jan 26 14:05:20 crc kubenswrapper[4881]: I0126 14:05:20.817942 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55706d4b-1c8b-4499-ab1c-2e81988704c1-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 26 14:05:20 crc kubenswrapper[4881]: I0126 14:05:20.817954 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55706d4b-1c8b-4499-ab1c-2e81988704c1-utilities\") on node \"crc\" DevicePath \"\""
Jan 26 14:05:21 crc kubenswrapper[4881]: I0126 14:05:21.547295 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rfcvs"
Jan 26 14:05:21 crc kubenswrapper[4881]: I0126 14:05:21.609736 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rfcvs"]
Jan 26 14:05:21 crc kubenswrapper[4881]: I0126 14:05:21.623819 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rfcvs"]
Jan 26 14:05:22 crc kubenswrapper[4881]: I0126 14:05:22.108174 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55706d4b-1c8b-4499-ab1c-2e81988704c1" path="/var/lib/kubelet/pods/55706d4b-1c8b-4499-ab1c-2e81988704c1/volumes"
Jan 26 14:05:27 crc kubenswrapper[4881]: I0126 14:05:27.082063 4881 scope.go:117] "RemoveContainer" containerID="4cec94ff24a4209ec1e768ddfae44e3df9a99acb09eb578241facbe6c7c87609"
Jan 26 14:05:27 crc kubenswrapper[4881]: E0126 14:05:27.082686 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19"
Jan 26 14:05:38 crc kubenswrapper[4881]: I0126 14:05:38.098192 4881 scope.go:117] "RemoveContainer" containerID="4cec94ff24a4209ec1e768ddfae44e3df9a99acb09eb578241facbe6c7c87609"
Jan 26 14:05:38 crc kubenswrapper[4881]: E0126 14:05:38.100573 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19"
Jan 26 14:05:52 crc kubenswrapper[4881]: I0126 14:05:52.083341 4881 scope.go:117] "RemoveContainer" containerID="4cec94ff24a4209ec1e768ddfae44e3df9a99acb09eb578241facbe6c7c87609"
Jan 26 14:05:52 crc kubenswrapper[4881]: E0126 14:05:52.084584 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19"
Jan 26 14:06:06 crc kubenswrapper[4881]: I0126 14:06:06.083199 4881 scope.go:117] "RemoveContainer" containerID="4cec94ff24a4209ec1e768ddfae44e3df9a99acb09eb578241facbe6c7c87609"
Jan 26 14:06:06 crc kubenswrapper[4881]: E0126 14:06:06.084467 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19"
Jan 26 14:06:19 crc kubenswrapper[4881]: I0126 14:06:19.083432 4881 scope.go:117] "RemoveContainer" containerID="4cec94ff24a4209ec1e768ddfae44e3df9a99acb09eb578241facbe6c7c87609"
Jan 26 14:06:19 crc kubenswrapper[4881]: E0126 14:06:19.084770 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19"
Jan 26 14:06:34 crc kubenswrapper[4881]: I0126 14:06:34.082364 4881 scope.go:117] "RemoveContainer" containerID="4cec94ff24a4209ec1e768ddfae44e3df9a99acb09eb578241facbe6c7c87609"
Jan 26 14:06:34 crc kubenswrapper[4881]: E0126 14:06:34.083126 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19"
Jan 26 14:06:46 crc kubenswrapper[4881]: I0126 14:06:46.083187 4881 scope.go:117] "RemoveContainer" containerID="4cec94ff24a4209ec1e768ddfae44e3df9a99acb09eb578241facbe6c7c87609"
Jan 26 14:06:46 crc kubenswrapper[4881]: E0126 14:06:46.084328 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19"
Jan 26 14:07:01 crc kubenswrapper[4881]: I0126 14:07:01.083425 4881 scope.go:117] "RemoveContainer" containerID="4cec94ff24a4209ec1e768ddfae44e3df9a99acb09eb578241facbe6c7c87609"
Jan 26 14:07:01 crc kubenswrapper[4881]: E0126 14:07:01.084405 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19"
Jan 26 14:07:14 crc kubenswrapper[4881]: I0126 14:07:14.083594 4881 scope.go:117] "RemoveContainer" containerID="4cec94ff24a4209ec1e768ddfae44e3df9a99acb09eb578241facbe6c7c87609"
Jan 26 14:07:14 crc kubenswrapper[4881]: E0126 14:07:14.085060 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19"
Jan 26 14:07:29 crc kubenswrapper[4881]: I0126 14:07:29.082765 4881 scope.go:117] "RemoveContainer" containerID="4cec94ff24a4209ec1e768ddfae44e3df9a99acb09eb578241facbe6c7c87609"
Jan 26 14:07:29 crc kubenswrapper[4881]: E0126 14:07:29.083825 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19"
Jan 26 14:07:41 crc kubenswrapper[4881]: I0126 14:07:41.084966 4881 scope.go:117] "RemoveContainer" containerID="4cec94ff24a4209ec1e768ddfae44e3df9a99acb09eb578241facbe6c7c87609"
Jan 26 14:07:41 crc kubenswrapper[4881]: E0126 14:07:41.085937 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19"
Jan 26 14:07:55 crc kubenswrapper[4881]: I0126 14:07:55.083296 4881 scope.go:117] "RemoveContainer" containerID="4cec94ff24a4209ec1e768ddfae44e3df9a99acb09eb578241facbe6c7c87609"
Jan 26 14:07:55 crc kubenswrapper[4881]: E0126 14:07:55.084384 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19"
Jan 26 14:08:10 crc kubenswrapper[4881]: I0126 14:08:10.083352 4881 scope.go:117] "RemoveContainer" containerID="4cec94ff24a4209ec1e768ddfae44e3df9a99acb09eb578241facbe6c7c87609"
Jan 26 14:08:10 crc kubenswrapper[4881]: E0126 14:08:10.084594 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19"
Jan 26 14:08:22 crc kubenswrapper[4881]: I0126 14:08:22.083904 4881 scope.go:117] "RemoveContainer" containerID="4cec94ff24a4209ec1e768ddfae44e3df9a99acb09eb578241facbe6c7c87609"
Jan 26 14:08:22 crc kubenswrapper[4881]: E0126 14:08:22.085112 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19"
Jan 26 14:08:35 crc kubenswrapper[4881]: I0126 14:08:35.084365 4881 scope.go:117] "RemoveContainer" containerID="4cec94ff24a4209ec1e768ddfae44e3df9a99acb09eb578241facbe6c7c87609"
Jan 26 14:08:35 crc kubenswrapper[4881]: E0126 14:08:35.085672 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19"
Jan 26 14:08:48 crc kubenswrapper[4881]: I0126 14:08:48.090339 4881 scope.go:117] "RemoveContainer" containerID="4cec94ff24a4209ec1e768ddfae44e3df9a99acb09eb578241facbe6c7c87609"
Jan 26 14:08:48 crc kubenswrapper[4881]: E0126 14:08:48.091035 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19"
Jan 26 14:09:00 crc kubenswrapper[4881]: I0126 14:09:00.082377 4881 scope.go:117] "RemoveContainer" containerID="4cec94ff24a4209ec1e768ddfae44e3df9a99acb09eb578241facbe6c7c87609"
Jan 26 14:09:00 crc kubenswrapper[4881]: I0126 14:09:00.569856 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" event={"ID":"ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19","Type":"ContainerStarted","Data":"5d9c212bf85e65e885cc5aa035e7cd2c999fe929b1600de85ef25a70258967d4"}
Jan 26 14:11:24 crc kubenswrapper[4881]: I0126 14:11:24.789324 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fwlbz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 14:11:24 crc kubenswrapper[4881]: I0126 14:11:24.789837 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 14:11:54 crc kubenswrapper[4881]: I0126 14:11:54.789440 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fwlbz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 14:11:54 crc kubenswrapper[4881]: I0126 14:11:54.790152 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 14:11:57 crc kubenswrapper[4881]: I0126 14:11:57.310927 4881 scope.go:117] "RemoveContainer" containerID="c69a544294bb9e152f3222551a5be32d0c1f8ec72ac5f017dbb995212c371a78"
Jan 26 14:11:57 crc kubenswrapper[4881]: I0126 14:11:57.347183 4881 scope.go:117] "RemoveContainer" containerID="d266c37712b50230d345ef20766e832feab7204e44864c5d17826caf62ea1638"
Jan 26 14:11:57 crc kubenswrapper[4881]: I0126 14:11:57.390060 4881 scope.go:117] "RemoveContainer" containerID="44f554b9acff2f264b00a57a8284c2eed9b1a4fd4161d68c99933ace0dfab12b"
Jan 26 14:12:24 crc kubenswrapper[4881]: I0126 14:12:24.788891 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fwlbz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness
probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 14:12:24 crc kubenswrapper[4881]: I0126 14:12:24.789356 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 14:12:24 crc kubenswrapper[4881]: I0126 14:12:24.789405 4881 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" Jan 26 14:12:24 crc kubenswrapper[4881]: I0126 14:12:24.790250 4881 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5d9c212bf85e65e885cc5aa035e7cd2c999fe929b1600de85ef25a70258967d4"} pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 14:12:24 crc kubenswrapper[4881]: I0126 14:12:24.790305 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" containerName="machine-config-daemon" containerID="cri-o://5d9c212bf85e65e885cc5aa035e7cd2c999fe929b1600de85ef25a70258967d4" gracePeriod=600 Jan 26 14:12:26 crc kubenswrapper[4881]: I0126 14:12:26.647172 4881 generic.go:334] "Generic (PLEG): container finished" podID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" containerID="5d9c212bf85e65e885cc5aa035e7cd2c999fe929b1600de85ef25a70258967d4" exitCode=0 Jan 26 14:12:26 crc kubenswrapper[4881]: I0126 14:12:26.647246 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" event={"ID":"ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19","Type":"ContainerDied","Data":"5d9c212bf85e65e885cc5aa035e7cd2c999fe929b1600de85ef25a70258967d4"} Jan 26 14:12:26 crc kubenswrapper[4881]: I0126 14:12:26.647630 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" event={"ID":"ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19","Type":"ContainerStarted","Data":"1251b363555e67f84d877e4ffce511b7b1e8b62c231e6ee1f8d16a09aba0e12b"} Jan 26 14:12:26 crc kubenswrapper[4881]: I0126 14:12:26.647648 4881 scope.go:117] "RemoveContainer" containerID="4cec94ff24a4209ec1e768ddfae44e3df9a99acb09eb578241facbe6c7c87609" Jan 26 14:13:27 crc kubenswrapper[4881]: I0126 14:13:27.348841 4881 generic.go:334] "Generic (PLEG): container finished" podID="fb8ddd97-c952-48e2-b3df-f594646b4377" containerID="464f96f94fb33ebef6daa835a396c5f278ee1d77600993cf680dc17d5309f911" exitCode=0 Jan 26 14:13:27 crc kubenswrapper[4881]: I0126 14:13:27.348942 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"fb8ddd97-c952-48e2-b3df-f594646b4377","Type":"ContainerDied","Data":"464f96f94fb33ebef6daa835a396c5f278ee1d77600993cf680dc17d5309f911"} Jan 26 14:13:28 crc kubenswrapper[4881]: I0126 14:13:28.834181 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 26 14:13:29 crc kubenswrapper[4881]: I0126 14:13:29.013915 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/fb8ddd97-c952-48e2-b3df-f594646b4377-openstack-config-secret\") pod \"fb8ddd97-c952-48e2-b3df-f594646b4377\" (UID: \"fb8ddd97-c952-48e2-b3df-f594646b4377\") " Jan 26 14:13:29 crc kubenswrapper[4881]: I0126 14:13:29.013999 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fb8ddd97-c952-48e2-b3df-f594646b4377-config-data\") pod \"fb8ddd97-c952-48e2-b3df-f594646b4377\" (UID: \"fb8ddd97-c952-48e2-b3df-f594646b4377\") " Jan 26 14:13:29 crc kubenswrapper[4881]: I0126 14:13:29.014105 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/fb8ddd97-c952-48e2-b3df-f594646b4377-test-operator-ephemeral-workdir\") pod \"fb8ddd97-c952-48e2-b3df-f594646b4377\" (UID: \"fb8ddd97-c952-48e2-b3df-f594646b4377\") " Jan 26 14:13:29 crc kubenswrapper[4881]: I0126 14:13:29.014194 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/fb8ddd97-c952-48e2-b3df-f594646b4377-test-operator-ephemeral-temporary\") pod \"fb8ddd97-c952-48e2-b3df-f594646b4377\" (UID: \"fb8ddd97-c952-48e2-b3df-f594646b4377\") " Jan 26 14:13:29 crc kubenswrapper[4881]: I0126 14:13:29.014329 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fb8ddd97-c952-48e2-b3df-f594646b4377-ssh-key\") pod \"fb8ddd97-c952-48e2-b3df-f594646b4377\" (UID: \"fb8ddd97-c952-48e2-b3df-f594646b4377\") " Jan 26 14:13:29 crc kubenswrapper[4881]: I0126 14:13:29.014436 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/fb8ddd97-c952-48e2-b3df-f594646b4377-openstack-config\") pod \"fb8ddd97-c952-48e2-b3df-f594646b4377\" (UID: \"fb8ddd97-c952-48e2-b3df-f594646b4377\") " Jan 26 14:13:29 crc kubenswrapper[4881]: I0126 14:13:29.014761 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/fb8ddd97-c952-48e2-b3df-f594646b4377-ca-certs\") pod \"fb8ddd97-c952-48e2-b3df-f594646b4377\" (UID: \"fb8ddd97-c952-48e2-b3df-f594646b4377\") " Jan 26 14:13:29 crc kubenswrapper[4881]: I0126 14:13:29.014843 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c5rkf\" (UniqueName: \"kubernetes.io/projected/fb8ddd97-c952-48e2-b3df-f594646b4377-kube-api-access-c5rkf\") pod \"fb8ddd97-c952-48e2-b3df-f594646b4377\" (UID: \"fb8ddd97-c952-48e2-b3df-f594646b4377\") " Jan 26 14:13:29 crc kubenswrapper[4881]: I0126 14:13:29.014886 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"fb8ddd97-c952-48e2-b3df-f594646b4377\" (UID: \"fb8ddd97-c952-48e2-b3df-f594646b4377\") " Jan 26 14:13:29 crc kubenswrapper[4881]: I0126 14:13:29.017410 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fb8ddd97-c952-48e2-b3df-f594646b4377-test-operator-ephemeral-temporary" (OuterVolumeSpecName: 
"test-operator-ephemeral-temporary") pod "fb8ddd97-c952-48e2-b3df-f594646b4377" (UID: "fb8ddd97-c952-48e2-b3df-f594646b4377"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:13:29 crc kubenswrapper[4881]: I0126 14:13:29.018990 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb8ddd97-c952-48e2-b3df-f594646b4377-config-data" (OuterVolumeSpecName: "config-data") pod "fb8ddd97-c952-48e2-b3df-f594646b4377" (UID: "fb8ddd97-c952-48e2-b3df-f594646b4377"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:13:29 crc kubenswrapper[4881]: I0126 14:13:29.022338 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fb8ddd97-c952-48e2-b3df-f594646b4377-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "fb8ddd97-c952-48e2-b3df-f594646b4377" (UID: "fb8ddd97-c952-48e2-b3df-f594646b4377"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:13:29 crc kubenswrapper[4881]: I0126 14:13:29.025429 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "test-operator-logs") pod "fb8ddd97-c952-48e2-b3df-f594646b4377" (UID: "fb8ddd97-c952-48e2-b3df-f594646b4377"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 26 14:13:29 crc kubenswrapper[4881]: I0126 14:13:29.042898 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb8ddd97-c952-48e2-b3df-f594646b4377-kube-api-access-c5rkf" (OuterVolumeSpecName: "kube-api-access-c5rkf") pod "fb8ddd97-c952-48e2-b3df-f594646b4377" (UID: "fb8ddd97-c952-48e2-b3df-f594646b4377"). InnerVolumeSpecName "kube-api-access-c5rkf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:13:29 crc kubenswrapper[4881]: I0126 14:13:29.062317 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb8ddd97-c952-48e2-b3df-f594646b4377-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "fb8ddd97-c952-48e2-b3df-f594646b4377" (UID: "fb8ddd97-c952-48e2-b3df-f594646b4377"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:13:29 crc kubenswrapper[4881]: I0126 14:13:29.078452 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb8ddd97-c952-48e2-b3df-f594646b4377-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "fb8ddd97-c952-48e2-b3df-f594646b4377" (UID: "fb8ddd97-c952-48e2-b3df-f594646b4377"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:13:29 crc kubenswrapper[4881]: I0126 14:13:29.084443 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb8ddd97-c952-48e2-b3df-f594646b4377-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "fb8ddd97-c952-48e2-b3df-f594646b4377" (UID: "fb8ddd97-c952-48e2-b3df-f594646b4377"). InnerVolumeSpecName "openstack-config-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:13:29 crc kubenswrapper[4881]: I0126 14:13:29.109789 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb8ddd97-c952-48e2-b3df-f594646b4377-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "fb8ddd97-c952-48e2-b3df-f594646b4377" (UID: "fb8ddd97-c952-48e2-b3df-f594646b4377"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:13:29 crc kubenswrapper[4881]: I0126 14:13:29.117285 4881 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/fb8ddd97-c952-48e2-b3df-f594646b4377-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 26 14:13:29 crc kubenswrapper[4881]: I0126 14:13:29.117491 4881 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/fb8ddd97-c952-48e2-b3df-f594646b4377-ca-certs\") on node \"crc\" DevicePath \"\"" Jan 26 14:13:29 crc kubenswrapper[4881]: I0126 14:13:29.117637 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c5rkf\" (UniqueName: \"kubernetes.io/projected/fb8ddd97-c952-48e2-b3df-f594646b4377-kube-api-access-c5rkf\") on node \"crc\" DevicePath \"\"" Jan 26 14:13:29 crc kubenswrapper[4881]: I0126 14:13:29.117778 4881 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Jan 26 14:13:29 crc kubenswrapper[4881]: I0126 14:13:29.117943 4881 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/fb8ddd97-c952-48e2-b3df-f594646b4377-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 26 14:13:29 crc kubenswrapper[4881]: I0126 14:13:29.118060 4881 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fb8ddd97-c952-48e2-b3df-f594646b4377-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 14:13:29 crc kubenswrapper[4881]: I0126 14:13:29.118243 4881 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/fb8ddd97-c952-48e2-b3df-f594646b4377-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Jan 26 14:13:29 crc kubenswrapper[4881]: I0126 14:13:29.118360 4881 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/fb8ddd97-c952-48e2-b3df-f594646b4377-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Jan 26 14:13:29 crc kubenswrapper[4881]: I0126 14:13:29.118501 4881 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fb8ddd97-c952-48e2-b3df-f594646b4377-ssh-key\") on node \"crc\" DevicePath \"\"" Jan 26 14:13:29 crc kubenswrapper[4881]: I0126 14:13:29.137207 4881 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Jan 26 14:13:29 crc kubenswrapper[4881]: I0126 14:13:29.221498 4881 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Jan 26 14:13:29 crc kubenswrapper[4881]: I0126 14:13:29.383597 4881 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/tempest-tests-tempest" event={"ID":"fb8ddd97-c952-48e2-b3df-f594646b4377","Type":"ContainerDied","Data":"219aa5b2d8ad736ed47da53afae42965ce126831691c9a26656b34955bcafb59"} Jan 26 14:13:29 crc kubenswrapper[4881]: I0126 14:13:29.383644 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="219aa5b2d8ad736ed47da53afae42965ce126831691c9a26656b34955bcafb59" Jan 26 14:13:29 crc kubenswrapper[4881]: I0126 14:13:29.383686 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 26 14:13:36 crc kubenswrapper[4881]: I0126 14:13:36.033158 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 26 14:13:36 crc kubenswrapper[4881]: E0126 14:13:36.034638 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55706d4b-1c8b-4499-ab1c-2e81988704c1" containerName="extract-utilities" Jan 26 14:13:36 crc kubenswrapper[4881]: I0126 14:13:36.034667 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="55706d4b-1c8b-4499-ab1c-2e81988704c1" containerName="extract-utilities" Jan 26 14:13:36 crc kubenswrapper[4881]: E0126 14:13:36.034705 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55706d4b-1c8b-4499-ab1c-2e81988704c1" containerName="registry-server" Jan 26 14:13:36 crc kubenswrapper[4881]: I0126 14:13:36.034719 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="55706d4b-1c8b-4499-ab1c-2e81988704c1" containerName="registry-server" Jan 26 14:13:36 crc kubenswrapper[4881]: E0126 14:13:36.034764 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb8ddd97-c952-48e2-b3df-f594646b4377" containerName="tempest-tests-tempest-tests-runner" Jan 26 14:13:36 crc kubenswrapper[4881]: I0126 14:13:36.034780 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb8ddd97-c952-48e2-b3df-f594646b4377" containerName="tempest-tests-tempest-tests-runner" Jan 26 14:13:36 crc kubenswrapper[4881]: E0126 14:13:36.034847 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55706d4b-1c8b-4499-ab1c-2e81988704c1" containerName="extract-content" Jan 26 14:13:36 crc kubenswrapper[4881]: I0126 14:13:36.034861 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="55706d4b-1c8b-4499-ab1c-2e81988704c1" containerName="extract-content" Jan 26 14:13:36 crc kubenswrapper[4881]: I0126 14:13:36.035241 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb8ddd97-c952-48e2-b3df-f594646b4377" containerName="tempest-tests-tempest-tests-runner" Jan 26 14:13:36 crc kubenswrapper[4881]: I0126 14:13:36.035305 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="55706d4b-1c8b-4499-ab1c-2e81988704c1" containerName="registry-server" Jan 26 14:13:36 crc kubenswrapper[4881]: I0126 14:13:36.036723 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 26 14:13:36 crc kubenswrapper[4881]: I0126 14:13:36.040187 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-j8j29" Jan 26 14:13:36 crc kubenswrapper[4881]: I0126 14:13:36.043033 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 26 14:13:36 crc kubenswrapper[4881]: I0126 14:13:36.189643 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"21874105-2abf-4ab1-98a6-709151462d2a\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 26 14:13:36 crc kubenswrapper[4881]: I0126 14:13:36.189722 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54gcv\" (UniqueName: \"kubernetes.io/projected/21874105-2abf-4ab1-98a6-709151462d2a-kube-api-access-54gcv\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"21874105-2abf-4ab1-98a6-709151462d2a\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 26 14:13:36 crc kubenswrapper[4881]: I0126 14:13:36.292504 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"21874105-2abf-4ab1-98a6-709151462d2a\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 26 14:13:36 crc kubenswrapper[4881]: I0126 14:13:36.292729 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-54gcv\" (UniqueName: \"kubernetes.io/projected/21874105-2abf-4ab1-98a6-709151462d2a-kube-api-access-54gcv\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"21874105-2abf-4ab1-98a6-709151462d2a\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 26 14:13:36 crc kubenswrapper[4881]: I0126 14:13:36.293270 4881 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"21874105-2abf-4ab1-98a6-709151462d2a\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 26 14:13:36 crc kubenswrapper[4881]: I0126 14:13:36.326029 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-54gcv\" (UniqueName: \"kubernetes.io/projected/21874105-2abf-4ab1-98a6-709151462d2a-kube-api-access-54gcv\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"21874105-2abf-4ab1-98a6-709151462d2a\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 26 14:13:36 crc kubenswrapper[4881]: I0126 14:13:36.347658 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"21874105-2abf-4ab1-98a6-709151462d2a\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 26 14:13:36 crc 
kubenswrapper[4881]: I0126 14:13:36.361594 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 26 14:13:36 crc kubenswrapper[4881]: I0126 14:13:36.823408 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 26 14:13:36 crc kubenswrapper[4881]: I0126 14:13:36.837097 4881 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 14:13:37 crc kubenswrapper[4881]: I0126 14:13:37.466497 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"21874105-2abf-4ab1-98a6-709151462d2a","Type":"ContainerStarted","Data":"a0013031ea9260c2d2d956984e0c452cde6284ca1e3cf871422e7e9ee9d0e274"} Jan 26 14:13:38 crc kubenswrapper[4881]: I0126 14:13:38.484502 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"21874105-2abf-4ab1-98a6-709151462d2a","Type":"ContainerStarted","Data":"4949f5ce166eaa11a69318fa90de35fc75cc02d0b172c3189bfc4ebb1b234934"} Jan 26 14:13:38 crc kubenswrapper[4881]: I0126 14:13:38.514357 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=1.645233477 podStartE2EDuration="2.514332969s" podCreationTimestamp="2026-01-26 14:13:36 +0000 UTC" firstStartedPulling="2026-01-26 14:13:36.836730052 +0000 UTC m=+5889.316040108" lastFinishedPulling="2026-01-26 14:13:37.705829564 +0000 UTC m=+5890.185139600" observedRunningTime="2026-01-26 14:13:38.503363004 +0000 UTC m=+5890.982673040" watchObservedRunningTime="2026-01-26 14:13:38.514332969 +0000 UTC m=+5890.993643005" Jan 26 14:14:06 crc kubenswrapper[4881]: I0126 14:14:06.093255 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-5hb8g/must-gather-rxwnc"] Jan 26 14:14:06 crc kubenswrapper[4881]: I0126 14:14:06.095216 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-5hb8g/must-gather-rxwnc" Jan 26 14:14:06 crc kubenswrapper[4881]: I0126 14:14:06.097917 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-5hb8g"/"default-dockercfg-mr5c4" Jan 26 14:14:06 crc kubenswrapper[4881]: I0126 14:14:06.098015 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-5hb8g"/"kube-root-ca.crt" Jan 26 14:14:06 crc kubenswrapper[4881]: I0126 14:14:06.101952 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-5hb8g"/"openshift-service-ca.crt" Jan 26 14:14:06 crc kubenswrapper[4881]: I0126 14:14:06.135493 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-5hb8g/must-gather-rxwnc"] Jan 26 14:14:06 crc kubenswrapper[4881]: I0126 14:14:06.166978 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/b75ec1ea-964a-4b6b-8c98-fe19fbb11e46-must-gather-output\") pod \"must-gather-rxwnc\" (UID: \"b75ec1ea-964a-4b6b-8c98-fe19fbb11e46\") " pod="openshift-must-gather-5hb8g/must-gather-rxwnc" Jan 26 14:14:06 crc kubenswrapper[4881]: I0126 14:14:06.167248 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdv4q\" (UniqueName: \"kubernetes.io/projected/b75ec1ea-964a-4b6b-8c98-fe19fbb11e46-kube-api-access-jdv4q\") pod \"must-gather-rxwnc\" (UID: \"b75ec1ea-964a-4b6b-8c98-fe19fbb11e46\") " pod="openshift-must-gather-5hb8g/must-gather-rxwnc" Jan 26 14:14:06 crc kubenswrapper[4881]: I0126 14:14:06.269091 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/b75ec1ea-964a-4b6b-8c98-fe19fbb11e46-must-gather-output\") pod \"must-gather-rxwnc\" (UID: \"b75ec1ea-964a-4b6b-8c98-fe19fbb11e46\") " pod="openshift-must-gather-5hb8g/must-gather-rxwnc" Jan 26 14:14:06 crc kubenswrapper[4881]: I0126 14:14:06.269164 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdv4q\" (UniqueName: \"kubernetes.io/projected/b75ec1ea-964a-4b6b-8c98-fe19fbb11e46-kube-api-access-jdv4q\") pod \"must-gather-rxwnc\" (UID: \"b75ec1ea-964a-4b6b-8c98-fe19fbb11e46\") " pod="openshift-must-gather-5hb8g/must-gather-rxwnc" Jan 26 14:14:06 crc kubenswrapper[4881]: I0126 14:14:06.269878 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/b75ec1ea-964a-4b6b-8c98-fe19fbb11e46-must-gather-output\") pod \"must-gather-rxwnc\" (UID: \"b75ec1ea-964a-4b6b-8c98-fe19fbb11e46\") " pod="openshift-must-gather-5hb8g/must-gather-rxwnc" Jan 26 14:14:06 crc kubenswrapper[4881]: I0126 14:14:06.292680 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdv4q\" (UniqueName: \"kubernetes.io/projected/b75ec1ea-964a-4b6b-8c98-fe19fbb11e46-kube-api-access-jdv4q\") pod \"must-gather-rxwnc\" (UID: \"b75ec1ea-964a-4b6b-8c98-fe19fbb11e46\") " pod="openshift-must-gather-5hb8g/must-gather-rxwnc" Jan 26 14:14:06 crc kubenswrapper[4881]: I0126 14:14:06.436149 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-5hb8g/must-gather-rxwnc" Jan 26 14:14:06 crc kubenswrapper[4881]: I0126 14:14:06.987912 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-5hb8g/must-gather-rxwnc"] Jan 26 14:14:07 crc kubenswrapper[4881]: I0126 14:14:07.830216 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5hb8g/must-gather-rxwnc" event={"ID":"b75ec1ea-964a-4b6b-8c98-fe19fbb11e46","Type":"ContainerStarted","Data":"a0444f4164939f14be6ca329d5234a7770ded5971062fddeacfa389313cd6b55"} Jan 26 14:14:14 crc kubenswrapper[4881]: I0126 14:14:14.904673 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5hb8g/must-gather-rxwnc" event={"ID":"b75ec1ea-964a-4b6b-8c98-fe19fbb11e46","Type":"ContainerStarted","Data":"4f10e897c691262ddcc233d9e3ebb983535be0911f77560e896b758ed185744e"} Jan 26 14:14:15 crc kubenswrapper[4881]: I0126 14:14:15.936456 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5hb8g/must-gather-rxwnc" event={"ID":"b75ec1ea-964a-4b6b-8c98-fe19fbb11e46","Type":"ContainerStarted","Data":"9bf015b0278d9030509b3dd5ec9636bd5f979326a3b7b21b5d93b3e13b8b8e81"} Jan 26 14:14:15 crc kubenswrapper[4881]: I0126 14:14:15.954862 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-5hb8g/must-gather-rxwnc" podStartSLOduration=2.499301524 podStartE2EDuration="9.954828833s" podCreationTimestamp="2026-01-26 14:14:06 +0000 UTC" firstStartedPulling="2026-01-26 14:14:06.983960614 +0000 UTC m=+5919.463270660" lastFinishedPulling="2026-01-26 14:14:14.439487943 +0000 UTC m=+5926.918797969" observedRunningTime="2026-01-26 14:14:15.953315846 +0000 UTC m=+5928.432625892" watchObservedRunningTime="2026-01-26 14:14:15.954828833 +0000 UTC m=+5928.434138859" Jan 26 14:14:18 crc kubenswrapper[4881]: I0126 14:14:18.930245 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-5hb8g/crc-debug-ssfrp"] Jan 26 14:14:18 crc kubenswrapper[4881]: I0126 14:14:18.932122 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-5hb8g/crc-debug-ssfrp" Jan 26 14:14:19 crc kubenswrapper[4881]: I0126 14:14:19.059274 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c1f945b2-2efe-4344-a38a-35a66a2fa236-host\") pod \"crc-debug-ssfrp\" (UID: \"c1f945b2-2efe-4344-a38a-35a66a2fa236\") " pod="openshift-must-gather-5hb8g/crc-debug-ssfrp" Jan 26 14:14:19 crc kubenswrapper[4881]: I0126 14:14:19.059687 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bk8h\" (UniqueName: \"kubernetes.io/projected/c1f945b2-2efe-4344-a38a-35a66a2fa236-kube-api-access-8bk8h\") pod \"crc-debug-ssfrp\" (UID: \"c1f945b2-2efe-4344-a38a-35a66a2fa236\") " pod="openshift-must-gather-5hb8g/crc-debug-ssfrp" Jan 26 14:14:19 crc kubenswrapper[4881]: I0126 14:14:19.161032 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c1f945b2-2efe-4344-a38a-35a66a2fa236-host\") pod \"crc-debug-ssfrp\" (UID: \"c1f945b2-2efe-4344-a38a-35a66a2fa236\") " pod="openshift-must-gather-5hb8g/crc-debug-ssfrp" Jan 26 14:14:19 crc kubenswrapper[4881]: I0126 14:14:19.161133 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bk8h\" (UniqueName: \"kubernetes.io/projected/c1f945b2-2efe-4344-a38a-35a66a2fa236-kube-api-access-8bk8h\") pod \"crc-debug-ssfrp\" (UID: \"c1f945b2-2efe-4344-a38a-35a66a2fa236\") " pod="openshift-must-gather-5hb8g/crc-debug-ssfrp" Jan 26 14:14:19 crc kubenswrapper[4881]: I0126 14:14:19.161171 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c1f945b2-2efe-4344-a38a-35a66a2fa236-host\") pod \"crc-debug-ssfrp\" (UID: \"c1f945b2-2efe-4344-a38a-35a66a2fa236\") " pod="openshift-must-gather-5hb8g/crc-debug-ssfrp" Jan 26 14:14:19 crc kubenswrapper[4881]: I0126 14:14:19.182589 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bk8h\" (UniqueName: \"kubernetes.io/projected/c1f945b2-2efe-4344-a38a-35a66a2fa236-kube-api-access-8bk8h\") pod \"crc-debug-ssfrp\" (UID: \"c1f945b2-2efe-4344-a38a-35a66a2fa236\") " pod="openshift-must-gather-5hb8g/crc-debug-ssfrp" Jan 26 14:14:19 crc kubenswrapper[4881]: I0126 14:14:19.247782 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5hb8g/crc-debug-ssfrp" Jan 26 14:14:20 crc kubenswrapper[4881]: I0126 14:14:20.983022 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5hb8g/crc-debug-ssfrp" event={"ID":"c1f945b2-2efe-4344-a38a-35a66a2fa236","Type":"ContainerStarted","Data":"f6537263b66528d3648640641f55dae143673b23a69092462a364c01aa0fe152"} Jan 26 14:14:30 crc kubenswrapper[4881]: I0126 14:14:30.581684 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-clwcd"] Jan 26 14:14:30 crc kubenswrapper[4881]: I0126 14:14:30.584328 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-clwcd" Jan 26 14:14:30 crc kubenswrapper[4881]: I0126 14:14:30.611149 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-clwcd"] Jan 26 14:14:30 crc kubenswrapper[4881]: I0126 14:14:30.641871 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c4b0ffc-fd65-4dbd-a7d7-536862aaf4c8-catalog-content\") pod \"redhat-operators-clwcd\" (UID: \"8c4b0ffc-fd65-4dbd-a7d7-536862aaf4c8\") " pod="openshift-marketplace/redhat-operators-clwcd" Jan 26 14:14:30 crc kubenswrapper[4881]: I0126 14:14:30.642002 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c4b0ffc-fd65-4dbd-a7d7-536862aaf4c8-utilities\") pod \"redhat-operators-clwcd\" (UID: \"8c4b0ffc-fd65-4dbd-a7d7-536862aaf4c8\") " pod="openshift-marketplace/redhat-operators-clwcd" Jan 26 14:14:30 crc kubenswrapper[4881]: I0126 14:14:30.642167 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7r4vg\" (UniqueName: \"kubernetes.io/projected/8c4b0ffc-fd65-4dbd-a7d7-536862aaf4c8-kube-api-access-7r4vg\") pod \"redhat-operators-clwcd\" (UID: \"8c4b0ffc-fd65-4dbd-a7d7-536862aaf4c8\") " pod="openshift-marketplace/redhat-operators-clwcd" Jan 26 14:14:30 crc kubenswrapper[4881]: I0126 14:14:30.747086 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c4b0ffc-fd65-4dbd-a7d7-536862aaf4c8-catalog-content\") pod \"redhat-operators-clwcd\" (UID: \"8c4b0ffc-fd65-4dbd-a7d7-536862aaf4c8\") " pod="openshift-marketplace/redhat-operators-clwcd" Jan 26 14:14:30 crc kubenswrapper[4881]: I0126 14:14:30.747142 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c4b0ffc-fd65-4dbd-a7d7-536862aaf4c8-utilities\") pod \"redhat-operators-clwcd\" (UID: \"8c4b0ffc-fd65-4dbd-a7d7-536862aaf4c8\") " pod="openshift-marketplace/redhat-operators-clwcd" Jan 26 14:14:30 crc kubenswrapper[4881]: I0126 14:14:30.747186 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7r4vg\" (UniqueName: \"kubernetes.io/projected/8c4b0ffc-fd65-4dbd-a7d7-536862aaf4c8-kube-api-access-7r4vg\") pod \"redhat-operators-clwcd\" (UID: \"8c4b0ffc-fd65-4dbd-a7d7-536862aaf4c8\") " pod="openshift-marketplace/redhat-operators-clwcd" Jan 26 14:14:30 crc kubenswrapper[4881]: I0126 14:14:30.748224 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c4b0ffc-fd65-4dbd-a7d7-536862aaf4c8-catalog-content\") pod \"redhat-operators-clwcd\" (UID: \"8c4b0ffc-fd65-4dbd-a7d7-536862aaf4c8\") " pod="openshift-marketplace/redhat-operators-clwcd" Jan 26 14:14:30 crc kubenswrapper[4881]: I0126 14:14:30.748464 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c4b0ffc-fd65-4dbd-a7d7-536862aaf4c8-utilities\") pod \"redhat-operators-clwcd\" (UID: \"8c4b0ffc-fd65-4dbd-a7d7-536862aaf4c8\") " pod="openshift-marketplace/redhat-operators-clwcd" Jan 26 14:14:30 crc kubenswrapper[4881]: I0126 14:14:30.775164 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-7r4vg\" (UniqueName: \"kubernetes.io/projected/8c4b0ffc-fd65-4dbd-a7d7-536862aaf4c8-kube-api-access-7r4vg\") pod \"redhat-operators-clwcd\" (UID: \"8c4b0ffc-fd65-4dbd-a7d7-536862aaf4c8\") " pod="openshift-marketplace/redhat-operators-clwcd" Jan 26 14:14:30 crc kubenswrapper[4881]: I0126 14:14:30.918041 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-clwcd" Jan 26 14:14:31 crc kubenswrapper[4881]: I0126 14:14:31.426261 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-clwcd"] Jan 26 14:14:32 crc kubenswrapper[4881]: I0126 14:14:32.107673 4881 generic.go:334] "Generic (PLEG): container finished" podID="8c4b0ffc-fd65-4dbd-a7d7-536862aaf4c8" containerID="c39fc9b795ca8708a07bfdad133898488bf793dca184020f98e136bcac5bdcea" exitCode=0 Jan 26 14:14:32 crc kubenswrapper[4881]: I0126 14:14:32.111146 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-clwcd" event={"ID":"8c4b0ffc-fd65-4dbd-a7d7-536862aaf4c8","Type":"ContainerDied","Data":"c39fc9b795ca8708a07bfdad133898488bf793dca184020f98e136bcac5bdcea"} Jan 26 14:14:32 crc kubenswrapper[4881]: I0126 14:14:32.111200 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-clwcd" event={"ID":"8c4b0ffc-fd65-4dbd-a7d7-536862aaf4c8","Type":"ContainerStarted","Data":"feebbd60341a4d9e1f730cb5c91306ffee76640468fa81664989996baaf61a25"} Jan 26 14:14:32 crc kubenswrapper[4881]: I0126 14:14:32.120795 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5hb8g/crc-debug-ssfrp" event={"ID":"c1f945b2-2efe-4344-a38a-35a66a2fa236","Type":"ContainerStarted","Data":"254b61f9fe8f66ff8814c826af23b3f474ae7b5a2064adf8f8a5e535925ca8fc"} Jan 26 14:14:32 crc kubenswrapper[4881]: I0126 14:14:32.178141 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-5hb8g/crc-debug-ssfrp" podStartSLOduration=3.157250037 podStartE2EDuration="14.178122865s" podCreationTimestamp="2026-01-26 14:14:18 +0000 UTC" firstStartedPulling="2026-01-26 14:14:20.131511945 +0000 UTC m=+5932.610821961" lastFinishedPulling="2026-01-26 14:14:31.152384763 +0000 UTC m=+5943.631694789" observedRunningTime="2026-01-26 14:14:32.147466903 +0000 UTC m=+5944.626776929" watchObservedRunningTime="2026-01-26 14:14:32.178122865 +0000 UTC m=+5944.657432891" Jan 26 14:14:33 crc kubenswrapper[4881]: I0126 14:14:33.143183 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-clwcd" event={"ID":"8c4b0ffc-fd65-4dbd-a7d7-536862aaf4c8","Type":"ContainerStarted","Data":"27dfc343722438ac43ee0d39ff1216f9a35521346e4d8618b47075ea62eb7894"} Jan 26 14:14:38 crc kubenswrapper[4881]: I0126 14:14:38.215562 4881 generic.go:334] "Generic (PLEG): container finished" podID="8c4b0ffc-fd65-4dbd-a7d7-536862aaf4c8" containerID="27dfc343722438ac43ee0d39ff1216f9a35521346e4d8618b47075ea62eb7894" exitCode=0 Jan 26 14:14:38 crc kubenswrapper[4881]: I0126 14:14:38.215625 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-clwcd" event={"ID":"8c4b0ffc-fd65-4dbd-a7d7-536862aaf4c8","Type":"ContainerDied","Data":"27dfc343722438ac43ee0d39ff1216f9a35521346e4d8618b47075ea62eb7894"} Jan 26 14:14:46 crc kubenswrapper[4881]: I0126 14:14:46.308260 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-clwcd" 
event={"ID":"8c4b0ffc-fd65-4dbd-a7d7-536862aaf4c8","Type":"ContainerStarted","Data":"0a707b96eba7b4ff0f8216a6c8802b651a4d22521e10aa08ea393ee369c85afd"} Jan 26 14:14:46 crc kubenswrapper[4881]: I0126 14:14:46.334585 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-clwcd" podStartSLOduration=3.160575889 podStartE2EDuration="16.334568661s" podCreationTimestamp="2026-01-26 14:14:30 +0000 UTC" firstStartedPulling="2026-01-26 14:14:32.116053622 +0000 UTC m=+5944.595363638" lastFinishedPulling="2026-01-26 14:14:45.290046384 +0000 UTC m=+5957.769356410" observedRunningTime="2026-01-26 14:14:46.330040491 +0000 UTC m=+5958.809350517" watchObservedRunningTime="2026-01-26 14:14:46.334568661 +0000 UTC m=+5958.813878677" Jan 26 14:14:50 crc kubenswrapper[4881]: I0126 14:14:50.917563 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-clwcd" Jan 26 14:14:50 crc kubenswrapper[4881]: I0126 14:14:50.919445 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-clwcd" Jan 26 14:14:51 crc kubenswrapper[4881]: I0126 14:14:51.976464 4881 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-clwcd" podUID="8c4b0ffc-fd65-4dbd-a7d7-536862aaf4c8" containerName="registry-server" probeResult="failure" output=< Jan 26 14:14:51 crc kubenswrapper[4881]: timeout: failed to connect service ":50051" within 1s Jan 26 14:14:51 crc kubenswrapper[4881]: > Jan 26 14:14:54 crc kubenswrapper[4881]: I0126 14:14:54.789628 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fwlbz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 14:14:54 crc kubenswrapper[4881]: I0126 14:14:54.790356 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 14:14:55 crc kubenswrapper[4881]: I0126 14:14:55.114423 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mk2md"] Jan 26 14:14:55 crc kubenswrapper[4881]: I0126 14:14:55.117098 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mk2md" Jan 26 14:14:55 crc kubenswrapper[4881]: I0126 14:14:55.126433 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mk2md"] Jan 26 14:14:55 crc kubenswrapper[4881]: I0126 14:14:55.265127 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7831430a-09bd-4b37-bf63-796b55f36493-utilities\") pod \"certified-operators-mk2md\" (UID: \"7831430a-09bd-4b37-bf63-796b55f36493\") " pod="openshift-marketplace/certified-operators-mk2md" Jan 26 14:14:55 crc kubenswrapper[4881]: I0126 14:14:55.265497 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcdx2\" (UniqueName: \"kubernetes.io/projected/7831430a-09bd-4b37-bf63-796b55f36493-kube-api-access-rcdx2\") pod \"certified-operators-mk2md\" (UID: \"7831430a-09bd-4b37-bf63-796b55f36493\") " pod="openshift-marketplace/certified-operators-mk2md" Jan 26 14:14:55 crc kubenswrapper[4881]: I0126 14:14:55.265645 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7831430a-09bd-4b37-bf63-796b55f36493-catalog-content\") pod \"certified-operators-mk2md\" (UID: \"7831430a-09bd-4b37-bf63-796b55f36493\") " pod="openshift-marketplace/certified-operators-mk2md" Jan 26 14:14:55 crc kubenswrapper[4881]: I0126 14:14:55.367869 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rcdx2\" (UniqueName: \"kubernetes.io/projected/7831430a-09bd-4b37-bf63-796b55f36493-kube-api-access-rcdx2\") pod \"certified-operators-mk2md\" (UID: \"7831430a-09bd-4b37-bf63-796b55f36493\") " pod="openshift-marketplace/certified-operators-mk2md" Jan 26 14:14:55 crc kubenswrapper[4881]: I0126 14:14:55.368013 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7831430a-09bd-4b37-bf63-796b55f36493-catalog-content\") pod \"certified-operators-mk2md\" (UID: \"7831430a-09bd-4b37-bf63-796b55f36493\") " pod="openshift-marketplace/certified-operators-mk2md" Jan 26 14:14:55 crc kubenswrapper[4881]: I0126 14:14:55.368122 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7831430a-09bd-4b37-bf63-796b55f36493-utilities\") pod \"certified-operators-mk2md\" (UID: \"7831430a-09bd-4b37-bf63-796b55f36493\") " pod="openshift-marketplace/certified-operators-mk2md" Jan 26 14:14:55 crc kubenswrapper[4881]: I0126 14:14:55.368654 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7831430a-09bd-4b37-bf63-796b55f36493-utilities\") pod \"certified-operators-mk2md\" (UID: \"7831430a-09bd-4b37-bf63-796b55f36493\") " pod="openshift-marketplace/certified-operators-mk2md" Jan 26 14:14:55 crc kubenswrapper[4881]: I0126 14:14:55.368877 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7831430a-09bd-4b37-bf63-796b55f36493-catalog-content\") pod \"certified-operators-mk2md\" (UID: \"7831430a-09bd-4b37-bf63-796b55f36493\") " pod="openshift-marketplace/certified-operators-mk2md" Jan 26 14:14:55 crc kubenswrapper[4881]: I0126 14:14:55.408590 4881 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-rcdx2\" (UniqueName: \"kubernetes.io/projected/7831430a-09bd-4b37-bf63-796b55f36493-kube-api-access-rcdx2\") pod \"certified-operators-mk2md\" (UID: \"7831430a-09bd-4b37-bf63-796b55f36493\") " pod="openshift-marketplace/certified-operators-mk2md" Jan 26 14:14:55 crc kubenswrapper[4881]: I0126 14:14:55.496142 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mk2md" Jan 26 14:14:57 crc kubenswrapper[4881]: I0126 14:14:57.279916 4881 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 1.152355366s: [/var/lib/containers/storage/overlay/fc3b85b5da6f03f4d31a44d96d2b5297b57595e4f3462e0b866eddc2cb24e707/diff /var/log/pods/openstack-operators_openstack-operator-index-wmbzq_51923b46-00ba-4a5e-984d-b1f8febec058/registry-server/0.log]; will not log again for this container unless duration exceeds 2s Jan 26 14:14:57 crc kubenswrapper[4881]: I0126 14:14:57.777815 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mk2md"] Jan 26 14:14:58 crc kubenswrapper[4881]: I0126 14:14:58.430950 4881 generic.go:334] "Generic (PLEG): container finished" podID="7831430a-09bd-4b37-bf63-796b55f36493" containerID="9309acd9ef4506186983470c9b119c793e532eea20ff116c78c6f4c4ab849a4a" exitCode=0 Jan 26 14:14:58 crc kubenswrapper[4881]: I0126 14:14:58.431071 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mk2md" event={"ID":"7831430a-09bd-4b37-bf63-796b55f36493","Type":"ContainerDied","Data":"9309acd9ef4506186983470c9b119c793e532eea20ff116c78c6f4c4ab849a4a"} Jan 26 14:14:58 crc kubenswrapper[4881]: I0126 14:14:58.431252 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mk2md" event={"ID":"7831430a-09bd-4b37-bf63-796b55f36493","Type":"ContainerStarted","Data":"385c5bb43770d68a34b40f3d6b50635a62c4420eaa5f518dd16bbba111e47641"} Jan 26 14:15:00 crc kubenswrapper[4881]: I0126 14:15:00.146447 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490615-jg9dl"] Jan 26 14:15:00 crc kubenswrapper[4881]: I0126 14:15:00.150463 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490615-jg9dl" Jan 26 14:15:00 crc kubenswrapper[4881]: I0126 14:15:00.155825 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 26 14:15:00 crc kubenswrapper[4881]: I0126 14:15:00.155928 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 26 14:15:00 crc kubenswrapper[4881]: I0126 14:15:00.167844 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490615-jg9dl"] Jan 26 14:15:00 crc kubenswrapper[4881]: I0126 14:15:00.271044 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b47b66fd-96da-4070-b381-fef6ebeefe27-secret-volume\") pod \"collect-profiles-29490615-jg9dl\" (UID: \"b47b66fd-96da-4070-b381-fef6ebeefe27\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490615-jg9dl" Jan 26 14:15:00 crc kubenswrapper[4881]: I0126 14:15:00.271128 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b47b66fd-96da-4070-b381-fef6ebeefe27-config-volume\") pod \"collect-profiles-29490615-jg9dl\" (UID: \"b47b66fd-96da-4070-b381-fef6ebeefe27\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490615-jg9dl" Jan 26 14:15:00 crc kubenswrapper[4881]: I0126 14:15:00.271233 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qf2bx\" (UniqueName: \"kubernetes.io/projected/b47b66fd-96da-4070-b381-fef6ebeefe27-kube-api-access-qf2bx\") pod \"collect-profiles-29490615-jg9dl\" (UID: \"b47b66fd-96da-4070-b381-fef6ebeefe27\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490615-jg9dl" Jan 26 14:15:00 crc kubenswrapper[4881]: I0126 14:15:00.373186 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b47b66fd-96da-4070-b381-fef6ebeefe27-secret-volume\") pod \"collect-profiles-29490615-jg9dl\" (UID: \"b47b66fd-96da-4070-b381-fef6ebeefe27\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490615-jg9dl" Jan 26 14:15:00 crc kubenswrapper[4881]: I0126 14:15:00.373252 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b47b66fd-96da-4070-b381-fef6ebeefe27-config-volume\") pod \"collect-profiles-29490615-jg9dl\" (UID: \"b47b66fd-96da-4070-b381-fef6ebeefe27\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490615-jg9dl" Jan 26 14:15:00 crc kubenswrapper[4881]: I0126 14:15:00.373277 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qf2bx\" (UniqueName: \"kubernetes.io/projected/b47b66fd-96da-4070-b381-fef6ebeefe27-kube-api-access-qf2bx\") pod \"collect-profiles-29490615-jg9dl\" (UID: \"b47b66fd-96da-4070-b381-fef6ebeefe27\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490615-jg9dl" Jan 26 14:15:00 crc kubenswrapper[4881]: I0126 14:15:00.374364 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b47b66fd-96da-4070-b381-fef6ebeefe27-config-volume\") pod 
\"collect-profiles-29490615-jg9dl\" (UID: \"b47b66fd-96da-4070-b381-fef6ebeefe27\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490615-jg9dl" Jan 26 14:15:00 crc kubenswrapper[4881]: I0126 14:15:00.379384 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b47b66fd-96da-4070-b381-fef6ebeefe27-secret-volume\") pod \"collect-profiles-29490615-jg9dl\" (UID: \"b47b66fd-96da-4070-b381-fef6ebeefe27\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490615-jg9dl" Jan 26 14:15:00 crc kubenswrapper[4881]: I0126 14:15:00.390474 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qf2bx\" (UniqueName: \"kubernetes.io/projected/b47b66fd-96da-4070-b381-fef6ebeefe27-kube-api-access-qf2bx\") pod \"collect-profiles-29490615-jg9dl\" (UID: \"b47b66fd-96da-4070-b381-fef6ebeefe27\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490615-jg9dl" Jan 26 14:15:00 crc kubenswrapper[4881]: I0126 14:15:00.450256 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mk2md" event={"ID":"7831430a-09bd-4b37-bf63-796b55f36493","Type":"ContainerStarted","Data":"393c67aac157e995f9666c4018f453d08d97a978d87b9788e67fbc0d0fb37331"} Jan 26 14:15:00 crc kubenswrapper[4881]: I0126 14:15:00.509974 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490615-jg9dl" Jan 26 14:15:00 crc kubenswrapper[4881]: I0126 14:15:00.971869 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-clwcd" Jan 26 14:15:01 crc kubenswrapper[4881]: I0126 14:15:01.022496 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-clwcd" Jan 26 14:15:01 crc kubenswrapper[4881]: I0126 14:15:01.177761 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490615-jg9dl"] Jan 26 14:15:01 crc kubenswrapper[4881]: W0126 14:15:01.213925 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb47b66fd_96da_4070_b381_fef6ebeefe27.slice/crio-bafdd86dd0225543f909e556ed9cd8bff78770c91bd7987b7cfb0dc4a2883d16 WatchSource:0}: Error finding container bafdd86dd0225543f909e556ed9cd8bff78770c91bd7987b7cfb0dc4a2883d16: Status 404 returned error can't find the container with id bafdd86dd0225543f909e556ed9cd8bff78770c91bd7987b7cfb0dc4a2883d16 Jan 26 14:15:01 crc kubenswrapper[4881]: I0126 14:15:01.462206 4881 generic.go:334] "Generic (PLEG): container finished" podID="7831430a-09bd-4b37-bf63-796b55f36493" containerID="393c67aac157e995f9666c4018f453d08d97a978d87b9788e67fbc0d0fb37331" exitCode=0 Jan 26 14:15:01 crc kubenswrapper[4881]: I0126 14:15:01.462390 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mk2md" event={"ID":"7831430a-09bd-4b37-bf63-796b55f36493","Type":"ContainerDied","Data":"393c67aac157e995f9666c4018f453d08d97a978d87b9788e67fbc0d0fb37331"} Jan 26 14:15:01 crc kubenswrapper[4881]: I0126 14:15:01.467670 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490615-jg9dl" 
event={"ID":"b47b66fd-96da-4070-b381-fef6ebeefe27","Type":"ContainerStarted","Data":"6cb1d105fc3fddf08f3de7813d0913e380ddb1175be7b7ce699df192d9e02197"} Jan 26 14:15:01 crc kubenswrapper[4881]: I0126 14:15:01.467834 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490615-jg9dl" event={"ID":"b47b66fd-96da-4070-b381-fef6ebeefe27","Type":"ContainerStarted","Data":"bafdd86dd0225543f909e556ed9cd8bff78770c91bd7987b7cfb0dc4a2883d16"} Jan 26 14:15:01 crc kubenswrapper[4881]: I0126 14:15:01.495216 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-clwcd"] Jan 26 14:15:01 crc kubenswrapper[4881]: I0126 14:15:01.518376 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29490615-jg9dl" podStartSLOduration=1.518354537 podStartE2EDuration="1.518354537s" podCreationTimestamp="2026-01-26 14:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:15:01.503752124 +0000 UTC m=+5973.983062150" watchObservedRunningTime="2026-01-26 14:15:01.518354537 +0000 UTC m=+5973.997664563" Jan 26 14:15:01 crc kubenswrapper[4881]: E0126 14:15:01.801159 4881 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb47b66fd_96da_4070_b381_fef6ebeefe27.slice/crio-6cb1d105fc3fddf08f3de7813d0913e380ddb1175be7b7ce699df192d9e02197.scope\": RecentStats: unable to find data in memory cache]" Jan 26 14:15:02 crc kubenswrapper[4881]: I0126 14:15:02.484229 4881 generic.go:334] "Generic (PLEG): container finished" podID="b47b66fd-96da-4070-b381-fef6ebeefe27" containerID="6cb1d105fc3fddf08f3de7813d0913e380ddb1175be7b7ce699df192d9e02197" exitCode=0 Jan 26 14:15:02 crc kubenswrapper[4881]: I0126 14:15:02.484884 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490615-jg9dl" event={"ID":"b47b66fd-96da-4070-b381-fef6ebeefe27","Type":"ContainerDied","Data":"6cb1d105fc3fddf08f3de7813d0913e380ddb1175be7b7ce699df192d9e02197"} Jan 26 14:15:02 crc kubenswrapper[4881]: I0126 14:15:02.485117 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-clwcd" podUID="8c4b0ffc-fd65-4dbd-a7d7-536862aaf4c8" containerName="registry-server" containerID="cri-o://0a707b96eba7b4ff0f8216a6c8802b651a4d22521e10aa08ea393ee369c85afd" gracePeriod=2 Jan 26 14:15:02 crc kubenswrapper[4881]: I0126 14:15:02.976391 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-clwcd" Jan 26 14:15:03 crc kubenswrapper[4881]: I0126 14:15:03.052212 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c4b0ffc-fd65-4dbd-a7d7-536862aaf4c8-utilities\") pod \"8c4b0ffc-fd65-4dbd-a7d7-536862aaf4c8\" (UID: \"8c4b0ffc-fd65-4dbd-a7d7-536862aaf4c8\") " Jan 26 14:15:03 crc kubenswrapper[4881]: I0126 14:15:03.052267 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7r4vg\" (UniqueName: \"kubernetes.io/projected/8c4b0ffc-fd65-4dbd-a7d7-536862aaf4c8-kube-api-access-7r4vg\") pod \"8c4b0ffc-fd65-4dbd-a7d7-536862aaf4c8\" (UID: \"8c4b0ffc-fd65-4dbd-a7d7-536862aaf4c8\") " Jan 26 14:15:03 crc kubenswrapper[4881]: I0126 14:15:03.052353 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c4b0ffc-fd65-4dbd-a7d7-536862aaf4c8-catalog-content\") pod \"8c4b0ffc-fd65-4dbd-a7d7-536862aaf4c8\" (UID: \"8c4b0ffc-fd65-4dbd-a7d7-536862aaf4c8\") " Jan 26 14:15:03 crc kubenswrapper[4881]: I0126 14:15:03.053105 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8c4b0ffc-fd65-4dbd-a7d7-536862aaf4c8-utilities" (OuterVolumeSpecName: "utilities") pod "8c4b0ffc-fd65-4dbd-a7d7-536862aaf4c8" (UID: "8c4b0ffc-fd65-4dbd-a7d7-536862aaf4c8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:15:03 crc kubenswrapper[4881]: I0126 14:15:03.059294 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c4b0ffc-fd65-4dbd-a7d7-536862aaf4c8-kube-api-access-7r4vg" (OuterVolumeSpecName: "kube-api-access-7r4vg") pod "8c4b0ffc-fd65-4dbd-a7d7-536862aaf4c8" (UID: "8c4b0ffc-fd65-4dbd-a7d7-536862aaf4c8"). InnerVolumeSpecName "kube-api-access-7r4vg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:15:03 crc kubenswrapper[4881]: I0126 14:15:03.158117 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c4b0ffc-fd65-4dbd-a7d7-536862aaf4c8-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 14:15:03 crc kubenswrapper[4881]: I0126 14:15:03.158164 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7r4vg\" (UniqueName: \"kubernetes.io/projected/8c4b0ffc-fd65-4dbd-a7d7-536862aaf4c8-kube-api-access-7r4vg\") on node \"crc\" DevicePath \"\"" Jan 26 14:15:03 crc kubenswrapper[4881]: I0126 14:15:03.193274 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8c4b0ffc-fd65-4dbd-a7d7-536862aaf4c8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8c4b0ffc-fd65-4dbd-a7d7-536862aaf4c8" (UID: "8c4b0ffc-fd65-4dbd-a7d7-536862aaf4c8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:15:03 crc kubenswrapper[4881]: I0126 14:15:03.259787 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c4b0ffc-fd65-4dbd-a7d7-536862aaf4c8-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 14:15:03 crc kubenswrapper[4881]: I0126 14:15:03.496697 4881 generic.go:334] "Generic (PLEG): container finished" podID="8c4b0ffc-fd65-4dbd-a7d7-536862aaf4c8" containerID="0a707b96eba7b4ff0f8216a6c8802b651a4d22521e10aa08ea393ee369c85afd" exitCode=0 Jan 26 14:15:03 crc kubenswrapper[4881]: I0126 14:15:03.496769 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-clwcd" Jan 26 14:15:03 crc kubenswrapper[4881]: I0126 14:15:03.496819 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-clwcd" event={"ID":"8c4b0ffc-fd65-4dbd-a7d7-536862aaf4c8","Type":"ContainerDied","Data":"0a707b96eba7b4ff0f8216a6c8802b651a4d22521e10aa08ea393ee369c85afd"} Jan 26 14:15:03 crc kubenswrapper[4881]: I0126 14:15:03.497852 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-clwcd" event={"ID":"8c4b0ffc-fd65-4dbd-a7d7-536862aaf4c8","Type":"ContainerDied","Data":"feebbd60341a4d9e1f730cb5c91306ffee76640468fa81664989996baaf61a25"} Jan 26 14:15:03 crc kubenswrapper[4881]: I0126 14:15:03.497872 4881 scope.go:117] "RemoveContainer" containerID="0a707b96eba7b4ff0f8216a6c8802b651a4d22521e10aa08ea393ee369c85afd" Jan 26 14:15:03 crc kubenswrapper[4881]: I0126 14:15:03.501056 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mk2md" event={"ID":"7831430a-09bd-4b37-bf63-796b55f36493","Type":"ContainerStarted","Data":"63e9f714bfdd0a0d8c43c19f8f395bb687f4ad130a581990700b2dea43283675"} Jan 26 14:15:03 crc kubenswrapper[4881]: I0126 14:15:03.522525 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-mk2md" podStartSLOduration=4.543561958 podStartE2EDuration="8.522495795s" podCreationTimestamp="2026-01-26 14:14:55 +0000 UTC" firstStartedPulling="2026-01-26 14:14:58.432247206 +0000 UTC m=+5970.911557232" lastFinishedPulling="2026-01-26 14:15:02.411181043 +0000 UTC m=+5974.890491069" observedRunningTime="2026-01-26 14:15:03.521624435 +0000 UTC m=+5976.000934471" watchObservedRunningTime="2026-01-26 14:15:03.522495795 +0000 UTC m=+5976.001805821" Jan 26 14:15:03 crc kubenswrapper[4881]: I0126 14:15:03.525019 4881 scope.go:117] "RemoveContainer" containerID="27dfc343722438ac43ee0d39ff1216f9a35521346e4d8618b47075ea62eb7894" Jan 26 14:15:03 crc kubenswrapper[4881]: I0126 14:15:03.541253 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-clwcd"] Jan 26 14:15:03 crc kubenswrapper[4881]: I0126 14:15:03.549088 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-clwcd"] Jan 26 14:15:03 crc kubenswrapper[4881]: I0126 14:15:03.554674 4881 scope.go:117] "RemoveContainer" containerID="c39fc9b795ca8708a07bfdad133898488bf793dca184020f98e136bcac5bdcea" Jan 26 14:15:03 crc kubenswrapper[4881]: I0126 14:15:03.660152 4881 scope.go:117] "RemoveContainer" containerID="0a707b96eba7b4ff0f8216a6c8802b651a4d22521e10aa08ea393ee369c85afd" Jan 26 14:15:03 crc kubenswrapper[4881]: E0126 14:15:03.660571 4881 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"0a707b96eba7b4ff0f8216a6c8802b651a4d22521e10aa08ea393ee369c85afd\": container with ID starting with 0a707b96eba7b4ff0f8216a6c8802b651a4d22521e10aa08ea393ee369c85afd not found: ID does not exist" containerID="0a707b96eba7b4ff0f8216a6c8802b651a4d22521e10aa08ea393ee369c85afd" Jan 26 14:15:03 crc kubenswrapper[4881]: I0126 14:15:03.660615 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a707b96eba7b4ff0f8216a6c8802b651a4d22521e10aa08ea393ee369c85afd"} err="failed to get container status \"0a707b96eba7b4ff0f8216a6c8802b651a4d22521e10aa08ea393ee369c85afd\": rpc error: code = NotFound desc = could not find container \"0a707b96eba7b4ff0f8216a6c8802b651a4d22521e10aa08ea393ee369c85afd\": container with ID starting with 0a707b96eba7b4ff0f8216a6c8802b651a4d22521e10aa08ea393ee369c85afd not found: ID does not exist" Jan 26 14:15:03 crc kubenswrapper[4881]: I0126 14:15:03.660638 4881 scope.go:117] "RemoveContainer" containerID="27dfc343722438ac43ee0d39ff1216f9a35521346e4d8618b47075ea62eb7894" Jan 26 14:15:03 crc kubenswrapper[4881]: E0126 14:15:03.660844 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"27dfc343722438ac43ee0d39ff1216f9a35521346e4d8618b47075ea62eb7894\": container with ID starting with 27dfc343722438ac43ee0d39ff1216f9a35521346e4d8618b47075ea62eb7894 not found: ID does not exist" containerID="27dfc343722438ac43ee0d39ff1216f9a35521346e4d8618b47075ea62eb7894" Jan 26 14:15:03 crc kubenswrapper[4881]: I0126 14:15:03.660867 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27dfc343722438ac43ee0d39ff1216f9a35521346e4d8618b47075ea62eb7894"} err="failed to get container status \"27dfc343722438ac43ee0d39ff1216f9a35521346e4d8618b47075ea62eb7894\": rpc error: code = NotFound desc = could not find container \"27dfc343722438ac43ee0d39ff1216f9a35521346e4d8618b47075ea62eb7894\": container with ID starting with 27dfc343722438ac43ee0d39ff1216f9a35521346e4d8618b47075ea62eb7894 not found: ID does not exist" Jan 26 14:15:03 crc kubenswrapper[4881]: I0126 14:15:03.660881 4881 scope.go:117] "RemoveContainer" containerID="c39fc9b795ca8708a07bfdad133898488bf793dca184020f98e136bcac5bdcea" Jan 26 14:15:03 crc kubenswrapper[4881]: E0126 14:15:03.661059 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c39fc9b795ca8708a07bfdad133898488bf793dca184020f98e136bcac5bdcea\": container with ID starting with c39fc9b795ca8708a07bfdad133898488bf793dca184020f98e136bcac5bdcea not found: ID does not exist" containerID="c39fc9b795ca8708a07bfdad133898488bf793dca184020f98e136bcac5bdcea" Jan 26 14:15:03 crc kubenswrapper[4881]: I0126 14:15:03.661081 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c39fc9b795ca8708a07bfdad133898488bf793dca184020f98e136bcac5bdcea"} err="failed to get container status \"c39fc9b795ca8708a07bfdad133898488bf793dca184020f98e136bcac5bdcea\": rpc error: code = NotFound desc = could not find container \"c39fc9b795ca8708a07bfdad133898488bf793dca184020f98e136bcac5bdcea\": container with ID starting with c39fc9b795ca8708a07bfdad133898488bf793dca184020f98e136bcac5bdcea not found: ID does not exist" Jan 26 14:15:03 crc kubenswrapper[4881]: I0126 14:15:03.933797 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490615-jg9dl" Jan 26 14:15:04 crc kubenswrapper[4881]: I0126 14:15:04.074510 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qf2bx\" (UniqueName: \"kubernetes.io/projected/b47b66fd-96da-4070-b381-fef6ebeefe27-kube-api-access-qf2bx\") pod \"b47b66fd-96da-4070-b381-fef6ebeefe27\" (UID: \"b47b66fd-96da-4070-b381-fef6ebeefe27\") " Jan 26 14:15:04 crc kubenswrapper[4881]: I0126 14:15:04.074640 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b47b66fd-96da-4070-b381-fef6ebeefe27-secret-volume\") pod \"b47b66fd-96da-4070-b381-fef6ebeefe27\" (UID: \"b47b66fd-96da-4070-b381-fef6ebeefe27\") " Jan 26 14:15:04 crc kubenswrapper[4881]: I0126 14:15:04.074790 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b47b66fd-96da-4070-b381-fef6ebeefe27-config-volume\") pod \"b47b66fd-96da-4070-b381-fef6ebeefe27\" (UID: \"b47b66fd-96da-4070-b381-fef6ebeefe27\") " Jan 26 14:15:04 crc kubenswrapper[4881]: I0126 14:15:04.075496 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b47b66fd-96da-4070-b381-fef6ebeefe27-config-volume" (OuterVolumeSpecName: "config-volume") pod "b47b66fd-96da-4070-b381-fef6ebeefe27" (UID: "b47b66fd-96da-4070-b381-fef6ebeefe27"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:15:04 crc kubenswrapper[4881]: I0126 14:15:04.081761 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b47b66fd-96da-4070-b381-fef6ebeefe27-kube-api-access-qf2bx" (OuterVolumeSpecName: "kube-api-access-qf2bx") pod "b47b66fd-96da-4070-b381-fef6ebeefe27" (UID: "b47b66fd-96da-4070-b381-fef6ebeefe27"). InnerVolumeSpecName "kube-api-access-qf2bx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:15:04 crc kubenswrapper[4881]: I0126 14:15:04.088667 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b47b66fd-96da-4070-b381-fef6ebeefe27-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "b47b66fd-96da-4070-b381-fef6ebeefe27" (UID: "b47b66fd-96da-4070-b381-fef6ebeefe27"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:15:04 crc kubenswrapper[4881]: I0126 14:15:04.093296 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c4b0ffc-fd65-4dbd-a7d7-536862aaf4c8" path="/var/lib/kubelet/pods/8c4b0ffc-fd65-4dbd-a7d7-536862aaf4c8/volumes" Jan 26 14:15:04 crc kubenswrapper[4881]: I0126 14:15:04.177504 4881 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b47b66fd-96da-4070-b381-fef6ebeefe27-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 26 14:15:04 crc kubenswrapper[4881]: I0126 14:15:04.177561 4881 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b47b66fd-96da-4070-b381-fef6ebeefe27-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 14:15:04 crc kubenswrapper[4881]: I0126 14:15:04.177574 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qf2bx\" (UniqueName: \"kubernetes.io/projected/b47b66fd-96da-4070-b381-fef6ebeefe27-kube-api-access-qf2bx\") on node \"crc\" DevicePath \"\"" Jan 26 14:15:04 crc kubenswrapper[4881]: I0126 14:15:04.252790 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490570-pjxtj"] Jan 26 14:15:04 crc kubenswrapper[4881]: I0126 14:15:04.261574 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490570-pjxtj"] Jan 26 14:15:04 crc kubenswrapper[4881]: I0126 14:15:04.518756 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490615-jg9dl" event={"ID":"b47b66fd-96da-4070-b381-fef6ebeefe27","Type":"ContainerDied","Data":"bafdd86dd0225543f909e556ed9cd8bff78770c91bd7987b7cfb0dc4a2883d16"} Jan 26 14:15:04 crc kubenswrapper[4881]: I0126 14:15:04.518828 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bafdd86dd0225543f909e556ed9cd8bff78770c91bd7987b7cfb0dc4a2883d16" Jan 26 14:15:04 crc kubenswrapper[4881]: I0126 14:15:04.518784 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490615-jg9dl" Jan 26 14:15:05 crc kubenswrapper[4881]: I0126 14:15:05.496874 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-mk2md" Jan 26 14:15:05 crc kubenswrapper[4881]: I0126 14:15:05.497187 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-mk2md" Jan 26 14:15:05 crc kubenswrapper[4881]: I0126 14:15:05.603738 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-mk2md" Jan 26 14:15:06 crc kubenswrapper[4881]: I0126 14:15:06.097280 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3196a145-c310-4e39-ab2e-e6e44993f4e9" path="/var/lib/kubelet/pods/3196a145-c310-4e39-ab2e-e6e44993f4e9/volumes" Jan 26 14:15:15 crc kubenswrapper[4881]: I0126 14:15:15.559195 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-mk2md" Jan 26 14:15:15 crc kubenswrapper[4881]: I0126 14:15:15.625572 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mk2md"] Jan 26 14:15:15 crc kubenswrapper[4881]: I0126 14:15:15.636471 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-mk2md" podUID="7831430a-09bd-4b37-bf63-796b55f36493" containerName="registry-server" containerID="cri-o://63e9f714bfdd0a0d8c43c19f8f395bb687f4ad130a581990700b2dea43283675" gracePeriod=2 Jan 26 14:15:16 crc kubenswrapper[4881]: I0126 14:15:16.126935 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mk2md" Jan 26 14:15:16 crc kubenswrapper[4881]: I0126 14:15:16.226607 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7831430a-09bd-4b37-bf63-796b55f36493-catalog-content\") pod \"7831430a-09bd-4b37-bf63-796b55f36493\" (UID: \"7831430a-09bd-4b37-bf63-796b55f36493\") " Jan 26 14:15:16 crc kubenswrapper[4881]: I0126 14:15:16.227046 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7831430a-09bd-4b37-bf63-796b55f36493-utilities\") pod \"7831430a-09bd-4b37-bf63-796b55f36493\" (UID: \"7831430a-09bd-4b37-bf63-796b55f36493\") " Jan 26 14:15:16 crc kubenswrapper[4881]: I0126 14:15:16.227120 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rcdx2\" (UniqueName: \"kubernetes.io/projected/7831430a-09bd-4b37-bf63-796b55f36493-kube-api-access-rcdx2\") pod \"7831430a-09bd-4b37-bf63-796b55f36493\" (UID: \"7831430a-09bd-4b37-bf63-796b55f36493\") " Jan 26 14:15:16 crc kubenswrapper[4881]: I0126 14:15:16.227670 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7831430a-09bd-4b37-bf63-796b55f36493-utilities" (OuterVolumeSpecName: "utilities") pod "7831430a-09bd-4b37-bf63-796b55f36493" (UID: "7831430a-09bd-4b37-bf63-796b55f36493"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:15:16 crc kubenswrapper[4881]: I0126 14:15:16.246668 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7831430a-09bd-4b37-bf63-796b55f36493-kube-api-access-rcdx2" (OuterVolumeSpecName: "kube-api-access-rcdx2") pod "7831430a-09bd-4b37-bf63-796b55f36493" (UID: "7831430a-09bd-4b37-bf63-796b55f36493"). InnerVolumeSpecName "kube-api-access-rcdx2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:15:16 crc kubenswrapper[4881]: I0126 14:15:16.279173 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7831430a-09bd-4b37-bf63-796b55f36493-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7831430a-09bd-4b37-bf63-796b55f36493" (UID: "7831430a-09bd-4b37-bf63-796b55f36493"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:15:16 crc kubenswrapper[4881]: I0126 14:15:16.329405 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7831430a-09bd-4b37-bf63-796b55f36493-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 14:15:16 crc kubenswrapper[4881]: I0126 14:15:16.329434 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rcdx2\" (UniqueName: \"kubernetes.io/projected/7831430a-09bd-4b37-bf63-796b55f36493-kube-api-access-rcdx2\") on node \"crc\" DevicePath \"\"" Jan 26 14:15:16 crc kubenswrapper[4881]: I0126 14:15:16.329444 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7831430a-09bd-4b37-bf63-796b55f36493-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 14:15:16 crc kubenswrapper[4881]: I0126 14:15:16.651485 4881 generic.go:334] "Generic (PLEG): container finished" podID="7831430a-09bd-4b37-bf63-796b55f36493" containerID="63e9f714bfdd0a0d8c43c19f8f395bb687f4ad130a581990700b2dea43283675" exitCode=0 Jan 26 14:15:16 crc kubenswrapper[4881]: I0126 14:15:16.651549 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mk2md" Jan 26 14:15:16 crc kubenswrapper[4881]: I0126 14:15:16.651561 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mk2md" event={"ID":"7831430a-09bd-4b37-bf63-796b55f36493","Type":"ContainerDied","Data":"63e9f714bfdd0a0d8c43c19f8f395bb687f4ad130a581990700b2dea43283675"} Jan 26 14:15:16 crc kubenswrapper[4881]: I0126 14:15:16.651618 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mk2md" event={"ID":"7831430a-09bd-4b37-bf63-796b55f36493","Type":"ContainerDied","Data":"385c5bb43770d68a34b40f3d6b50635a62c4420eaa5f518dd16bbba111e47641"} Jan 26 14:15:16 crc kubenswrapper[4881]: I0126 14:15:16.651640 4881 scope.go:117] "RemoveContainer" containerID="63e9f714bfdd0a0d8c43c19f8f395bb687f4ad130a581990700b2dea43283675" Jan 26 14:15:16 crc kubenswrapper[4881]: I0126 14:15:16.690287 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mk2md"] Jan 26 14:15:16 crc kubenswrapper[4881]: I0126 14:15:16.692400 4881 scope.go:117] "RemoveContainer" containerID="393c67aac157e995f9666c4018f453d08d97a978d87b9788e67fbc0d0fb37331" Jan 26 14:15:16 crc kubenswrapper[4881]: I0126 14:15:16.702221 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-mk2md"] Jan 26 14:15:16 crc kubenswrapper[4881]: I0126 14:15:16.730910 4881 scope.go:117] "RemoveContainer" containerID="9309acd9ef4506186983470c9b119c793e532eea20ff116c78c6f4c4ab849a4a" Jan 26 14:15:16 crc kubenswrapper[4881]: I0126 14:15:16.778273 4881 scope.go:117] "RemoveContainer" containerID="63e9f714bfdd0a0d8c43c19f8f395bb687f4ad130a581990700b2dea43283675" Jan 26 14:15:16 crc kubenswrapper[4881]: E0126 14:15:16.779176 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"63e9f714bfdd0a0d8c43c19f8f395bb687f4ad130a581990700b2dea43283675\": container with ID starting with 63e9f714bfdd0a0d8c43c19f8f395bb687f4ad130a581990700b2dea43283675 not found: ID does not exist" containerID="63e9f714bfdd0a0d8c43c19f8f395bb687f4ad130a581990700b2dea43283675" Jan 26 14:15:16 crc kubenswrapper[4881]: I0126 14:15:16.779208 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"63e9f714bfdd0a0d8c43c19f8f395bb687f4ad130a581990700b2dea43283675"} err="failed to get container status \"63e9f714bfdd0a0d8c43c19f8f395bb687f4ad130a581990700b2dea43283675\": rpc error: code = NotFound desc = could not find container \"63e9f714bfdd0a0d8c43c19f8f395bb687f4ad130a581990700b2dea43283675\": container with ID starting with 63e9f714bfdd0a0d8c43c19f8f395bb687f4ad130a581990700b2dea43283675 not found: ID does not exist" Jan 26 14:15:16 crc kubenswrapper[4881]: I0126 14:15:16.779229 4881 scope.go:117] "RemoveContainer" containerID="393c67aac157e995f9666c4018f453d08d97a978d87b9788e67fbc0d0fb37331" Jan 26 14:15:16 crc kubenswrapper[4881]: E0126 14:15:16.779449 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"393c67aac157e995f9666c4018f453d08d97a978d87b9788e67fbc0d0fb37331\": container with ID starting with 393c67aac157e995f9666c4018f453d08d97a978d87b9788e67fbc0d0fb37331 not found: ID does not exist" containerID="393c67aac157e995f9666c4018f453d08d97a978d87b9788e67fbc0d0fb37331" Jan 26 14:15:16 crc kubenswrapper[4881]: I0126 14:15:16.779475 4881 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"393c67aac157e995f9666c4018f453d08d97a978d87b9788e67fbc0d0fb37331"} err="failed to get container status \"393c67aac157e995f9666c4018f453d08d97a978d87b9788e67fbc0d0fb37331\": rpc error: code = NotFound desc = could not find container \"393c67aac157e995f9666c4018f453d08d97a978d87b9788e67fbc0d0fb37331\": container with ID starting with 393c67aac157e995f9666c4018f453d08d97a978d87b9788e67fbc0d0fb37331 not found: ID does not exist" Jan 26 14:15:16 crc kubenswrapper[4881]: I0126 14:15:16.779491 4881 scope.go:117] "RemoveContainer" containerID="9309acd9ef4506186983470c9b119c793e532eea20ff116c78c6f4c4ab849a4a" Jan 26 14:15:16 crc kubenswrapper[4881]: E0126 14:15:16.779689 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9309acd9ef4506186983470c9b119c793e532eea20ff116c78c6f4c4ab849a4a\": container with ID starting with 9309acd9ef4506186983470c9b119c793e532eea20ff116c78c6f4c4ab849a4a not found: ID does not exist" containerID="9309acd9ef4506186983470c9b119c793e532eea20ff116c78c6f4c4ab849a4a" Jan 26 14:15:16 crc kubenswrapper[4881]: I0126 14:15:16.779760 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9309acd9ef4506186983470c9b119c793e532eea20ff116c78c6f4c4ab849a4a"} err="failed to get container status \"9309acd9ef4506186983470c9b119c793e532eea20ff116c78c6f4c4ab849a4a\": rpc error: code = NotFound desc = could not find container \"9309acd9ef4506186983470c9b119c793e532eea20ff116c78c6f4c4ab849a4a\": container with ID starting with 9309acd9ef4506186983470c9b119c793e532eea20ff116c78c6f4c4ab849a4a not found: ID does not exist" Jan 26 14:15:18 crc kubenswrapper[4881]: I0126 14:15:18.099069 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7831430a-09bd-4b37-bf63-796b55f36493" path="/var/lib/kubelet/pods/7831430a-09bd-4b37-bf63-796b55f36493/volumes" Jan 26 14:15:24 crc kubenswrapper[4881]: I0126 14:15:24.789552 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fwlbz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 14:15:24 crc kubenswrapper[4881]: I0126 14:15:24.790035 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 14:15:26 crc kubenswrapper[4881]: I0126 14:15:26.751882 4881 generic.go:334] "Generic (PLEG): container finished" podID="c1f945b2-2efe-4344-a38a-35a66a2fa236" containerID="254b61f9fe8f66ff8814c826af23b3f474ae7b5a2064adf8f8a5e535925ca8fc" exitCode=0 Jan 26 14:15:26 crc kubenswrapper[4881]: I0126 14:15:26.752003 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5hb8g/crc-debug-ssfrp" event={"ID":"c1f945b2-2efe-4344-a38a-35a66a2fa236","Type":"ContainerDied","Data":"254b61f9fe8f66ff8814c826af23b3f474ae7b5a2064adf8f8a5e535925ca8fc"} Jan 26 14:15:27 crc kubenswrapper[4881]: I0126 14:15:27.898861 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-5hb8g/crc-debug-ssfrp" Jan 26 14:15:27 crc kubenswrapper[4881]: I0126 14:15:27.951272 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-5hb8g/crc-debug-ssfrp"] Jan 26 14:15:27 crc kubenswrapper[4881]: I0126 14:15:27.964303 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-5hb8g/crc-debug-ssfrp"] Jan 26 14:15:27 crc kubenswrapper[4881]: I0126 14:15:27.978882 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8bk8h\" (UniqueName: \"kubernetes.io/projected/c1f945b2-2efe-4344-a38a-35a66a2fa236-kube-api-access-8bk8h\") pod \"c1f945b2-2efe-4344-a38a-35a66a2fa236\" (UID: \"c1f945b2-2efe-4344-a38a-35a66a2fa236\") " Jan 26 14:15:27 crc kubenswrapper[4881]: I0126 14:15:27.979026 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c1f945b2-2efe-4344-a38a-35a66a2fa236-host\") pod \"c1f945b2-2efe-4344-a38a-35a66a2fa236\" (UID: \"c1f945b2-2efe-4344-a38a-35a66a2fa236\") " Jan 26 14:15:27 crc kubenswrapper[4881]: I0126 14:15:27.979117 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c1f945b2-2efe-4344-a38a-35a66a2fa236-host" (OuterVolumeSpecName: "host") pod "c1f945b2-2efe-4344-a38a-35a66a2fa236" (UID: "c1f945b2-2efe-4344-a38a-35a66a2fa236"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 14:15:27 crc kubenswrapper[4881]: I0126 14:15:27.979856 4881 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c1f945b2-2efe-4344-a38a-35a66a2fa236-host\") on node \"crc\" DevicePath \"\"" Jan 26 14:15:27 crc kubenswrapper[4881]: I0126 14:15:27.988682 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1f945b2-2efe-4344-a38a-35a66a2fa236-kube-api-access-8bk8h" (OuterVolumeSpecName: "kube-api-access-8bk8h") pod "c1f945b2-2efe-4344-a38a-35a66a2fa236" (UID: "c1f945b2-2efe-4344-a38a-35a66a2fa236"). InnerVolumeSpecName "kube-api-access-8bk8h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:15:28 crc kubenswrapper[4881]: I0126 14:15:28.081376 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8bk8h\" (UniqueName: \"kubernetes.io/projected/c1f945b2-2efe-4344-a38a-35a66a2fa236-kube-api-access-8bk8h\") on node \"crc\" DevicePath \"\"" Jan 26 14:15:28 crc kubenswrapper[4881]: I0126 14:15:28.094790 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1f945b2-2efe-4344-a38a-35a66a2fa236" path="/var/lib/kubelet/pods/c1f945b2-2efe-4344-a38a-35a66a2fa236/volumes" Jan 26 14:15:28 crc kubenswrapper[4881]: I0126 14:15:28.773189 4881 scope.go:117] "RemoveContainer" containerID="254b61f9fe8f66ff8814c826af23b3f474ae7b5a2064adf8f8a5e535925ca8fc" Jan 26 14:15:28 crc kubenswrapper[4881]: I0126 14:15:28.773588 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-5hb8g/crc-debug-ssfrp" Jan 26 14:15:29 crc kubenswrapper[4881]: I0126 14:15:29.194849 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-5hb8g/crc-debug-pdx6q"] Jan 26 14:15:29 crc kubenswrapper[4881]: E0126 14:15:29.195716 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7831430a-09bd-4b37-bf63-796b55f36493" containerName="extract-content" Jan 26 14:15:29 crc kubenswrapper[4881]: I0126 14:15:29.195736 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="7831430a-09bd-4b37-bf63-796b55f36493" containerName="extract-content" Jan 26 14:15:29 crc kubenswrapper[4881]: E0126 14:15:29.195753 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7831430a-09bd-4b37-bf63-796b55f36493" containerName="extract-utilities" Jan 26 14:15:29 crc kubenswrapper[4881]: I0126 14:15:29.195761 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="7831430a-09bd-4b37-bf63-796b55f36493" containerName="extract-utilities" Jan 26 14:15:29 crc kubenswrapper[4881]: E0126 14:15:29.195778 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b47b66fd-96da-4070-b381-fef6ebeefe27" containerName="collect-profiles" Jan 26 14:15:29 crc kubenswrapper[4881]: I0126 14:15:29.195790 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="b47b66fd-96da-4070-b381-fef6ebeefe27" containerName="collect-profiles" Jan 26 14:15:29 crc kubenswrapper[4881]: E0126 14:15:29.195803 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1f945b2-2efe-4344-a38a-35a66a2fa236" containerName="container-00" Jan 26 14:15:29 crc kubenswrapper[4881]: I0126 14:15:29.195810 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1f945b2-2efe-4344-a38a-35a66a2fa236" containerName="container-00" Jan 26 14:15:29 crc kubenswrapper[4881]: E0126 14:15:29.195820 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7831430a-09bd-4b37-bf63-796b55f36493" containerName="registry-server" Jan 26 14:15:29 crc kubenswrapper[4881]: I0126 14:15:29.195827 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="7831430a-09bd-4b37-bf63-796b55f36493" containerName="registry-server" Jan 26 14:15:29 crc kubenswrapper[4881]: E0126 14:15:29.195850 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c4b0ffc-fd65-4dbd-a7d7-536862aaf4c8" containerName="registry-server" Jan 26 14:15:29 crc kubenswrapper[4881]: I0126 14:15:29.195857 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c4b0ffc-fd65-4dbd-a7d7-536862aaf4c8" containerName="registry-server" Jan 26 14:15:29 crc kubenswrapper[4881]: E0126 14:15:29.195867 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c4b0ffc-fd65-4dbd-a7d7-536862aaf4c8" containerName="extract-utilities" Jan 26 14:15:29 crc kubenswrapper[4881]: I0126 14:15:29.195874 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c4b0ffc-fd65-4dbd-a7d7-536862aaf4c8" containerName="extract-utilities" Jan 26 14:15:29 crc kubenswrapper[4881]: E0126 14:15:29.195887 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c4b0ffc-fd65-4dbd-a7d7-536862aaf4c8" containerName="extract-content" Jan 26 14:15:29 crc kubenswrapper[4881]: I0126 14:15:29.195894 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c4b0ffc-fd65-4dbd-a7d7-536862aaf4c8" containerName="extract-content" Jan 26 14:15:29 crc kubenswrapper[4881]: I0126 14:15:29.196119 4881 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="b47b66fd-96da-4070-b381-fef6ebeefe27" containerName="collect-profiles" Jan 26 14:15:29 crc kubenswrapper[4881]: I0126 14:15:29.196139 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="7831430a-09bd-4b37-bf63-796b55f36493" containerName="registry-server" Jan 26 14:15:29 crc kubenswrapper[4881]: I0126 14:15:29.196154 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1f945b2-2efe-4344-a38a-35a66a2fa236" containerName="container-00" Jan 26 14:15:29 crc kubenswrapper[4881]: I0126 14:15:29.196166 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c4b0ffc-fd65-4dbd-a7d7-536862aaf4c8" containerName="registry-server" Jan 26 14:15:29 crc kubenswrapper[4881]: I0126 14:15:29.196989 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5hb8g/crc-debug-pdx6q" Jan 26 14:15:29 crc kubenswrapper[4881]: I0126 14:15:29.206183 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f349e8ec-df28-4358-ad99-57620157cd67-host\") pod \"crc-debug-pdx6q\" (UID: \"f349e8ec-df28-4358-ad99-57620157cd67\") " pod="openshift-must-gather-5hb8g/crc-debug-pdx6q" Jan 26 14:15:29 crc kubenswrapper[4881]: I0126 14:15:29.206895 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2lm7\" (UniqueName: \"kubernetes.io/projected/f349e8ec-df28-4358-ad99-57620157cd67-kube-api-access-s2lm7\") pod \"crc-debug-pdx6q\" (UID: \"f349e8ec-df28-4358-ad99-57620157cd67\") " pod="openshift-must-gather-5hb8g/crc-debug-pdx6q" Jan 26 14:15:29 crc kubenswrapper[4881]: I0126 14:15:29.313818 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2lm7\" (UniqueName: \"kubernetes.io/projected/f349e8ec-df28-4358-ad99-57620157cd67-kube-api-access-s2lm7\") pod \"crc-debug-pdx6q\" (UID: \"f349e8ec-df28-4358-ad99-57620157cd67\") " pod="openshift-must-gather-5hb8g/crc-debug-pdx6q" Jan 26 14:15:29 crc kubenswrapper[4881]: I0126 14:15:29.314000 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f349e8ec-df28-4358-ad99-57620157cd67-host\") pod \"crc-debug-pdx6q\" (UID: \"f349e8ec-df28-4358-ad99-57620157cd67\") " pod="openshift-must-gather-5hb8g/crc-debug-pdx6q" Jan 26 14:15:29 crc kubenswrapper[4881]: I0126 14:15:29.315840 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f349e8ec-df28-4358-ad99-57620157cd67-host\") pod \"crc-debug-pdx6q\" (UID: \"f349e8ec-df28-4358-ad99-57620157cd67\") " pod="openshift-must-gather-5hb8g/crc-debug-pdx6q" Jan 26 14:15:29 crc kubenswrapper[4881]: I0126 14:15:29.346930 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2lm7\" (UniqueName: \"kubernetes.io/projected/f349e8ec-df28-4358-ad99-57620157cd67-kube-api-access-s2lm7\") pod \"crc-debug-pdx6q\" (UID: \"f349e8ec-df28-4358-ad99-57620157cd67\") " pod="openshift-must-gather-5hb8g/crc-debug-pdx6q" Jan 26 14:15:29 crc kubenswrapper[4881]: I0126 14:15:29.514208 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-5hb8g/crc-debug-pdx6q" Jan 26 14:15:29 crc kubenswrapper[4881]: I0126 14:15:29.785174 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5hb8g/crc-debug-pdx6q" event={"ID":"f349e8ec-df28-4358-ad99-57620157cd67","Type":"ContainerStarted","Data":"d892cbe9acda70101b522bdc848090042030a5eb842acf02d731b35c27978483"} Jan 26 14:15:30 crc kubenswrapper[4881]: I0126 14:15:30.795049 4881 generic.go:334] "Generic (PLEG): container finished" podID="f349e8ec-df28-4358-ad99-57620157cd67" containerID="7b75404309917a4694a61b035b9f50e601af95edad7ab65dea48d3e2748cd7d3" exitCode=0 Jan 26 14:15:30 crc kubenswrapper[4881]: I0126 14:15:30.795124 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5hb8g/crc-debug-pdx6q" event={"ID":"f349e8ec-df28-4358-ad99-57620157cd67","Type":"ContainerDied","Data":"7b75404309917a4694a61b035b9f50e601af95edad7ab65dea48d3e2748cd7d3"} Jan 26 14:15:31 crc kubenswrapper[4881]: I0126 14:15:31.944366 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5hb8g/crc-debug-pdx6q" Jan 26 14:15:31 crc kubenswrapper[4881]: I0126 14:15:31.972498 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s2lm7\" (UniqueName: \"kubernetes.io/projected/f349e8ec-df28-4358-ad99-57620157cd67-kube-api-access-s2lm7\") pod \"f349e8ec-df28-4358-ad99-57620157cd67\" (UID: \"f349e8ec-df28-4358-ad99-57620157cd67\") " Jan 26 14:15:31 crc kubenswrapper[4881]: I0126 14:15:31.972722 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f349e8ec-df28-4358-ad99-57620157cd67-host\") pod \"f349e8ec-df28-4358-ad99-57620157cd67\" (UID: \"f349e8ec-df28-4358-ad99-57620157cd67\") " Jan 26 14:15:31 crc kubenswrapper[4881]: I0126 14:15:31.974481 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f349e8ec-df28-4358-ad99-57620157cd67-host" (OuterVolumeSpecName: "host") pod "f349e8ec-df28-4358-ad99-57620157cd67" (UID: "f349e8ec-df28-4358-ad99-57620157cd67"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 14:15:31 crc kubenswrapper[4881]: I0126 14:15:31.992319 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f349e8ec-df28-4358-ad99-57620157cd67-kube-api-access-s2lm7" (OuterVolumeSpecName: "kube-api-access-s2lm7") pod "f349e8ec-df28-4358-ad99-57620157cd67" (UID: "f349e8ec-df28-4358-ad99-57620157cd67"). InnerVolumeSpecName "kube-api-access-s2lm7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:15:32 crc kubenswrapper[4881]: I0126 14:15:32.076196 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s2lm7\" (UniqueName: \"kubernetes.io/projected/f349e8ec-df28-4358-ad99-57620157cd67-kube-api-access-s2lm7\") on node \"crc\" DevicePath \"\"" Jan 26 14:15:32 crc kubenswrapper[4881]: I0126 14:15:32.076225 4881 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f349e8ec-df28-4358-ad99-57620157cd67-host\") on node \"crc\" DevicePath \"\"" Jan 26 14:15:32 crc kubenswrapper[4881]: E0126 14:15:32.656870 4881 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc1f945b2_2efe_4344_a38a_35a66a2fa236.slice/crio-f6537263b66528d3648640641f55dae143673b23a69092462a364c01aa0fe152\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc1f945b2_2efe_4344_a38a_35a66a2fa236.slice\": RecentStats: unable to find data in memory cache]" Jan 26 14:15:32 crc kubenswrapper[4881]: I0126 14:15:32.815431 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5hb8g/crc-debug-pdx6q" event={"ID":"f349e8ec-df28-4358-ad99-57620157cd67","Type":"ContainerDied","Data":"d892cbe9acda70101b522bdc848090042030a5eb842acf02d731b35c27978483"} Jan 26 14:15:32 crc kubenswrapper[4881]: I0126 14:15:32.815473 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d892cbe9acda70101b522bdc848090042030a5eb842acf02d731b35c27978483" Jan 26 14:15:32 crc kubenswrapper[4881]: I0126 14:15:32.815538 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5hb8g/crc-debug-pdx6q" Jan 26 14:15:33 crc kubenswrapper[4881]: I0126 14:15:33.149052 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-5hb8g/crc-debug-pdx6q"] Jan 26 14:15:33 crc kubenswrapper[4881]: I0126 14:15:33.161667 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-5hb8g/crc-debug-pdx6q"] Jan 26 14:15:34 crc kubenswrapper[4881]: I0126 14:15:34.093625 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f349e8ec-df28-4358-ad99-57620157cd67" path="/var/lib/kubelet/pods/f349e8ec-df28-4358-ad99-57620157cd67/volumes" Jan 26 14:15:34 crc kubenswrapper[4881]: I0126 14:15:34.536849 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-5hb8g/crc-debug-57hqb"] Jan 26 14:15:34 crc kubenswrapper[4881]: E0126 14:15:34.537257 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f349e8ec-df28-4358-ad99-57620157cd67" containerName="container-00" Jan 26 14:15:34 crc kubenswrapper[4881]: I0126 14:15:34.537268 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="f349e8ec-df28-4358-ad99-57620157cd67" containerName="container-00" Jan 26 14:15:34 crc kubenswrapper[4881]: I0126 14:15:34.537462 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="f349e8ec-df28-4358-ad99-57620157cd67" containerName="container-00" Jan 26 14:15:34 crc kubenswrapper[4881]: I0126 14:15:34.538175 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-5hb8g/crc-debug-57hqb" Jan 26 14:15:34 crc kubenswrapper[4881]: I0126 14:15:34.636965 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxmxg\" (UniqueName: \"kubernetes.io/projected/d536eca9-908b-4f64-b11f-642d39cb0c72-kube-api-access-kxmxg\") pod \"crc-debug-57hqb\" (UID: \"d536eca9-908b-4f64-b11f-642d39cb0c72\") " pod="openshift-must-gather-5hb8g/crc-debug-57hqb" Jan 26 14:15:34 crc kubenswrapper[4881]: I0126 14:15:34.637259 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d536eca9-908b-4f64-b11f-642d39cb0c72-host\") pod \"crc-debug-57hqb\" (UID: \"d536eca9-908b-4f64-b11f-642d39cb0c72\") " pod="openshift-must-gather-5hb8g/crc-debug-57hqb" Jan 26 14:15:34 crc kubenswrapper[4881]: I0126 14:15:34.739486 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxmxg\" (UniqueName: \"kubernetes.io/projected/d536eca9-908b-4f64-b11f-642d39cb0c72-kube-api-access-kxmxg\") pod \"crc-debug-57hqb\" (UID: \"d536eca9-908b-4f64-b11f-642d39cb0c72\") " pod="openshift-must-gather-5hb8g/crc-debug-57hqb" Jan 26 14:15:34 crc kubenswrapper[4881]: I0126 14:15:34.739883 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d536eca9-908b-4f64-b11f-642d39cb0c72-host\") pod \"crc-debug-57hqb\" (UID: \"d536eca9-908b-4f64-b11f-642d39cb0c72\") " pod="openshift-must-gather-5hb8g/crc-debug-57hqb" Jan 26 14:15:34 crc kubenswrapper[4881]: I0126 14:15:34.740010 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d536eca9-908b-4f64-b11f-642d39cb0c72-host\") pod \"crc-debug-57hqb\" (UID: \"d536eca9-908b-4f64-b11f-642d39cb0c72\") " pod="openshift-must-gather-5hb8g/crc-debug-57hqb" Jan 26 14:15:34 crc kubenswrapper[4881]: I0126 14:15:34.761419 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxmxg\" (UniqueName: \"kubernetes.io/projected/d536eca9-908b-4f64-b11f-642d39cb0c72-kube-api-access-kxmxg\") pod \"crc-debug-57hqb\" (UID: \"d536eca9-908b-4f64-b11f-642d39cb0c72\") " pod="openshift-must-gather-5hb8g/crc-debug-57hqb" Jan 26 14:15:34 crc kubenswrapper[4881]: I0126 14:15:34.865305 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-5hb8g/crc-debug-57hqb" Jan 26 14:15:35 crc kubenswrapper[4881]: I0126 14:15:35.846045 4881 generic.go:334] "Generic (PLEG): container finished" podID="d536eca9-908b-4f64-b11f-642d39cb0c72" containerID="3426554ed6ba28147a8f1dc0e2ecf452f04b65a6d3114a9490768d85c7ca0799" exitCode=0 Jan 26 14:15:35 crc kubenswrapper[4881]: I0126 14:15:35.846221 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5hb8g/crc-debug-57hqb" event={"ID":"d536eca9-908b-4f64-b11f-642d39cb0c72","Type":"ContainerDied","Data":"3426554ed6ba28147a8f1dc0e2ecf452f04b65a6d3114a9490768d85c7ca0799"} Jan 26 14:15:35 crc kubenswrapper[4881]: I0126 14:15:35.846667 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5hb8g/crc-debug-57hqb" event={"ID":"d536eca9-908b-4f64-b11f-642d39cb0c72","Type":"ContainerStarted","Data":"09fcb5e5f53b2746d6f0c93b0d0c402d22bd04df4def04bf17035f0ba5eb037b"} Jan 26 14:15:35 crc kubenswrapper[4881]: I0126 14:15:35.889799 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-5hb8g/crc-debug-57hqb"] Jan 26 14:15:35 crc kubenswrapper[4881]: I0126 14:15:35.897295 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-5hb8g/crc-debug-57hqb"] Jan 26 14:15:36 crc kubenswrapper[4881]: I0126 14:15:36.997128 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5hb8g/crc-debug-57hqb" Jan 26 14:15:37 crc kubenswrapper[4881]: I0126 14:15:37.085691 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d536eca9-908b-4f64-b11f-642d39cb0c72-host\") pod \"d536eca9-908b-4f64-b11f-642d39cb0c72\" (UID: \"d536eca9-908b-4f64-b11f-642d39cb0c72\") " Jan 26 14:15:37 crc kubenswrapper[4881]: I0126 14:15:37.085831 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d536eca9-908b-4f64-b11f-642d39cb0c72-host" (OuterVolumeSpecName: "host") pod "d536eca9-908b-4f64-b11f-642d39cb0c72" (UID: "d536eca9-908b-4f64-b11f-642d39cb0c72"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 14:15:37 crc kubenswrapper[4881]: I0126 14:15:37.085953 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kxmxg\" (UniqueName: \"kubernetes.io/projected/d536eca9-908b-4f64-b11f-642d39cb0c72-kube-api-access-kxmxg\") pod \"d536eca9-908b-4f64-b11f-642d39cb0c72\" (UID: \"d536eca9-908b-4f64-b11f-642d39cb0c72\") " Jan 26 14:15:37 crc kubenswrapper[4881]: I0126 14:15:37.086289 4881 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d536eca9-908b-4f64-b11f-642d39cb0c72-host\") on node \"crc\" DevicePath \"\"" Jan 26 14:15:37 crc kubenswrapper[4881]: I0126 14:15:37.093798 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d536eca9-908b-4f64-b11f-642d39cb0c72-kube-api-access-kxmxg" (OuterVolumeSpecName: "kube-api-access-kxmxg") pod "d536eca9-908b-4f64-b11f-642d39cb0c72" (UID: "d536eca9-908b-4f64-b11f-642d39cb0c72"). InnerVolumeSpecName "kube-api-access-kxmxg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:15:37 crc kubenswrapper[4881]: I0126 14:15:37.188308 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kxmxg\" (UniqueName: \"kubernetes.io/projected/d536eca9-908b-4f64-b11f-642d39cb0c72-kube-api-access-kxmxg\") on node \"crc\" DevicePath \"\"" Jan 26 14:15:37 crc kubenswrapper[4881]: I0126 14:15:37.870269 4881 scope.go:117] "RemoveContainer" containerID="3426554ed6ba28147a8f1dc0e2ecf452f04b65a6d3114a9490768d85c7ca0799" Jan 26 14:15:37 crc kubenswrapper[4881]: I0126 14:15:37.870700 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5hb8g/crc-debug-57hqb" Jan 26 14:15:38 crc kubenswrapper[4881]: I0126 14:15:38.100783 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d536eca9-908b-4f64-b11f-642d39cb0c72" path="/var/lib/kubelet/pods/d536eca9-908b-4f64-b11f-642d39cb0c72/volumes" Jan 26 14:15:42 crc kubenswrapper[4881]: E0126 14:15:42.966110 4881 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc1f945b2_2efe_4344_a38a_35a66a2fa236.slice/crio-f6537263b66528d3648640641f55dae143673b23a69092462a364c01aa0fe152\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc1f945b2_2efe_4344_a38a_35a66a2fa236.slice\": RecentStats: unable to find data in memory cache]" Jan 26 14:15:46 crc kubenswrapper[4881]: I0126 14:15:46.737029 4881 trace.go:236] Trace[401123821]: "Calculate volume metrics of catalog-content for pod openshift-marketplace/redhat-marketplace-22x8z" (26-Jan-2026 14:15:44.151) (total time: 2585ms): Jan 26 14:15:46 crc kubenswrapper[4881]: Trace[401123821]: [2.585703732s] [2.585703732s] END Jan 26 14:15:53 crc kubenswrapper[4881]: E0126 14:15:53.231081 4881 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc1f945b2_2efe_4344_a38a_35a66a2fa236.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc1f945b2_2efe_4344_a38a_35a66a2fa236.slice/crio-f6537263b66528d3648640641f55dae143673b23a69092462a364c01aa0fe152\": RecentStats: unable to find data in memory cache]" Jan 26 14:15:54 crc kubenswrapper[4881]: I0126 14:15:54.789205 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fwlbz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 14:15:54 crc kubenswrapper[4881]: I0126 14:15:54.789589 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 14:15:54 crc kubenswrapper[4881]: I0126 14:15:54.789650 4881 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" Jan 26 14:15:54 crc kubenswrapper[4881]: I0126 14:15:54.790466 4881 kuberuntime_manager.go:1027] "Message for Container of pod" 
containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1251b363555e67f84d877e4ffce511b7b1e8b62c231e6ee1f8d16a09aba0e12b"} pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 14:15:54 crc kubenswrapper[4881]: I0126 14:15:54.790549 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" containerName="machine-config-daemon" containerID="cri-o://1251b363555e67f84d877e4ffce511b7b1e8b62c231e6ee1f8d16a09aba0e12b" gracePeriod=600 Jan 26 14:15:54 crc kubenswrapper[4881]: E0126 14:15:54.913207 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" Jan 26 14:15:55 crc kubenswrapper[4881]: I0126 14:15:55.044081 4881 generic.go:334] "Generic (PLEG): container finished" podID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" containerID="1251b363555e67f84d877e4ffce511b7b1e8b62c231e6ee1f8d16a09aba0e12b" exitCode=0 Jan 26 14:15:55 crc kubenswrapper[4881]: I0126 14:15:55.044131 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" event={"ID":"ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19","Type":"ContainerDied","Data":"1251b363555e67f84d877e4ffce511b7b1e8b62c231e6ee1f8d16a09aba0e12b"} Jan 26 14:15:55 crc kubenswrapper[4881]: I0126 14:15:55.044164 4881 scope.go:117] "RemoveContainer" containerID="5d9c212bf85e65e885cc5aa035e7cd2c999fe929b1600de85ef25a70258967d4" Jan 26 14:15:55 crc kubenswrapper[4881]: I0126 14:15:55.044885 4881 scope.go:117] "RemoveContainer" containerID="1251b363555e67f84d877e4ffce511b7b1e8b62c231e6ee1f8d16a09aba0e12b" Jan 26 14:15:55 crc kubenswrapper[4881]: E0126 14:15:55.045212 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" Jan 26 14:15:56 crc kubenswrapper[4881]: I0126 14:15:56.465130 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-w46zl"] Jan 26 14:15:56 crc kubenswrapper[4881]: E0126 14:15:56.465847 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d536eca9-908b-4f64-b11f-642d39cb0c72" containerName="container-00" Jan 26 14:15:56 crc kubenswrapper[4881]: I0126 14:15:56.465859 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="d536eca9-908b-4f64-b11f-642d39cb0c72" containerName="container-00" Jan 26 14:15:56 crc kubenswrapper[4881]: I0126 14:15:56.466044 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="d536eca9-908b-4f64-b11f-642d39cb0c72" containerName="container-00" Jan 26 14:15:56 crc kubenswrapper[4881]: I0126 14:15:56.467410 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w46zl" Jan 26 14:15:56 crc kubenswrapper[4881]: I0126 14:15:56.479758 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-w46zl"] Jan 26 14:15:56 crc kubenswrapper[4881]: I0126 14:15:56.616806 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c086439c-46f2-4b30-bf54-4a1e71f78f6f-catalog-content\") pod \"redhat-marketplace-w46zl\" (UID: \"c086439c-46f2-4b30-bf54-4a1e71f78f6f\") " pod="openshift-marketplace/redhat-marketplace-w46zl" Jan 26 14:15:56 crc kubenswrapper[4881]: I0126 14:15:56.616882 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54x2c\" (UniqueName: \"kubernetes.io/projected/c086439c-46f2-4b30-bf54-4a1e71f78f6f-kube-api-access-54x2c\") pod \"redhat-marketplace-w46zl\" (UID: \"c086439c-46f2-4b30-bf54-4a1e71f78f6f\") " pod="openshift-marketplace/redhat-marketplace-w46zl" Jan 26 14:15:56 crc kubenswrapper[4881]: I0126 14:15:56.616964 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c086439c-46f2-4b30-bf54-4a1e71f78f6f-utilities\") pod \"redhat-marketplace-w46zl\" (UID: \"c086439c-46f2-4b30-bf54-4a1e71f78f6f\") " pod="openshift-marketplace/redhat-marketplace-w46zl" Jan 26 14:15:56 crc kubenswrapper[4881]: I0126 14:15:56.718881 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c086439c-46f2-4b30-bf54-4a1e71f78f6f-utilities\") pod \"redhat-marketplace-w46zl\" (UID: \"c086439c-46f2-4b30-bf54-4a1e71f78f6f\") " pod="openshift-marketplace/redhat-marketplace-w46zl" Jan 26 14:15:56 crc kubenswrapper[4881]: I0126 14:15:56.719048 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c086439c-46f2-4b30-bf54-4a1e71f78f6f-catalog-content\") pod \"redhat-marketplace-w46zl\" (UID: \"c086439c-46f2-4b30-bf54-4a1e71f78f6f\") " pod="openshift-marketplace/redhat-marketplace-w46zl" Jan 26 14:15:56 crc kubenswrapper[4881]: I0126 14:15:56.719085 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-54x2c\" (UniqueName: \"kubernetes.io/projected/c086439c-46f2-4b30-bf54-4a1e71f78f6f-kube-api-access-54x2c\") pod \"redhat-marketplace-w46zl\" (UID: \"c086439c-46f2-4b30-bf54-4a1e71f78f6f\") " pod="openshift-marketplace/redhat-marketplace-w46zl" Jan 26 14:15:56 crc kubenswrapper[4881]: I0126 14:15:56.719489 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c086439c-46f2-4b30-bf54-4a1e71f78f6f-utilities\") pod \"redhat-marketplace-w46zl\" (UID: \"c086439c-46f2-4b30-bf54-4a1e71f78f6f\") " pod="openshift-marketplace/redhat-marketplace-w46zl" Jan 26 14:15:56 crc kubenswrapper[4881]: I0126 14:15:56.719576 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c086439c-46f2-4b30-bf54-4a1e71f78f6f-catalog-content\") pod \"redhat-marketplace-w46zl\" (UID: \"c086439c-46f2-4b30-bf54-4a1e71f78f6f\") " pod="openshift-marketplace/redhat-marketplace-w46zl" Jan 26 14:15:56 crc kubenswrapper[4881]: I0126 14:15:56.746686 4881 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-54x2c\" (UniqueName: \"kubernetes.io/projected/c086439c-46f2-4b30-bf54-4a1e71f78f6f-kube-api-access-54x2c\") pod \"redhat-marketplace-w46zl\" (UID: \"c086439c-46f2-4b30-bf54-4a1e71f78f6f\") " pod="openshift-marketplace/redhat-marketplace-w46zl" Jan 26 14:15:56 crc kubenswrapper[4881]: I0126 14:15:56.783106 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w46zl" Jan 26 14:15:57 crc kubenswrapper[4881]: W0126 14:15:57.322062 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc086439c_46f2_4b30_bf54_4a1e71f78f6f.slice/crio-c34b603ec09bf8e7af987157ef0aee361f1660c80020b75feabf7b30bc661eaf WatchSource:0}: Error finding container c34b603ec09bf8e7af987157ef0aee361f1660c80020b75feabf7b30bc661eaf: Status 404 returned error can't find the container with id c34b603ec09bf8e7af987157ef0aee361f1660c80020b75feabf7b30bc661eaf Jan 26 14:15:57 crc kubenswrapper[4881]: I0126 14:15:57.339658 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-w46zl"] Jan 26 14:15:57 crc kubenswrapper[4881]: I0126 14:15:57.556582 4881 scope.go:117] "RemoveContainer" containerID="1289fff4699c84f890d7b124be30444b4be3aa90591968acd4f1960df88f3624" Jan 26 14:15:58 crc kubenswrapper[4881]: I0126 14:15:58.078644 4881 generic.go:334] "Generic (PLEG): container finished" podID="c086439c-46f2-4b30-bf54-4a1e71f78f6f" containerID="e7fbd5b0f29f2f7778291046d82336bb31701fd09923cdb2f3474744c268280a" exitCode=0 Jan 26 14:15:58 crc kubenswrapper[4881]: I0126 14:15:58.078700 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w46zl" event={"ID":"c086439c-46f2-4b30-bf54-4a1e71f78f6f","Type":"ContainerDied","Data":"e7fbd5b0f29f2f7778291046d82336bb31701fd09923cdb2f3474744c268280a"} Jan 26 14:15:58 crc kubenswrapper[4881]: I0126 14:15:58.078745 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w46zl" event={"ID":"c086439c-46f2-4b30-bf54-4a1e71f78f6f","Type":"ContainerStarted","Data":"c34b603ec09bf8e7af987157ef0aee361f1660c80020b75feabf7b30bc661eaf"} Jan 26 14:15:59 crc kubenswrapper[4881]: I0126 14:15:59.087818 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w46zl" event={"ID":"c086439c-46f2-4b30-bf54-4a1e71f78f6f","Type":"ContainerStarted","Data":"b1498009863e29c675c93ee114def71e24b171a48d81d0e5e6092831de882adc"} Jan 26 14:16:00 crc kubenswrapper[4881]: I0126 14:16:00.104644 4881 generic.go:334] "Generic (PLEG): container finished" podID="c086439c-46f2-4b30-bf54-4a1e71f78f6f" containerID="b1498009863e29c675c93ee114def71e24b171a48d81d0e5e6092831de882adc" exitCode=0 Jan 26 14:16:00 crc kubenswrapper[4881]: I0126 14:16:00.104692 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w46zl" event={"ID":"c086439c-46f2-4b30-bf54-4a1e71f78f6f","Type":"ContainerDied","Data":"b1498009863e29c675c93ee114def71e24b171a48d81d0e5e6092831de882adc"} Jan 26 14:16:01 crc kubenswrapper[4881]: I0126 14:16:01.117689 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w46zl" event={"ID":"c086439c-46f2-4b30-bf54-4a1e71f78f6f","Type":"ContainerStarted","Data":"010313df286c82aa31c0bfa37d6be27719bb6eb68bf43004383be1977661565d"} Jan 26 14:16:01 crc kubenswrapper[4881]: I0126 14:16:01.143153 4881 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-w46zl" podStartSLOduration=2.714775011 podStartE2EDuration="5.143132496s" podCreationTimestamp="2026-01-26 14:15:56 +0000 UTC" firstStartedPulling="2026-01-26 14:15:58.080397779 +0000 UTC m=+6030.559707805" lastFinishedPulling="2026-01-26 14:16:00.508755254 +0000 UTC m=+6032.988065290" observedRunningTime="2026-01-26 14:16:01.138484084 +0000 UTC m=+6033.617794130" watchObservedRunningTime="2026-01-26 14:16:01.143132496 +0000 UTC m=+6033.622442522" Jan 26 14:16:03 crc kubenswrapper[4881]: E0126 14:16:03.559793 4881 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc1f945b2_2efe_4344_a38a_35a66a2fa236.slice/crio-f6537263b66528d3648640641f55dae143673b23a69092462a364c01aa0fe152\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc1f945b2_2efe_4344_a38a_35a66a2fa236.slice\": RecentStats: unable to find data in memory cache]" Jan 26 14:16:06 crc kubenswrapper[4881]: I0126 14:16:06.783427 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-w46zl" Jan 26 14:16:06 crc kubenswrapper[4881]: I0126 14:16:06.784033 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-w46zl" Jan 26 14:16:06 crc kubenswrapper[4881]: I0126 14:16:06.848679 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-w46zl" Jan 26 14:16:07 crc kubenswrapper[4881]: I0126 14:16:07.265136 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-w46zl" Jan 26 14:16:07 crc kubenswrapper[4881]: I0126 14:16:07.330253 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-w46zl"] Jan 26 14:16:09 crc kubenswrapper[4881]: I0126 14:16:09.084417 4881 scope.go:117] "RemoveContainer" containerID="1251b363555e67f84d877e4ffce511b7b1e8b62c231e6ee1f8d16a09aba0e12b" Jan 26 14:16:09 crc kubenswrapper[4881]: E0126 14:16:09.084935 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" Jan 26 14:16:09 crc kubenswrapper[4881]: I0126 14:16:09.215741 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-w46zl" podUID="c086439c-46f2-4b30-bf54-4a1e71f78f6f" containerName="registry-server" containerID="cri-o://010313df286c82aa31c0bfa37d6be27719bb6eb68bf43004383be1977661565d" gracePeriod=2 Jan 26 14:16:09 crc kubenswrapper[4881]: I0126 14:16:09.769851 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w46zl" Jan 26 14:16:09 crc kubenswrapper[4881]: I0126 14:16:09.815267 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c086439c-46f2-4b30-bf54-4a1e71f78f6f-utilities\") pod \"c086439c-46f2-4b30-bf54-4a1e71f78f6f\" (UID: \"c086439c-46f2-4b30-bf54-4a1e71f78f6f\") " Jan 26 14:16:09 crc kubenswrapper[4881]: I0126 14:16:09.815394 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-54x2c\" (UniqueName: \"kubernetes.io/projected/c086439c-46f2-4b30-bf54-4a1e71f78f6f-kube-api-access-54x2c\") pod \"c086439c-46f2-4b30-bf54-4a1e71f78f6f\" (UID: \"c086439c-46f2-4b30-bf54-4a1e71f78f6f\") " Jan 26 14:16:09 crc kubenswrapper[4881]: I0126 14:16:09.815745 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c086439c-46f2-4b30-bf54-4a1e71f78f6f-catalog-content\") pod \"c086439c-46f2-4b30-bf54-4a1e71f78f6f\" (UID: \"c086439c-46f2-4b30-bf54-4a1e71f78f6f\") " Jan 26 14:16:09 crc kubenswrapper[4881]: I0126 14:16:09.816301 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c086439c-46f2-4b30-bf54-4a1e71f78f6f-utilities" (OuterVolumeSpecName: "utilities") pod "c086439c-46f2-4b30-bf54-4a1e71f78f6f" (UID: "c086439c-46f2-4b30-bf54-4a1e71f78f6f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:16:09 crc kubenswrapper[4881]: I0126 14:16:09.821287 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c086439c-46f2-4b30-bf54-4a1e71f78f6f-kube-api-access-54x2c" (OuterVolumeSpecName: "kube-api-access-54x2c") pod "c086439c-46f2-4b30-bf54-4a1e71f78f6f" (UID: "c086439c-46f2-4b30-bf54-4a1e71f78f6f"). InnerVolumeSpecName "kube-api-access-54x2c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:16:09 crc kubenswrapper[4881]: I0126 14:16:09.843278 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c086439c-46f2-4b30-bf54-4a1e71f78f6f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c086439c-46f2-4b30-bf54-4a1e71f78f6f" (UID: "c086439c-46f2-4b30-bf54-4a1e71f78f6f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:16:09 crc kubenswrapper[4881]: I0126 14:16:09.918199 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c086439c-46f2-4b30-bf54-4a1e71f78f6f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 14:16:09 crc kubenswrapper[4881]: I0126 14:16:09.918533 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c086439c-46f2-4b30-bf54-4a1e71f78f6f-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 14:16:09 crc kubenswrapper[4881]: I0126 14:16:09.918608 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-54x2c\" (UniqueName: \"kubernetes.io/projected/c086439c-46f2-4b30-bf54-4a1e71f78f6f-kube-api-access-54x2c\") on node \"crc\" DevicePath \"\"" Jan 26 14:16:10 crc kubenswrapper[4881]: I0126 14:16:10.234842 4881 generic.go:334] "Generic (PLEG): container finished" podID="c086439c-46f2-4b30-bf54-4a1e71f78f6f" containerID="010313df286c82aa31c0bfa37d6be27719bb6eb68bf43004383be1977661565d" exitCode=0 Jan 26 14:16:10 crc kubenswrapper[4881]: I0126 14:16:10.234892 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w46zl" event={"ID":"c086439c-46f2-4b30-bf54-4a1e71f78f6f","Type":"ContainerDied","Data":"010313df286c82aa31c0bfa37d6be27719bb6eb68bf43004383be1977661565d"} Jan 26 14:16:10 crc kubenswrapper[4881]: I0126 14:16:10.234922 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w46zl" event={"ID":"c086439c-46f2-4b30-bf54-4a1e71f78f6f","Type":"ContainerDied","Data":"c34b603ec09bf8e7af987157ef0aee361f1660c80020b75feabf7b30bc661eaf"} Jan 26 14:16:10 crc kubenswrapper[4881]: I0126 14:16:10.234945 4881 scope.go:117] "RemoveContainer" containerID="010313df286c82aa31c0bfa37d6be27719bb6eb68bf43004383be1977661565d" Jan 26 14:16:10 crc kubenswrapper[4881]: I0126 14:16:10.235090 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w46zl" Jan 26 14:16:10 crc kubenswrapper[4881]: I0126 14:16:10.270657 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-w46zl"] Jan 26 14:16:10 crc kubenswrapper[4881]: I0126 14:16:10.271935 4881 scope.go:117] "RemoveContainer" containerID="b1498009863e29c675c93ee114def71e24b171a48d81d0e5e6092831de882adc" Jan 26 14:16:10 crc kubenswrapper[4881]: I0126 14:16:10.286530 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-w46zl"] Jan 26 14:16:10 crc kubenswrapper[4881]: I0126 14:16:10.319248 4881 scope.go:117] "RemoveContainer" containerID="e7fbd5b0f29f2f7778291046d82336bb31701fd09923cdb2f3474744c268280a" Jan 26 14:16:10 crc kubenswrapper[4881]: I0126 14:16:10.365058 4881 scope.go:117] "RemoveContainer" containerID="010313df286c82aa31c0bfa37d6be27719bb6eb68bf43004383be1977661565d" Jan 26 14:16:10 crc kubenswrapper[4881]: E0126 14:16:10.366058 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"010313df286c82aa31c0bfa37d6be27719bb6eb68bf43004383be1977661565d\": container with ID starting with 010313df286c82aa31c0bfa37d6be27719bb6eb68bf43004383be1977661565d not found: ID does not exist" containerID="010313df286c82aa31c0bfa37d6be27719bb6eb68bf43004383be1977661565d" Jan 26 14:16:10 crc kubenswrapper[4881]: I0126 14:16:10.366105 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"010313df286c82aa31c0bfa37d6be27719bb6eb68bf43004383be1977661565d"} err="failed to get container status \"010313df286c82aa31c0bfa37d6be27719bb6eb68bf43004383be1977661565d\": rpc error: code = NotFound desc = could not find container \"010313df286c82aa31c0bfa37d6be27719bb6eb68bf43004383be1977661565d\": container with ID starting with 010313df286c82aa31c0bfa37d6be27719bb6eb68bf43004383be1977661565d not found: ID does not exist" Jan 26 14:16:10 crc kubenswrapper[4881]: I0126 14:16:10.366131 4881 scope.go:117] "RemoveContainer" containerID="b1498009863e29c675c93ee114def71e24b171a48d81d0e5e6092831de882adc" Jan 26 14:16:10 crc kubenswrapper[4881]: E0126 14:16:10.367277 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b1498009863e29c675c93ee114def71e24b171a48d81d0e5e6092831de882adc\": container with ID starting with b1498009863e29c675c93ee114def71e24b171a48d81d0e5e6092831de882adc not found: ID does not exist" containerID="b1498009863e29c675c93ee114def71e24b171a48d81d0e5e6092831de882adc" Jan 26 14:16:10 crc kubenswrapper[4881]: I0126 14:16:10.367307 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1498009863e29c675c93ee114def71e24b171a48d81d0e5e6092831de882adc"} err="failed to get container status \"b1498009863e29c675c93ee114def71e24b171a48d81d0e5e6092831de882adc\": rpc error: code = NotFound desc = could not find container \"b1498009863e29c675c93ee114def71e24b171a48d81d0e5e6092831de882adc\": container with ID starting with b1498009863e29c675c93ee114def71e24b171a48d81d0e5e6092831de882adc not found: ID does not exist" Jan 26 14:16:10 crc kubenswrapper[4881]: I0126 14:16:10.367329 4881 scope.go:117] "RemoveContainer" containerID="e7fbd5b0f29f2f7778291046d82336bb31701fd09923cdb2f3474744c268280a" Jan 26 14:16:10 crc kubenswrapper[4881]: E0126 14:16:10.367716 4881 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"e7fbd5b0f29f2f7778291046d82336bb31701fd09923cdb2f3474744c268280a\": container with ID starting with e7fbd5b0f29f2f7778291046d82336bb31701fd09923cdb2f3474744c268280a not found: ID does not exist" containerID="e7fbd5b0f29f2f7778291046d82336bb31701fd09923cdb2f3474744c268280a" Jan 26 14:16:10 crc kubenswrapper[4881]: I0126 14:16:10.367748 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7fbd5b0f29f2f7778291046d82336bb31701fd09923cdb2f3474744c268280a"} err="failed to get container status \"e7fbd5b0f29f2f7778291046d82336bb31701fd09923cdb2f3474744c268280a\": rpc error: code = NotFound desc = could not find container \"e7fbd5b0f29f2f7778291046d82336bb31701fd09923cdb2f3474744c268280a\": container with ID starting with e7fbd5b0f29f2f7778291046d82336bb31701fd09923cdb2f3474744c268280a not found: ID does not exist" Jan 26 14:16:12 crc kubenswrapper[4881]: I0126 14:16:12.096656 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c086439c-46f2-4b30-bf54-4a1e71f78f6f" path="/var/lib/kubelet/pods/c086439c-46f2-4b30-bf54-4a1e71f78f6f/volumes" Jan 26 14:16:13 crc kubenswrapper[4881]: I0126 14:16:13.140776 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5b8956874b-lrmsj_a3ad04d4-bcff-4ed6-8648-be146e3ce20a/barbican-api/0.log" Jan 26 14:16:13 crc kubenswrapper[4881]: I0126 14:16:13.288316 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5b8956874b-lrmsj_a3ad04d4-bcff-4ed6-8648-be146e3ce20a/barbican-api-log/0.log" Jan 26 14:16:13 crc kubenswrapper[4881]: I0126 14:16:13.337626 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-b5f886498-f6c5n_2c982f75-f27c-4915-a75d-07f2fc53cf19/barbican-keystone-listener/0.log" Jan 26 14:16:13 crc kubenswrapper[4881]: I0126 14:16:13.439637 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-b5f886498-f6c5n_2c982f75-f27c-4915-a75d-07f2fc53cf19/barbican-keystone-listener-log/0.log" Jan 26 14:16:13 crc kubenswrapper[4881]: I0126 14:16:13.525638 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-794958b545-pcbpb_275be453-0a36-451c-8a70-714b958c9625/barbican-worker/0.log" Jan 26 14:16:13 crc kubenswrapper[4881]: I0126 14:16:13.573279 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-794958b545-pcbpb_275be453-0a36-451c-8a70-714b958c9625/barbican-worker-log/0.log" Jan 26 14:16:13 crc kubenswrapper[4881]: I0126 14:16:13.760955 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-fxdff_d817df95-5b02-462d-86b6-289f9decf3d3/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 14:16:13 crc kubenswrapper[4881]: E0126 14:16:13.803374 4881 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc1f945b2_2efe_4344_a38a_35a66a2fa236.slice/crio-f6537263b66528d3648640641f55dae143673b23a69092462a364c01aa0fe152\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc1f945b2_2efe_4344_a38a_35a66a2fa236.slice\": RecentStats: unable to find data in memory cache]" Jan 26 14:16:13 crc kubenswrapper[4881]: I0126 14:16:13.871289 4881 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_ceilometer-0_a3c188b0-972c-46b1-bb59-edce2c6b4f54/ceilometer-central-agent/0.log" Jan 26 14:16:13 crc kubenswrapper[4881]: I0126 14:16:13.940141 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_a3c188b0-972c-46b1-bb59-edce2c6b4f54/ceilometer-notification-agent/0.log" Jan 26 14:16:13 crc kubenswrapper[4881]: I0126 14:16:13.993978 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_a3c188b0-972c-46b1-bb59-edce2c6b4f54/proxy-httpd/0.log" Jan 26 14:16:14 crc kubenswrapper[4881]: I0126 14:16:14.005075 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_a3c188b0-972c-46b1-bb59-edce2c6b4f54/sg-core/0.log" Jan 26 14:16:14 crc kubenswrapper[4881]: I0126 14:16:14.209547 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_c0c9df05-a8bd-4e6a-8d42-3aa4b5a38bb1/cinder-api-log/0.log" Jan 26 14:16:14 crc kubenswrapper[4881]: I0126 14:16:14.456133 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_4b9b969a-e1a3-4253-bffc-34fe8db0a2ff/probe/0.log" Jan 26 14:16:14 crc kubenswrapper[4881]: I0126 14:16:14.633061 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_4b9b969a-e1a3-4253-bffc-34fe8db0a2ff/cinder-backup/0.log" Jan 26 14:16:14 crc kubenswrapper[4881]: I0126 14:16:14.726406 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_5f3b1d8b-9062-4c02-90a4-f4a7e6c4c17c/cinder-scheduler/0.log" Jan 26 14:16:14 crc kubenswrapper[4881]: I0126 14:16:14.732128 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_c0c9df05-a8bd-4e6a-8d42-3aa4b5a38bb1/cinder-api/0.log" Jan 26 14:16:14 crc kubenswrapper[4881]: I0126 14:16:14.794801 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_5f3b1d8b-9062-4c02-90a4-f4a7e6c4c17c/probe/0.log" Jan 26 14:16:14 crc kubenswrapper[4881]: I0126 14:16:14.958416 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-nfs-0_7fe1aa7f-b1bd-4777-934a-76e8ba531b1b/probe/0.log" Jan 26 14:16:15 crc kubenswrapper[4881]: I0126 14:16:15.086696 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-nfs-0_7fe1aa7f-b1bd-4777-934a-76e8ba531b1b/cinder-volume/0.log" Jan 26 14:16:15 crc kubenswrapper[4881]: I0126 14:16:15.231432 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-nfs-2-0_77b81e80-c2b2-418e-b722-2f6ffa1b7103/cinder-volume/0.log" Jan 26 14:16:15 crc kubenswrapper[4881]: I0126 14:16:15.252848 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-nfs-2-0_77b81e80-c2b2-418e-b722-2f6ffa1b7103/probe/0.log" Jan 26 14:16:15 crc kubenswrapper[4881]: I0126 14:16:15.319386 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-7qlfp_d828b00c-a7ee-47c8-b98a-2529ccf16cc6/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 14:16:15 crc kubenswrapper[4881]: I0126 14:16:15.483812 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-t8v57_12cb0be5-3cea-4264-9b33-42194fe4991c/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 14:16:15 crc kubenswrapper[4881]: I0126 14:16:15.608533 4881 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_dnsmasq-dns-66968b76ff-2mrpm_ca4b205e-5485-43e7-ab0c-b6cfae7c9a18/init/0.log" Jan 26 14:16:15 crc kubenswrapper[4881]: I0126 14:16:15.763566 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-66968b76ff-2mrpm_ca4b205e-5485-43e7-ab0c-b6cfae7c9a18/init/0.log" Jan 26 14:16:15 crc kubenswrapper[4881]: I0126 14:16:15.860705 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-brjhw_b4d6e825-d231-4128-bd5d-3db56fbef5ec/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 14:16:16 crc kubenswrapper[4881]: I0126 14:16:16.026148 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-66968b76ff-2mrpm_ca4b205e-5485-43e7-ab0c-b6cfae7c9a18/dnsmasq-dns/0.log" Jan 26 14:16:16 crc kubenswrapper[4881]: I0126 14:16:16.070353 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_cbfc472b-0aa5-4053-88cb-6efd65de5e79/glance-log/0.log" Jan 26 14:16:16 crc kubenswrapper[4881]: I0126 14:16:16.074597 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_cbfc472b-0aa5-4053-88cb-6efd65de5e79/glance-httpd/0.log" Jan 26 14:16:16 crc kubenswrapper[4881]: I0126 14:16:16.254274 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_82a0349b-c23f-4b1d-991e-f738d5c1ecee/glance-httpd/0.log" Jan 26 14:16:16 crc kubenswrapper[4881]: I0126 14:16:16.293330 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_82a0349b-c23f-4b1d-991e-f738d5c1ecee/glance-log/0.log" Jan 26 14:16:16 crc kubenswrapper[4881]: I0126 14:16:16.548450 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-7bf7cc86f8-h94sx_92e11d67-ecbe-4005-849c-40a16f3d3faa/horizon/0.log" Jan 26 14:16:16 crc kubenswrapper[4881]: I0126 14:16:16.614693 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-42rrp_0f847b90-9682-4b21-8ccc-646996b89f4f/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 14:16:16 crc kubenswrapper[4881]: I0126 14:16:16.752903 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-lbcjj_ceaad5ff-3f46-431b-817c-669e0f038898/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 14:16:17 crc kubenswrapper[4881]: I0126 14:16:17.017162 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29490541-c6h96_2c2380fd-0233-4942-8e8a-433cc3b15925/keystone-cron/0.log" Jan 26 14:16:17 crc kubenswrapper[4881]: I0126 14:16:17.283592 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29490601-x4d85_ce705aca-2bb5-4314-aa70-a71cc77303d8/keystone-cron/0.log" Jan 26 14:16:17 crc kubenswrapper[4881]: I0126 14:16:17.287601 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-7bf7cc86f8-h94sx_92e11d67-ecbe-4005-849c-40a16f3d3faa/horizon-log/0.log" Jan 26 14:16:17 crc kubenswrapper[4881]: I0126 14:16:17.304034 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_a8e06d36-6fd3-40af-8066-f1cbb8d46a16/kube-state-metrics/0.log" Jan 26 14:16:17 crc kubenswrapper[4881]: I0126 14:16:17.400772 4881 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_keystone-76cf66855-bgjld_bda630e6-c611-4029-9a8a-b347189d2fab/keystone-api/0.log" Jan 26 14:16:17 crc kubenswrapper[4881]: I0126 14:16:17.599630 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-9cbj8_2190fb1e-77a2-47d2-a0bb-2aaca7948653/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 14:16:17 crc kubenswrapper[4881]: I0126 14:16:17.931832 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-knqzl_41c88a4a-b833-417f-90c9-eb0edcf688ec/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 14:16:18 crc kubenswrapper[4881]: I0126 14:16:18.017328 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-5555bb9565-2bdtt_666ba5d0-7d46-4ae9-8265-9e6ac5bddbf9/neutron-httpd/0.log" Jan 26 14:16:18 crc kubenswrapper[4881]: I0126 14:16:18.053536 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-5555bb9565-2bdtt_666ba5d0-7d46-4ae9-8265-9e6ac5bddbf9/neutron-api/0.log" Jan 26 14:16:18 crc kubenswrapper[4881]: I0126 14:16:18.613097 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_52e24417-da70-449b-847d-3a0f1516ac9f/nova-cell0-conductor-conductor/0.log" Jan 26 14:16:18 crc kubenswrapper[4881]: I0126 14:16:18.944152 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_c039ccd0-c1d4-438f-ba49-d44103884a26/nova-cell1-conductor-conductor/0.log" Jan 26 14:16:19 crc kubenswrapper[4881]: I0126 14:16:19.253602 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_04587c7a-8d6d-4587-9ca5-52d8a9e57a38/nova-cell1-novncproxy-novncproxy/0.log" Jan 26 14:16:19 crc kubenswrapper[4881]: I0126 14:16:19.465789 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-xpfht_7d45d6c2-3b8a-4ff9-9f43-f14011ee3f6f/nova-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 14:16:19 crc kubenswrapper[4881]: I0126 14:16:19.612019 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_ee44b824-a50b-4355-ab08-09d831323258/nova-api-log/0.log" Jan 26 14:16:19 crc kubenswrapper[4881]: I0126 14:16:19.752327 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_9aa3762a-f6f1-4f99-aa95-c22f2aaf51ae/nova-metadata-log/0.log" Jan 26 14:16:19 crc kubenswrapper[4881]: I0126 14:16:19.953559 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_ee44b824-a50b-4355-ab08-09d831323258/nova-api-api/0.log" Jan 26 14:16:20 crc kubenswrapper[4881]: I0126 14:16:20.201553 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_c8b6753b-d929-47d6-84ec-b72094efad83/mysql-bootstrap/0.log" Jan 26 14:16:20 crc kubenswrapper[4881]: I0126 14:16:20.231681 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_01bf74e7-115e-4392-93c4-f6c5c578c5dc/nova-scheduler-scheduler/0.log" Jan 26 14:16:20 crc kubenswrapper[4881]: I0126 14:16:20.386072 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_c8b6753b-d929-47d6-84ec-b72094efad83/mysql-bootstrap/0.log" Jan 26 14:16:20 crc kubenswrapper[4881]: I0126 14:16:20.463230 4881 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_openstack-cell1-galera-0_c8b6753b-d929-47d6-84ec-b72094efad83/galera/0.log" Jan 26 14:16:20 crc kubenswrapper[4881]: I0126 14:16:20.583014 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_32ed51d8-b401-412f-925e-0cff27777e55/mysql-bootstrap/0.log" Jan 26 14:16:20 crc kubenswrapper[4881]: I0126 14:16:20.795435 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_32ed51d8-b401-412f-925e-0cff27777e55/mysql-bootstrap/0.log" Jan 26 14:16:20 crc kubenswrapper[4881]: I0126 14:16:20.923014 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_32ed51d8-b401-412f-925e-0cff27777e55/galera/0.log" Jan 26 14:16:21 crc kubenswrapper[4881]: I0126 14:16:21.051958 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_6bb7934d-1b01-469b-9b72-c601eebbbf98/openstackclient/0.log" Jan 26 14:16:21 crc kubenswrapper[4881]: I0126 14:16:21.085478 4881 scope.go:117] "RemoveContainer" containerID="1251b363555e67f84d877e4ffce511b7b1e8b62c231e6ee1f8d16a09aba0e12b" Jan 26 14:16:21 crc kubenswrapper[4881]: E0126 14:16:21.085924 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" Jan 26 14:16:21 crc kubenswrapper[4881]: I0126 14:16:21.184563 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-jh6zr_bbc286b3-4266-462b-b661-d072e9843683/openstack-network-exporter/0.log" Jan 26 14:16:21 crc kubenswrapper[4881]: I0126 14:16:21.401956 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-rrqqp_31e8c456-b53d-456e-a8d1-69f26e0602ad/ovsdb-server-init/0.log" Jan 26 14:16:21 crc kubenswrapper[4881]: I0126 14:16:21.581050 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-rrqqp_31e8c456-b53d-456e-a8d1-69f26e0602ad/ovsdb-server-init/0.log" Jan 26 14:16:21 crc kubenswrapper[4881]: I0126 14:16:21.584343 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-rrqqp_31e8c456-b53d-456e-a8d1-69f26e0602ad/ovsdb-server/0.log" Jan 26 14:16:21 crc kubenswrapper[4881]: I0126 14:16:21.760285 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-vs5xn_e52cbcc1-521d-4a7d-98a6-50ab70a2f82f/ovn-controller/0.log" Jan 26 14:16:21 crc kubenswrapper[4881]: I0126 14:16:21.945649 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-rrqqp_31e8c456-b53d-456e-a8d1-69f26e0602ad/ovs-vswitchd/0.log" Jan 26 14:16:21 crc kubenswrapper[4881]: I0126 14:16:21.999696 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_9aa3762a-f6f1-4f99-aa95-c22f2aaf51ae/nova-metadata-metadata/0.log" Jan 26 14:16:22 crc kubenswrapper[4881]: I0126 14:16:22.041414 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-ltf5b_b36ca725-d6f7-4551-84f6-e912cdc75a5f/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 14:16:22 crc kubenswrapper[4881]: I0126 14:16:22.187388 4881 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openstack_ovn-northd-0_a10988d9-411e-42a7-82bf-8ed88569d801/openstack-network-exporter/0.log" Jan 26 14:16:22 crc kubenswrapper[4881]: I0126 14:16:22.247666 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_a10988d9-411e-42a7-82bf-8ed88569d801/ovn-northd/0.log" Jan 26 14:16:22 crc kubenswrapper[4881]: I0126 14:16:22.365449 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_406b36b2-d29f-4224-8bc5-9cfd6f057a48/openstack-network-exporter/0.log" Jan 26 14:16:22 crc kubenswrapper[4881]: I0126 14:16:22.439924 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_406b36b2-d29f-4224-8bc5-9cfd6f057a48/ovsdbserver-nb/0.log" Jan 26 14:16:22 crc kubenswrapper[4881]: I0126 14:16:22.781125 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_a7a612cf-2de0-4fc3-bd7e-9ddf16da6ca7/ovsdbserver-sb/0.log" Jan 26 14:16:22 crc kubenswrapper[4881]: I0126 14:16:22.873029 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_a7a612cf-2de0-4fc3-bd7e-9ddf16da6ca7/openstack-network-exporter/0.log" Jan 26 14:16:23 crc kubenswrapper[4881]: I0126 14:16:23.254620 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-54766b76bb-mkjc2_34699df9-2dd8-4eee-9f19-e5af28cfa84d/placement-api/0.log" Jan 26 14:16:23 crc kubenswrapper[4881]: I0126 14:16:23.256974 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_adce7384-2dc6-4e86-af0f-fb3b38627515/init-config-reloader/0.log" Jan 26 14:16:23 crc kubenswrapper[4881]: I0126 14:16:23.399505 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-54766b76bb-mkjc2_34699df9-2dd8-4eee-9f19-e5af28cfa84d/placement-log/0.log" Jan 26 14:16:23 crc kubenswrapper[4881]: I0126 14:16:23.514755 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_adce7384-2dc6-4e86-af0f-fb3b38627515/prometheus/0.log" Jan 26 14:16:23 crc kubenswrapper[4881]: I0126 14:16:23.581454 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_adce7384-2dc6-4e86-af0f-fb3b38627515/config-reloader/0.log" Jan 26 14:16:23 crc kubenswrapper[4881]: I0126 14:16:23.648837 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_adce7384-2dc6-4e86-af0f-fb3b38627515/init-config-reloader/0.log" Jan 26 14:16:23 crc kubenswrapper[4881]: I0126 14:16:23.671825 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_adce7384-2dc6-4e86-af0f-fb3b38627515/thanos-sidecar/0.log" Jan 26 14:16:23 crc kubenswrapper[4881]: I0126 14:16:23.911481 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_a2a9efa3-8ac2-40ec-a543-b3a2013e8b39/setup-container/0.log" Jan 26 14:16:24 crc kubenswrapper[4881]: E0126 14:16:24.129820 4881 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc1f945b2_2efe_4344_a38a_35a66a2fa236.slice/crio-f6537263b66528d3648640641f55dae143673b23a69092462a364c01aa0fe152\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc1f945b2_2efe_4344_a38a_35a66a2fa236.slice\": RecentStats: unable to find data 
in memory cache]" Jan 26 14:16:24 crc kubenswrapper[4881]: I0126 14:16:24.132540 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_a2a9efa3-8ac2-40ec-a543-b3a2013e8b39/setup-container/0.log" Jan 26 14:16:24 crc kubenswrapper[4881]: I0126 14:16:24.151657 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-notifications-server-0_ab9a358b-8713-4790-a9c4-97b89efcc88f/setup-container/0.log" Jan 26 14:16:24 crc kubenswrapper[4881]: I0126 14:16:24.202243 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_a2a9efa3-8ac2-40ec-a543-b3a2013e8b39/rabbitmq/0.log" Jan 26 14:16:24 crc kubenswrapper[4881]: I0126 14:16:24.474140 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-notifications-server-0_ab9a358b-8713-4790-a9c4-97b89efcc88f/setup-container/0.log" Jan 26 14:16:24 crc kubenswrapper[4881]: I0126 14:16:24.498987 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-notifications-server-0_ab9a358b-8713-4790-a9c4-97b89efcc88f/rabbitmq/0.log" Jan 26 14:16:24 crc kubenswrapper[4881]: I0126 14:16:24.561041 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_a258482a-e394-4833-9bef-1fc3abc0c6a7/setup-container/0.log" Jan 26 14:16:24 crc kubenswrapper[4881]: I0126 14:16:24.754574 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_a258482a-e394-4833-9bef-1fc3abc0c6a7/rabbitmq/0.log" Jan 26 14:16:24 crc kubenswrapper[4881]: I0126 14:16:24.782896 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_a258482a-e394-4833-9bef-1fc3abc0c6a7/setup-container/0.log" Jan 26 14:16:24 crc kubenswrapper[4881]: I0126 14:16:24.802427 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-7xzvw_760f5f9c-04ad-4788-886d-8f301f2f487b/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 14:16:25 crc kubenswrapper[4881]: I0126 14:16:25.161043 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-gjsdk_07589eac-a07d-4781-a341-cbe2e35872a4/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 14:16:25 crc kubenswrapper[4881]: I0126 14:16:25.221412 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-l8p22_3aaf2506-3e1b-4edd-af45-c98419359e59/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 14:16:25 crc kubenswrapper[4881]: I0126 14:16:25.416248 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-54xff_81f1ca9f-e8b5-477d-b628-8a60848a7fe2/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 14:16:25 crc kubenswrapper[4881]: I0126 14:16:25.538545 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-n5zvt_39dc7ed5-dafd-4ef4-94a7-509fc1568f5a/ssh-known-hosts-edpm-deployment/0.log" Jan 26 14:16:25 crc kubenswrapper[4881]: I0126 14:16:25.672300 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-55f986558f-qqwqs_4b3ea251-a4e4-4e4d-a21f-a239f80690e1/proxy-server/0.log" Jan 26 14:16:25 crc kubenswrapper[4881]: I0126 14:16:25.819707 4881 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-ring-rebalance-dmkh9_14ccca3b-65a8-4df1-9905-b21bfb24e5be/swift-ring-rebalance/0.log" Jan 26 14:16:25 crc kubenswrapper[4881]: I0126 14:16:25.863292 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-55f986558f-qqwqs_4b3ea251-a4e4-4e4d-a21f-a239f80690e1/proxy-httpd/0.log" Jan 26 14:16:26 crc kubenswrapper[4881]: I0126 14:16:26.018275 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e234f178-5499-441d-923d-26a5a7cbfe04/account-reaper/0.log" Jan 26 14:16:26 crc kubenswrapper[4881]: I0126 14:16:26.027565 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e234f178-5499-441d-923d-26a5a7cbfe04/account-auditor/0.log" Jan 26 14:16:26 crc kubenswrapper[4881]: I0126 14:16:26.162727 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e234f178-5499-441d-923d-26a5a7cbfe04/account-replicator/0.log" Jan 26 14:16:26 crc kubenswrapper[4881]: I0126 14:16:26.255753 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e234f178-5499-441d-923d-26a5a7cbfe04/container-auditor/0.log" Jan 26 14:16:26 crc kubenswrapper[4881]: I0126 14:16:26.257197 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e234f178-5499-441d-923d-26a5a7cbfe04/account-server/0.log" Jan 26 14:16:26 crc kubenswrapper[4881]: I0126 14:16:26.263638 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e234f178-5499-441d-923d-26a5a7cbfe04/container-replicator/0.log" Jan 26 14:16:26 crc kubenswrapper[4881]: I0126 14:16:26.361288 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e234f178-5499-441d-923d-26a5a7cbfe04/container-server/0.log" Jan 26 14:16:26 crc kubenswrapper[4881]: I0126 14:16:26.428269 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e234f178-5499-441d-923d-26a5a7cbfe04/container-updater/0.log" Jan 26 14:16:26 crc kubenswrapper[4881]: I0126 14:16:26.469113 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e234f178-5499-441d-923d-26a5a7cbfe04/object-expirer/0.log" Jan 26 14:16:26 crc kubenswrapper[4881]: I0126 14:16:26.499996 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e234f178-5499-441d-923d-26a5a7cbfe04/object-auditor/0.log" Jan 26 14:16:26 crc kubenswrapper[4881]: I0126 14:16:26.611103 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e234f178-5499-441d-923d-26a5a7cbfe04/object-server/0.log" Jan 26 14:16:26 crc kubenswrapper[4881]: I0126 14:16:26.620550 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e234f178-5499-441d-923d-26a5a7cbfe04/object-replicator/0.log" Jan 26 14:16:26 crc kubenswrapper[4881]: I0126 14:16:26.694545 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e234f178-5499-441d-923d-26a5a7cbfe04/object-updater/0.log" Jan 26 14:16:26 crc kubenswrapper[4881]: I0126 14:16:26.738378 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e234f178-5499-441d-923d-26a5a7cbfe04/rsync/0.log" Jan 26 14:16:26 crc kubenswrapper[4881]: I0126 14:16:26.861070 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e234f178-5499-441d-923d-26a5a7cbfe04/swift-recon-cron/0.log" Jan 26 14:16:26 
crc kubenswrapper[4881]: I0126 14:16:26.957761 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_a6421b4d-2505-47c4-899d-7f7bd2113cf8/memcached/0.log"
Jan 26 14:16:26 crc kubenswrapper[4881]: I0126 14:16:26.964119 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-lfk9w_1062b2c8-e4fb-4999-aecf-a04dd5157826/telemetry-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 26 14:16:27 crc kubenswrapper[4881]: I0126 14:16:27.037482 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_fb8ddd97-c952-48e2-b3df-f594646b4377/tempest-tests-tempest-tests-runner/0.log"
Jan 26 14:16:27 crc kubenswrapper[4881]: I0126 14:16:27.163089 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-m2nvh_f8d3d257-2cd9-42b9-aeb9-462b635c53dc/validate-network-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 26 14:16:27 crc kubenswrapper[4881]: I0126 14:16:27.172915 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_21874105-2abf-4ab1-98a6-709151462d2a/test-operator-logs-container/0.log"
Jan 26 14:16:27 crc kubenswrapper[4881]: I0126 14:16:27.773624 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-applier-0_1b363415-cc56-4505-91f6-f9700b378625/watcher-applier/0.log"
Jan 26 14:16:28 crc kubenswrapper[4881]: I0126 14:16:28.591790 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-api-0_bacf9a45-b73a-41bd-9c12-eb112ddcfaf2/watcher-api-log/0.log"
Jan 26 14:16:30 crc kubenswrapper[4881]: I0126 14:16:30.595478 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-decision-engine-0_e321419e-1316-442e-b8f1-2f4a2451203f/watcher-decision-engine/0.log"
Jan 26 14:16:31 crc kubenswrapper[4881]: I0126 14:16:31.496070 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-api-0_bacf9a45-b73a-41bd-9c12-eb112ddcfaf2/watcher-api/0.log"
Jan 26 14:16:36 crc kubenswrapper[4881]: I0126 14:16:36.082622 4881 scope.go:117] "RemoveContainer" containerID="1251b363555e67f84d877e4ffce511b7b1e8b62c231e6ee1f8d16a09aba0e12b"
Jan 26 14:16:36 crc kubenswrapper[4881]: E0126 14:16:36.083288 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19"
Jan 26 14:16:51 crc kubenswrapper[4881]: I0126 14:16:51.083420 4881 scope.go:117] "RemoveContainer" containerID="1251b363555e67f84d877e4ffce511b7b1e8b62c231e6ee1f8d16a09aba0e12b"
Jan 26 14:16:51 crc kubenswrapper[4881]: E0126 14:16:51.084738 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19"
Jan 26 14:16:53 crc kubenswrapper[4881]: I0126 14:16:53.194478 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ad70b87966c87781d6eb9196beaf5dd7adef9a3f0a61b9f42ea9b97c18rzk2b_ad3804a0-2b66-4f69-a8f4-6f8b27abea8f/util/0.log"
Jan 26 14:16:53 crc kubenswrapper[4881]: I0126 14:16:53.376948 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ad70b87966c87781d6eb9196beaf5dd7adef9a3f0a61b9f42ea9b97c18rzk2b_ad3804a0-2b66-4f69-a8f4-6f8b27abea8f/pull/0.log"
Jan 26 14:16:53 crc kubenswrapper[4881]: I0126 14:16:53.388603 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ad70b87966c87781d6eb9196beaf5dd7adef9a3f0a61b9f42ea9b97c18rzk2b_ad3804a0-2b66-4f69-a8f4-6f8b27abea8f/util/0.log"
Jan 26 14:16:53 crc kubenswrapper[4881]: I0126 14:16:53.392422 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ad70b87966c87781d6eb9196beaf5dd7adef9a3f0a61b9f42ea9b97c18rzk2b_ad3804a0-2b66-4f69-a8f4-6f8b27abea8f/pull/0.log"
Jan 26 14:16:53 crc kubenswrapper[4881]: I0126 14:16:53.542610 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ad70b87966c87781d6eb9196beaf5dd7adef9a3f0a61b9f42ea9b97c18rzk2b_ad3804a0-2b66-4f69-a8f4-6f8b27abea8f/pull/0.log"
Jan 26 14:16:53 crc kubenswrapper[4881]: I0126 14:16:53.546120 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ad70b87966c87781d6eb9196beaf5dd7adef9a3f0a61b9f42ea9b97c18rzk2b_ad3804a0-2b66-4f69-a8f4-6f8b27abea8f/util/0.log"
Jan 26 14:16:53 crc kubenswrapper[4881]: I0126 14:16:53.592768 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ad70b87966c87781d6eb9196beaf5dd7adef9a3f0a61b9f42ea9b97c18rzk2b_ad3804a0-2b66-4f69-a8f4-6f8b27abea8f/extract/0.log"
Jan 26 14:16:53 crc kubenswrapper[4881]: I0126 14:16:53.839800 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7f86f8796f-hs8xc_e8b8ff3a-c099-4192-b061-33ff69fd2884/manager/0.log"
Jan 26 14:16:53 crc kubenswrapper[4881]: I0126 14:16:53.861706 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-7478f7dbf9-w2n48_d5375dff-af5c-4de8-b52b-acf18edc4fb2/manager/0.log"
Jan 26 14:16:53 crc kubenswrapper[4881]: I0126 14:16:53.956819 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-b45d7bf98-lgljg_c5cecd8b-813f-4bde-be28-371c54bcdfb9/manager/0.log"
Jan 26 14:16:54 crc kubenswrapper[4881]: I0126 14:16:54.081169 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-78fdd796fd-sgnd2_4508aa9d-2a89-4976-bd36-dc918900371e/manager/0.log"
Jan 26 14:16:54 crc kubenswrapper[4881]: I0126 14:16:54.208767 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-v95fq_0e17a034-e3c9-434a-838f-8bfae6d010dd/manager/0.log"
Jan 26 14:16:54 crc kubenswrapper[4881]: I0126 14:16:54.251273 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-rww6v_d998c88b-6b01-4e5f-bbab-a5aaee1a945b/manager/0.log"
Jan 26 14:16:54 crc kubenswrapper[4881]: I0126 14:16:54.496912 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-598f7747c9-flv4v_cbafad55-0cc5-42d6-b721-b1f4e158251f/manager/0.log"
Jan 26 14:16:54 crc kubenswrapper[4881]: I0126 14:16:54.654375 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-694cf4f878-wkhcm_517d3e74-cfe4-4e5e-96b0-0780042b0dbd/manager/0.log"
Jan 26 14:16:54 crc kubenswrapper[4881]: I0126 14:16:54.707268 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b8b6d4659-66d48_b6807e2b-25b9-4802-8086-2c6eab9ff308/manager/0.log"
Jan 26 14:16:54 crc kubenswrapper[4881]: I0126 14:16:54.824062 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-78c6999f6f-sc6f8_78a91159-fead-4133-98e4-5dd587f6b274/manager/0.log"
Jan 26 14:16:54 crc kubenswrapper[4881]: I0126 14:16:54.926942 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6b9fb5fdcb-zkhss_97b268cc-1863-494c-a47b-da0c52f76d39/manager/0.log"
Jan 26 14:16:55 crc kubenswrapper[4881]: I0126 14:16:55.131436 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-78d58447c5-m8vjc_76b071ae-05bc-4142-9004-e5528d00c5cc/manager/0.log"
Jan 26 14:16:55 crc kubenswrapper[4881]: I0126 14:16:55.196430 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-7bdb645866-r7dwj_d808c58e-a8df-4cbd-aee6-d87edd677e94/manager/0.log"
Jan 26 14:16:55 crc kubenswrapper[4881]: I0126 14:16:55.276396 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-5f4cd88d46-r2p67_3451b01c-ed54-49be-ab3a-d8150976d2ec/manager/0.log"
Jan 26 14:16:55 crc kubenswrapper[4881]: I0126 14:16:55.338152 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b854jj499_8cc0e35b-757a-46fc-bc17-f586426c9b82/manager/0.log"
Jan 26 14:16:55 crc kubenswrapper[4881]: I0126 14:16:55.606065 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-6fbc4c9d5c-k7d5p_ba9ac6c1-1e58-4306-b8fd-c56dca9f4ec4/operator/0.log"
Jan 26 14:16:55 crc kubenswrapper[4881]: I0126 14:16:55.794647 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-wmbzq_51923b46-00ba-4a5e-984d-b1f8febec058/registry-server/0.log"
Jan 26 14:16:55 crc kubenswrapper[4881]: I0126 14:16:55.944418 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-6f75f45d54-pz264_5b1abb90-faa0-4b72-9d20-f84ddf952245/manager/0.log"
Jan 26 14:16:56 crc kubenswrapper[4881]: I0126 14:16:56.108648 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-79d5ccc684-6wtsp_de6b2c73-a5db-4333-91e1-7722f0ba1127/manager/0.log"
Jan 26 14:16:56 crc kubenswrapper[4881]: I0126 14:16:56.235795 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-vgjn4_a5f220e0-8c4f-4915-b0d0-cb85cc7f7850/operator/0.log"
Jan 26 14:16:56 crc kubenswrapper[4881]: I0126 14:16:56.358762 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-547cbdb99f-zxw9s_0a7aea9c-0f85-45d1-9c90-e06acb42f500/manager/0.log"
Jan 26 14:16:56 crc kubenswrapper[4881]: I0126 14:16:56.623720 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-69797bbcbd-ff2c4_0591b1a9-0d5f-4f0a-beca-9ed62627012e/manager/0.log"
Jan 26 14:16:56 crc kubenswrapper[4881]: I0126 14:16:56.905352 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-85cd9769bb-sqqcs_973ffd61-1f3c-4e2f-9315-dae216499f96/manager/0.log"
Jan 26 14:16:57 crc kubenswrapper[4881]: I0126 14:16:57.672967 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-5784f86c76-zbvz9_ab3681e4-6e5f-4f8d-909d-8d7801366f54/manager/0.log"
Jan 26 14:16:57 crc kubenswrapper[4881]: I0126 14:16:57.702913 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-649ccf9654-zlvc6_74d53f54-a284-45f0-ae81-5c25d2c5cbe1/manager/0.log"
Jan 26 14:17:06 crc kubenswrapper[4881]: I0126 14:17:06.082669 4881 scope.go:117] "RemoveContainer" containerID="1251b363555e67f84d877e4ffce511b7b1e8b62c231e6ee1f8d16a09aba0e12b"
Jan 26 14:17:06 crc kubenswrapper[4881]: E0126 14:17:06.083579 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19"
Jan 26 14:17:16 crc kubenswrapper[4881]: I0126 14:17:16.841699 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-p7bh2_d6b7645c-9920-4793-b6aa-9a6664cc93a0/control-plane-machine-set-operator/0.log"
Jan 26 14:17:17 crc kubenswrapper[4881]: I0126 14:17:17.038044 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-2n5pc_1c3ab1d3-b6c8-46c7-8721-c8671d38ae03/kube-rbac-proxy/0.log"
Jan 26 14:17:17 crc kubenswrapper[4881]: I0126 14:17:17.082504 4881 scope.go:117] "RemoveContainer" containerID="1251b363555e67f84d877e4ffce511b7b1e8b62c231e6ee1f8d16a09aba0e12b"
Jan 26 14:17:17 crc kubenswrapper[4881]: E0126 14:17:17.082973 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19"
Jan 26 14:17:17 crc kubenswrapper[4881]: I0126 14:17:17.108001 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-2n5pc_1c3ab1d3-b6c8-46c7-8721-c8671d38ae03/machine-api-operator/0.log"
Jan 26 14:17:30 crc kubenswrapper[4881]: I0126 14:17:30.236364 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-hvlrt_e0a1688c-21a5-4443-9254-78b5b189c9fa/cert-manager-controller/0.log"
Jan 26 14:17:30 crc kubenswrapper[4881]: I0126 14:17:30.389716 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-gmh7q_28bb7687-5041-4924-a064-a13442fc3766/cert-manager-cainjector/0.log"
Jan 26 14:17:30 crc kubenswrapper[4881]: I0126 14:17:30.461026 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-5bxtm_4bfa393b-f144-4c15-81f7-b2c176f31b61/cert-manager-webhook/0.log"
Jan 26 14:17:32 crc kubenswrapper[4881]: I0126 14:17:32.083404 4881 scope.go:117] "RemoveContainer" containerID="1251b363555e67f84d877e4ffce511b7b1e8b62c231e6ee1f8d16a09aba0e12b"
Jan 26 14:17:32 crc kubenswrapper[4881]: E0126 14:17:32.084207 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19"
Jan 26 14:17:43 crc kubenswrapper[4881]: I0126 14:17:43.379952 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-s8cmw_04037f03-d731-4b56-931b-6883929dc843/nmstate-console-plugin/0.log"
Jan 26 14:17:43 crc kubenswrapper[4881]: I0126 14:17:43.551428 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-hv5dq_39762078-aa2c-44ae-8ed5-4ac22ebd62be/nmstate-handler/0.log"
Jan 26 14:17:43 crc kubenswrapper[4881]: I0126 14:17:43.567299 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-4gxlq_2a843206-a177-4422-be4f-bf5ccbdef9f1/nmstate-metrics/0.log"
Jan 26 14:17:43 crc kubenswrapper[4881]: I0126 14:17:43.567509 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-4gxlq_2a843206-a177-4422-be4f-bf5ccbdef9f1/kube-rbac-proxy/0.log"
Jan 26 14:17:43 crc kubenswrapper[4881]: I0126 14:17:43.720672 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-7bpbf_84cb8155-415d-4537-872c-bf03652861e0/nmstate-operator/0.log"
Jan 26 14:17:43 crc kubenswrapper[4881]: I0126 14:17:43.784985 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-mp8sz_1869aca8-7499-4174-9154-588bbc7d5c24/nmstate-webhook/0.log"
Jan 26 14:17:47 crc kubenswrapper[4881]: I0126 14:17:47.082619 4881 scope.go:117] "RemoveContainer" containerID="1251b363555e67f84d877e4ffce511b7b1e8b62c231e6ee1f8d16a09aba0e12b"
Jan 26 14:17:47 crc kubenswrapper[4881]: E0126 14:17:47.083419 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19"
Jan 26 14:17:58 crc kubenswrapper[4881]: I0126 14:17:58.603661 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-bd99787cf-9zpqq_e46bce59-2bf0-4e4f-9988-351d4f1f6bc2/prometheus-operator-admission-webhook/0.log"
Jan 26 14:17:58 crc kubenswrapper[4881]: I0126 14:17:58.615864 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-ss7tt_134b4f14-ab8f-4d19-9c5d-90f2642a285e/prometheus-operator/0.log"
Jan 26 14:17:58 crc kubenswrapper[4881]: I0126 14:17:58.703292 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-bd99787cf-cg7rq_0dff4c41-61ac-4189-a67e-18689e873d2a/prometheus-operator-admission-webhook/0.log"
Jan 26 14:17:58 crc kubenswrapper[4881]: I0126 14:17:58.783654 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-tf2kf_b636a0bf-808d-4fce-9675-621381943903/operator/0.log"
Jan 26 14:17:58 crc kubenswrapper[4881]: I0126 14:17:58.922929 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-hprwm_b23a30cc-d92b-4491-963d-9f93d3b48547/perses-operator/0.log"
Jan 26 14:17:59 crc kubenswrapper[4881]: I0126 14:17:59.082554 4881 scope.go:117] "RemoveContainer" containerID="1251b363555e67f84d877e4ffce511b7b1e8b62c231e6ee1f8d16a09aba0e12b"
Jan 26 14:17:59 crc kubenswrapper[4881]: E0126 14:17:59.082855 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19"
Jan 26 14:18:10 crc kubenswrapper[4881]: I0126 14:18:10.083433 4881 scope.go:117] "RemoveContainer" containerID="1251b363555e67f84d877e4ffce511b7b1e8b62c231e6ee1f8d16a09aba0e12b"
Jan 26 14:18:10 crc kubenswrapper[4881]: E0126 14:18:10.084391 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19"
Jan 26 14:18:12 crc kubenswrapper[4881]: I0126 14:18:12.682930 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-8vg98_901f2a44-aecd-4a72-8802-b24d3bb902af/kube-rbac-proxy/0.log"
Jan 26 14:18:12 crc kubenswrapper[4881]: I0126 14:18:12.833289 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-8vg98_901f2a44-aecd-4a72-8802-b24d3bb902af/controller/0.log"
Jan 26 14:18:12 crc kubenswrapper[4881]: I0126 14:18:12.969756 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8rjxh_769ed86e-fb54-4e5a-a315-2cc85e6b0f3e/cp-frr-files/0.log"
Jan 26 14:18:13 crc kubenswrapper[4881]: I0126 14:18:13.080207 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8rjxh_769ed86e-fb54-4e5a-a315-2cc85e6b0f3e/cp-frr-files/0.log"
Jan 26 14:18:13 crc kubenswrapper[4881]: I0126 14:18:13.106025 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8rjxh_769ed86e-fb54-4e5a-a315-2cc85e6b0f3e/cp-metrics/0.log"
Jan 26 14:18:13 crc kubenswrapper[4881]: I0126 14:18:13.111906 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8rjxh_769ed86e-fb54-4e5a-a315-2cc85e6b0f3e/cp-reloader/0.log"
Jan 26 14:18:13 crc kubenswrapper[4881]: I0126 14:18:13.183301 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8rjxh_769ed86e-fb54-4e5a-a315-2cc85e6b0f3e/cp-reloader/0.log"
Jan 26 14:18:13 crc kubenswrapper[4881]: I0126 14:18:13.385742 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8rjxh_769ed86e-fb54-4e5a-a315-2cc85e6b0f3e/cp-reloader/0.log"
Jan 26 14:18:13 crc kubenswrapper[4881]: I0126 14:18:13.408570 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8rjxh_769ed86e-fb54-4e5a-a315-2cc85e6b0f3e/cp-frr-files/0.log"
Jan 26 14:18:13 crc kubenswrapper[4881]: I0126 14:18:13.417778 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8rjxh_769ed86e-fb54-4e5a-a315-2cc85e6b0f3e/cp-metrics/0.log"
Jan 26 14:18:13 crc kubenswrapper[4881]: I0126 14:18:13.436444 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8rjxh_769ed86e-fb54-4e5a-a315-2cc85e6b0f3e/cp-metrics/0.log"
Jan 26 14:18:13 crc kubenswrapper[4881]: I0126 14:18:13.563980 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8rjxh_769ed86e-fb54-4e5a-a315-2cc85e6b0f3e/cp-frr-files/0.log"
Jan 26 14:18:13 crc kubenswrapper[4881]: I0126 14:18:13.599664 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8rjxh_769ed86e-fb54-4e5a-a315-2cc85e6b0f3e/cp-reloader/0.log"
Jan 26 14:18:13 crc kubenswrapper[4881]: I0126 14:18:13.634425 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8rjxh_769ed86e-fb54-4e5a-a315-2cc85e6b0f3e/controller/0.log"
Jan 26 14:18:13 crc kubenswrapper[4881]: I0126 14:18:13.640895 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8rjxh_769ed86e-fb54-4e5a-a315-2cc85e6b0f3e/cp-metrics/0.log"
Jan 26 14:18:13 crc kubenswrapper[4881]: I0126 14:18:13.811373 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8rjxh_769ed86e-fb54-4e5a-a315-2cc85e6b0f3e/frr-metrics/0.log"
Jan 26 14:18:13 crc kubenswrapper[4881]: I0126 14:18:13.827825 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8rjxh_769ed86e-fb54-4e5a-a315-2cc85e6b0f3e/kube-rbac-proxy/0.log"
Jan 26 14:18:13 crc kubenswrapper[4881]: I0126 14:18:13.951022 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8rjxh_769ed86e-fb54-4e5a-a315-2cc85e6b0f3e/kube-rbac-proxy-frr/0.log"
Jan 26 14:18:14 crc kubenswrapper[4881]: I0126 14:18:14.436157 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-gmnh8_03597099-e9a6-4f59-9f54-700638dcf570/frr-k8s-webhook-server/0.log"
Jan 26 14:18:14 crc kubenswrapper[4881]: I0126 14:18:14.442414 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8rjxh_769ed86e-fb54-4e5a-a315-2cc85e6b0f3e/reloader/0.log"
Jan 26 14:18:14 crc kubenswrapper[4881]: I0126 14:18:14.465982 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-79649b4ffb-kpsrh_2483eb0f-5e2f-4df8-8385-4095077aa351/manager/0.log"
Jan 26 14:18:14 crc kubenswrapper[4881]: I0126 14:18:14.699563 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-64dc64df49-qlh66_1a51e914-e793-4f03-b58a-65628089e71a/webhook-server/0.log"
Jan 26 14:18:14 crc kubenswrapper[4881]: I0126 14:18:14.858575 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-kg9bx_0323a529-06f7-4ee1-ac63-e9226b67ae3a/kube-rbac-proxy/0.log"
Jan 26 14:18:15 crc kubenswrapper[4881]: I0126 14:18:15.510468 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-kg9bx_0323a529-06f7-4ee1-ac63-e9226b67ae3a/speaker/0.log"
Jan 26 14:18:16 crc kubenswrapper[4881]: I0126 14:18:16.213110 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8rjxh_769ed86e-fb54-4e5a-a315-2cc85e6b0f3e/frr/0.log"
Jan 26 14:18:23 crc kubenswrapper[4881]: I0126 14:18:23.083019 4881 scope.go:117] "RemoveContainer" containerID="1251b363555e67f84d877e4ffce511b7b1e8b62c231e6ee1f8d16a09aba0e12b"
Jan 26 14:18:23 crc kubenswrapper[4881]: E0126 14:18:23.084058 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19"
Jan 26 14:18:29 crc kubenswrapper[4881]: I0126 14:18:29.360563 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcvhv5t_d1afd28b-d9f1-4ee3-aa69-85a1c759161a/util/0.log"
Jan 26 14:18:29 crc kubenswrapper[4881]: I0126 14:18:29.532101 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcvhv5t_d1afd28b-d9f1-4ee3-aa69-85a1c759161a/util/0.log"
Jan 26 14:18:29 crc kubenswrapper[4881]: I0126 14:18:29.535495 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcvhv5t_d1afd28b-d9f1-4ee3-aa69-85a1c759161a/pull/0.log"
Jan 26 14:18:29 crc kubenswrapper[4881]: I0126 14:18:29.576606 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcvhv5t_d1afd28b-d9f1-4ee3-aa69-85a1c759161a/pull/0.log"
Jan 26 14:18:29 crc kubenswrapper[4881]: I0126 14:18:29.704294 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcvhv5t_d1afd28b-d9f1-4ee3-aa69-85a1c759161a/util/0.log"
Jan 26 14:18:29 crc kubenswrapper[4881]: I0126 14:18:29.716491 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcvhv5t_d1afd28b-d9f1-4ee3-aa69-85a1c759161a/extract/0.log"
Jan 26 14:18:29 crc kubenswrapper[4881]: I0126 14:18:29.717377 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcvhv5t_d1afd28b-d9f1-4ee3-aa69-85a1c759161a/pull/0.log"
Jan 26 14:18:29 crc kubenswrapper[4881]: I0126 14:18:29.859647 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135mlgx_2478a5bc-036d-4f8a-bbf2-171ad7dc5e3f/util/0.log"
Jan 26 14:18:30 crc kubenswrapper[4881]: I0126 14:18:30.023067 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135mlgx_2478a5bc-036d-4f8a-bbf2-171ad7dc5e3f/pull/0.log"
Jan 26 14:18:30 crc kubenswrapper[4881]: I0126 14:18:30.026312 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135mlgx_2478a5bc-036d-4f8a-bbf2-171ad7dc5e3f/util/0.log"
Jan 26 14:18:30 crc kubenswrapper[4881]: I0126 14:18:30.035177 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135mlgx_2478a5bc-036d-4f8a-bbf2-171ad7dc5e3f/pull/0.log"
Jan 26 14:18:30 crc kubenswrapper[4881]: I0126 14:18:30.189551 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135mlgx_2478a5bc-036d-4f8a-bbf2-171ad7dc5e3f/extract/0.log"
Jan 26 14:18:30 crc kubenswrapper[4881]: I0126 14:18:30.221018 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135mlgx_2478a5bc-036d-4f8a-bbf2-171ad7dc5e3f/pull/0.log"
Jan 26 14:18:30 crc kubenswrapper[4881]: I0126 14:18:30.294019 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135mlgx_2478a5bc-036d-4f8a-bbf2-171ad7dc5e3f/util/0.log"
Jan 26 14:18:30 crc kubenswrapper[4881]: I0126 14:18:30.382819 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084hgnp_5ccdbf22-7f0b-489c-bf4c-22ce230c429a/util/0.log"
Jan 26 14:18:30 crc kubenswrapper[4881]: I0126 14:18:30.623117 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084hgnp_5ccdbf22-7f0b-489c-bf4c-22ce230c429a/pull/0.log"
Jan 26 14:18:30 crc kubenswrapper[4881]: I0126 14:18:30.645472 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084hgnp_5ccdbf22-7f0b-489c-bf4c-22ce230c429a/util/0.log"
Jan 26 14:18:30 crc kubenswrapper[4881]: I0126 14:18:30.648191 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084hgnp_5ccdbf22-7f0b-489c-bf4c-22ce230c429a/pull/0.log"
Jan 26 14:18:30 crc kubenswrapper[4881]: I0126 14:18:30.809426 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084hgnp_5ccdbf22-7f0b-489c-bf4c-22ce230c429a/util/0.log"
Jan 26 14:18:30 crc kubenswrapper[4881]: I0126 14:18:30.814074 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084hgnp_5ccdbf22-7f0b-489c-bf4c-22ce230c429a/pull/0.log"
Jan 26 14:18:30 crc kubenswrapper[4881]: I0126 14:18:30.845986 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084hgnp_5ccdbf22-7f0b-489c-bf4c-22ce230c429a/extract/0.log"
Jan 26 14:18:30 crc kubenswrapper[4881]: I0126 14:18:30.976746 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-j57t5_b7492c80-8cb7-4b48-95c7-ecec74b07dc3/extract-utilities/0.log"
Jan 26 14:18:31 crc kubenswrapper[4881]: I0126 14:18:31.159560 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-j57t5_b7492c80-8cb7-4b48-95c7-ecec74b07dc3/extract-content/0.log"
Jan 26 14:18:31 crc kubenswrapper[4881]: I0126 14:18:31.160173 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-j57t5_b7492c80-8cb7-4b48-95c7-ecec74b07dc3/extract-utilities/0.log"
Jan 26 14:18:31 crc kubenswrapper[4881]: I0126 14:18:31.175166 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-j57t5_b7492c80-8cb7-4b48-95c7-ecec74b07dc3/extract-content/0.log"
Jan 26 14:18:31 crc kubenswrapper[4881]: I0126 14:18:31.383707 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-j57t5_b7492c80-8cb7-4b48-95c7-ecec74b07dc3/extract-content/0.log"
Jan 26 14:18:31 crc kubenswrapper[4881]: I0126 14:18:31.456369 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-j57t5_b7492c80-8cb7-4b48-95c7-ecec74b07dc3/extract-utilities/0.log"
Jan 26 14:18:31 crc kubenswrapper[4881]: I0126 14:18:31.584385 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-892tg_a5b1bc2d-d349-4de7-bcf8-52fc979a3ac2/extract-utilities/0.log"
Jan 26 14:18:31 crc kubenswrapper[4881]: I0126 14:18:31.843193 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-892tg_a5b1bc2d-d349-4de7-bcf8-52fc979a3ac2/extract-content/0.log"
Jan 26 14:18:31 crc kubenswrapper[4881]: I0126 14:18:31.857749 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-892tg_a5b1bc2d-d349-4de7-bcf8-52fc979a3ac2/extract-utilities/0.log"
Jan 26 14:18:31 crc kubenswrapper[4881]: I0126 14:18:31.863872 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-892tg_a5b1bc2d-d349-4de7-bcf8-52fc979a3ac2/extract-content/0.log"
Jan 26 14:18:32 crc kubenswrapper[4881]: I0126 14:18:32.201041 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-j57t5_b7492c80-8cb7-4b48-95c7-ecec74b07dc3/registry-server/0.log"
Jan 26 14:18:32 crc kubenswrapper[4881]: I0126 14:18:32.211608 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-892tg_a5b1bc2d-d349-4de7-bcf8-52fc979a3ac2/extract-utilities/0.log"
Jan 26 14:18:32 crc kubenswrapper[4881]: I0126 14:18:32.283871 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-892tg_a5b1bc2d-d349-4de7-bcf8-52fc979a3ac2/extract-content/0.log"
Jan 26 14:18:32 crc kubenswrapper[4881]: I0126 14:18:32.460095 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-ghn75_c3ce8c88-e7f5-461d-ad61-e035c0ca7631/marketplace-operator/0.log"
Jan 26 14:18:32 crc kubenswrapper[4881]: I0126 14:18:32.696709 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-22x8z_de3087aa-1e19-49ef-8d77-17654472881a/extract-utilities/0.log"
Jan 26 14:18:32 crc kubenswrapper[4881]: I0126 14:18:32.874429 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-22x8z_de3087aa-1e19-49ef-8d77-17654472881a/extract-content/0.log"
Jan 26 14:18:32 crc kubenswrapper[4881]: I0126 14:18:32.888815 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-22x8z_de3087aa-1e19-49ef-8d77-17654472881a/extract-utilities/0.log"
Jan 26 14:18:33 crc kubenswrapper[4881]: I0126 14:18:33.023234 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-892tg_a5b1bc2d-d349-4de7-bcf8-52fc979a3ac2/registry-server/0.log"
Jan 26 14:18:33 crc kubenswrapper[4881]: I0126 14:18:33.038011 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-22x8z_de3087aa-1e19-49ef-8d77-17654472881a/extract-content/0.log"
Jan 26 14:18:33 crc kubenswrapper[4881]: I0126 14:18:33.170703 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-22x8z_de3087aa-1e19-49ef-8d77-17654472881a/extract-utilities/0.log"
Jan 26 14:18:33 crc kubenswrapper[4881]: I0126 14:18:33.185183 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-22x8z_de3087aa-1e19-49ef-8d77-17654472881a/extract-content/0.log"
Jan 26 14:18:33 crc kubenswrapper[4881]: I0126 14:18:33.351354 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-rghg7_66cbadc1-43e8-44b8-a92b-87c37e6f895f/extract-utilities/0.log"
Jan 26 14:18:33 crc kubenswrapper[4881]: I0126 14:18:33.491417 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-22x8z_de3087aa-1e19-49ef-8d77-17654472881a/registry-server/0.log"
Jan 26 14:18:33 crc kubenswrapper[4881]: I0126 14:18:33.573811 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-rghg7_66cbadc1-43e8-44b8-a92b-87c37e6f895f/extract-utilities/0.log"
Jan 26 14:18:33 crc kubenswrapper[4881]: I0126 14:18:33.587837 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-rghg7_66cbadc1-43e8-44b8-a92b-87c37e6f895f/extract-content/0.log"
Jan 26 14:18:33 crc kubenswrapper[4881]: I0126 14:18:33.616113 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-rghg7_66cbadc1-43e8-44b8-a92b-87c37e6f895f/extract-content/0.log"
Jan 26 14:18:33 crc kubenswrapper[4881]: I0126 14:18:33.797505 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-rghg7_66cbadc1-43e8-44b8-a92b-87c37e6f895f/extract-utilities/0.log"
Jan 26 14:18:33 crc kubenswrapper[4881]: I0126 14:18:33.824256 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-rghg7_66cbadc1-43e8-44b8-a92b-87c37e6f895f/extract-content/0.log"
Jan 26 14:18:34 crc kubenswrapper[4881]: I0126 14:18:34.726191 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-rghg7_66cbadc1-43e8-44b8-a92b-87c37e6f895f/registry-server/0.log"
Jan 26 14:18:35 crc kubenswrapper[4881]: I0126 14:18:35.082613 4881 scope.go:117] "RemoveContainer" containerID="1251b363555e67f84d877e4ffce511b7b1e8b62c231e6ee1f8d16a09aba0e12b"
Jan 26 14:18:35 crc kubenswrapper[4881]: E0126 14:18:35.082878 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19"
Jan 26 14:18:48 crc kubenswrapper[4881]: I0126 14:18:48.169676 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-ss7tt_134b4f14-ab8f-4d19-9c5d-90f2642a285e/prometheus-operator/0.log"
Jan 26 14:18:48 crc kubenswrapper[4881]: I0126 14:18:48.188017 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-bd99787cf-9zpqq_e46bce59-2bf0-4e4f-9988-351d4f1f6bc2/prometheus-operator-admission-webhook/0.log"
Jan 26 14:18:48 crc kubenswrapper[4881]: I0126 14:18:48.240296 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-bd99787cf-cg7rq_0dff4c41-61ac-4189-a67e-18689e873d2a/prometheus-operator-admission-webhook/0.log"
Jan 26 14:18:48 crc kubenswrapper[4881]: I0126 14:18:48.375926 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-hprwm_b23a30cc-d92b-4491-963d-9f93d3b48547/perses-operator/0.log"
Jan 26 14:18:48 crc kubenswrapper[4881]: I0126 14:18:48.378156 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-tf2kf_b636a0bf-808d-4fce-9675-621381943903/operator/0.log"
Jan 26 14:18:50 crc kubenswrapper[4881]: I0126 14:18:50.083136 4881 scope.go:117] "RemoveContainer" containerID="1251b363555e67f84d877e4ffce511b7b1e8b62c231e6ee1f8d16a09aba0e12b"
Jan 26 14:18:50 crc kubenswrapper[4881]: E0126 14:18:50.083917 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19"
Jan 26 14:19:05 crc kubenswrapper[4881]: I0126 14:19:05.083395 4881 scope.go:117] "RemoveContainer" containerID="1251b363555e67f84d877e4ffce511b7b1e8b62c231e6ee1f8d16a09aba0e12b"
Jan 26 14:19:05 crc kubenswrapper[4881]: E0126 14:19:05.084129 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19"
Jan 26 14:19:12 crc kubenswrapper[4881]: E0126 14:19:12.901575 4881 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.69:33300->38.102.83.69:37913: write tcp 38.102.83.69:33300->38.102.83.69:37913: write: broken pipe
Jan 26 14:19:18 crc kubenswrapper[4881]: I0126 14:19:18.110624 4881 scope.go:117] "RemoveContainer" containerID="1251b363555e67f84d877e4ffce511b7b1e8b62c231e6ee1f8d16a09aba0e12b"
Jan 26 14:19:18 crc kubenswrapper[4881]: E0126 14:19:18.116083 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19"
Jan 26 14:19:29 crc kubenswrapper[4881]: I0126 14:19:29.082840 4881 scope.go:117] "RemoveContainer" containerID="1251b363555e67f84d877e4ffce511b7b1e8b62c231e6ee1f8d16a09aba0e12b"
Jan 26 14:19:29 crc kubenswrapper[4881]: E0126 14:19:29.085332 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19"
Jan 26 14:19:40 crc kubenswrapper[4881]: I0126 14:19:40.083672 4881 scope.go:117] "RemoveContainer" containerID="1251b363555e67f84d877e4ffce511b7b1e8b62c231e6ee1f8d16a09aba0e12b"
Jan 26 14:19:40 crc kubenswrapper[4881]: E0126 14:19:40.084603 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19"
Jan 26 14:19:51 crc kubenswrapper[4881]: I0126 14:19:51.083892 4881 scope.go:117] "RemoveContainer" containerID="1251b363555e67f84d877e4ffce511b7b1e8b62c231e6ee1f8d16a09aba0e12b"
Jan 26 14:19:51 crc kubenswrapper[4881]: E0126 14:19:51.084862 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19"
Jan 26 14:20:02 crc kubenswrapper[4881]: I0126 14:20:02.083327 4881 scope.go:117] "RemoveContainer" containerID="1251b363555e67f84d877e4ffce511b7b1e8b62c231e6ee1f8d16a09aba0e12b"
Jan 26 14:20:02 crc kubenswrapper[4881]: E0126 14:20:02.084220 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19"
Jan 26 14:20:02 crc kubenswrapper[4881]: I0126 14:20:02.553711 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-hn8kb"]
Jan 26 14:20:02 crc kubenswrapper[4881]: E0126 14:20:02.554106 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c086439c-46f2-4b30-bf54-4a1e71f78f6f" containerName="extract-content"
Jan 26 14:20:02 crc kubenswrapper[4881]: I0126 14:20:02.554129 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="c086439c-46f2-4b30-bf54-4a1e71f78f6f" containerName="extract-content"
Jan 26 14:20:02 crc kubenswrapper[4881]: E0126 14:20:02.554154 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c086439c-46f2-4b30-bf54-4a1e71f78f6f" containerName="registry-server"
Jan 26 14:20:02 crc kubenswrapper[4881]: I0126 14:20:02.554160 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="c086439c-46f2-4b30-bf54-4a1e71f78f6f" containerName="registry-server"
Jan 26 14:20:02 crc kubenswrapper[4881]: E0126 14:20:02.554180 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c086439c-46f2-4b30-bf54-4a1e71f78f6f" containerName="extract-utilities"
Jan 26 14:20:02 crc kubenswrapper[4881]: I0126 14:20:02.554187 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="c086439c-46f2-4b30-bf54-4a1e71f78f6f" containerName="extract-utilities"
Jan 26 14:20:02 crc kubenswrapper[4881]: I0126 14:20:02.554362 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="c086439c-46f2-4b30-bf54-4a1e71f78f6f" containerName="registry-server"
Jan 26 14:20:02 crc kubenswrapper[4881]: I0126 14:20:02.555815 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hn8kb"
Jan 26 14:20:02 crc kubenswrapper[4881]: I0126 14:20:02.578232 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hn8kb"]
Jan 26 14:20:02 crc kubenswrapper[4881]: I0126 14:20:02.686950 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6af63b4-c01b-45f5-bd38-8f17307c2ee8-catalog-content\") pod \"community-operators-hn8kb\" (UID: \"a6af63b4-c01b-45f5-bd38-8f17307c2ee8\") " pod="openshift-marketplace/community-operators-hn8kb"
Jan 26 14:20:02 crc kubenswrapper[4881]: I0126 14:20:02.687090 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6bb9\" (UniqueName: \"kubernetes.io/projected/a6af63b4-c01b-45f5-bd38-8f17307c2ee8-kube-api-access-l6bb9\") pod \"community-operators-hn8kb\" (UID: \"a6af63b4-c01b-45f5-bd38-8f17307c2ee8\") " pod="openshift-marketplace/community-operators-hn8kb"
Jan 26 14:20:02 crc kubenswrapper[4881]: I0126 14:20:02.687240 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6af63b4-c01b-45f5-bd38-8f17307c2ee8-utilities\") pod \"community-operators-hn8kb\" (UID: \"a6af63b4-c01b-45f5-bd38-8f17307c2ee8\") " pod="openshift-marketplace/community-operators-hn8kb"
Jan 26 14:20:02 crc kubenswrapper[4881]: I0126 14:20:02.798852 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6af63b4-c01b-45f5-bd38-8f17307c2ee8-catalog-content\") pod \"community-operators-hn8kb\" (UID: \"a6af63b4-c01b-45f5-bd38-8f17307c2ee8\") " pod="openshift-marketplace/community-operators-hn8kb"
Jan 26 14:20:02 crc kubenswrapper[4881]: I0126 14:20:02.799254 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6bb9\" (UniqueName: \"kubernetes.io/projected/a6af63b4-c01b-45f5-bd38-8f17307c2ee8-kube-api-access-l6bb9\") pod \"community-operators-hn8kb\" (UID: \"a6af63b4-c01b-45f5-bd38-8f17307c2ee8\") " pod="openshift-marketplace/community-operators-hn8kb"
Jan 26 14:20:02 crc kubenswrapper[4881]: I0126 14:20:02.799645 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6af63b4-c01b-45f5-bd38-8f17307c2ee8-utilities\") pod \"community-operators-hn8kb\" (UID: \"a6af63b4-c01b-45f5-bd38-8f17307c2ee8\") " pod="openshift-marketplace/community-operators-hn8kb"
Jan 26 14:20:02 crc kubenswrapper[4881]: I0126 14:20:02.801077 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6af63b4-c01b-45f5-bd38-8f17307c2ee8-catalog-content\") pod \"community-operators-hn8kb\" (UID: \"a6af63b4-c01b-45f5-bd38-8f17307c2ee8\") " pod="openshift-marketplace/community-operators-hn8kb"
Jan 26 14:20:02 crc kubenswrapper[4881]: I0126 14:20:02.804266 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6af63b4-c01b-45f5-bd38-8f17307c2ee8-utilities\") pod \"community-operators-hn8kb\" (UID: \"a6af63b4-c01b-45f5-bd38-8f17307c2ee8\") " pod="openshift-marketplace/community-operators-hn8kb"
Jan 26 14:20:02 crc kubenswrapper[4881]: I0126 14:20:02.834816 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6bb9\" (UniqueName: \"kubernetes.io/projected/a6af63b4-c01b-45f5-bd38-8f17307c2ee8-kube-api-access-l6bb9\") pod \"community-operators-hn8kb\" (UID: \"a6af63b4-c01b-45f5-bd38-8f17307c2ee8\") " pod="openshift-marketplace/community-operators-hn8kb"
Jan 26 14:20:02 crc kubenswrapper[4881]: I0126 14:20:02.879055 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hn8kb"
Jan 26 14:20:03 crc kubenswrapper[4881]: I0126 14:20:03.206173 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hn8kb"]
Jan 26 14:20:03 crc kubenswrapper[4881]: I0126 14:20:03.580398 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hn8kb" event={"ID":"a6af63b4-c01b-45f5-bd38-8f17307c2ee8","Type":"ContainerStarted","Data":"0e92be1390d08c76e09d6c5d560459da9cff0c337d9151f3b061b9cd4d07d3ec"}
Jan 26 14:20:06 crc kubenswrapper[4881]: I0126 14:20:06.614188 4881 generic.go:334] "Generic (PLEG): container finished" podID="a6af63b4-c01b-45f5-bd38-8f17307c2ee8" containerID="792a225eab1c40933b4072288cae9fcfabb474776cfbe5acddd63273dc6270f0" exitCode=0
Jan 26 14:20:06 crc kubenswrapper[4881]: I0126 14:20:06.614285 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hn8kb" event={"ID":"a6af63b4-c01b-45f5-bd38-8f17307c2ee8","Type":"ContainerDied","Data":"792a225eab1c40933b4072288cae9fcfabb474776cfbe5acddd63273dc6270f0"}
Jan 26 14:20:06 crc kubenswrapper[4881]: I0126 14:20:06.620282 4881 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 26 14:20:08 crc kubenswrapper[4881]: I0126 14:20:08.636651 4881 generic.go:334] "Generic (PLEG): container finished" podID="a6af63b4-c01b-45f5-bd38-8f17307c2ee8" containerID="e119d2601e4c228813ae2d2f4979d049b5506c23236fb137870f04faae066934" exitCode=0
Jan 26 14:20:08 crc kubenswrapper[4881]: I0126 14:20:08.637007 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hn8kb" event={"ID":"a6af63b4-c01b-45f5-bd38-8f17307c2ee8","Type":"ContainerDied","Data":"e119d2601e4c228813ae2d2f4979d049b5506c23236fb137870f04faae066934"}
Jan 26 14:20:10 crc kubenswrapper[4881]: I0126 14:20:10.661981 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hn8kb" event={"ID":"a6af63b4-c01b-45f5-bd38-8f17307c2ee8","Type":"ContainerStarted","Data":"15dd0eadc2f56da06636ae60dbd36bdc026bee44850962e4eb987cbb8e1d27a2"}
Jan 26 14:20:10 crc kubenswrapper[4881]: I0126 14:20:10.686297 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-hn8kb" podStartSLOduration=5.89592037 podStartE2EDuration="8.686282564s" podCreationTimestamp="2026-01-26 14:20:02 +0000 UTC" firstStartedPulling="2026-01-26 14:20:06.619985701 +0000 UTC m=+6279.099295727" lastFinishedPulling="2026-01-26 14:20:09.410347885 +0000 UTC m=+6281.889657921" observedRunningTime="2026-01-26 14:20:10.681965959 +0000 UTC m=+6283.161275985" watchObservedRunningTime="2026-01-26 14:20:10.686282564 +0000 UTC m=+6283.165592590"
Jan 26 14:20:12 crc kubenswrapper[4881]: I0126 14:20:12.879925 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-hn8kb"
Jan 26 14:20:12 crc kubenswrapper[4881]: I0126 14:20:12.881664 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-hn8kb"
Jan 26 14:20:12 crc kubenswrapper[4881]: I0126 14:20:12.949264 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-hn8kb"
Jan 26 14:20:13 crc kubenswrapper[4881]: I0126 14:20:13.082617 4881 scope.go:117] "RemoveContainer" containerID="1251b363555e67f84d877e4ffce511b7b1e8b62c231e6ee1f8d16a09aba0e12b"
Jan 26 14:20:13 crc kubenswrapper[4881]: E0126 14:20:13.082864 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19"
Jan 26 14:20:22 crc kubenswrapper[4881]: I0126 14:20:22.945126 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-hn8kb"
Jan 26 14:20:22 crc kubenswrapper[4881]: I0126 14:20:22.997328 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hn8kb"]
Jan 26 14:20:23 crc kubenswrapper[4881]: I0126 14:20:23.799212 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-hn8kb" podUID="a6af63b4-c01b-45f5-bd38-8f17307c2ee8" containerName="registry-server" containerID="cri-o://15dd0eadc2f56da06636ae60dbd36bdc026bee44850962e4eb987cbb8e1d27a2" gracePeriod=2
Jan 26 14:20:24 crc kubenswrapper[4881]: I0126 14:20:24.813272 4881 generic.go:334] "Generic (PLEG): container finished" podID="a6af63b4-c01b-45f5-bd38-8f17307c2ee8" containerID="15dd0eadc2f56da06636ae60dbd36bdc026bee44850962e4eb987cbb8e1d27a2" exitCode=0
Jan 26 14:20:24 crc kubenswrapper[4881]: I0126 14:20:24.813565 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hn8kb" event={"ID":"a6af63b4-c01b-45f5-bd38-8f17307c2ee8","Type":"ContainerDied","Data":"15dd0eadc2f56da06636ae60dbd36bdc026bee44850962e4eb987cbb8e1d27a2"}
Jan 26 14:20:24 crc kubenswrapper[4881]: I0126 14:20:24.956860 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hn8kb"
Jan 26 14:20:25 crc kubenswrapper[4881]: I0126 14:20:25.122499 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6af63b4-c01b-45f5-bd38-8f17307c2ee8-catalog-content\") pod \"a6af63b4-c01b-45f5-bd38-8f17307c2ee8\" (UID: \"a6af63b4-c01b-45f5-bd38-8f17307c2ee8\") "
Jan 26 14:20:25 crc kubenswrapper[4881]: I0126 14:20:25.122774 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6af63b4-c01b-45f5-bd38-8f17307c2ee8-utilities\") pod \"a6af63b4-c01b-45f5-bd38-8f17307c2ee8\" (UID: \"a6af63b4-c01b-45f5-bd38-8f17307c2ee8\") "
Jan 26 14:20:25 crc kubenswrapper[4881]: I0126 14:20:25.122825 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l6bb9\" (UniqueName: \"kubernetes.io/projected/a6af63b4-c01b-45f5-bd38-8f17307c2ee8-kube-api-access-l6bb9\") pod \"a6af63b4-c01b-45f5-bd38-8f17307c2ee8\" (UID: \"a6af63b4-c01b-45f5-bd38-8f17307c2ee8\") "
Jan 26 14:20:25 crc kubenswrapper[4881]: I0126 14:20:25.123427 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a6af63b4-c01b-45f5-bd38-8f17307c2ee8-utilities" (OuterVolumeSpecName: "utilities") pod "a6af63b4-c01b-45f5-bd38-8f17307c2ee8" (UID: "a6af63b4-c01b-45f5-bd38-8f17307c2ee8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 14:20:25 crc kubenswrapper[4881]: I0126 14:20:25.134326 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6af63b4-c01b-45f5-bd38-8f17307c2ee8-kube-api-access-l6bb9" (OuterVolumeSpecName: "kube-api-access-l6bb9") pod "a6af63b4-c01b-45f5-bd38-8f17307c2ee8" (UID: "a6af63b4-c01b-45f5-bd38-8f17307c2ee8"). InnerVolumeSpecName "kube-api-access-l6bb9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 14:20:25 crc kubenswrapper[4881]: I0126 14:20:25.188397 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a6af63b4-c01b-45f5-bd38-8f17307c2ee8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a6af63b4-c01b-45f5-bd38-8f17307c2ee8" (UID: "a6af63b4-c01b-45f5-bd38-8f17307c2ee8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 14:20:25 crc kubenswrapper[4881]: I0126 14:20:25.225326 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6af63b4-c01b-45f5-bd38-8f17307c2ee8-utilities\") on node \"crc\" DevicePath \"\""
Jan 26 14:20:25 crc kubenswrapper[4881]: I0126 14:20:25.225365 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l6bb9\" (UniqueName: \"kubernetes.io/projected/a6af63b4-c01b-45f5-bd38-8f17307c2ee8-kube-api-access-l6bb9\") on node \"crc\" DevicePath \"\""
Jan 26 14:20:25 crc kubenswrapper[4881]: I0126 14:20:25.225379 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6af63b4-c01b-45f5-bd38-8f17307c2ee8-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 26 14:20:25 crc kubenswrapper[4881]: I0126 14:20:25.827691 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hn8kb" event={"ID":"a6af63b4-c01b-45f5-bd38-8f17307c2ee8","Type":"ContainerDied","Data":"0e92be1390d08c76e09d6c5d560459da9cff0c337d9151f3b061b9cd4d07d3ec"}
Jan 26 14:20:25 crc kubenswrapper[4881]: I0126 14:20:25.827772 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hn8kb"
Jan 26 14:20:25 crc kubenswrapper[4881]: I0126 14:20:25.827781 4881 scope.go:117] "RemoveContainer" containerID="15dd0eadc2f56da06636ae60dbd36bdc026bee44850962e4eb987cbb8e1d27a2"
Jan 26 14:20:25 crc kubenswrapper[4881]: I0126 14:20:25.879699 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hn8kb"]
Jan 26 14:20:25 crc kubenswrapper[4881]: I0126 14:20:25.879945 4881 scope.go:117] "RemoveContainer" containerID="e119d2601e4c228813ae2d2f4979d049b5506c23236fb137870f04faae066934"
Jan 26 14:20:25 crc kubenswrapper[4881]: I0126 14:20:25.889921 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-hn8kb"]
Jan 26 14:20:25 crc kubenswrapper[4881]: I0126 14:20:25.923350 4881 scope.go:117] "RemoveContainer" containerID="792a225eab1c40933b4072288cae9fcfabb474776cfbe5acddd63273dc6270f0"
Jan 26 14:20:26 crc kubenswrapper[4881]: I0126 14:20:26.104228 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6af63b4-c01b-45f5-bd38-8f17307c2ee8" path="/var/lib/kubelet/pods/a6af63b4-c01b-45f5-bd38-8f17307c2ee8/volumes"
Jan 26 14:20:28 crc kubenswrapper[4881]: I0126 14:20:28.097677 4881 scope.go:117] "RemoveContainer" containerID="1251b363555e67f84d877e4ffce511b7b1e8b62c231e6ee1f8d16a09aba0e12b"
Jan 26 14:20:28 crc kubenswrapper[4881]: E0126 14:20:28.098580 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19"
Jan 26 14:20:42 crc kubenswrapper[4881]: I0126 14:20:42.084195 4881 scope.go:117] "RemoveContainer" containerID="1251b363555e67f84d877e4ffce511b7b1e8b62c231e6ee1f8d16a09aba0e12b"
Jan 26 14:20:42 crc kubenswrapper[4881]: E0126 14:20:42.087900 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19"
Jan 26 14:20:53 crc kubenswrapper[4881]: I0126 14:20:53.082391 4881 scope.go:117] "RemoveContainer" containerID="1251b363555e67f84d877e4ffce511b7b1e8b62c231e6ee1f8d16a09aba0e12b"
Jan 26 14:20:53 crc kubenswrapper[4881]: E0126 14:20:53.083282 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19"
Jan 26 14:20:54 crc kubenswrapper[4881]: I0126 14:20:54.150369 4881 generic.go:334] "Generic (PLEG): container finished" podID="b75ec1ea-964a-4b6b-8c98-fe19fbb11e46" containerID="4f10e897c691262ddcc233d9e3ebb983535be0911f77560e896b758ed185744e" exitCode=0
Jan 26 14:20:54 crc kubenswrapper[4881]: I0126 14:20:54.150458 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5hb8g/must-gather-rxwnc" event={"ID":"b75ec1ea-964a-4b6b-8c98-fe19fbb11e46","Type":"ContainerDied","Data":"4f10e897c691262ddcc233d9e3ebb983535be0911f77560e896b758ed185744e"}
Jan 26 14:20:54 crc kubenswrapper[4881]: I0126 14:20:54.151351 4881 scope.go:117] "RemoveContainer" containerID="4f10e897c691262ddcc233d9e3ebb983535be0911f77560e896b758ed185744e"
Jan 26 14:20:54 crc kubenswrapper[4881]: I0126 14:20:54.777635 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-5hb8g_must-gather-rxwnc_b75ec1ea-964a-4b6b-8c98-fe19fbb11e46/gather/0.log"
Jan 26 14:21:03 crc kubenswrapper[4881]: I0126 14:21:03.966211 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-5hb8g/must-gather-rxwnc"]
Jan 26 14:21:03 crc kubenswrapper[4881]: I0126 14:21:03.966962 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-5hb8g/must-gather-rxwnc" podUID="b75ec1ea-964a-4b6b-8c98-fe19fbb11e46" containerName="copy" containerID="cri-o://9bf015b0278d9030509b3dd5ec9636bd5f979326a3b7b21b5d93b3e13b8b8e81" gracePeriod=2
Jan 26 14:21:03 crc kubenswrapper[4881]: I0126 14:21:03.981691 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-5hb8g/must-gather-rxwnc"]
Jan 26 14:21:04 crc kubenswrapper[4881]: I0126 14:21:04.262049 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-5hb8g_must-gather-rxwnc_b75ec1ea-964a-4b6b-8c98-fe19fbb11e46/copy/0.log"
Jan 26 14:21:04 crc kubenswrapper[4881]: I0126 14:21:04.262433 4881 generic.go:334] "Generic (PLEG): container finished" podID="b75ec1ea-964a-4b6b-8c98-fe19fbb11e46" containerID="9bf015b0278d9030509b3dd5ec9636bd5f979326a3b7b21b5d93b3e13b8b8e81" exitCode=143
Jan 26 14:21:04 crc kubenswrapper[4881]: I0126 14:21:04.460954 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-5hb8g_must-gather-rxwnc_b75ec1ea-964a-4b6b-8c98-fe19fbb11e46/copy/0.log"
Jan 26 14:21:04 crc kubenswrapper[4881]: I0126 14:21:04.461846 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5hb8g/must-gather-rxwnc"
Jan 26 14:21:04 crc kubenswrapper[4881]: I0126 14:21:04.512005 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jdv4q\" (UniqueName: \"kubernetes.io/projected/b75ec1ea-964a-4b6b-8c98-fe19fbb11e46-kube-api-access-jdv4q\") pod \"b75ec1ea-964a-4b6b-8c98-fe19fbb11e46\" (UID: \"b75ec1ea-964a-4b6b-8c98-fe19fbb11e46\") "
Jan 26 14:21:04 crc kubenswrapper[4881]: I0126 14:21:04.512090 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/b75ec1ea-964a-4b6b-8c98-fe19fbb11e46-must-gather-output\") pod \"b75ec1ea-964a-4b6b-8c98-fe19fbb11e46\" (UID: \"b75ec1ea-964a-4b6b-8c98-fe19fbb11e46\") "
Jan 26 14:21:04 crc kubenswrapper[4881]: I0126 14:21:04.525226 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b75ec1ea-964a-4b6b-8c98-fe19fbb11e46-kube-api-access-jdv4q" (OuterVolumeSpecName: "kube-api-access-jdv4q") pod "b75ec1ea-964a-4b6b-8c98-fe19fbb11e46" (UID: "b75ec1ea-964a-4b6b-8c98-fe19fbb11e46"). InnerVolumeSpecName "kube-api-access-jdv4q". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 14:21:04 crc kubenswrapper[4881]: I0126 14:21:04.614923 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jdv4q\" (UniqueName: \"kubernetes.io/projected/b75ec1ea-964a-4b6b-8c98-fe19fbb11e46-kube-api-access-jdv4q\") on node \"crc\" DevicePath \"\""
Jan 26 14:21:04 crc kubenswrapper[4881]: I0126 14:21:04.727325 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b75ec1ea-964a-4b6b-8c98-fe19fbb11e46-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "b75ec1ea-964a-4b6b-8c98-fe19fbb11e46" (UID: "b75ec1ea-964a-4b6b-8c98-fe19fbb11e46"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 14:21:04 crc kubenswrapper[4881]: I0126 14:21:04.819406 4881 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/b75ec1ea-964a-4b6b-8c98-fe19fbb11e46-must-gather-output\") on node \"crc\" DevicePath \"\""
Jan 26 14:21:05 crc kubenswrapper[4881]: I0126 14:21:05.273409 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-5hb8g_must-gather-rxwnc_b75ec1ea-964a-4b6b-8c98-fe19fbb11e46/copy/0.log"
Jan 26 14:21:05 crc kubenswrapper[4881]: I0126 14:21:05.273914 4881 scope.go:117] "RemoveContainer" containerID="9bf015b0278d9030509b3dd5ec9636bd5f979326a3b7b21b5d93b3e13b8b8e81"
Jan 26 14:21:05 crc kubenswrapper[4881]: I0126 14:21:05.273992 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5hb8g/must-gather-rxwnc"
Jan 26 14:21:05 crc kubenswrapper[4881]: I0126 14:21:05.309396 4881 scope.go:117] "RemoveContainer" containerID="4f10e897c691262ddcc233d9e3ebb983535be0911f77560e896b758ed185744e"
Jan 26 14:21:06 crc kubenswrapper[4881]: I0126 14:21:06.083259 4881 scope.go:117] "RemoveContainer" containerID="1251b363555e67f84d877e4ffce511b7b1e8b62c231e6ee1f8d16a09aba0e12b"
Jan 26 14:21:06 crc kubenswrapper[4881]: I0126 14:21:06.099301 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b75ec1ea-964a-4b6b-8c98-fe19fbb11e46" path="/var/lib/kubelet/pods/b75ec1ea-964a-4b6b-8c98-fe19fbb11e46/volumes"
Jan 26 14:21:07 crc kubenswrapper[4881]: I0126 14:21:07.300770 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" event={"ID":"ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19","Type":"ContainerStarted","Data":"ae9424ee3a3f44134ddf9f7f1716ea85c1086060f07bf5296ffa39b6632fe2d5"}
Jan 26 14:21:57 crc kubenswrapper[4881]: I0126 14:21:57.855186 4881 scope.go:117] "RemoveContainer" containerID="7b75404309917a4694a61b035b9f50e601af95edad7ab65dea48d3e2748cd7d3"
Jan 26 14:23:24 crc kubenswrapper[4881]: I0126 14:23:24.789731 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fwlbz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 14:23:24 crc kubenswrapper[4881]: I0126 14:23:24.790414 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 14:23:54 crc kubenswrapper[4881]: I0126 14:23:54.790054 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fwlbz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 14:23:54 crc kubenswrapper[4881]: I0126 14:23:54.790626 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 14:24:14 crc kubenswrapper[4881]: I0126 14:24:14.691956 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-ztph7/must-gather-klclw"]
Jan 26 14:24:14 crc kubenswrapper[4881]: E0126 14:24:14.693022 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6af63b4-c01b-45f5-bd38-8f17307c2ee8" containerName="extract-utilities"
Jan 26 14:24:14 crc kubenswrapper[4881]: I0126 14:24:14.693039 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6af63b4-c01b-45f5-bd38-8f17307c2ee8" containerName="extract-utilities"
Jan 26 14:24:14 crc kubenswrapper[4881]: E0126 14:24:14.693062 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6af63b4-c01b-45f5-bd38-8f17307c2ee8" containerName="extract-content"
Jan 26 14:24:14 crc kubenswrapper[4881]: I0126 14:24:14.693071 4881
state_mem.go:107] "Deleted CPUSet assignment" podUID="a6af63b4-c01b-45f5-bd38-8f17307c2ee8" containerName="extract-content" Jan 26 14:24:14 crc kubenswrapper[4881]: E0126 14:24:14.693081 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b75ec1ea-964a-4b6b-8c98-fe19fbb11e46" containerName="gather" Jan 26 14:24:14 crc kubenswrapper[4881]: I0126 14:24:14.693089 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="b75ec1ea-964a-4b6b-8c98-fe19fbb11e46" containerName="gather" Jan 26 14:24:14 crc kubenswrapper[4881]: E0126 14:24:14.693105 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b75ec1ea-964a-4b6b-8c98-fe19fbb11e46" containerName="copy" Jan 26 14:24:14 crc kubenswrapper[4881]: I0126 14:24:14.693112 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="b75ec1ea-964a-4b6b-8c98-fe19fbb11e46" containerName="copy" Jan 26 14:24:14 crc kubenswrapper[4881]: E0126 14:24:14.693156 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6af63b4-c01b-45f5-bd38-8f17307c2ee8" containerName="registry-server" Jan 26 14:24:14 crc kubenswrapper[4881]: I0126 14:24:14.693164 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6af63b4-c01b-45f5-bd38-8f17307c2ee8" containerName="registry-server" Jan 26 14:24:14 crc kubenswrapper[4881]: I0126 14:24:14.693394 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6af63b4-c01b-45f5-bd38-8f17307c2ee8" containerName="registry-server" Jan 26 14:24:14 crc kubenswrapper[4881]: I0126 14:24:14.693413 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="b75ec1ea-964a-4b6b-8c98-fe19fbb11e46" containerName="copy" Jan 26 14:24:14 crc kubenswrapper[4881]: I0126 14:24:14.693427 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="b75ec1ea-964a-4b6b-8c98-fe19fbb11e46" containerName="gather" Jan 26 14:24:14 crc kubenswrapper[4881]: I0126 14:24:14.701147 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-ztph7/must-gather-klclw" Jan 26 14:24:14 crc kubenswrapper[4881]: I0126 14:24:14.702957 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-ztph7"/"kube-root-ca.crt" Jan 26 14:24:14 crc kubenswrapper[4881]: I0126 14:24:14.703238 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-ztph7"/"openshift-service-ca.crt" Jan 26 14:24:14 crc kubenswrapper[4881]: I0126 14:24:14.708610 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-ztph7/must-gather-klclw"] Jan 26 14:24:14 crc kubenswrapper[4881]: I0126 14:24:14.840442 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4k29\" (UniqueName: \"kubernetes.io/projected/a7be9b22-84f5-4bd5-995e-86a8fe91102e-kube-api-access-s4k29\") pod \"must-gather-klclw\" (UID: \"a7be9b22-84f5-4bd5-995e-86a8fe91102e\") " pod="openshift-must-gather-ztph7/must-gather-klclw" Jan 26 14:24:14 crc kubenswrapper[4881]: I0126 14:24:14.840807 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a7be9b22-84f5-4bd5-995e-86a8fe91102e-must-gather-output\") pod \"must-gather-klclw\" (UID: \"a7be9b22-84f5-4bd5-995e-86a8fe91102e\") " pod="openshift-must-gather-ztph7/must-gather-klclw" Jan 26 14:24:14 crc kubenswrapper[4881]: I0126 14:24:14.942853 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4k29\" (UniqueName: \"kubernetes.io/projected/a7be9b22-84f5-4bd5-995e-86a8fe91102e-kube-api-access-s4k29\") pod \"must-gather-klclw\" (UID: \"a7be9b22-84f5-4bd5-995e-86a8fe91102e\") " pod="openshift-must-gather-ztph7/must-gather-klclw" Jan 26 14:24:14 crc kubenswrapper[4881]: I0126 14:24:14.942951 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a7be9b22-84f5-4bd5-995e-86a8fe91102e-must-gather-output\") pod \"must-gather-klclw\" (UID: \"a7be9b22-84f5-4bd5-995e-86a8fe91102e\") " pod="openshift-must-gather-ztph7/must-gather-klclw" Jan 26 14:24:14 crc kubenswrapper[4881]: I0126 14:24:14.943451 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a7be9b22-84f5-4bd5-995e-86a8fe91102e-must-gather-output\") pod \"must-gather-klclw\" (UID: \"a7be9b22-84f5-4bd5-995e-86a8fe91102e\") " pod="openshift-must-gather-ztph7/must-gather-klclw" Jan 26 14:24:14 crc kubenswrapper[4881]: I0126 14:24:14.962103 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4k29\" (UniqueName: \"kubernetes.io/projected/a7be9b22-84f5-4bd5-995e-86a8fe91102e-kube-api-access-s4k29\") pod \"must-gather-klclw\" (UID: \"a7be9b22-84f5-4bd5-995e-86a8fe91102e\") " pod="openshift-must-gather-ztph7/must-gather-klclw" Jan 26 14:24:15 crc kubenswrapper[4881]: I0126 14:24:15.019903 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-ztph7/must-gather-klclw" Jan 26 14:24:15 crc kubenswrapper[4881]: I0126 14:24:15.656910 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-ztph7/must-gather-klclw"] Jan 26 14:24:16 crc kubenswrapper[4881]: I0126 14:24:16.467420 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ztph7/must-gather-klclw" event={"ID":"a7be9b22-84f5-4bd5-995e-86a8fe91102e","Type":"ContainerStarted","Data":"e51bc052ac657abf937176145259151a962ffdfe328846361c5bab354aedfa5a"} Jan 26 14:24:16 crc kubenswrapper[4881]: I0126 14:24:16.468032 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ztph7/must-gather-klclw" event={"ID":"a7be9b22-84f5-4bd5-995e-86a8fe91102e","Type":"ContainerStarted","Data":"d129c1aab282e9dd6c428331fcaaf25ce43b1b7e02647e9e7ed2a1b437617f1e"} Jan 26 14:24:16 crc kubenswrapper[4881]: I0126 14:24:16.468046 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ztph7/must-gather-klclw" event={"ID":"a7be9b22-84f5-4bd5-995e-86a8fe91102e","Type":"ContainerStarted","Data":"145a0f89273121aa0eebfec2c059872740b93100ee532dd2d1f758d46e1a454d"} Jan 26 14:24:16 crc kubenswrapper[4881]: I0126 14:24:16.487210 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-ztph7/must-gather-klclw" podStartSLOduration=2.487187492 podStartE2EDuration="2.487187492s" podCreationTimestamp="2026-01-26 14:24:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:24:16.483841601 +0000 UTC m=+6528.963151637" watchObservedRunningTime="2026-01-26 14:24:16.487187492 +0000 UTC m=+6528.966497528" Jan 26 14:24:19 crc kubenswrapper[4881]: I0126 14:24:19.849185 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-ztph7/crc-debug-z4d5q"] Jan 26 14:24:19 crc kubenswrapper[4881]: I0126 14:24:19.850781 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-ztph7/crc-debug-z4d5q" Jan 26 14:24:19 crc kubenswrapper[4881]: I0126 14:24:19.854665 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-ztph7"/"default-dockercfg-ssjbp" Jan 26 14:24:19 crc kubenswrapper[4881]: I0126 14:24:19.952394 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3570cb38-d6c3-4eab-835d-9d10c03d64c9-host\") pod \"crc-debug-z4d5q\" (UID: \"3570cb38-d6c3-4eab-835d-9d10c03d64c9\") " pod="openshift-must-gather-ztph7/crc-debug-z4d5q" Jan 26 14:24:19 crc kubenswrapper[4881]: I0126 14:24:19.952456 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqsq8\" (UniqueName: \"kubernetes.io/projected/3570cb38-d6c3-4eab-835d-9d10c03d64c9-kube-api-access-fqsq8\") pod \"crc-debug-z4d5q\" (UID: \"3570cb38-d6c3-4eab-835d-9d10c03d64c9\") " pod="openshift-must-gather-ztph7/crc-debug-z4d5q" Jan 26 14:24:20 crc kubenswrapper[4881]: I0126 14:24:20.053777 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3570cb38-d6c3-4eab-835d-9d10c03d64c9-host\") pod \"crc-debug-z4d5q\" (UID: \"3570cb38-d6c3-4eab-835d-9d10c03d64c9\") " pod="openshift-must-gather-ztph7/crc-debug-z4d5q" Jan 26 14:24:20 crc kubenswrapper[4881]: I0126 14:24:20.053843 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqsq8\" (UniqueName: \"kubernetes.io/projected/3570cb38-d6c3-4eab-835d-9d10c03d64c9-kube-api-access-fqsq8\") pod \"crc-debug-z4d5q\" (UID: \"3570cb38-d6c3-4eab-835d-9d10c03d64c9\") " pod="openshift-must-gather-ztph7/crc-debug-z4d5q" Jan 26 14:24:20 crc kubenswrapper[4881]: I0126 14:24:20.053888 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3570cb38-d6c3-4eab-835d-9d10c03d64c9-host\") pod \"crc-debug-z4d5q\" (UID: \"3570cb38-d6c3-4eab-835d-9d10c03d64c9\") " pod="openshift-must-gather-ztph7/crc-debug-z4d5q" Jan 26 14:24:20 crc kubenswrapper[4881]: I0126 14:24:20.080762 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqsq8\" (UniqueName: \"kubernetes.io/projected/3570cb38-d6c3-4eab-835d-9d10c03d64c9-kube-api-access-fqsq8\") pod \"crc-debug-z4d5q\" (UID: \"3570cb38-d6c3-4eab-835d-9d10c03d64c9\") " pod="openshift-must-gather-ztph7/crc-debug-z4d5q" Jan 26 14:24:20 crc kubenswrapper[4881]: I0126 14:24:20.168747 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-ztph7/crc-debug-z4d5q" Jan 26 14:24:20 crc kubenswrapper[4881]: I0126 14:24:20.502627 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ztph7/crc-debug-z4d5q" event={"ID":"3570cb38-d6c3-4eab-835d-9d10c03d64c9","Type":"ContainerStarted","Data":"c0ba5ab97826a2e60d12f51ca2ff4f8befceba8db4820089f03d98993c03751e"} Jan 26 14:24:20 crc kubenswrapper[4881]: I0126 14:24:20.503208 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ztph7/crc-debug-z4d5q" event={"ID":"3570cb38-d6c3-4eab-835d-9d10c03d64c9","Type":"ContainerStarted","Data":"8abcfa2397b379391b402eebf6b1f85fbcad7d16167595cb6bbd6b3a2b7a016c"} Jan 26 14:24:20 crc kubenswrapper[4881]: I0126 14:24:20.522706 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-ztph7/crc-debug-z4d5q" podStartSLOduration=1.522685114 podStartE2EDuration="1.522685114s" podCreationTimestamp="2026-01-26 14:24:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:24:20.516068254 +0000 UTC m=+6532.995378280" watchObservedRunningTime="2026-01-26 14:24:20.522685114 +0000 UTC m=+6533.001995140" Jan 26 14:24:24 crc kubenswrapper[4881]: I0126 14:24:24.789266 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fwlbz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 14:24:24 crc kubenswrapper[4881]: I0126 14:24:24.789841 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 14:24:24 crc kubenswrapper[4881]: I0126 14:24:24.789884 4881 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" Jan 26 14:24:24 crc kubenswrapper[4881]: I0126 14:24:24.790626 4881 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ae9424ee3a3f44134ddf9f7f1716ea85c1086060f07bf5296ffa39b6632fe2d5"} pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 14:24:24 crc kubenswrapper[4881]: I0126 14:24:24.790671 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" containerName="machine-config-daemon" containerID="cri-o://ae9424ee3a3f44134ddf9f7f1716ea85c1086060f07bf5296ffa39b6632fe2d5" gracePeriod=600 Jan 26 14:24:25 crc kubenswrapper[4881]: I0126 14:24:25.558532 4881 generic.go:334] "Generic (PLEG): container finished" podID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" containerID="ae9424ee3a3f44134ddf9f7f1716ea85c1086060f07bf5296ffa39b6632fe2d5" exitCode=0 Jan 26 14:24:25 crc kubenswrapper[4881]: I0126 14:24:25.558643 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" 
event={"ID":"ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19","Type":"ContainerDied","Data":"ae9424ee3a3f44134ddf9f7f1716ea85c1086060f07bf5296ffa39b6632fe2d5"} Jan 26 14:24:25 crc kubenswrapper[4881]: I0126 14:24:25.559007 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" event={"ID":"ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19","Type":"ContainerStarted","Data":"3576d9ae2a0c939f74ed0088e4b53843689b0037e8a526f6fd82ad39b8913062"} Jan 26 14:24:25 crc kubenswrapper[4881]: I0126 14:24:25.559029 4881 scope.go:117] "RemoveContainer" containerID="1251b363555e67f84d877e4ffce511b7b1e8b62c231e6ee1f8d16a09aba0e12b" Jan 26 14:24:58 crc kubenswrapper[4881]: I0126 14:24:58.908062 4881 generic.go:334] "Generic (PLEG): container finished" podID="3570cb38-d6c3-4eab-835d-9d10c03d64c9" containerID="c0ba5ab97826a2e60d12f51ca2ff4f8befceba8db4820089f03d98993c03751e" exitCode=0 Jan 26 14:24:58 crc kubenswrapper[4881]: I0126 14:24:58.908177 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ztph7/crc-debug-z4d5q" event={"ID":"3570cb38-d6c3-4eab-835d-9d10c03d64c9","Type":"ContainerDied","Data":"c0ba5ab97826a2e60d12f51ca2ff4f8befceba8db4820089f03d98993c03751e"} Jan 26 14:25:00 crc kubenswrapper[4881]: I0126 14:25:00.024738 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ztph7/crc-debug-z4d5q" Jan 26 14:25:00 crc kubenswrapper[4881]: I0126 14:25:00.064736 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-ztph7/crc-debug-z4d5q"] Jan 26 14:25:00 crc kubenswrapper[4881]: I0126 14:25:00.074470 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-ztph7/crc-debug-z4d5q"] Jan 26 14:25:00 crc kubenswrapper[4881]: I0126 14:25:00.104908 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3570cb38-d6c3-4eab-835d-9d10c03d64c9-host\") pod \"3570cb38-d6c3-4eab-835d-9d10c03d64c9\" (UID: \"3570cb38-d6c3-4eab-835d-9d10c03d64c9\") " Jan 26 14:25:00 crc kubenswrapper[4881]: I0126 14:25:00.104985 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsq8\" (UniqueName: \"kubernetes.io/projected/3570cb38-d6c3-4eab-835d-9d10c03d64c9-kube-api-access-fqsq8\") pod \"3570cb38-d6c3-4eab-835d-9d10c03d64c9\" (UID: \"3570cb38-d6c3-4eab-835d-9d10c03d64c9\") " Jan 26 14:25:00 crc kubenswrapper[4881]: I0126 14:25:00.105034 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3570cb38-d6c3-4eab-835d-9d10c03d64c9-host" (OuterVolumeSpecName: "host") pod "3570cb38-d6c3-4eab-835d-9d10c03d64c9" (UID: "3570cb38-d6c3-4eab-835d-9d10c03d64c9"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 14:25:00 crc kubenswrapper[4881]: I0126 14:25:00.105575 4881 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3570cb38-d6c3-4eab-835d-9d10c03d64c9-host\") on node \"crc\" DevicePath \"\"" Jan 26 14:25:00 crc kubenswrapper[4881]: I0126 14:25:00.110639 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3570cb38-d6c3-4eab-835d-9d10c03d64c9-kube-api-access-fqsq8" (OuterVolumeSpecName: "kube-api-access-fqsq8") pod "3570cb38-d6c3-4eab-835d-9d10c03d64c9" (UID: "3570cb38-d6c3-4eab-835d-9d10c03d64c9"). InnerVolumeSpecName "kube-api-access-fqsq8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:25:00 crc kubenswrapper[4881]: I0126 14:25:00.207773 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsq8\" (UniqueName: \"kubernetes.io/projected/3570cb38-d6c3-4eab-835d-9d10c03d64c9-kube-api-access-fqsq8\") on node \"crc\" DevicePath \"\"" Jan 26 14:25:00 crc kubenswrapper[4881]: I0126 14:25:00.933404 4881 scope.go:117] "RemoveContainer" containerID="c0ba5ab97826a2e60d12f51ca2ff4f8befceba8db4820089f03d98993c03751e" Jan 26 14:25:00 crc kubenswrapper[4881]: I0126 14:25:00.933456 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ztph7/crc-debug-z4d5q" Jan 26 14:25:01 crc kubenswrapper[4881]: I0126 14:25:01.259406 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-ztph7/crc-debug-jbwtl"] Jan 26 14:25:01 crc kubenswrapper[4881]: E0126 14:25:01.259899 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3570cb38-d6c3-4eab-835d-9d10c03d64c9" containerName="container-00" Jan 26 14:25:01 crc kubenswrapper[4881]: I0126 14:25:01.259913 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="3570cb38-d6c3-4eab-835d-9d10c03d64c9" containerName="container-00" Jan 26 14:25:01 crc kubenswrapper[4881]: I0126 14:25:01.260170 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="3570cb38-d6c3-4eab-835d-9d10c03d64c9" containerName="container-00" Jan 26 14:25:01 crc kubenswrapper[4881]: I0126 14:25:01.263994 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ztph7/crc-debug-jbwtl" Jan 26 14:25:01 crc kubenswrapper[4881]: I0126 14:25:01.269765 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-ztph7"/"default-dockercfg-ssjbp" Jan 26 14:25:01 crc kubenswrapper[4881]: I0126 14:25:01.328869 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7ac25d2b-87f1-4b38-b1b6-7b811e35a009-host\") pod \"crc-debug-jbwtl\" (UID: \"7ac25d2b-87f1-4b38-b1b6-7b811e35a009\") " pod="openshift-must-gather-ztph7/crc-debug-jbwtl" Jan 26 14:25:01 crc kubenswrapper[4881]: I0126 14:25:01.328964 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lnkb\" (UniqueName: \"kubernetes.io/projected/7ac25d2b-87f1-4b38-b1b6-7b811e35a009-kube-api-access-6lnkb\") pod \"crc-debug-jbwtl\" (UID: \"7ac25d2b-87f1-4b38-b1b6-7b811e35a009\") " pod="openshift-must-gather-ztph7/crc-debug-jbwtl" Jan 26 14:25:01 crc kubenswrapper[4881]: I0126 14:25:01.431198 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7ac25d2b-87f1-4b38-b1b6-7b811e35a009-host\") pod \"crc-debug-jbwtl\" (UID: \"7ac25d2b-87f1-4b38-b1b6-7b811e35a009\") " pod="openshift-must-gather-ztph7/crc-debug-jbwtl" Jan 26 14:25:01 crc kubenswrapper[4881]: I0126 14:25:01.431555 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6lnkb\" (UniqueName: \"kubernetes.io/projected/7ac25d2b-87f1-4b38-b1b6-7b811e35a009-kube-api-access-6lnkb\") pod \"crc-debug-jbwtl\" (UID: \"7ac25d2b-87f1-4b38-b1b6-7b811e35a009\") " pod="openshift-must-gather-ztph7/crc-debug-jbwtl" Jan 26 14:25:01 crc kubenswrapper[4881]: I0126 14:25:01.431363 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/7ac25d2b-87f1-4b38-b1b6-7b811e35a009-host\") pod \"crc-debug-jbwtl\" (UID: \"7ac25d2b-87f1-4b38-b1b6-7b811e35a009\") " pod="openshift-must-gather-ztph7/crc-debug-jbwtl" Jan 26 14:25:01 crc kubenswrapper[4881]: I0126 14:25:01.458418 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6lnkb\" (UniqueName: \"kubernetes.io/projected/7ac25d2b-87f1-4b38-b1b6-7b811e35a009-kube-api-access-6lnkb\") pod \"crc-debug-jbwtl\" (UID: \"7ac25d2b-87f1-4b38-b1b6-7b811e35a009\") " pod="openshift-must-gather-ztph7/crc-debug-jbwtl" Jan 26 14:25:01 crc kubenswrapper[4881]: I0126 14:25:01.594924 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ztph7/crc-debug-jbwtl" Jan 26 14:25:01 crc kubenswrapper[4881]: I0126 14:25:01.956733 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ztph7/crc-debug-jbwtl" event={"ID":"7ac25d2b-87f1-4b38-b1b6-7b811e35a009","Type":"ContainerStarted","Data":"aa44870ffcf89f96ceded700e9a58d59d40da44b4682c00111c21ca5d1d6516d"} Jan 26 14:25:01 crc kubenswrapper[4881]: I0126 14:25:01.957071 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ztph7/crc-debug-jbwtl" event={"ID":"7ac25d2b-87f1-4b38-b1b6-7b811e35a009","Type":"ContainerStarted","Data":"b29c6e542d6b9c6ee95f4aad5b44d263611b2353222d6b7eabd0b94cea38e7de"} Jan 26 14:25:01 crc kubenswrapper[4881]: I0126 14:25:01.969254 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-ztph7/crc-debug-jbwtl" podStartSLOduration=0.969239308 podStartE2EDuration="969.239308ms" podCreationTimestamp="2026-01-26 14:25:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:25:01.96642508 +0000 UTC m=+6574.445735126" watchObservedRunningTime="2026-01-26 14:25:01.969239308 +0000 UTC m=+6574.448549334" Jan 26 14:25:02 crc kubenswrapper[4881]: I0126 14:25:02.104699 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3570cb38-d6c3-4eab-835d-9d10c03d64c9" path="/var/lib/kubelet/pods/3570cb38-d6c3-4eab-835d-9d10c03d64c9/volumes" Jan 26 14:25:02 crc kubenswrapper[4881]: I0126 14:25:02.965561 4881 generic.go:334] "Generic (PLEG): container finished" podID="7ac25d2b-87f1-4b38-b1b6-7b811e35a009" containerID="aa44870ffcf89f96ceded700e9a58d59d40da44b4682c00111c21ca5d1d6516d" exitCode=0 Jan 26 14:25:02 crc kubenswrapper[4881]: I0126 14:25:02.965737 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ztph7/crc-debug-jbwtl" event={"ID":"7ac25d2b-87f1-4b38-b1b6-7b811e35a009","Type":"ContainerDied","Data":"aa44870ffcf89f96ceded700e9a58d59d40da44b4682c00111c21ca5d1d6516d"} Jan 26 14:25:04 crc kubenswrapper[4881]: I0126 14:25:04.078062 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-ztph7/crc-debug-jbwtl" Jan 26 14:25:04 crc kubenswrapper[4881]: I0126 14:25:04.178488 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6lnkb\" (UniqueName: \"kubernetes.io/projected/7ac25d2b-87f1-4b38-b1b6-7b811e35a009-kube-api-access-6lnkb\") pod \"7ac25d2b-87f1-4b38-b1b6-7b811e35a009\" (UID: \"7ac25d2b-87f1-4b38-b1b6-7b811e35a009\") " Jan 26 14:25:04 crc kubenswrapper[4881]: I0126 14:25:04.178747 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7ac25d2b-87f1-4b38-b1b6-7b811e35a009-host\") pod \"7ac25d2b-87f1-4b38-b1b6-7b811e35a009\" (UID: \"7ac25d2b-87f1-4b38-b1b6-7b811e35a009\") " Jan 26 14:25:04 crc kubenswrapper[4881]: I0126 14:25:04.180389 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7ac25d2b-87f1-4b38-b1b6-7b811e35a009-host" (OuterVolumeSpecName: "host") pod "7ac25d2b-87f1-4b38-b1b6-7b811e35a009" (UID: "7ac25d2b-87f1-4b38-b1b6-7b811e35a009"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 14:25:04 crc kubenswrapper[4881]: I0126 14:25:04.196111 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ac25d2b-87f1-4b38-b1b6-7b811e35a009-kube-api-access-6lnkb" (OuterVolumeSpecName: "kube-api-access-6lnkb") pod "7ac25d2b-87f1-4b38-b1b6-7b811e35a009" (UID: "7ac25d2b-87f1-4b38-b1b6-7b811e35a009"). InnerVolumeSpecName "kube-api-access-6lnkb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:25:04 crc kubenswrapper[4881]: I0126 14:25:04.281745 4881 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7ac25d2b-87f1-4b38-b1b6-7b811e35a009-host\") on node \"crc\" DevicePath \"\"" Jan 26 14:25:04 crc kubenswrapper[4881]: I0126 14:25:04.281784 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6lnkb\" (UniqueName: \"kubernetes.io/projected/7ac25d2b-87f1-4b38-b1b6-7b811e35a009-kube-api-access-6lnkb\") on node \"crc\" DevicePath \"\"" Jan 26 14:25:04 crc kubenswrapper[4881]: I0126 14:25:04.465158 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-ztph7/crc-debug-jbwtl"] Jan 26 14:25:04 crc kubenswrapper[4881]: I0126 14:25:04.472756 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-ztph7/crc-debug-jbwtl"] Jan 26 14:25:04 crc kubenswrapper[4881]: I0126 14:25:04.982281 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b29c6e542d6b9c6ee95f4aad5b44d263611b2353222d6b7eabd0b94cea38e7de" Jan 26 14:25:04 crc kubenswrapper[4881]: I0126 14:25:04.982354 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-ztph7/crc-debug-jbwtl" Jan 26 14:25:05 crc kubenswrapper[4881]: I0126 14:25:05.681489 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-ztph7/crc-debug-fck86"] Jan 26 14:25:05 crc kubenswrapper[4881]: E0126 14:25:05.682294 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ac25d2b-87f1-4b38-b1b6-7b811e35a009" containerName="container-00" Jan 26 14:25:05 crc kubenswrapper[4881]: I0126 14:25:05.682310 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ac25d2b-87f1-4b38-b1b6-7b811e35a009" containerName="container-00" Jan 26 14:25:05 crc kubenswrapper[4881]: I0126 14:25:05.682610 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ac25d2b-87f1-4b38-b1b6-7b811e35a009" containerName="container-00" Jan 26 14:25:05 crc kubenswrapper[4881]: I0126 14:25:05.683462 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ztph7/crc-debug-fck86" Jan 26 14:25:05 crc kubenswrapper[4881]: I0126 14:25:05.689191 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-ztph7"/"default-dockercfg-ssjbp" Jan 26 14:25:05 crc kubenswrapper[4881]: I0126 14:25:05.810480 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ae8bf9aa-80c0-4f11-b534-0aaa2e3ba4ef-host\") pod \"crc-debug-fck86\" (UID: \"ae8bf9aa-80c0-4f11-b534-0aaa2e3ba4ef\") " pod="openshift-must-gather-ztph7/crc-debug-fck86" Jan 26 14:25:05 crc kubenswrapper[4881]: I0126 14:25:05.810572 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7ltr\" (UniqueName: \"kubernetes.io/projected/ae8bf9aa-80c0-4f11-b534-0aaa2e3ba4ef-kube-api-access-p7ltr\") pod \"crc-debug-fck86\" (UID: \"ae8bf9aa-80c0-4f11-b534-0aaa2e3ba4ef\") " pod="openshift-must-gather-ztph7/crc-debug-fck86" Jan 26 14:25:05 crc kubenswrapper[4881]: I0126 14:25:05.912207 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ae8bf9aa-80c0-4f11-b534-0aaa2e3ba4ef-host\") pod \"crc-debug-fck86\" (UID: \"ae8bf9aa-80c0-4f11-b534-0aaa2e3ba4ef\") " pod="openshift-must-gather-ztph7/crc-debug-fck86" Jan 26 14:25:05 crc kubenswrapper[4881]: I0126 14:25:05.912271 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7ltr\" (UniqueName: \"kubernetes.io/projected/ae8bf9aa-80c0-4f11-b534-0aaa2e3ba4ef-kube-api-access-p7ltr\") pod \"crc-debug-fck86\" (UID: \"ae8bf9aa-80c0-4f11-b534-0aaa2e3ba4ef\") " pod="openshift-must-gather-ztph7/crc-debug-fck86" Jan 26 14:25:05 crc kubenswrapper[4881]: I0126 14:25:05.912331 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ae8bf9aa-80c0-4f11-b534-0aaa2e3ba4ef-host\") pod \"crc-debug-fck86\" (UID: \"ae8bf9aa-80c0-4f11-b534-0aaa2e3ba4ef\") " pod="openshift-must-gather-ztph7/crc-debug-fck86" Jan 26 14:25:05 crc kubenswrapper[4881]: I0126 14:25:05.929491 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7ltr\" (UniqueName: \"kubernetes.io/projected/ae8bf9aa-80c0-4f11-b534-0aaa2e3ba4ef-kube-api-access-p7ltr\") pod \"crc-debug-fck86\" (UID: \"ae8bf9aa-80c0-4f11-b534-0aaa2e3ba4ef\") " pod="openshift-must-gather-ztph7/crc-debug-fck86" Jan 26 14:25:06 crc kubenswrapper[4881]: I0126 
14:25:06.005308 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ztph7/crc-debug-fck86" Jan 26 14:25:06 crc kubenswrapper[4881]: I0126 14:25:06.097513 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ac25d2b-87f1-4b38-b1b6-7b811e35a009" path="/var/lib/kubelet/pods/7ac25d2b-87f1-4b38-b1b6-7b811e35a009/volumes" Jan 26 14:25:07 crc kubenswrapper[4881]: I0126 14:25:07.001966 4881 generic.go:334] "Generic (PLEG): container finished" podID="ae8bf9aa-80c0-4f11-b534-0aaa2e3ba4ef" containerID="bc09879e2a0afe4d4fcd071884939d340e0a315a7f71d86c2090ef91e4ee3650" exitCode=0 Jan 26 14:25:07 crc kubenswrapper[4881]: I0126 14:25:07.002054 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ztph7/crc-debug-fck86" event={"ID":"ae8bf9aa-80c0-4f11-b534-0aaa2e3ba4ef","Type":"ContainerDied","Data":"bc09879e2a0afe4d4fcd071884939d340e0a315a7f71d86c2090ef91e4ee3650"} Jan 26 14:25:07 crc kubenswrapper[4881]: I0126 14:25:07.002369 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ztph7/crc-debug-fck86" event={"ID":"ae8bf9aa-80c0-4f11-b534-0aaa2e3ba4ef","Type":"ContainerStarted","Data":"0c8e0bc9d2cd429f666507e9d1fbf74ec1b55d898fd4b4a6ea9ccd5a4bffa291"} Jan 26 14:25:07 crc kubenswrapper[4881]: I0126 14:25:07.056892 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-ztph7/crc-debug-fck86"] Jan 26 14:25:07 crc kubenswrapper[4881]: I0126 14:25:07.072656 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-ztph7/crc-debug-fck86"] Jan 26 14:25:08 crc kubenswrapper[4881]: I0126 14:25:08.127092 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ztph7/crc-debug-fck86" Jan 26 14:25:08 crc kubenswrapper[4881]: I0126 14:25:08.300015 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ae8bf9aa-80c0-4f11-b534-0aaa2e3ba4ef-host\") pod \"ae8bf9aa-80c0-4f11-b534-0aaa2e3ba4ef\" (UID: \"ae8bf9aa-80c0-4f11-b534-0aaa2e3ba4ef\") " Jan 26 14:25:08 crc kubenswrapper[4881]: I0126 14:25:08.300171 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ae8bf9aa-80c0-4f11-b534-0aaa2e3ba4ef-host" (OuterVolumeSpecName: "host") pod "ae8bf9aa-80c0-4f11-b534-0aaa2e3ba4ef" (UID: "ae8bf9aa-80c0-4f11-b534-0aaa2e3ba4ef"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 14:25:08 crc kubenswrapper[4881]: I0126 14:25:08.300680 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p7ltr\" (UniqueName: \"kubernetes.io/projected/ae8bf9aa-80c0-4f11-b534-0aaa2e3ba4ef-kube-api-access-p7ltr\") pod \"ae8bf9aa-80c0-4f11-b534-0aaa2e3ba4ef\" (UID: \"ae8bf9aa-80c0-4f11-b534-0aaa2e3ba4ef\") " Jan 26 14:25:08 crc kubenswrapper[4881]: I0126 14:25:08.301906 4881 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ae8bf9aa-80c0-4f11-b534-0aaa2e3ba4ef-host\") on node \"crc\" DevicePath \"\"" Jan 26 14:25:08 crc kubenswrapper[4881]: I0126 14:25:08.311780 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae8bf9aa-80c0-4f11-b534-0aaa2e3ba4ef-kube-api-access-p7ltr" (OuterVolumeSpecName: "kube-api-access-p7ltr") pod "ae8bf9aa-80c0-4f11-b534-0aaa2e3ba4ef" (UID: "ae8bf9aa-80c0-4f11-b534-0aaa2e3ba4ef"). 
InnerVolumeSpecName "kube-api-access-p7ltr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:25:08 crc kubenswrapper[4881]: I0126 14:25:08.404621 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p7ltr\" (UniqueName: \"kubernetes.io/projected/ae8bf9aa-80c0-4f11-b534-0aaa2e3ba4ef-kube-api-access-p7ltr\") on node \"crc\" DevicePath \"\"" Jan 26 14:25:09 crc kubenswrapper[4881]: I0126 14:25:09.021831 4881 scope.go:117] "RemoveContainer" containerID="bc09879e2a0afe4d4fcd071884939d340e0a315a7f71d86c2090ef91e4ee3650" Jan 26 14:25:09 crc kubenswrapper[4881]: I0126 14:25:09.021884 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ztph7/crc-debug-fck86" Jan 26 14:25:10 crc kubenswrapper[4881]: I0126 14:25:10.096063 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae8bf9aa-80c0-4f11-b534-0aaa2e3ba4ef" path="/var/lib/kubelet/pods/ae8bf9aa-80c0-4f11-b534-0aaa2e3ba4ef/volumes" Jan 26 14:25:54 crc kubenswrapper[4881]: I0126 14:25:54.410473 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5b8956874b-lrmsj_a3ad04d4-bcff-4ed6-8648-be146e3ce20a/barbican-api/0.log" Jan 26 14:25:54 crc kubenswrapper[4881]: I0126 14:25:54.537460 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5b8956874b-lrmsj_a3ad04d4-bcff-4ed6-8648-be146e3ce20a/barbican-api-log/0.log" Jan 26 14:25:54 crc kubenswrapper[4881]: I0126 14:25:54.643845 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-b5f886498-f6c5n_2c982f75-f27c-4915-a75d-07f2fc53cf19/barbican-keystone-listener/0.log" Jan 26 14:25:54 crc kubenswrapper[4881]: I0126 14:25:54.727266 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-b5f886498-f6c5n_2c982f75-f27c-4915-a75d-07f2fc53cf19/barbican-keystone-listener-log/0.log" Jan 26 14:25:54 crc kubenswrapper[4881]: I0126 14:25:54.838500 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-794958b545-pcbpb_275be453-0a36-451c-8a70-714b958c9625/barbican-worker/0.log" Jan 26 14:25:54 crc kubenswrapper[4881]: I0126 14:25:54.868340 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-794958b545-pcbpb_275be453-0a36-451c-8a70-714b958c9625/barbican-worker-log/0.log" Jan 26 14:25:55 crc kubenswrapper[4881]: I0126 14:25:55.026568 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-fxdff_d817df95-5b02-462d-86b6-289f9decf3d3/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 14:25:55 crc kubenswrapper[4881]: I0126 14:25:55.153148 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_a3c188b0-972c-46b1-bb59-edce2c6b4f54/ceilometer-central-agent/0.log" Jan 26 14:25:55 crc kubenswrapper[4881]: I0126 14:25:55.199049 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_a3c188b0-972c-46b1-bb59-edce2c6b4f54/ceilometer-notification-agent/0.log" Jan 26 14:25:55 crc kubenswrapper[4881]: I0126 14:25:55.249150 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_a3c188b0-972c-46b1-bb59-edce2c6b4f54/proxy-httpd/0.log" Jan 26 14:25:55 crc kubenswrapper[4881]: I0126 14:25:55.285307 4881 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ceilometer-0_a3c188b0-972c-46b1-bb59-edce2c6b4f54/sg-core/0.log" Jan 26 14:25:55 crc kubenswrapper[4881]: I0126 14:25:55.465391 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_c0c9df05-a8bd-4e6a-8d42-3aa4b5a38bb1/cinder-api-log/0.log" Jan 26 14:25:55 crc kubenswrapper[4881]: I0126 14:25:55.822208 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_4b9b969a-e1a3-4253-bffc-34fe8db0a2ff/probe/0.log" Jan 26 14:25:56 crc kubenswrapper[4881]: I0126 14:25:56.071672 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_c0c9df05-a8bd-4e6a-8d42-3aa4b5a38bb1/cinder-api/0.log" Jan 26 14:25:56 crc kubenswrapper[4881]: I0126 14:25:56.099626 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_4b9b969a-e1a3-4253-bffc-34fe8db0a2ff/cinder-backup/0.log" Jan 26 14:25:56 crc kubenswrapper[4881]: I0126 14:25:56.102781 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_5f3b1d8b-9062-4c02-90a4-f4a7e6c4c17c/probe/0.log" Jan 26 14:25:56 crc kubenswrapper[4881]: I0126 14:25:56.113172 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_5f3b1d8b-9062-4c02-90a4-f4a7e6c4c17c/cinder-scheduler/0.log" Jan 26 14:25:56 crc kubenswrapper[4881]: I0126 14:25:56.340041 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-nfs-0_7fe1aa7f-b1bd-4777-934a-76e8ba531b1b/probe/0.log" Jan 26 14:25:56 crc kubenswrapper[4881]: I0126 14:25:56.445873 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-nfs-0_7fe1aa7f-b1bd-4777-934a-76e8ba531b1b/cinder-volume/0.log" Jan 26 14:25:56 crc kubenswrapper[4881]: I0126 14:25:56.560084 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-nfs-2-0_77b81e80-c2b2-418e-b722-2f6ffa1b7103/probe/0.log" Jan 26 14:25:56 crc kubenswrapper[4881]: I0126 14:25:56.685622 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-7qlfp_d828b00c-a7ee-47c8-b98a-2529ccf16cc6/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 14:25:56 crc kubenswrapper[4881]: I0126 14:25:56.697633 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-nfs-2-0_77b81e80-c2b2-418e-b722-2f6ffa1b7103/cinder-volume/0.log" Jan 26 14:25:56 crc kubenswrapper[4881]: I0126 14:25:56.782101 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-t8v57_12cb0be5-3cea-4264-9b33-42194fe4991c/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 14:25:56 crc kubenswrapper[4881]: I0126 14:25:56.887093 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-66968b76ff-2mrpm_ca4b205e-5485-43e7-ab0c-b6cfae7c9a18/init/0.log" Jan 26 14:25:57 crc kubenswrapper[4881]: I0126 14:25:57.108377 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-66968b76ff-2mrpm_ca4b205e-5485-43e7-ab0c-b6cfae7c9a18/init/0.log" Jan 26 14:25:57 crc kubenswrapper[4881]: I0126 14:25:57.217992 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-brjhw_b4d6e825-d231-4128-bd5d-3db56fbef5ec/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 14:25:57 crc kubenswrapper[4881]: I0126 
14:25:57.257054 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-66968b76ff-2mrpm_ca4b205e-5485-43e7-ab0c-b6cfae7c9a18/dnsmasq-dns/0.log" Jan 26 14:25:57 crc kubenswrapper[4881]: I0126 14:25:57.408415 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_cbfc472b-0aa5-4053-88cb-6efd65de5e79/glance-log/0.log" Jan 26 14:25:57 crc kubenswrapper[4881]: I0126 14:25:57.424322 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_cbfc472b-0aa5-4053-88cb-6efd65de5e79/glance-httpd/0.log" Jan 26 14:25:57 crc kubenswrapper[4881]: I0126 14:25:57.600090 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_82a0349b-c23f-4b1d-991e-f738d5c1ecee/glance-httpd/0.log" Jan 26 14:25:57 crc kubenswrapper[4881]: I0126 14:25:57.618966 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_82a0349b-c23f-4b1d-991e-f738d5c1ecee/glance-log/0.log" Jan 26 14:25:57 crc kubenswrapper[4881]: I0126 14:25:57.749411 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-7bf7cc86f8-h94sx_92e11d67-ecbe-4005-849c-40a16f3d3faa/horizon/0.log" Jan 26 14:25:57 crc kubenswrapper[4881]: I0126 14:25:57.916123 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-42rrp_0f847b90-9682-4b21-8ccc-646996b89f4f/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 14:25:58 crc kubenswrapper[4881]: I0126 14:25:58.136174 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-lbcjj_ceaad5ff-3f46-431b-817c-669e0f038898/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 14:25:58 crc kubenswrapper[4881]: I0126 14:25:58.376038 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29490541-c6h96_2c2380fd-0233-4942-8e8a-433cc3b15925/keystone-cron/0.log" Jan 26 14:25:58 crc kubenswrapper[4881]: I0126 14:25:58.593434 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29490601-x4d85_ce705aca-2bb5-4314-aa70-a71cc77303d8/keystone-cron/0.log" Jan 26 14:25:58 crc kubenswrapper[4881]: I0126 14:25:58.647101 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_a8e06d36-6fd3-40af-8066-f1cbb8d46a16/kube-state-metrics/0.log" Jan 26 14:25:58 crc kubenswrapper[4881]: I0126 14:25:58.715989 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-7bf7cc86f8-h94sx_92e11d67-ecbe-4005-849c-40a16f3d3faa/horizon-log/0.log" Jan 26 14:25:58 crc kubenswrapper[4881]: I0126 14:25:58.816155 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-76cf66855-bgjld_bda630e6-c611-4029-9a8a-b347189d2fab/keystone-api/0.log" Jan 26 14:25:58 crc kubenswrapper[4881]: I0126 14:25:58.853369 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-9cbj8_2190fb1e-77a2-47d2-a0bb-2aaca7948653/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 14:25:59 crc kubenswrapper[4881]: I0126 14:25:59.349083 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-knqzl_41c88a4a-b833-417f-90c9-eb0edcf688ec/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 14:25:59 crc 
kubenswrapper[4881]: I0126 14:25:59.354786 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-5555bb9565-2bdtt_666ba5d0-7d46-4ae9-8265-9e6ac5bddbf9/neutron-httpd/0.log"
Jan 26 14:25:59 crc kubenswrapper[4881]: I0126 14:25:59.463393 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-5555bb9565-2bdtt_666ba5d0-7d46-4ae9-8265-9e6ac5bddbf9/neutron-api/0.log"
Jan 26 14:26:00 crc kubenswrapper[4881]: I0126 14:26:00.008974 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_52e24417-da70-449b-847d-3a0f1516ac9f/nova-cell0-conductor-conductor/0.log"
Jan 26 14:26:00 crc kubenswrapper[4881]: I0126 14:26:00.390131 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_c039ccd0-c1d4-438f-ba49-d44103884a26/nova-cell1-conductor-conductor/0.log"
Jan 26 14:26:00 crc kubenswrapper[4881]: I0126 14:26:00.762373 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_04587c7a-8d6d-4587-9ca5-52d8a9e57a38/nova-cell1-novncproxy-novncproxy/0.log"
Jan 26 14:26:00 crc kubenswrapper[4881]: I0126 14:26:00.855706 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-xpfht_7d45d6c2-3b8a-4ff9-9f43-f14011ee3f6f/nova-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 26 14:26:01 crc kubenswrapper[4881]: I0126 14:26:01.058153 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_ee44b824-a50b-4355-ab08-09d831323258/nova-api-log/0.log"
Jan 26 14:26:01 crc kubenswrapper[4881]: I0126 14:26:01.151052 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_9aa3762a-f6f1-4f99-aa95-c22f2aaf51ae/nova-metadata-log/0.log"
Jan 26 14:26:01 crc kubenswrapper[4881]: I0126 14:26:01.756026 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_c8b6753b-d929-47d6-84ec-b72094efad83/mysql-bootstrap/0.log"
Jan 26 14:26:01 crc kubenswrapper[4881]: I0126 14:26:01.815685 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_01bf74e7-115e-4392-93c4-f6c5c578c5dc/nova-scheduler-scheduler/0.log"
Jan 26 14:26:01 crc kubenswrapper[4881]: I0126 14:26:01.863949 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_ee44b824-a50b-4355-ab08-09d831323258/nova-api-api/0.log"
Jan 26 14:26:01 crc kubenswrapper[4881]: I0126 14:26:01.980759 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_c8b6753b-d929-47d6-84ec-b72094efad83/mysql-bootstrap/0.log"
Jan 26 14:26:02 crc kubenswrapper[4881]: I0126 14:26:02.037961 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_c8b6753b-d929-47d6-84ec-b72094efad83/galera/0.log"
Jan 26 14:26:02 crc kubenswrapper[4881]: I0126 14:26:02.181082 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_32ed51d8-b401-412f-925e-0cff27777e55/mysql-bootstrap/0.log"
Jan 26 14:26:02 crc kubenswrapper[4881]: I0126 14:26:02.346928 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_32ed51d8-b401-412f-925e-0cff27777e55/mysql-bootstrap/0.log"
Jan 26 14:26:02 crc kubenswrapper[4881]: I0126 14:26:02.391359 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_32ed51d8-b401-412f-925e-0cff27777e55/galera/0.log"
Jan 26 14:26:02 crc kubenswrapper[4881]: I0126 14:26:02.542830 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_6bb7934d-1b01-469b-9b72-c601eebbbf98/openstackclient/0.log"
Jan 26 14:26:02 crc kubenswrapper[4881]: I0126 14:26:02.619909 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-jh6zr_bbc286b3-4266-462b-b661-d072e9843683/openstack-network-exporter/0.log"
Jan 26 14:26:02 crc kubenswrapper[4881]: I0126 14:26:02.816654 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-rrqqp_31e8c456-b53d-456e-a8d1-69f26e0602ad/ovsdb-server-init/0.log"
Jan 26 14:26:02 crc kubenswrapper[4881]: I0126 14:26:02.989342 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-rrqqp_31e8c456-b53d-456e-a8d1-69f26e0602ad/ovsdb-server-init/0.log"
Jan 26 14:26:03 crc kubenswrapper[4881]: I0126 14:26:03.061258 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-rrqqp_31e8c456-b53d-456e-a8d1-69f26e0602ad/ovsdb-server/0.log"
Jan 26 14:26:03 crc kubenswrapper[4881]: I0126 14:26:03.312637 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-vs5xn_e52cbcc1-521d-4a7d-98a6-50ab70a2f82f/ovn-controller/0.log"
Jan 26 14:26:03 crc kubenswrapper[4881]: I0126 14:26:03.402121 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-rrqqp_31e8c456-b53d-456e-a8d1-69f26e0602ad/ovs-vswitchd/0.log"
Jan 26 14:26:03 crc kubenswrapper[4881]: I0126 14:26:03.611429 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_a10988d9-411e-42a7-82bf-8ed88569d801/openstack-network-exporter/0.log"
Jan 26 14:26:03 crc kubenswrapper[4881]: I0126 14:26:03.612531 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-ltf5b_b36ca725-d6f7-4551-84f6-e912cdc75a5f/ovn-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 26 14:26:03 crc kubenswrapper[4881]: I0126 14:26:03.832732 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_a10988d9-411e-42a7-82bf-8ed88569d801/ovn-northd/0.log"
Jan 26 14:26:03 crc kubenswrapper[4881]: I0126 14:26:03.897057 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_9aa3762a-f6f1-4f99-aa95-c22f2aaf51ae/nova-metadata-metadata/0.log"
Jan 26 14:26:03 crc kubenswrapper[4881]: I0126 14:26:03.981930 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_406b36b2-d29f-4224-8bc5-9cfd6f057a48/openstack-network-exporter/0.log"
Jan 26 14:26:04 crc kubenswrapper[4881]: I0126 14:26:04.073607 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_406b36b2-d29f-4224-8bc5-9cfd6f057a48/ovsdbserver-nb/0.log"
Jan 26 14:26:04 crc kubenswrapper[4881]: I0126 14:26:04.360862 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_a7a612cf-2de0-4fc3-bd7e-9ddf16da6ca7/openstack-network-exporter/0.log"
Jan 26 14:26:04 crc kubenswrapper[4881]: I0126 14:26:04.407233 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-f2nng"]
Jan 26 14:26:04 crc kubenswrapper[4881]: E0126 14:26:04.407795 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae8bf9aa-80c0-4f11-b534-0aaa2e3ba4ef" containerName="container-00"
Jan 26 14:26:04 crc kubenswrapper[4881]: I0126 14:26:04.407818 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae8bf9aa-80c0-4f11-b534-0aaa2e3ba4ef" containerName="container-00"
Jan 26 14:26:04 crc kubenswrapper[4881]: I0126 14:26:04.408055 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae8bf9aa-80c0-4f11-b534-0aaa2e3ba4ef" containerName="container-00"
Jan 26 14:26:04 crc kubenswrapper[4881]: I0126 14:26:04.419502 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f2nng"
Jan 26 14:26:04 crc kubenswrapper[4881]: I0126 14:26:04.423991 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-f2nng"]
Jan 26 14:26:04 crc kubenswrapper[4881]: I0126 14:26:04.490908 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_a7a612cf-2de0-4fc3-bd7e-9ddf16da6ca7/ovsdbserver-sb/0.log"
Jan 26 14:26:04 crc kubenswrapper[4881]: I0126 14:26:04.531393 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46613594-437b-4dd5-a2ea-4e7af46e8e06-utilities\") pod \"redhat-operators-f2nng\" (UID: \"46613594-437b-4dd5-a2ea-4e7af46e8e06\") " pod="openshift-marketplace/redhat-operators-f2nng"
Jan 26 14:26:04 crc kubenswrapper[4881]: I0126 14:26:04.531490 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46613594-437b-4dd5-a2ea-4e7af46e8e06-catalog-content\") pod \"redhat-operators-f2nng\" (UID: \"46613594-437b-4dd5-a2ea-4e7af46e8e06\") " pod="openshift-marketplace/redhat-operators-f2nng"
Jan 26 14:26:04 crc kubenswrapper[4881]: I0126 14:26:04.531557 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrn9c\" (UniqueName: \"kubernetes.io/projected/46613594-437b-4dd5-a2ea-4e7af46e8e06-kube-api-access-rrn9c\") pod \"redhat-operators-f2nng\" (UID: \"46613594-437b-4dd5-a2ea-4e7af46e8e06\") " pod="openshift-marketplace/redhat-operators-f2nng"
Jan 26 14:26:04 crc kubenswrapper[4881]: I0126 14:26:04.632758 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46613594-437b-4dd5-a2ea-4e7af46e8e06-utilities\") pod \"redhat-operators-f2nng\" (UID: \"46613594-437b-4dd5-a2ea-4e7af46e8e06\") " pod="openshift-marketplace/redhat-operators-f2nng"
Jan 26 14:26:04 crc kubenswrapper[4881]: I0126 14:26:04.632868 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46613594-437b-4dd5-a2ea-4e7af46e8e06-catalog-content\") pod \"redhat-operators-f2nng\" (UID: \"46613594-437b-4dd5-a2ea-4e7af46e8e06\") " pod="openshift-marketplace/redhat-operators-f2nng"
Jan 26 14:26:04 crc kubenswrapper[4881]: I0126 14:26:04.632917 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrn9c\" (UniqueName: \"kubernetes.io/projected/46613594-437b-4dd5-a2ea-4e7af46e8e06-kube-api-access-rrn9c\") pod \"redhat-operators-f2nng\" (UID: \"46613594-437b-4dd5-a2ea-4e7af46e8e06\") " pod="openshift-marketplace/redhat-operators-f2nng"
Jan 26 14:26:04 crc kubenswrapper[4881]: I0126 14:26:04.633587 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46613594-437b-4dd5-a2ea-4e7af46e8e06-utilities\") pod \"redhat-operators-f2nng\" (UID: \"46613594-437b-4dd5-a2ea-4e7af46e8e06\") " pod="openshift-marketplace/redhat-operators-f2nng"
Jan 26 14:26:04 crc kubenswrapper[4881]: I0126 14:26:04.633789 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46613594-437b-4dd5-a2ea-4e7af46e8e06-catalog-content\") pod \"redhat-operators-f2nng\" (UID: \"46613594-437b-4dd5-a2ea-4e7af46e8e06\") " pod="openshift-marketplace/redhat-operators-f2nng"
Jan 26 14:26:04 crc kubenswrapper[4881]: I0126 14:26:04.656208 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrn9c\" (UniqueName: \"kubernetes.io/projected/46613594-437b-4dd5-a2ea-4e7af46e8e06-kube-api-access-rrn9c\") pod \"redhat-operators-f2nng\" (UID: \"46613594-437b-4dd5-a2ea-4e7af46e8e06\") " pod="openshift-marketplace/redhat-operators-f2nng"
Jan 26 14:26:04 crc kubenswrapper[4881]: I0126 14:26:04.752575 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f2nng"
Jan 26 14:26:04 crc kubenswrapper[4881]: I0126 14:26:04.970005 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_adce7384-2dc6-4e86-af0f-fb3b38627515/init-config-reloader/0.log"
Jan 26 14:26:05 crc kubenswrapper[4881]: I0126 14:26:05.038329 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-54766b76bb-mkjc2_34699df9-2dd8-4eee-9f19-e5af28cfa84d/placement-api/0.log"
Jan 26 14:26:05 crc kubenswrapper[4881]: I0126 14:26:05.096102 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-54766b76bb-mkjc2_34699df9-2dd8-4eee-9f19-e5af28cfa84d/placement-log/0.log"
Jan 26 14:26:05 crc kubenswrapper[4881]: I0126 14:26:05.232853 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_adce7384-2dc6-4e86-af0f-fb3b38627515/config-reloader/0.log"
Jan 26 14:26:05 crc kubenswrapper[4881]: I0126 14:26:05.255729 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_adce7384-2dc6-4e86-af0f-fb3b38627515/init-config-reloader/0.log"
Jan 26 14:26:05 crc kubenswrapper[4881]: I0126 14:26:05.340590 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-f2nng"]
Jan 26 14:26:05 crc kubenswrapper[4881]: I0126 14:26:05.372414 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_adce7384-2dc6-4e86-af0f-fb3b38627515/prometheus/0.log"
Jan 26 14:26:05 crc kubenswrapper[4881]: I0126 14:26:05.440621 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_adce7384-2dc6-4e86-af0f-fb3b38627515/thanos-sidecar/0.log"
Jan 26 14:26:05 crc kubenswrapper[4881]: I0126 14:26:05.578741 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_a2a9efa3-8ac2-40ec-a543-b3a2013e8b39/setup-container/0.log"
Jan 26 14:26:05 crc kubenswrapper[4881]: I0126 14:26:05.600291 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f2nng" event={"ID":"46613594-437b-4dd5-a2ea-4e7af46e8e06","Type":"ContainerStarted","Data":"c46b4a6c63057453b74105621f6ea84855ed03cb0aac0fc689754275d83a59fb"}
Jan 26 14:26:05 crc kubenswrapper[4881]: I0126 14:26:05.899853 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_a2a9efa3-8ac2-40ec-a543-b3a2013e8b39/setup-container/0.log"
Jan 26 14:26:05 crc kubenswrapper[4881]: I0126 14:26:05.924181 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-notifications-server-0_ab9a358b-8713-4790-a9c4-97b89efcc88f/setup-container/0.log"
Jan 26 14:26:05 crc kubenswrapper[4881]: I0126 14:26:05.924284 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_a2a9efa3-8ac2-40ec-a543-b3a2013e8b39/rabbitmq/0.log"
Jan 26 14:26:06 crc kubenswrapper[4881]: I0126 14:26:06.291695 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-notifications-server-0_ab9a358b-8713-4790-a9c4-97b89efcc88f/setup-container/0.log"
Jan 26 14:26:06 crc kubenswrapper[4881]: I0126 14:26:06.433432 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-notifications-server-0_ab9a358b-8713-4790-a9c4-97b89efcc88f/rabbitmq/0.log"
Jan 26 14:26:06 crc kubenswrapper[4881]: I0126 14:26:06.461435 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_a258482a-e394-4833-9bef-1fc3abc0c6a7/setup-container/0.log"
Jan 26 14:26:06 crc kubenswrapper[4881]: I0126 14:26:06.610752 4881 generic.go:334] "Generic (PLEG): container finished" podID="46613594-437b-4dd5-a2ea-4e7af46e8e06" containerID="dc568e831bbaedcf234c89b1265875ebb4c862640e1520580f478f4d2d5996da" exitCode=0
Jan 26 14:26:06 crc kubenswrapper[4881]: I0126 14:26:06.610798 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f2nng" event={"ID":"46613594-437b-4dd5-a2ea-4e7af46e8e06","Type":"ContainerDied","Data":"dc568e831bbaedcf234c89b1265875ebb4c862640e1520580f478f4d2d5996da"}
Jan 26 14:26:06 crc kubenswrapper[4881]: I0126 14:26:06.613959 4881 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 26 14:26:06 crc kubenswrapper[4881]: I0126 14:26:06.700411 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_a258482a-e394-4833-9bef-1fc3abc0c6a7/setup-container/0.log"
Jan 26 14:26:06 crc kubenswrapper[4881]: I0126 14:26:06.724617 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-7xzvw_760f5f9c-04ad-4788-886d-8f301f2f487b/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 26 14:26:06 crc kubenswrapper[4881]: I0126 14:26:06.728908 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_a258482a-e394-4833-9bef-1fc3abc0c6a7/rabbitmq/0.log"
Jan 26 14:26:06 crc kubenswrapper[4881]: I0126 14:26:06.951019 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-gjsdk_07589eac-a07d-4781-a341-cbe2e35872a4/redhat-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 26 14:26:07 crc kubenswrapper[4881]: I0126 14:26:07.310078 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-l8p22_3aaf2506-3e1b-4edd-af45-c98419359e59/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 26 14:26:07 crc kubenswrapper[4881]: I0126 14:26:07.386671 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-54xff_81f1ca9f-e8b5-477d-b628-8a60848a7fe2/run-os-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 26 14:26:07 crc kubenswrapper[4881]: I0126 14:26:07.622560 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f2nng" event={"ID":"46613594-437b-4dd5-a2ea-4e7af46e8e06","Type":"ContainerStarted","Data":"3d43a7f38d691473ae65b7504cfc80c6375fb44e25725af3c81afc3f2c0f4c76"}
Jan 26 14:26:07 crc kubenswrapper[4881]: I0126 14:26:07.629271 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-n5zvt_39dc7ed5-dafd-4ef4-94a7-509fc1568f5a/ssh-known-hosts-edpm-deployment/0.log"
Jan 26 14:26:07 crc kubenswrapper[4881]: I0126 14:26:07.835543 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-55f986558f-qqwqs_4b3ea251-a4e4-4e4d-a21f-a239f80690e1/proxy-server/0.log"
Jan 26 14:26:07 crc kubenswrapper[4881]: I0126 14:26:07.903551 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-dmkh9_14ccca3b-65a8-4df1-9905-b21bfb24e5be/swift-ring-rebalance/0.log"
Jan 26 14:26:07 crc kubenswrapper[4881]: I0126 14:26:07.954198 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-55f986558f-qqwqs_4b3ea251-a4e4-4e4d-a21f-a239f80690e1/proxy-httpd/0.log"
Jan 26 14:26:08 crc kubenswrapper[4881]: I0126 14:26:08.173245 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e234f178-5499-441d-923d-26a5a7cbfe04/account-reaper/0.log"
Jan 26 14:26:08 crc kubenswrapper[4881]: I0126 14:26:08.196350 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e234f178-5499-441d-923d-26a5a7cbfe04/account-auditor/0.log"
Jan 26 14:26:08 crc kubenswrapper[4881]: I0126 14:26:08.260555 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e234f178-5499-441d-923d-26a5a7cbfe04/account-replicator/0.log"
Jan 26 14:26:08 crc kubenswrapper[4881]: I0126 14:26:08.335202 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e234f178-5499-441d-923d-26a5a7cbfe04/account-server/0.log"
Jan 26 14:26:08 crc kubenswrapper[4881]: I0126 14:26:08.547769 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e234f178-5499-441d-923d-26a5a7cbfe04/container-server/0.log"
Jan 26 14:26:08 crc kubenswrapper[4881]: I0126 14:26:08.553670 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e234f178-5499-441d-923d-26a5a7cbfe04/container-auditor/0.log"
Jan 26 14:26:08 crc kubenswrapper[4881]: I0126 14:26:08.581657 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e234f178-5499-441d-923d-26a5a7cbfe04/container-replicator/0.log"
Jan 26 14:26:08 crc kubenswrapper[4881]: I0126 14:26:08.642410 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e234f178-5499-441d-923d-26a5a7cbfe04/container-updater/0.log"
Jan 26 14:26:08 crc kubenswrapper[4881]: I0126 14:26:08.772257 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e234f178-5499-441d-923d-26a5a7cbfe04/object-auditor/0.log"
Jan 26 14:26:08 crc kubenswrapper[4881]: I0126 14:26:08.828622 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e234f178-5499-441d-923d-26a5a7cbfe04/object-expirer/0.log"
Jan 26 14:26:08 crc kubenswrapper[4881]: I0126 14:26:08.900021 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e234f178-5499-441d-923d-26a5a7cbfe04/object-replicator/0.log"
Jan 26 14:26:08 crc kubenswrapper[4881]: I0126 14:26:08.911790 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e234f178-5499-441d-923d-26a5a7cbfe04/object-server/0.log"
Jan 26 14:26:09 crc kubenswrapper[4881]: I0126 14:26:09.116876 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e234f178-5499-441d-923d-26a5a7cbfe04/object-updater/0.log"
Jan 26 14:26:09 crc kubenswrapper[4881]: I0126 14:26:09.152749 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e234f178-5499-441d-923d-26a5a7cbfe04/swift-recon-cron/0.log"
Jan 26 14:26:09 crc kubenswrapper[4881]: I0126 14:26:09.179946 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e234f178-5499-441d-923d-26a5a7cbfe04/rsync/0.log"
Jan 26 14:26:09 crc kubenswrapper[4881]: I0126 14:26:09.436008 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_fb8ddd97-c952-48e2-b3df-f594646b4377/tempest-tests-tempest-tests-runner/0.log"
Jan 26 14:26:09 crc kubenswrapper[4881]: I0126 14:26:09.448337 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-lfk9w_1062b2c8-e4fb-4999-aecf-a04dd5157826/telemetry-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 26 14:26:09 crc kubenswrapper[4881]: I0126 14:26:09.674531 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_21874105-2abf-4ab1-98a6-709151462d2a/test-operator-logs-container/0.log"
Jan 26 14:26:09 crc kubenswrapper[4881]: I0126 14:26:09.694419 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-m2nvh_f8d3d257-2cd9-42b9-aeb9-462b635c53dc/validate-network-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 26 14:26:10 crc kubenswrapper[4881]: I0126 14:26:10.745363 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-applier-0_1b363415-cc56-4505-91f6-f9700b378625/watcher-applier/0.log"
Jan 26 14:26:11 crc kubenswrapper[4881]: I0126 14:26:11.372852 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-api-0_bacf9a45-b73a-41bd-9c12-eb112ddcfaf2/watcher-api-log/0.log"
Jan 26 14:26:12 crc kubenswrapper[4881]: I0126 14:26:12.665151 4881 generic.go:334] "Generic (PLEG): container finished" podID="46613594-437b-4dd5-a2ea-4e7af46e8e06" containerID="3d43a7f38d691473ae65b7504cfc80c6375fb44e25725af3c81afc3f2c0f4c76" exitCode=0
Jan 26 14:26:12 crc kubenswrapper[4881]: I0126 14:26:12.665224 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f2nng" event={"ID":"46613594-437b-4dd5-a2ea-4e7af46e8e06","Type":"ContainerDied","Data":"3d43a7f38d691473ae65b7504cfc80c6375fb44e25725af3c81afc3f2c0f4c76"}
Jan 26 14:26:13 crc kubenswrapper[4881]: I0126 14:26:13.686305 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f2nng" event={"ID":"46613594-437b-4dd5-a2ea-4e7af46e8e06","Type":"ContainerStarted","Data":"28b6b43e792c0f67848d76b9085cf8068ab4208cb40346bd4ab826499cae7cef"}
Jan 26 14:26:13 crc kubenswrapper[4881]: I0126 14:26:13.721288 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-f2nng" podStartSLOduration=3.016718602 podStartE2EDuration="9.721266349s" podCreationTimestamp="2026-01-26 14:26:04 +0000 UTC" firstStartedPulling="2026-01-26 14:26:06.613684101 +0000 UTC m=+6639.092994127" lastFinishedPulling="2026-01-26 14:26:13.318231848 +0000 UTC m=+6645.797541874" observedRunningTime="2026-01-26 14:26:13.709440922 +0000 UTC m=+6646.188750968" watchObservedRunningTime="2026-01-26 14:26:13.721266349 +0000 UTC m=+6646.200576375"
Jan 26 14:26:14 crc kubenswrapper[4881]: I0126 14:26:14.137700 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_a6421b4d-2505-47c4-899d-7f7bd2113cf8/memcached/0.log"
Jan 26 14:26:14 crc kubenswrapper[4881]: I0126 14:26:14.468213 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-decision-engine-0_e321419e-1316-442e-b8f1-2f4a2451203f/watcher-decision-engine/0.log"
Jan 26 14:26:14 crc kubenswrapper[4881]: I0126 14:26:14.753542 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-f2nng"
Jan 26 14:26:14 crc kubenswrapper[4881]: I0126 14:26:14.753629 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-f2nng"
Jan 26 14:26:15 crc kubenswrapper[4881]: I0126 14:26:15.533353 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-api-0_bacf9a45-b73a-41bd-9c12-eb112ddcfaf2/watcher-api/0.log"
Jan 26 14:26:15 crc kubenswrapper[4881]: I0126 14:26:15.820342 4881 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-f2nng" podUID="46613594-437b-4dd5-a2ea-4e7af46e8e06" containerName="registry-server" probeResult="failure" output=<
Jan 26 14:26:15 crc kubenswrapper[4881]: timeout: failed to connect service ":50051" within 1s
Jan 26 14:26:15 crc kubenswrapper[4881]: >
Jan 26 14:26:24 crc kubenswrapper[4881]: I0126 14:26:24.803845 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-f2nng"
Jan 26 14:26:24 crc kubenswrapper[4881]: I0126 14:26:24.857436 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-f2nng"
Jan 26 14:26:25 crc kubenswrapper[4881]: I0126 14:26:25.042436 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-f2nng"]
Jan 26 14:26:26 crc kubenswrapper[4881]: I0126 14:26:26.823699 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-f2nng" podUID="46613594-437b-4dd5-a2ea-4e7af46e8e06" containerName="registry-server" containerID="cri-o://28b6b43e792c0f67848d76b9085cf8068ab4208cb40346bd4ab826499cae7cef" gracePeriod=2
Jan 26 14:26:27 crc kubenswrapper[4881]: I0126 14:26:27.318576 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f2nng"
Jan 26 14:26:27 crc kubenswrapper[4881]: I0126 14:26:27.462464 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rrn9c\" (UniqueName: \"kubernetes.io/projected/46613594-437b-4dd5-a2ea-4e7af46e8e06-kube-api-access-rrn9c\") pod \"46613594-437b-4dd5-a2ea-4e7af46e8e06\" (UID: \"46613594-437b-4dd5-a2ea-4e7af46e8e06\") "
Jan 26 14:26:27 crc kubenswrapper[4881]: I0126 14:26:27.462763 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46613594-437b-4dd5-a2ea-4e7af46e8e06-utilities\") pod \"46613594-437b-4dd5-a2ea-4e7af46e8e06\" (UID: \"46613594-437b-4dd5-a2ea-4e7af46e8e06\") "
Jan 26 14:26:27 crc kubenswrapper[4881]: I0126 14:26:27.462803 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46613594-437b-4dd5-a2ea-4e7af46e8e06-catalog-content\") pod \"46613594-437b-4dd5-a2ea-4e7af46e8e06\" (UID: \"46613594-437b-4dd5-a2ea-4e7af46e8e06\") "
Jan 26 14:26:27 crc kubenswrapper[4881]: I0126 14:26:27.463386 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/46613594-437b-4dd5-a2ea-4e7af46e8e06-utilities" (OuterVolumeSpecName: "utilities") pod "46613594-437b-4dd5-a2ea-4e7af46e8e06" (UID: "46613594-437b-4dd5-a2ea-4e7af46e8e06"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 14:26:27 crc kubenswrapper[4881]: I0126 14:26:27.463556 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46613594-437b-4dd5-a2ea-4e7af46e8e06-utilities\") on node \"crc\" DevicePath \"\""
Jan 26 14:26:27 crc kubenswrapper[4881]: I0126 14:26:27.471924 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46613594-437b-4dd5-a2ea-4e7af46e8e06-kube-api-access-rrn9c" (OuterVolumeSpecName: "kube-api-access-rrn9c") pod "46613594-437b-4dd5-a2ea-4e7af46e8e06" (UID: "46613594-437b-4dd5-a2ea-4e7af46e8e06"). InnerVolumeSpecName "kube-api-access-rrn9c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 14:26:27 crc kubenswrapper[4881]: I0126 14:26:27.564910 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rrn9c\" (UniqueName: \"kubernetes.io/projected/46613594-437b-4dd5-a2ea-4e7af46e8e06-kube-api-access-rrn9c\") on node \"crc\" DevicePath \"\""
Jan 26 14:26:27 crc kubenswrapper[4881]: I0126 14:26:27.572212 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/46613594-437b-4dd5-a2ea-4e7af46e8e06-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "46613594-437b-4dd5-a2ea-4e7af46e8e06" (UID: "46613594-437b-4dd5-a2ea-4e7af46e8e06"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 14:26:27 crc kubenswrapper[4881]: I0126 14:26:27.666639 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46613594-437b-4dd5-a2ea-4e7af46e8e06-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 26 14:26:27 crc kubenswrapper[4881]: I0126 14:26:27.833835 4881 generic.go:334] "Generic (PLEG): container finished" podID="46613594-437b-4dd5-a2ea-4e7af46e8e06" containerID="28b6b43e792c0f67848d76b9085cf8068ab4208cb40346bd4ab826499cae7cef" exitCode=0
Jan 26 14:26:27 crc kubenswrapper[4881]: I0126 14:26:27.833896 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f2nng" event={"ID":"46613594-437b-4dd5-a2ea-4e7af46e8e06","Type":"ContainerDied","Data":"28b6b43e792c0f67848d76b9085cf8068ab4208cb40346bd4ab826499cae7cef"}
Jan 26 14:26:27 crc kubenswrapper[4881]: I0126 14:26:27.834205 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f2nng" event={"ID":"46613594-437b-4dd5-a2ea-4e7af46e8e06","Type":"ContainerDied","Data":"c46b4a6c63057453b74105621f6ea84855ed03cb0aac0fc689754275d83a59fb"}
Jan 26 14:26:27 crc kubenswrapper[4881]: I0126 14:26:27.834237 4881 scope.go:117] "RemoveContainer" containerID="28b6b43e792c0f67848d76b9085cf8068ab4208cb40346bd4ab826499cae7cef"
Jan 26 14:26:27 crc kubenswrapper[4881]: I0126 14:26:27.833911 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f2nng"
Jan 26 14:26:27 crc kubenswrapper[4881]: I0126 14:26:27.853371 4881 scope.go:117] "RemoveContainer" containerID="3d43a7f38d691473ae65b7504cfc80c6375fb44e25725af3c81afc3f2c0f4c76"
Jan 26 14:26:27 crc kubenswrapper[4881]: I0126 14:26:27.869250 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-f2nng"]
Jan 26 14:26:27 crc kubenswrapper[4881]: I0126 14:26:27.895479 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-f2nng"]
Jan 26 14:26:27 crc kubenswrapper[4881]: I0126 14:26:27.907125 4881 scope.go:117] "RemoveContainer" containerID="dc568e831bbaedcf234c89b1265875ebb4c862640e1520580f478f4d2d5996da"
Jan 26 14:26:27 crc kubenswrapper[4881]: I0126 14:26:27.955397 4881 scope.go:117] "RemoveContainer" containerID="28b6b43e792c0f67848d76b9085cf8068ab4208cb40346bd4ab826499cae7cef"
Jan 26 14:26:27 crc kubenswrapper[4881]: E0126 14:26:27.963021 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"28b6b43e792c0f67848d76b9085cf8068ab4208cb40346bd4ab826499cae7cef\": container with ID starting with 28b6b43e792c0f67848d76b9085cf8068ab4208cb40346bd4ab826499cae7cef not found: ID does not exist" containerID="28b6b43e792c0f67848d76b9085cf8068ab4208cb40346bd4ab826499cae7cef"
Jan 26 14:26:27 crc kubenswrapper[4881]: I0126 14:26:27.963065 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28b6b43e792c0f67848d76b9085cf8068ab4208cb40346bd4ab826499cae7cef"} err="failed to get container status \"28b6b43e792c0f67848d76b9085cf8068ab4208cb40346bd4ab826499cae7cef\": rpc error: code = NotFound desc = could not find container \"28b6b43e792c0f67848d76b9085cf8068ab4208cb40346bd4ab826499cae7cef\": container with ID starting with 28b6b43e792c0f67848d76b9085cf8068ab4208cb40346bd4ab826499cae7cef not found: ID does not exist"
Jan 26 14:26:27 crc kubenswrapper[4881]: I0126 14:26:27.963089 4881 scope.go:117] "RemoveContainer" containerID="3d43a7f38d691473ae65b7504cfc80c6375fb44e25725af3c81afc3f2c0f4c76"
Jan 26 14:26:27 crc kubenswrapper[4881]: E0126 14:26:27.963496 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d43a7f38d691473ae65b7504cfc80c6375fb44e25725af3c81afc3f2c0f4c76\": container with ID starting with 3d43a7f38d691473ae65b7504cfc80c6375fb44e25725af3c81afc3f2c0f4c76 not found: ID does not exist" containerID="3d43a7f38d691473ae65b7504cfc80c6375fb44e25725af3c81afc3f2c0f4c76"
Jan 26 14:26:27 crc kubenswrapper[4881]: I0126 14:26:27.963536 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d43a7f38d691473ae65b7504cfc80c6375fb44e25725af3c81afc3f2c0f4c76"} err="failed to get container status \"3d43a7f38d691473ae65b7504cfc80c6375fb44e25725af3c81afc3f2c0f4c76\": rpc error: code = NotFound desc = could not find container \"3d43a7f38d691473ae65b7504cfc80c6375fb44e25725af3c81afc3f2c0f4c76\": container with ID starting with 3d43a7f38d691473ae65b7504cfc80c6375fb44e25725af3c81afc3f2c0f4c76 not found: ID does not exist"
Jan 26 14:26:27 crc kubenswrapper[4881]: I0126 14:26:27.963551 4881 scope.go:117] "RemoveContainer" containerID="dc568e831bbaedcf234c89b1265875ebb4c862640e1520580f478f4d2d5996da"
Jan 26 14:26:27 crc kubenswrapper[4881]: E0126 14:26:27.963744 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc568e831bbaedcf234c89b1265875ebb4c862640e1520580f478f4d2d5996da\": container with ID starting with dc568e831bbaedcf234c89b1265875ebb4c862640e1520580f478f4d2d5996da not found: ID does not exist" containerID="dc568e831bbaedcf234c89b1265875ebb4c862640e1520580f478f4d2d5996da"
Jan 26 14:26:27 crc kubenswrapper[4881]: I0126 14:26:27.963770 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc568e831bbaedcf234c89b1265875ebb4c862640e1520580f478f4d2d5996da"} err="failed to get container status \"dc568e831bbaedcf234c89b1265875ebb4c862640e1520580f478f4d2d5996da\": rpc error: code = NotFound desc = could not find container \"dc568e831bbaedcf234c89b1265875ebb4c862640e1520580f478f4d2d5996da\": container with ID starting with dc568e831bbaedcf234c89b1265875ebb4c862640e1520580f478f4d2d5996da not found: ID does not exist"
Jan 26 14:26:28 crc kubenswrapper[4881]: I0126 14:26:28.093708 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46613594-437b-4dd5-a2ea-4e7af46e8e06" path="/var/lib/kubelet/pods/46613594-437b-4dd5-a2ea-4e7af46e8e06/volumes"
Jan 26 14:26:41 crc kubenswrapper[4881]: I0126 14:26:41.215415 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ad70b87966c87781d6eb9196beaf5dd7adef9a3f0a61b9f42ea9b97c18rzk2b_ad3804a0-2b66-4f69-a8f4-6f8b27abea8f/util/0.log"
Jan 26 14:26:41 crc kubenswrapper[4881]: I0126 14:26:41.645319 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ad70b87966c87781d6eb9196beaf5dd7adef9a3f0a61b9f42ea9b97c18rzk2b_ad3804a0-2b66-4f69-a8f4-6f8b27abea8f/util/0.log"
Jan 26 14:26:41 crc kubenswrapper[4881]: I0126 14:26:41.667455 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ad70b87966c87781d6eb9196beaf5dd7adef9a3f0a61b9f42ea9b97c18rzk2b_ad3804a0-2b66-4f69-a8f4-6f8b27abea8f/pull/0.log"
Jan 26 14:26:41 crc kubenswrapper[4881]: I0126 14:26:41.687895 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ad70b87966c87781d6eb9196beaf5dd7adef9a3f0a61b9f42ea9b97c18rzk2b_ad3804a0-2b66-4f69-a8f4-6f8b27abea8f/pull/0.log"
Jan 26 14:26:41 crc kubenswrapper[4881]: I0126 14:26:41.873206 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ad70b87966c87781d6eb9196beaf5dd7adef9a3f0a61b9f42ea9b97c18rzk2b_ad3804a0-2b66-4f69-a8f4-6f8b27abea8f/pull/0.log"
Jan 26 14:26:41 crc kubenswrapper[4881]: I0126 14:26:41.884430 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ad70b87966c87781d6eb9196beaf5dd7adef9a3f0a61b9f42ea9b97c18rzk2b_ad3804a0-2b66-4f69-a8f4-6f8b27abea8f/extract/0.log"
Jan 26 14:26:41 crc kubenswrapper[4881]: I0126 14:26:41.909780 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ad70b87966c87781d6eb9196beaf5dd7adef9a3f0a61b9f42ea9b97c18rzk2b_ad3804a0-2b66-4f69-a8f4-6f8b27abea8f/util/0.log"
Jan 26 14:26:42 crc kubenswrapper[4881]: I0126 14:26:42.129374 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-7478f7dbf9-w2n48_d5375dff-af5c-4de8-b52b-acf18edc4fb2/manager/0.log"
Jan 26 14:26:42 crc kubenswrapper[4881]: I0126 14:26:42.196398 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7f86f8796f-hs8xc_e8b8ff3a-c099-4192-b061-33ff69fd2884/manager/0.log"
Jan 26 14:26:42 crc kubenswrapper[4881]: I0126 14:26:42.379924 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-b45d7bf98-lgljg_c5cecd8b-813f-4bde-be28-371c54bcdfb9/manager/0.log"
Jan 26 14:26:42 crc kubenswrapper[4881]: I0126 14:26:42.456778 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-78fdd796fd-sgnd2_4508aa9d-2a89-4976-bd36-dc918900371e/manager/0.log"
Jan 26 14:26:42 crc kubenswrapper[4881]: I0126 14:26:42.766778 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-v95fq_0e17a034-e3c9-434a-838f-8bfae6d010dd/manager/0.log"
Jan 26 14:26:42 crc kubenswrapper[4881]: I0126 14:26:42.828017 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-rww6v_d998c88b-6b01-4e5f-bbab-a5aaee1a945b/manager/0.log"
Jan 26 14:26:43 crc kubenswrapper[4881]: I0126 14:26:43.061585 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-598f7747c9-flv4v_cbafad55-0cc5-42d6-b721-b1f4e158251f/manager/0.log"
Jan 26 14:26:43 crc kubenswrapper[4881]: I0126 14:26:43.113600 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-495pj"]
Jan 26 14:26:43 crc kubenswrapper[4881]: E0126 14:26:43.114048 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46613594-437b-4dd5-a2ea-4e7af46e8e06" containerName="extract-utilities"
Jan 26 14:26:43 crc kubenswrapper[4881]: I0126 14:26:43.114065 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="46613594-437b-4dd5-a2ea-4e7af46e8e06" containerName="extract-utilities"
Jan 26 14:26:43 crc kubenswrapper[4881]: E0126 14:26:43.114091 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46613594-437b-4dd5-a2ea-4e7af46e8e06" containerName="extract-content"
Jan 26 14:26:43 crc kubenswrapper[4881]: I0126 14:26:43.114099 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="46613594-437b-4dd5-a2ea-4e7af46e8e06" containerName="extract-content"
Jan 26 14:26:43 crc kubenswrapper[4881]: E0126 14:26:43.114113 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46613594-437b-4dd5-a2ea-4e7af46e8e06" containerName="registry-server"
Jan 26 14:26:43 crc kubenswrapper[4881]: I0126 14:26:43.114119 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="46613594-437b-4dd5-a2ea-4e7af46e8e06" containerName="registry-server"
Jan 26 14:26:43 crc kubenswrapper[4881]: I0126 14:26:43.114318 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="46613594-437b-4dd5-a2ea-4e7af46e8e06" containerName="registry-server"
Jan 26 14:26:43 crc kubenswrapper[4881]: I0126 14:26:43.115793 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-495pj"
Jan 26 14:26:43 crc kubenswrapper[4881]: I0126 14:26:43.135121 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-495pj"]
Jan 26 14:26:43 crc kubenswrapper[4881]: I0126 14:26:43.218138 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dp84v\" (UniqueName: \"kubernetes.io/projected/150dbb8b-d22d-4b99-99de-8eb6e25e7ecc-kube-api-access-dp84v\") pod \"redhat-marketplace-495pj\" (UID: \"150dbb8b-d22d-4b99-99de-8eb6e25e7ecc\") " pod="openshift-marketplace/redhat-marketplace-495pj"
Jan 26 14:26:43 crc kubenswrapper[4881]: I0126 14:26:43.218205 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/150dbb8b-d22d-4b99-99de-8eb6e25e7ecc-catalog-content\") pod \"redhat-marketplace-495pj\" (UID: \"150dbb8b-d22d-4b99-99de-8eb6e25e7ecc\") " pod="openshift-marketplace/redhat-marketplace-495pj"
Jan 26 14:26:43 crc kubenswrapper[4881]: I0126 14:26:43.218229 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/150dbb8b-d22d-4b99-99de-8eb6e25e7ecc-utilities\") pod \"redhat-marketplace-495pj\" (UID: \"150dbb8b-d22d-4b99-99de-8eb6e25e7ecc\") " pod="openshift-marketplace/redhat-marketplace-495pj"
Jan 26 14:26:43 crc kubenswrapper[4881]: I0126 14:26:43.284600 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-694cf4f878-wkhcm_517d3e74-cfe4-4e5e-96b0-0780042b0dbd/manager/0.log"
Jan 26 14:26:43 crc kubenswrapper[4881]: I0126 14:26:43.320020 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dp84v\" (UniqueName: \"kubernetes.io/projected/150dbb8b-d22d-4b99-99de-8eb6e25e7ecc-kube-api-access-dp84v\") pod \"redhat-marketplace-495pj\" (UID: \"150dbb8b-d22d-4b99-99de-8eb6e25e7ecc\") " pod="openshift-marketplace/redhat-marketplace-495pj"
Jan 26 14:26:43 crc kubenswrapper[4881]: I0126 14:26:43.320079 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/150dbb8b-d22d-4b99-99de-8eb6e25e7ecc-catalog-content\") pod \"redhat-marketplace-495pj\" (UID: \"150dbb8b-d22d-4b99-99de-8eb6e25e7ecc\") " pod="openshift-marketplace/redhat-marketplace-495pj"
Jan 26 14:26:43 crc kubenswrapper[4881]: I0126 14:26:43.320107 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/150dbb8b-d22d-4b99-99de-8eb6e25e7ecc-utilities\") pod \"redhat-marketplace-495pj\" (UID: \"150dbb8b-d22d-4b99-99de-8eb6e25e7ecc\") " pod="openshift-marketplace/redhat-marketplace-495pj"
Jan 26 14:26:43 crc kubenswrapper[4881]: I0126 14:26:43.320563 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/150dbb8b-d22d-4b99-99de-8eb6e25e7ecc-utilities\") pod \"redhat-marketplace-495pj\" (UID: \"150dbb8b-d22d-4b99-99de-8eb6e25e7ecc\") " pod="openshift-marketplace/redhat-marketplace-495pj"
Jan 26 14:26:43 crc kubenswrapper[4881]: I0126 14:26:43.320654 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/150dbb8b-d22d-4b99-99de-8eb6e25e7ecc-catalog-content\") pod \"redhat-marketplace-495pj\" (UID: \"150dbb8b-d22d-4b99-99de-8eb6e25e7ecc\") " pod="openshift-marketplace/redhat-marketplace-495pj"
Jan 26 14:26:43 crc kubenswrapper[4881]: I0126 14:26:43.343447 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dp84v\" (UniqueName: \"kubernetes.io/projected/150dbb8b-d22d-4b99-99de-8eb6e25e7ecc-kube-api-access-dp84v\") pod \"redhat-marketplace-495pj\" (UID: \"150dbb8b-d22d-4b99-99de-8eb6e25e7ecc\") " pod="openshift-marketplace/redhat-marketplace-495pj"
Jan 26 14:26:43 crc kubenswrapper[4881]: I0126 14:26:43.432276 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-495pj"
Jan 26 14:26:43 crc kubenswrapper[4881]: I0126 14:26:43.435941 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b8b6d4659-66d48_b6807e2b-25b9-4802-8086-2c6eab9ff308/manager/0.log"
Jan 26 14:26:43 crc kubenswrapper[4881]: I0126 14:26:43.455935 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-78c6999f6f-sc6f8_78a91159-fead-4133-98e4-5dd587f6b274/manager/0.log"
Jan 26 14:26:43 crc kubenswrapper[4881]: I0126 14:26:43.741235 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6b9fb5fdcb-zkhss_97b268cc-1863-494c-a47b-da0c52f76d39/manager/0.log"
Jan 26 14:26:43 crc kubenswrapper[4881]: I0126 14:26:43.927981 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-78d58447c5-m8vjc_76b071ae-05bc-4142-9004-e5528d00c5cc/manager/0.log"
Jan 26 14:26:44 crc kubenswrapper[4881]: I0126 14:26:44.037796 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-7bdb645866-r7dwj_d808c58e-a8df-4cbd-aee6-d87edd677e94/manager/0.log"
Jan 26 14:26:44 crc kubenswrapper[4881]: I0126 14:26:44.044605 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-495pj"]
Jan 26 14:26:44 crc kubenswrapper[4881]: I0126 14:26:44.178925 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-5f4cd88d46-r2p67_3451b01c-ed54-49be-ab3a-d8150976d2ec/manager/0.log"
Jan 26 14:26:44 crc kubenswrapper[4881]: I0126 14:26:44.262728 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b854jj499_8cc0e35b-757a-46fc-bc17-f586426c9b82/manager/0.log"
Jan 26 14:26:44 crc kubenswrapper[4881]: I0126 14:26:44.529952 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-6fbc4c9d5c-k7d5p_ba9ac6c1-1e58-4306-b8fd-c56dca9f4ec4/operator/0.log"
Jan 26 14:26:44 crc kubenswrapper[4881]: I0126 14:26:44.739471 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-wmbzq_51923b46-00ba-4a5e-984d-b1f8febec058/registry-server/0.log"
Jan 26 14:26:44 crc kubenswrapper[4881]: I0126 14:26:44.872105 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-6f75f45d54-pz264_5b1abb90-faa0-4b72-9d20-f84ddf952245/manager/0.log"
Jan 26 14:26:45 crc kubenswrapper[4881]: I0126 14:26:45.006739 4881 generic.go:334] "Generic (PLEG): container finished" podID="150dbb8b-d22d-4b99-99de-8eb6e25e7ecc" containerID="f7f5e126d13e1d6ba2b2b3eac49a03c5411182829ea3293c12d987dafc7c30a8" exitCode=0
Jan 26 14:26:45 crc kubenswrapper[4881]: I0126 14:26:45.006809 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-495pj" event={"ID":"150dbb8b-d22d-4b99-99de-8eb6e25e7ecc","Type":"ContainerDied","Data":"f7f5e126d13e1d6ba2b2b3eac49a03c5411182829ea3293c12d987dafc7c30a8"}
Jan 26 14:26:45 crc kubenswrapper[4881]: I0126 14:26:45.006850 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-495pj" event={"ID":"150dbb8b-d22d-4b99-99de-8eb6e25e7ecc","Type":"ContainerStarted","Data":"32da412d912ba07d9d761fdbfd667a2690878980f4e9ab7cd8a095a8f0ba8bfb"}
Jan 26 14:26:45 crc kubenswrapper[4881]: I0126 14:26:45.046563 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-79d5ccc684-6wtsp_de6b2c73-a5db-4333-91e1-7722f0ba1127/manager/0.log"
Jan 26 14:26:45 crc kubenswrapper[4881]: I0126 14:26:45.239258 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-vgjn4_a5f220e0-8c4f-4915-b0d0-cb85cc7f7850/operator/0.log"
Jan 26 14:26:45 crc kubenswrapper[4881]: I0126 14:26:45.474604 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-547cbdb99f-zxw9s_0a7aea9c-0f85-45d1-9c90-e06acb42f500/manager/0.log"
Jan 26 14:26:45 crc kubenswrapper[4881]: I0126 14:26:45.703567 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-649ccf9654-zlvc6_74d53f54-a284-45f0-ae81-5c25d2c5cbe1/manager/0.log"
Jan 26 14:26:45 crc kubenswrapper[4881]: I0126 14:26:45.754830 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-85cd9769bb-sqqcs_973ffd61-1f3c-4e2f-9315-dae216499f96/manager/0.log"
Jan 26 14:26:45 crc kubenswrapper[4881]: I0126 14:26:45.755705 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-69797bbcbd-ff2c4_0591b1a9-0d5f-4f0a-beca-9ed62627012e/manager/0.log"
Jan 26 14:26:45 crc kubenswrapper[4881]: I0126 14:26:45.928159 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-5784f86c76-zbvz9_ab3681e4-6e5f-4f8d-909d-8d7801366f54/manager/0.log"
Jan 26 14:26:46 crc kubenswrapper[4881]: I0126 14:26:46.018857 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-495pj" event={"ID":"150dbb8b-d22d-4b99-99de-8eb6e25e7ecc","Type":"ContainerStarted","Data":"f3def26d49d8bef5ad27322a74a192e220c369f9b8a833d59feabd5f2da87b7d"}
Jan 26 14:26:47 crc kubenswrapper[4881]: I0126 14:26:47.028708 4881 generic.go:334] "Generic (PLEG): container finished" podID="150dbb8b-d22d-4b99-99de-8eb6e25e7ecc" containerID="f3def26d49d8bef5ad27322a74a192e220c369f9b8a833d59feabd5f2da87b7d" exitCode=0
Jan 26 14:26:47 crc kubenswrapper[4881]: I0126 14:26:47.028804 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-495pj" event={"ID":"150dbb8b-d22d-4b99-99de-8eb6e25e7ecc","Type":"ContainerDied","Data":"f3def26d49d8bef5ad27322a74a192e220c369f9b8a833d59feabd5f2da87b7d"}
Jan 26 14:26:49 crc kubenswrapper[4881]: I0126 14:26:49.048243 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-495pj" event={"ID":"150dbb8b-d22d-4b99-99de-8eb6e25e7ecc","Type":"ContainerStarted","Data":"5db0368e090e7cd83da6e8db3dfbca95586c645952a5fd9d3ff192cf33102ea3"}
Jan 26 14:26:49 crc kubenswrapper[4881]: I0126 14:26:49.073142 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-495pj" podStartSLOduration=3.252004666 podStartE2EDuration="6.07312103s" podCreationTimestamp="2026-01-26 14:26:43 +0000 UTC" firstStartedPulling="2026-01-26 14:26:45.011629105 +0000 UTC m=+6677.490939131" lastFinishedPulling="2026-01-26 14:26:47.832745469 +0000 UTC m=+6680.312055495" observedRunningTime="2026-01-26 14:26:49.069297707 +0000 UTC m=+6681.548607753" watchObservedRunningTime="2026-01-26 14:26:49.07312103 +0000 UTC m=+6681.552431056"
Jan 26 14:26:53 crc kubenswrapper[4881]: I0126 14:26:53.432902 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-495pj"
Jan 26 14:26:53 crc kubenswrapper[4881]: I0126 14:26:53.433427 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-495pj"
Jan 26 14:26:53 crc kubenswrapper[4881]: I0126 14:26:53.483410 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-495pj"
Jan 26 14:26:54 crc kubenswrapper[4881]: I0126 14:26:54.146823 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-495pj"
Jan 26 14:26:54 crc kubenswrapper[4881]: I0126 14:26:54.206739 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-495pj"]
Jan 26 14:26:54 crc kubenswrapper[4881]: I0126 14:26:54.789721 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fwlbz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 14:26:54 crc kubenswrapper[4881]: I0126 14:26:54.790050 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 14:26:56 crc kubenswrapper[4881]: I0126 14:26:56.110396 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-495pj" podUID="150dbb8b-d22d-4b99-99de-8eb6e25e7ecc" containerName="registry-server" containerID="cri-o://5db0368e090e7cd83da6e8db3dfbca95586c645952a5fd9d3ff192cf33102ea3" gracePeriod=2
Jan 26 14:26:57 crc kubenswrapper[4881]: I0126 14:26:57.121087 4881 generic.go:334] "Generic (PLEG): container finished" podID="150dbb8b-d22d-4b99-99de-8eb6e25e7ecc" containerID="5db0368e090e7cd83da6e8db3dfbca95586c645952a5fd9d3ff192cf33102ea3" exitCode=0
Jan 26 14:26:57 crc kubenswrapper[4881]: I0126 14:26:57.121198 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-495pj" event={"ID":"150dbb8b-d22d-4b99-99de-8eb6e25e7ecc","Type":"ContainerDied","Data":"5db0368e090e7cd83da6e8db3dfbca95586c645952a5fd9d3ff192cf33102ea3"}
Jan 26 14:26:57 crc kubenswrapper[4881]: I0126 14:26:57.121846 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-495pj" event={"ID":"150dbb8b-d22d-4b99-99de-8eb6e25e7ecc","Type":"ContainerDied","Data":"32da412d912ba07d9d761fdbfd667a2690878980f4e9ab7cd8a095a8f0ba8bfb"}
Jan 26 14:26:57 crc kubenswrapper[4881]: I0126 14:26:57.121866 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="32da412d912ba07d9d761fdbfd667a2690878980f4e9ab7cd8a095a8f0ba8bfb"
Jan 26 14:26:57 crc kubenswrapper[4881]: I0126 14:26:57.157639 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-495pj"
Jan 26 14:26:57 crc kubenswrapper[4881]: I0126 14:26:57.263236 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/150dbb8b-d22d-4b99-99de-8eb6e25e7ecc-catalog-content\") pod \"150dbb8b-d22d-4b99-99de-8eb6e25e7ecc\" (UID: \"150dbb8b-d22d-4b99-99de-8eb6e25e7ecc\") "
Jan 26 14:26:57 crc kubenswrapper[4881]: I0126 14:26:57.263588 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/150dbb8b-d22d-4b99-99de-8eb6e25e7ecc-utilities\") pod \"150dbb8b-d22d-4b99-99de-8eb6e25e7ecc\" (UID: \"150dbb8b-d22d-4b99-99de-8eb6e25e7ecc\") "
Jan 26 14:26:57 crc kubenswrapper[4881]: I0126 14:26:57.263651 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dp84v\" (UniqueName: \"kubernetes.io/projected/150dbb8b-d22d-4b99-99de-8eb6e25e7ecc-kube-api-access-dp84v\") pod \"150dbb8b-d22d-4b99-99de-8eb6e25e7ecc\" (UID: \"150dbb8b-d22d-4b99-99de-8eb6e25e7ecc\") "
Jan 26 14:26:57 crc kubenswrapper[4881]: I0126 14:26:57.264834 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/150dbb8b-d22d-4b99-99de-8eb6e25e7ecc-utilities" (OuterVolumeSpecName: "utilities") pod "150dbb8b-d22d-4b99-99de-8eb6e25e7ecc" (UID: "150dbb8b-d22d-4b99-99de-8eb6e25e7ecc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 14:26:57 crc kubenswrapper[4881]: I0126 14:26:57.285499 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/150dbb8b-d22d-4b99-99de-8eb6e25e7ecc-kube-api-access-dp84v" (OuterVolumeSpecName: "kube-api-access-dp84v") pod "150dbb8b-d22d-4b99-99de-8eb6e25e7ecc" (UID: "150dbb8b-d22d-4b99-99de-8eb6e25e7ecc"). InnerVolumeSpecName "kube-api-access-dp84v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 14:26:57 crc kubenswrapper[4881]: I0126 14:26:57.297699 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/150dbb8b-d22d-4b99-99de-8eb6e25e7ecc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "150dbb8b-d22d-4b99-99de-8eb6e25e7ecc" (UID: "150dbb8b-d22d-4b99-99de-8eb6e25e7ecc"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 14:26:57 crc kubenswrapper[4881]: I0126 14:26:57.367314 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/150dbb8b-d22d-4b99-99de-8eb6e25e7ecc-utilities\") on node \"crc\" DevicePath \"\""
Jan 26 14:26:57 crc kubenswrapper[4881]: I0126 14:26:57.367361 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dp84v\" (UniqueName: \"kubernetes.io/projected/150dbb8b-d22d-4b99-99de-8eb6e25e7ecc-kube-api-access-dp84v\") on node \"crc\" DevicePath \"\""
Jan 26 14:26:57 crc kubenswrapper[4881]: I0126 14:26:57.367383 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/150dbb8b-d22d-4b99-99de-8eb6e25e7ecc-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 26 14:26:58 crc kubenswrapper[4881]: I0126 14:26:58.131409 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-495pj"
Jan 26 14:26:58 crc kubenswrapper[4881]: I0126 14:26:58.189397 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-495pj"]
Jan 26 14:26:58 crc kubenswrapper[4881]: I0126 14:26:58.205876 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-495pj"]
Jan 26 14:27:00 crc kubenswrapper[4881]: I0126 14:27:00.093549 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="150dbb8b-d22d-4b99-99de-8eb6e25e7ecc" path="/var/lib/kubelet/pods/150dbb8b-d22d-4b99-99de-8eb6e25e7ecc/volumes"
Jan 26 14:27:07 crc kubenswrapper[4881]: I0126 14:27:07.193936 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-p7bh2_d6b7645c-9920-4793-b6aa-9a6664cc93a0/control-plane-machine-set-operator/0.log"
Jan 26 14:27:07 crc kubenswrapper[4881]: I0126 14:27:07.416455 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-2n5pc_1c3ab1d3-b6c8-46c7-8721-c8671d38ae03/kube-rbac-proxy/0.log"
Jan 26 14:27:07 crc kubenswrapper[4881]: I0126 14:27:07.431909 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-2n5pc_1c3ab1d3-b6c8-46c7-8721-c8671d38ae03/machine-api-operator/0.log"
Jan 26 14:27:21 crc kubenswrapper[4881]: I0126 14:27:21.490203 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-hvlrt_e0a1688c-21a5-4443-9254-78b5b189c9fa/cert-manager-controller/0.log"
Jan 26 14:27:21 crc kubenswrapper[4881]: I0126 14:27:21.718728 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-gmh7q_28bb7687-5041-4924-a064-a13442fc3766/cert-manager-cainjector/0.log"
Jan 26 14:27:21 crc kubenswrapper[4881]: I0126 14:27:21.763036 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-5bxtm_4bfa393b-f144-4c15-81f7-b2c176f31b61/cert-manager-webhook/0.log"
Jan 26 14:27:24 crc kubenswrapper[4881]: I0126 14:27:24.789409 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fwlbz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 14:27:24 crc kubenswrapper[4881]: I0126 14:27:24.790092 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 14:27:35 crc kubenswrapper[4881]: I0126 14:27:35.937225 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-s8cmw_04037f03-d731-4b56-931b-6883929dc843/nmstate-console-plugin/0.log"
Jan 26 14:27:36 crc kubenswrapper[4881]: I0126 14:27:36.143101 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-hv5dq_39762078-aa2c-44ae-8ed5-4ac22ebd62be/nmstate-handler/0.log"
Jan 26 14:27:36 crc kubenswrapper[4881]: I0126 14:27:36.169050 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-4gxlq_2a843206-a177-4422-be4f-bf5ccbdef9f1/kube-rbac-proxy/0.log"
Jan 26 14:27:36 crc kubenswrapper[4881]: I0126 14:27:36.237857 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-4gxlq_2a843206-a177-4422-be4f-bf5ccbdef9f1/nmstate-metrics/0.log"
Jan 26 14:27:36 crc kubenswrapper[4881]: I0126 14:27:36.321455 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-7bpbf_84cb8155-415d-4537-872c-bf03652861e0/nmstate-operator/0.log"
Jan 26 14:27:36 crc kubenswrapper[4881]: I0126 14:27:36.429916 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-mp8sz_1869aca8-7499-4174-9154-588bbc7d5c24/nmstate-webhook/0.log"
Jan 26 14:27:50 crc kubenswrapper[4881]: I0126 14:27:50.698923 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-ss7tt_134b4f14-ab8f-4d19-9c5d-90f2642a285e/prometheus-operator/0.log"
Jan 26 14:27:50 crc kubenswrapper[4881]: I0126 14:27:50.848186 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-bd99787cf-9zpqq_e46bce59-2bf0-4e4f-9988-351d4f1f6bc2/prometheus-operator-admission-webhook/0.log"
Jan 26 14:27:50 crc kubenswrapper[4881]: I0126 14:27:50.897676 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-bd99787cf-cg7rq_0dff4c41-61ac-4189-a67e-18689e873d2a/prometheus-operator-admission-webhook/0.log"
Jan 26 14:27:51 crc kubenswrapper[4881]: I0126 14:27:51.033453 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-tf2kf_b636a0bf-808d-4fce-9675-621381943903/operator/0.log"
Jan 26 14:27:51 crc kubenswrapper[4881]: I0126 14:27:51.123773 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-hprwm_b23a30cc-d92b-4491-963d-9f93d3b48547/perses-operator/0.log"
Jan 26 14:27:54 crc kubenswrapper[4881]: I0126 14:27:54.788890 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fwlbz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 14:27:54 crc kubenswrapper[4881]: I0126 14:27:54.789330 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 14:27:54 crc kubenswrapper[4881]: I0126 14:27:54.789371 4881 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz"
Jan 26 14:27:54 crc kubenswrapper[4881]: I0126 14:27:54.790265 4881 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3576d9ae2a0c939f74ed0088e4b53843689b0037e8a526f6fd82ad39b8913062"} pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 26 14:27:54 crc kubenswrapper[4881]: I0126 14:27:54.790323 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" containerName="machine-config-daemon" containerID="cri-o://3576d9ae2a0c939f74ed0088e4b53843689b0037e8a526f6fd82ad39b8913062" gracePeriod=600
Jan 26 14:27:55 crc kubenswrapper[4881]: E0126 14:27:55.354466 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19"
Jan 26 14:27:55 crc kubenswrapper[4881]: I0126 14:27:55.702480 4881 generic.go:334] "Generic (PLEG): container finished" podID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" containerID="3576d9ae2a0c939f74ed0088e4b53843689b0037e8a526f6fd82ad39b8913062" exitCode=0
Jan 26 14:27:55 crc kubenswrapper[4881]: I0126 14:27:55.702565 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" event={"ID":"ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19","Type":"ContainerDied","Data":"3576d9ae2a0c939f74ed0088e4b53843689b0037e8a526f6fd82ad39b8913062"}
Jan 26 14:27:55 crc kubenswrapper[4881]: I0126 14:27:55.703123 4881 scope.go:117] "RemoveContainer" containerID="ae9424ee3a3f44134ddf9f7f1716ea85c1086060f07bf5296ffa39b6632fe2d5"
Jan 26 14:27:55 crc kubenswrapper[4881]: I0126 14:27:55.703887 4881 scope.go:117] "RemoveContainer" containerID="3576d9ae2a0c939f74ed0088e4b53843689b0037e8a526f6fd82ad39b8913062"
Jan 26 14:27:55 crc kubenswrapper[4881]: E0126 14:27:55.704243 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19"
Jan 26 14:28:05 crc kubenswrapper[4881]: I0126 14:28:05.250980 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-8vg98_901f2a44-aecd-4a72-8802-b24d3bb902af/kube-rbac-proxy/0.log"
Jan 26 14:28:05 crc kubenswrapper[4881]: I0126 14:28:05.382354 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-8vg98_901f2a44-aecd-4a72-8802-b24d3bb902af/controller/0.log"
Jan 26 14:28:05 crc kubenswrapper[4881]: I0126 14:28:05.465399 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8rjxh_769ed86e-fb54-4e5a-a315-2cc85e6b0f3e/cp-frr-files/0.log"
Jan 26 14:28:05 crc kubenswrapper[4881]: I0126 14:28:05.690188 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8rjxh_769ed86e-fb54-4e5a-a315-2cc85e6b0f3e/cp-frr-files/0.log"
Jan 26 14:28:05 crc kubenswrapper[4881]: I0126 14:28:05.705768 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8rjxh_769ed86e-fb54-4e5a-a315-2cc85e6b0f3e/cp-reloader/0.log"
Jan 26 14:28:05 crc kubenswrapper[4881]: I0126 14:28:05.726828 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8rjxh_769ed86e-fb54-4e5a-a315-2cc85e6b0f3e/cp-metrics/0.log"
Jan 26 14:28:05 crc kubenswrapper[4881]: I0126 14:28:05.730315 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8rjxh_769ed86e-fb54-4e5a-a315-2cc85e6b0f3e/cp-reloader/0.log"
Jan 26 14:28:05 crc kubenswrapper[4881]: I0126 14:28:05.887719 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8rjxh_769ed86e-fb54-4e5a-a315-2cc85e6b0f3e/cp-frr-files/0.log"
Jan 26 14:28:05 crc kubenswrapper[4881]: I0126 14:28:05.916137 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8rjxh_769ed86e-fb54-4e5a-a315-2cc85e6b0f3e/cp-reloader/0.log"
Jan 26 14:28:05 crc kubenswrapper[4881]: I0126 14:28:05.965703 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8rjxh_769ed86e-fb54-4e5a-a315-2cc85e6b0f3e/cp-metrics/0.log"
Jan 26 14:28:05 crc kubenswrapper[4881]: I0126 14:28:05.980689 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8rjxh_769ed86e-fb54-4e5a-a315-2cc85e6b0f3e/cp-metrics/0.log"
Jan 26 14:28:06 crc kubenswrapper[4881]: I0126 14:28:06.150222 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8rjxh_769ed86e-fb54-4e5a-a315-2cc85e6b0f3e/cp-reloader/0.log"
Jan 26 14:28:06 crc kubenswrapper[4881]: I0126 14:28:06.175117 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8rjxh_769ed86e-fb54-4e5a-a315-2cc85e6b0f3e/controller/0.log"
Jan 26 14:28:06 crc kubenswrapper[4881]: I0126 14:28:06.177645 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8rjxh_769ed86e-fb54-4e5a-a315-2cc85e6b0f3e/cp-metrics/0.log"
Jan 26 14:28:06 crc kubenswrapper[4881]: I0126 14:28:06.187535 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8rjxh_769ed86e-fb54-4e5a-a315-2cc85e6b0f3e/cp-frr-files/0.log"
Jan 26 14:28:06 crc
kubenswrapper[4881]: I0126 14:28:06.343793 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8rjxh_769ed86e-fb54-4e5a-a315-2cc85e6b0f3e/frr-metrics/0.log" Jan 26 14:28:06 crc kubenswrapper[4881]: I0126 14:28:06.364247 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8rjxh_769ed86e-fb54-4e5a-a315-2cc85e6b0f3e/kube-rbac-proxy/0.log" Jan 26 14:28:06 crc kubenswrapper[4881]: I0126 14:28:06.376763 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8rjxh_769ed86e-fb54-4e5a-a315-2cc85e6b0f3e/kube-rbac-proxy-frr/0.log" Jan 26 14:28:06 crc kubenswrapper[4881]: I0126 14:28:06.556564 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8rjxh_769ed86e-fb54-4e5a-a315-2cc85e6b0f3e/reloader/0.log" Jan 26 14:28:06 crc kubenswrapper[4881]: I0126 14:28:06.597528 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-gmnh8_03597099-e9a6-4f59-9f54-700638dcf570/frr-k8s-webhook-server/0.log" Jan 26 14:28:06 crc kubenswrapper[4881]: I0126 14:28:06.824749 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-79649b4ffb-kpsrh_2483eb0f-5e2f-4df8-8385-4095077aa351/manager/0.log" Jan 26 14:28:07 crc kubenswrapper[4881]: I0126 14:28:07.103876 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-64dc64df49-qlh66_1a51e914-e793-4f03-b58a-65628089e71a/webhook-server/0.log" Jan 26 14:28:07 crc kubenswrapper[4881]: I0126 14:28:07.166721 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-kg9bx_0323a529-06f7-4ee1-ac63-e9226b67ae3a/kube-rbac-proxy/0.log" Jan 26 14:28:07 crc kubenswrapper[4881]: I0126 14:28:07.890293 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-kg9bx_0323a529-06f7-4ee1-ac63-e9226b67ae3a/speaker/0.log" Jan 26 14:28:08 crc kubenswrapper[4881]: I0126 14:28:08.309967 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8rjxh_769ed86e-fb54-4e5a-a315-2cc85e6b0f3e/frr/0.log" Jan 26 14:28:10 crc kubenswrapper[4881]: I0126 14:28:10.083461 4881 scope.go:117] "RemoveContainer" containerID="3576d9ae2a0c939f74ed0088e4b53843689b0037e8a526f6fd82ad39b8913062" Jan 26 14:28:10 crc kubenswrapper[4881]: E0126 14:28:10.084582 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" Jan 26 14:28:21 crc kubenswrapper[4881]: I0126 14:28:21.626287 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcvhv5t_d1afd28b-d9f1-4ee3-aa69-85a1c759161a/util/0.log" Jan 26 14:28:21 crc kubenswrapper[4881]: I0126 14:28:21.811319 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcvhv5t_d1afd28b-d9f1-4ee3-aa69-85a1c759161a/util/0.log" Jan 26 14:28:21 crc kubenswrapper[4881]: I0126 14:28:21.821373 4881 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcvhv5t_d1afd28b-d9f1-4ee3-aa69-85a1c759161a/pull/0.log" Jan 26 14:28:21 crc kubenswrapper[4881]: I0126 14:28:21.911571 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcvhv5t_d1afd28b-d9f1-4ee3-aa69-85a1c759161a/pull/0.log" Jan 26 14:28:22 crc kubenswrapper[4881]: I0126 14:28:22.082250 4881 scope.go:117] "RemoveContainer" containerID="3576d9ae2a0c939f74ed0088e4b53843689b0037e8a526f6fd82ad39b8913062" Jan 26 14:28:22 crc kubenswrapper[4881]: E0126 14:28:22.082694 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" Jan 26 14:28:22 crc kubenswrapper[4881]: I0126 14:28:22.122678 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcvhv5t_d1afd28b-d9f1-4ee3-aa69-85a1c759161a/util/0.log" Jan 26 14:28:22 crc kubenswrapper[4881]: I0126 14:28:22.129123 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcvhv5t_d1afd28b-d9f1-4ee3-aa69-85a1c759161a/extract/0.log" Jan 26 14:28:22 crc kubenswrapper[4881]: I0126 14:28:22.324211 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135mlgx_2478a5bc-036d-4f8a-bbf2-171ad7dc5e3f/util/0.log" Jan 26 14:28:22 crc kubenswrapper[4881]: I0126 14:28:22.328117 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcvhv5t_d1afd28b-d9f1-4ee3-aa69-85a1c759161a/pull/0.log" Jan 26 14:28:22 crc kubenswrapper[4881]: I0126 14:28:22.500036 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135mlgx_2478a5bc-036d-4f8a-bbf2-171ad7dc5e3f/pull/0.log" Jan 26 14:28:22 crc kubenswrapper[4881]: I0126 14:28:22.532167 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135mlgx_2478a5bc-036d-4f8a-bbf2-171ad7dc5e3f/util/0.log" Jan 26 14:28:22 crc kubenswrapper[4881]: I0126 14:28:22.579205 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135mlgx_2478a5bc-036d-4f8a-bbf2-171ad7dc5e3f/pull/0.log" Jan 26 14:28:22 crc kubenswrapper[4881]: I0126 14:28:22.674735 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135mlgx_2478a5bc-036d-4f8a-bbf2-171ad7dc5e3f/util/0.log" Jan 26 14:28:22 crc kubenswrapper[4881]: I0126 14:28:22.774648 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135mlgx_2478a5bc-036d-4f8a-bbf2-171ad7dc5e3f/extract/0.log" Jan 26 14:28:22 crc kubenswrapper[4881]: I0126 14:28:22.776464 4881 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135mlgx_2478a5bc-036d-4f8a-bbf2-171ad7dc5e3f/pull/0.log" Jan 26 14:28:22 crc kubenswrapper[4881]: I0126 14:28:22.873873 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084hgnp_5ccdbf22-7f0b-489c-bf4c-22ce230c429a/util/0.log" Jan 26 14:28:23 crc kubenswrapper[4881]: I0126 14:28:23.027826 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084hgnp_5ccdbf22-7f0b-489c-bf4c-22ce230c429a/util/0.log" Jan 26 14:28:23 crc kubenswrapper[4881]: I0126 14:28:23.040342 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084hgnp_5ccdbf22-7f0b-489c-bf4c-22ce230c429a/pull/0.log" Jan 26 14:28:23 crc kubenswrapper[4881]: I0126 14:28:23.056058 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084hgnp_5ccdbf22-7f0b-489c-bf4c-22ce230c429a/pull/0.log" Jan 26 14:28:23 crc kubenswrapper[4881]: I0126 14:28:23.424840 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084hgnp_5ccdbf22-7f0b-489c-bf4c-22ce230c429a/extract/0.log" Jan 26 14:28:23 crc kubenswrapper[4881]: I0126 14:28:23.429505 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084hgnp_5ccdbf22-7f0b-489c-bf4c-22ce230c429a/pull/0.log" Jan 26 14:28:23 crc kubenswrapper[4881]: I0126 14:28:23.486773 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084hgnp_5ccdbf22-7f0b-489c-bf4c-22ce230c429a/util/0.log" Jan 26 14:28:23 crc kubenswrapper[4881]: I0126 14:28:23.624580 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-j57t5_b7492c80-8cb7-4b48-95c7-ecec74b07dc3/extract-utilities/0.log" Jan 26 14:28:23 crc kubenswrapper[4881]: I0126 14:28:23.759201 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-j57t5_b7492c80-8cb7-4b48-95c7-ecec74b07dc3/extract-utilities/0.log" Jan 26 14:28:23 crc kubenswrapper[4881]: I0126 14:28:23.809599 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-j57t5_b7492c80-8cb7-4b48-95c7-ecec74b07dc3/extract-content/0.log" Jan 26 14:28:23 crc kubenswrapper[4881]: I0126 14:28:23.817367 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-j57t5_b7492c80-8cb7-4b48-95c7-ecec74b07dc3/extract-content/0.log" Jan 26 14:28:23 crc kubenswrapper[4881]: I0126 14:28:23.955819 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-j57t5_b7492c80-8cb7-4b48-95c7-ecec74b07dc3/extract-content/0.log" Jan 26 14:28:23 crc kubenswrapper[4881]: I0126 14:28:23.993083 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-j57t5_b7492c80-8cb7-4b48-95c7-ecec74b07dc3/extract-utilities/0.log" Jan 26 14:28:24 crc kubenswrapper[4881]: I0126 14:28:24.199642 4881 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-892tg_a5b1bc2d-d349-4de7-bcf8-52fc979a3ac2/extract-utilities/0.log" Jan 26 14:28:24 crc kubenswrapper[4881]: I0126 14:28:24.375512 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-892tg_a5b1bc2d-d349-4de7-bcf8-52fc979a3ac2/extract-utilities/0.log" Jan 26 14:28:24 crc kubenswrapper[4881]: I0126 14:28:24.404914 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-892tg_a5b1bc2d-d349-4de7-bcf8-52fc979a3ac2/extract-content/0.log" Jan 26 14:28:24 crc kubenswrapper[4881]: I0126 14:28:24.434264 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-892tg_a5b1bc2d-d349-4de7-bcf8-52fc979a3ac2/extract-content/0.log" Jan 26 14:28:24 crc kubenswrapper[4881]: I0126 14:28:24.586620 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-892tg_a5b1bc2d-d349-4de7-bcf8-52fc979a3ac2/extract-utilities/0.log" Jan 26 14:28:24 crc kubenswrapper[4881]: I0126 14:28:24.689541 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-892tg_a5b1bc2d-d349-4de7-bcf8-52fc979a3ac2/extract-content/0.log" Jan 26 14:28:24 crc kubenswrapper[4881]: I0126 14:28:24.817895 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-j57t5_b7492c80-8cb7-4b48-95c7-ecec74b07dc3/registry-server/0.log" Jan 26 14:28:24 crc kubenswrapper[4881]: I0126 14:28:24.948507 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-ghn75_c3ce8c88-e7f5-461d-ad61-e035c0ca7631/marketplace-operator/0.log" Jan 26 14:28:25 crc kubenswrapper[4881]: I0126 14:28:25.134606 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-22x8z_de3087aa-1e19-49ef-8d77-17654472881a/extract-utilities/0.log" Jan 26 14:28:25 crc kubenswrapper[4881]: I0126 14:28:25.398177 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-22x8z_de3087aa-1e19-49ef-8d77-17654472881a/extract-content/0.log" Jan 26 14:28:25 crc kubenswrapper[4881]: I0126 14:28:25.402184 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-22x8z_de3087aa-1e19-49ef-8d77-17654472881a/extract-utilities/0.log" Jan 26 14:28:25 crc kubenswrapper[4881]: I0126 14:28:25.451297 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-22x8z_de3087aa-1e19-49ef-8d77-17654472881a/extract-content/0.log" Jan 26 14:28:25 crc kubenswrapper[4881]: I0126 14:28:25.500647 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-892tg_a5b1bc2d-d349-4de7-bcf8-52fc979a3ac2/registry-server/0.log" Jan 26 14:28:25 crc kubenswrapper[4881]: I0126 14:28:25.608672 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-22x8z_de3087aa-1e19-49ef-8d77-17654472881a/extract-utilities/0.log" Jan 26 14:28:25 crc kubenswrapper[4881]: I0126 14:28:25.651030 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-22x8z_de3087aa-1e19-49ef-8d77-17654472881a/extract-content/0.log" Jan 26 14:28:25 crc kubenswrapper[4881]: I0126 14:28:25.839709 4881 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-rghg7_66cbadc1-43e8-44b8-a92b-87c37e6f895f/extract-utilities/0.log" Jan 26 14:28:26 crc kubenswrapper[4881]: I0126 14:28:26.035785 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-22x8z_de3087aa-1e19-49ef-8d77-17654472881a/registry-server/0.log" Jan 26 14:28:26 crc kubenswrapper[4881]: I0126 14:28:26.036777 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-rghg7_66cbadc1-43e8-44b8-a92b-87c37e6f895f/extract-content/0.log" Jan 26 14:28:26 crc kubenswrapper[4881]: I0126 14:28:26.097060 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-rghg7_66cbadc1-43e8-44b8-a92b-87c37e6f895f/extract-utilities/0.log" Jan 26 14:28:26 crc kubenswrapper[4881]: I0126 14:28:26.110636 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-rghg7_66cbadc1-43e8-44b8-a92b-87c37e6f895f/extract-content/0.log" Jan 26 14:28:26 crc kubenswrapper[4881]: I0126 14:28:26.182142 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-rghg7_66cbadc1-43e8-44b8-a92b-87c37e6f895f/extract-utilities/0.log" Jan 26 14:28:26 crc kubenswrapper[4881]: I0126 14:28:26.265766 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-rghg7_66cbadc1-43e8-44b8-a92b-87c37e6f895f/extract-content/0.log" Jan 26 14:28:26 crc kubenswrapper[4881]: I0126 14:28:26.966703 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-rghg7_66cbadc1-43e8-44b8-a92b-87c37e6f895f/registry-server/0.log" Jan 26 14:28:37 crc kubenswrapper[4881]: I0126 14:28:37.083419 4881 scope.go:117] "RemoveContainer" containerID="3576d9ae2a0c939f74ed0088e4b53843689b0037e8a526f6fd82ad39b8913062" Jan 26 14:28:37 crc kubenswrapper[4881]: E0126 14:28:37.084211 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" Jan 26 14:28:40 crc kubenswrapper[4881]: I0126 14:28:40.690136 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-ss7tt_134b4f14-ab8f-4d19-9c5d-90f2642a285e/prometheus-operator/0.log" Jan 26 14:28:40 crc kubenswrapper[4881]: I0126 14:28:40.759922 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-bd99787cf-9zpqq_e46bce59-2bf0-4e4f-9988-351d4f1f6bc2/prometheus-operator-admission-webhook/0.log" Jan 26 14:28:40 crc kubenswrapper[4881]: I0126 14:28:40.804238 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-bd99787cf-cg7rq_0dff4c41-61ac-4189-a67e-18689e873d2a/prometheus-operator-admission-webhook/0.log" Jan 26 14:28:40 crc kubenswrapper[4881]: I0126 14:28:40.985005 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-hprwm_b23a30cc-d92b-4491-963d-9f93d3b48547/perses-operator/0.log" Jan 26 14:28:41 crc kubenswrapper[4881]: I0126 
14:28:41.356398 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-tf2kf_b636a0bf-808d-4fce-9675-621381943903/operator/0.log" Jan 26 14:28:50 crc kubenswrapper[4881]: I0126 14:28:50.084021 4881 scope.go:117] "RemoveContainer" containerID="3576d9ae2a0c939f74ed0088e4b53843689b0037e8a526f6fd82ad39b8913062" Jan 26 14:28:50 crc kubenswrapper[4881]: E0126 14:28:50.084838 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" Jan 26 14:29:01 crc kubenswrapper[4881]: I0126 14:29:01.083446 4881 scope.go:117] "RemoveContainer" containerID="3576d9ae2a0c939f74ed0088e4b53843689b0037e8a526f6fd82ad39b8913062" Jan 26 14:29:01 crc kubenswrapper[4881]: E0126 14:29:01.084451 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" Jan 26 14:29:14 crc kubenswrapper[4881]: I0126 14:29:14.094693 4881 scope.go:117] "RemoveContainer" containerID="3576d9ae2a0c939f74ed0088e4b53843689b0037e8a526f6fd82ad39b8913062" Jan 26 14:29:14 crc kubenswrapper[4881]: E0126 14:29:14.095918 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" Jan 26 14:29:28 crc kubenswrapper[4881]: I0126 14:29:28.105780 4881 scope.go:117] "RemoveContainer" containerID="3576d9ae2a0c939f74ed0088e4b53843689b0037e8a526f6fd82ad39b8913062" Jan 26 14:29:28 crc kubenswrapper[4881]: E0126 14:29:28.106911 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" Jan 26 14:29:41 crc kubenswrapper[4881]: I0126 14:29:41.084686 4881 scope.go:117] "RemoveContainer" containerID="3576d9ae2a0c939f74ed0088e4b53843689b0037e8a526f6fd82ad39b8913062" Jan 26 14:29:41 crc kubenswrapper[4881]: E0126 14:29:41.086882 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" 
Jan 26 14:29:53 crc kubenswrapper[4881]: I0126 14:29:53.082049 4881 scope.go:117] "RemoveContainer" containerID="3576d9ae2a0c939f74ed0088e4b53843689b0037e8a526f6fd82ad39b8913062" Jan 26 14:29:53 crc kubenswrapper[4881]: E0126 14:29:53.082731 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" Jan 26 14:30:00 crc kubenswrapper[4881]: I0126 14:30:00.152568 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490630-5mzl5"] Jan 26 14:30:00 crc kubenswrapper[4881]: E0126 14:30:00.153547 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="150dbb8b-d22d-4b99-99de-8eb6e25e7ecc" containerName="extract-utilities" Jan 26 14:30:00 crc kubenswrapper[4881]: I0126 14:30:00.153689 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="150dbb8b-d22d-4b99-99de-8eb6e25e7ecc" containerName="extract-utilities" Jan 26 14:30:00 crc kubenswrapper[4881]: E0126 14:30:00.153713 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="150dbb8b-d22d-4b99-99de-8eb6e25e7ecc" containerName="extract-content" Jan 26 14:30:00 crc kubenswrapper[4881]: I0126 14:30:00.153719 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="150dbb8b-d22d-4b99-99de-8eb6e25e7ecc" containerName="extract-content" Jan 26 14:30:00 crc kubenswrapper[4881]: E0126 14:30:00.153734 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="150dbb8b-d22d-4b99-99de-8eb6e25e7ecc" containerName="registry-server" Jan 26 14:30:00 crc kubenswrapper[4881]: I0126 14:30:00.153741 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="150dbb8b-d22d-4b99-99de-8eb6e25e7ecc" containerName="registry-server" Jan 26 14:30:00 crc kubenswrapper[4881]: I0126 14:30:00.153954 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="150dbb8b-d22d-4b99-99de-8eb6e25e7ecc" containerName="registry-server" Jan 26 14:30:00 crc kubenswrapper[4881]: I0126 14:30:00.154670 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490630-5mzl5" Jan 26 14:30:00 crc kubenswrapper[4881]: I0126 14:30:00.156668 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 26 14:30:00 crc kubenswrapper[4881]: I0126 14:30:00.158081 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 26 14:30:00 crc kubenswrapper[4881]: I0126 14:30:00.164958 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490630-5mzl5"] Jan 26 14:30:00 crc kubenswrapper[4881]: I0126 14:30:00.331202 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjdq6\" (UniqueName: \"kubernetes.io/projected/5230b866-3f2b-4e39-a7e3-e87b050e2205-kube-api-access-vjdq6\") pod \"collect-profiles-29490630-5mzl5\" (UID: \"5230b866-3f2b-4e39-a7e3-e87b050e2205\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490630-5mzl5" Jan 26 14:30:00 crc kubenswrapper[4881]: I0126 14:30:00.332090 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5230b866-3f2b-4e39-a7e3-e87b050e2205-secret-volume\") pod \"collect-profiles-29490630-5mzl5\" (UID: \"5230b866-3f2b-4e39-a7e3-e87b050e2205\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490630-5mzl5" Jan 26 14:30:00 crc kubenswrapper[4881]: I0126 14:30:00.332137 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5230b866-3f2b-4e39-a7e3-e87b050e2205-config-volume\") pod \"collect-profiles-29490630-5mzl5\" (UID: \"5230b866-3f2b-4e39-a7e3-e87b050e2205\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490630-5mzl5" Jan 26 14:30:00 crc kubenswrapper[4881]: I0126 14:30:00.433912 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjdq6\" (UniqueName: \"kubernetes.io/projected/5230b866-3f2b-4e39-a7e3-e87b050e2205-kube-api-access-vjdq6\") pod \"collect-profiles-29490630-5mzl5\" (UID: \"5230b866-3f2b-4e39-a7e3-e87b050e2205\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490630-5mzl5" Jan 26 14:30:00 crc kubenswrapper[4881]: I0126 14:30:00.434070 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5230b866-3f2b-4e39-a7e3-e87b050e2205-secret-volume\") pod \"collect-profiles-29490630-5mzl5\" (UID: \"5230b866-3f2b-4e39-a7e3-e87b050e2205\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490630-5mzl5" Jan 26 14:30:00 crc kubenswrapper[4881]: I0126 14:30:00.434093 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5230b866-3f2b-4e39-a7e3-e87b050e2205-config-volume\") pod \"collect-profiles-29490630-5mzl5\" (UID: \"5230b866-3f2b-4e39-a7e3-e87b050e2205\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490630-5mzl5" Jan 26 14:30:00 crc kubenswrapper[4881]: I0126 14:30:00.435446 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5230b866-3f2b-4e39-a7e3-e87b050e2205-config-volume\") pod 
\"collect-profiles-29490630-5mzl5\" (UID: \"5230b866-3f2b-4e39-a7e3-e87b050e2205\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490630-5mzl5" Jan 26 14:30:00 crc kubenswrapper[4881]: I0126 14:30:00.443437 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5230b866-3f2b-4e39-a7e3-e87b050e2205-secret-volume\") pod \"collect-profiles-29490630-5mzl5\" (UID: \"5230b866-3f2b-4e39-a7e3-e87b050e2205\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490630-5mzl5" Jan 26 14:30:00 crc kubenswrapper[4881]: I0126 14:30:00.452162 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjdq6\" (UniqueName: \"kubernetes.io/projected/5230b866-3f2b-4e39-a7e3-e87b050e2205-kube-api-access-vjdq6\") pod \"collect-profiles-29490630-5mzl5\" (UID: \"5230b866-3f2b-4e39-a7e3-e87b050e2205\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490630-5mzl5" Jan 26 14:30:00 crc kubenswrapper[4881]: I0126 14:30:00.502083 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490630-5mzl5" Jan 26 14:30:00 crc kubenswrapper[4881]: I0126 14:30:00.955859 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490630-5mzl5"] Jan 26 14:30:00 crc kubenswrapper[4881]: I0126 14:30:00.979348 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490630-5mzl5" event={"ID":"5230b866-3f2b-4e39-a7e3-e87b050e2205","Type":"ContainerStarted","Data":"8461a283ea56503b2c8544d38fbc394fbef3e86ba102868e8e6f40437ee62163"} Jan 26 14:30:01 crc kubenswrapper[4881]: I0126 14:30:01.991684 4881 generic.go:334] "Generic (PLEG): container finished" podID="5230b866-3f2b-4e39-a7e3-e87b050e2205" containerID="801078c949e5ac8adb78799aa3d809b3aac2d68fc15a054e73cb775bbd7f0461" exitCode=0 Jan 26 14:30:01 crc kubenswrapper[4881]: I0126 14:30:01.991752 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490630-5mzl5" event={"ID":"5230b866-3f2b-4e39-a7e3-e87b050e2205","Type":"ContainerDied","Data":"801078c949e5ac8adb78799aa3d809b3aac2d68fc15a054e73cb775bbd7f0461"} Jan 26 14:30:03 crc kubenswrapper[4881]: I0126 14:30:03.467905 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490630-5mzl5" Jan 26 14:30:03 crc kubenswrapper[4881]: I0126 14:30:03.611831 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5230b866-3f2b-4e39-a7e3-e87b050e2205-config-volume\") pod \"5230b866-3f2b-4e39-a7e3-e87b050e2205\" (UID: \"5230b866-3f2b-4e39-a7e3-e87b050e2205\") " Jan 26 14:30:03 crc kubenswrapper[4881]: I0126 14:30:03.611937 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vjdq6\" (UniqueName: \"kubernetes.io/projected/5230b866-3f2b-4e39-a7e3-e87b050e2205-kube-api-access-vjdq6\") pod \"5230b866-3f2b-4e39-a7e3-e87b050e2205\" (UID: \"5230b866-3f2b-4e39-a7e3-e87b050e2205\") " Jan 26 14:30:03 crc kubenswrapper[4881]: I0126 14:30:03.612002 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5230b866-3f2b-4e39-a7e3-e87b050e2205-secret-volume\") pod \"5230b866-3f2b-4e39-a7e3-e87b050e2205\" (UID: \"5230b866-3f2b-4e39-a7e3-e87b050e2205\") " Jan 26 14:30:03 crc kubenswrapper[4881]: I0126 14:30:03.613475 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5230b866-3f2b-4e39-a7e3-e87b050e2205-config-volume" (OuterVolumeSpecName: "config-volume") pod "5230b866-3f2b-4e39-a7e3-e87b050e2205" (UID: "5230b866-3f2b-4e39-a7e3-e87b050e2205"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:30:03 crc kubenswrapper[4881]: I0126 14:30:03.617698 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5230b866-3f2b-4e39-a7e3-e87b050e2205-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "5230b866-3f2b-4e39-a7e3-e87b050e2205" (UID: "5230b866-3f2b-4e39-a7e3-e87b050e2205"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:30:03 crc kubenswrapper[4881]: I0126 14:30:03.619241 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5230b866-3f2b-4e39-a7e3-e87b050e2205-kube-api-access-vjdq6" (OuterVolumeSpecName: "kube-api-access-vjdq6") pod "5230b866-3f2b-4e39-a7e3-e87b050e2205" (UID: "5230b866-3f2b-4e39-a7e3-e87b050e2205"). InnerVolumeSpecName "kube-api-access-vjdq6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:30:03 crc kubenswrapper[4881]: I0126 14:30:03.715379 4881 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5230b866-3f2b-4e39-a7e3-e87b050e2205-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 14:30:03 crc kubenswrapper[4881]: I0126 14:30:03.718883 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vjdq6\" (UniqueName: \"kubernetes.io/projected/5230b866-3f2b-4e39-a7e3-e87b050e2205-kube-api-access-vjdq6\") on node \"crc\" DevicePath \"\"" Jan 26 14:30:03 crc kubenswrapper[4881]: I0126 14:30:03.718913 4881 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5230b866-3f2b-4e39-a7e3-e87b050e2205-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 26 14:30:04 crc kubenswrapper[4881]: I0126 14:30:04.019278 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490630-5mzl5" event={"ID":"5230b866-3f2b-4e39-a7e3-e87b050e2205","Type":"ContainerDied","Data":"8461a283ea56503b2c8544d38fbc394fbef3e86ba102868e8e6f40437ee62163"} Jan 26 14:30:04 crc kubenswrapper[4881]: I0126 14:30:04.019329 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8461a283ea56503b2c8544d38fbc394fbef3e86ba102868e8e6f40437ee62163" Jan 26 14:30:04 crc kubenswrapper[4881]: I0126 14:30:04.019367 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490630-5mzl5" Jan 26 14:30:04 crc kubenswrapper[4881]: I0126 14:30:04.574190 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490585-hrdd4"] Jan 26 14:30:04 crc kubenswrapper[4881]: I0126 14:30:04.584787 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490585-hrdd4"] Jan 26 14:30:06 crc kubenswrapper[4881]: I0126 14:30:06.114491 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9479d57c-d338-4b7c-aeb1-831795cd103d" path="/var/lib/kubelet/pods/9479d57c-d338-4b7c-aeb1-831795cd103d/volumes" Jan 26 14:30:08 crc kubenswrapper[4881]: I0126 14:30:08.093364 4881 scope.go:117] "RemoveContainer" containerID="3576d9ae2a0c939f74ed0088e4b53843689b0037e8a526f6fd82ad39b8913062" Jan 26 14:30:08 crc kubenswrapper[4881]: E0126 14:30:08.095092 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" Jan 26 14:30:22 crc kubenswrapper[4881]: I0126 14:30:22.090018 4881 scope.go:117] "RemoveContainer" containerID="3576d9ae2a0c939f74ed0088e4b53843689b0037e8a526f6fd82ad39b8913062" Jan 26 14:30:22 crc kubenswrapper[4881]: E0126 14:30:22.090750 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" Jan 26 14:30:35 crc kubenswrapper[4881]: I0126 14:30:35.083164 4881 scope.go:117] "RemoveContainer" containerID="3576d9ae2a0c939f74ed0088e4b53843689b0037e8a526f6fd82ad39b8913062" Jan 26 14:30:35 crc kubenswrapper[4881]: E0126 14:30:35.084017 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" Jan 26 14:30:46 crc kubenswrapper[4881]: I0126 14:30:46.083642 4881 scope.go:117] "RemoveContainer" containerID="3576d9ae2a0c939f74ed0088e4b53843689b0037e8a526f6fd82ad39b8913062" Jan 26 14:30:46 crc kubenswrapper[4881]: E0126 14:30:46.085076 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" Jan 26 14:30:50 crc kubenswrapper[4881]: I0126 14:30:50.593428 4881 generic.go:334] "Generic (PLEG): container finished" podID="a7be9b22-84f5-4bd5-995e-86a8fe91102e" containerID="d129c1aab282e9dd6c428331fcaaf25ce43b1b7e02647e9e7ed2a1b437617f1e" exitCode=0 Jan 26 14:30:50 crc kubenswrapper[4881]: I0126 14:30:50.593585 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ztph7/must-gather-klclw" event={"ID":"a7be9b22-84f5-4bd5-995e-86a8fe91102e","Type":"ContainerDied","Data":"d129c1aab282e9dd6c428331fcaaf25ce43b1b7e02647e9e7ed2a1b437617f1e"} Jan 26 14:30:50 crc kubenswrapper[4881]: I0126 14:30:50.595042 4881 scope.go:117] "RemoveContainer" containerID="d129c1aab282e9dd6c428331fcaaf25ce43b1b7e02647e9e7ed2a1b437617f1e" Jan 26 14:30:51 crc kubenswrapper[4881]: I0126 14:30:51.475090 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-ztph7_must-gather-klclw_a7be9b22-84f5-4bd5-995e-86a8fe91102e/gather/0.log" Jan 26 14:30:58 crc kubenswrapper[4881]: I0126 14:30:58.152717 4881 scope.go:117] "RemoveContainer" containerID="27bed579c2cfdc8541d6dfcbeaba9b1aa55176269933d5c1ad6ba4f2c9c8ae16" Jan 26 14:31:00 crc kubenswrapper[4881]: I0126 14:31:00.083766 4881 scope.go:117] "RemoveContainer" containerID="3576d9ae2a0c939f74ed0088e4b53843689b0037e8a526f6fd82ad39b8913062" Jan 26 14:31:00 crc kubenswrapper[4881]: E0126 14:31:00.084637 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" Jan 26 14:31:02 crc kubenswrapper[4881]: I0126 14:31:02.791432 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5bgc2"] Jan 26 14:31:02 crc kubenswrapper[4881]: E0126 14:31:02.791849 4881 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="5230b866-3f2b-4e39-a7e3-e87b050e2205" containerName="collect-profiles" Jan 26 14:31:02 crc kubenswrapper[4881]: I0126 14:31:02.791862 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="5230b866-3f2b-4e39-a7e3-e87b050e2205" containerName="collect-profiles" Jan 26 14:31:02 crc kubenswrapper[4881]: I0126 14:31:02.792052 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="5230b866-3f2b-4e39-a7e3-e87b050e2205" containerName="collect-profiles" Jan 26 14:31:02 crc kubenswrapper[4881]: I0126 14:31:02.793379 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5bgc2" Jan 26 14:31:02 crc kubenswrapper[4881]: I0126 14:31:02.808245 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5bgc2"] Jan 26 14:31:02 crc kubenswrapper[4881]: I0126 14:31:02.905682 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22zx4\" (UniqueName: \"kubernetes.io/projected/400ded3c-325c-4ec2-840a-c28e0d38fd9b-kube-api-access-22zx4\") pod \"community-operators-5bgc2\" (UID: \"400ded3c-325c-4ec2-840a-c28e0d38fd9b\") " pod="openshift-marketplace/community-operators-5bgc2" Jan 26 14:31:02 crc kubenswrapper[4881]: I0126 14:31:02.905794 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/400ded3c-325c-4ec2-840a-c28e0d38fd9b-utilities\") pod \"community-operators-5bgc2\" (UID: \"400ded3c-325c-4ec2-840a-c28e0d38fd9b\") " pod="openshift-marketplace/community-operators-5bgc2" Jan 26 14:31:02 crc kubenswrapper[4881]: I0126 14:31:02.906202 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/400ded3c-325c-4ec2-840a-c28e0d38fd9b-catalog-content\") pod \"community-operators-5bgc2\" (UID: \"400ded3c-325c-4ec2-840a-c28e0d38fd9b\") " pod="openshift-marketplace/community-operators-5bgc2" Jan 26 14:31:03 crc kubenswrapper[4881]: I0126 14:31:03.007815 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/400ded3c-325c-4ec2-840a-c28e0d38fd9b-catalog-content\") pod \"community-operators-5bgc2\" (UID: \"400ded3c-325c-4ec2-840a-c28e0d38fd9b\") " pod="openshift-marketplace/community-operators-5bgc2" Jan 26 14:31:03 crc kubenswrapper[4881]: I0126 14:31:03.008140 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22zx4\" (UniqueName: \"kubernetes.io/projected/400ded3c-325c-4ec2-840a-c28e0d38fd9b-kube-api-access-22zx4\") pod \"community-operators-5bgc2\" (UID: \"400ded3c-325c-4ec2-840a-c28e0d38fd9b\") " pod="openshift-marketplace/community-operators-5bgc2" Jan 26 14:31:03 crc kubenswrapper[4881]: I0126 14:31:03.008206 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/400ded3c-325c-4ec2-840a-c28e0d38fd9b-utilities\") pod \"community-operators-5bgc2\" (UID: \"400ded3c-325c-4ec2-840a-c28e0d38fd9b\") " pod="openshift-marketplace/community-operators-5bgc2" Jan 26 14:31:03 crc kubenswrapper[4881]: I0126 14:31:03.008581 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/400ded3c-325c-4ec2-840a-c28e0d38fd9b-utilities\") pod \"community-operators-5bgc2\" (UID: \"400ded3c-325c-4ec2-840a-c28e0d38fd9b\") " pod="openshift-marketplace/community-operators-5bgc2" Jan 26 14:31:03 crc kubenswrapper[4881]: I0126 14:31:03.008728 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/400ded3c-325c-4ec2-840a-c28e0d38fd9b-catalog-content\") pod \"community-operators-5bgc2\" (UID: \"400ded3c-325c-4ec2-840a-c28e0d38fd9b\") " pod="openshift-marketplace/community-operators-5bgc2" Jan 26 14:31:03 crc kubenswrapper[4881]: I0126 14:31:03.027409 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-22zx4\" (UniqueName: \"kubernetes.io/projected/400ded3c-325c-4ec2-840a-c28e0d38fd9b-kube-api-access-22zx4\") pod \"community-operators-5bgc2\" (UID: \"400ded3c-325c-4ec2-840a-c28e0d38fd9b\") " pod="openshift-marketplace/community-operators-5bgc2" Jan 26 14:31:03 crc kubenswrapper[4881]: I0126 14:31:03.121022 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5bgc2" Jan 26 14:31:03 crc kubenswrapper[4881]: I0126 14:31:03.672927 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5bgc2"] Jan 26 14:31:03 crc kubenswrapper[4881]: I0126 14:31:03.739272 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5bgc2" event={"ID":"400ded3c-325c-4ec2-840a-c28e0d38fd9b","Type":"ContainerStarted","Data":"0eb90e769fcd649a1a1874a432e6b5f71eb74d245a6ed61d80f73db5c003da44"} Jan 26 14:31:04 crc kubenswrapper[4881]: I0126 14:31:04.750355 4881 generic.go:334] "Generic (PLEG): container finished" podID="400ded3c-325c-4ec2-840a-c28e0d38fd9b" containerID="7fb0d18fd51805250cc0e17e30cc91652393eecee9dd3277a42fdb1a7e3cc0cb" exitCode=0 Jan 26 14:31:04 crc kubenswrapper[4881]: I0126 14:31:04.750410 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5bgc2" event={"ID":"400ded3c-325c-4ec2-840a-c28e0d38fd9b","Type":"ContainerDied","Data":"7fb0d18fd51805250cc0e17e30cc91652393eecee9dd3277a42fdb1a7e3cc0cb"} Jan 26 14:31:05 crc kubenswrapper[4881]: I0126 14:31:05.020105 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-ztph7/must-gather-klclw"] Jan 26 14:31:05 crc kubenswrapper[4881]: I0126 14:31:05.021041 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-ztph7/must-gather-klclw" podUID="a7be9b22-84f5-4bd5-995e-86a8fe91102e" containerName="copy" containerID="cri-o://e51bc052ac657abf937176145259151a962ffdfe328846361c5bab354aedfa5a" gracePeriod=2 Jan 26 14:31:05 crc kubenswrapper[4881]: I0126 14:31:05.032104 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-ztph7/must-gather-klclw"] Jan 26 14:31:05 crc kubenswrapper[4881]: I0126 14:31:05.483814 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-ztph7_must-gather-klclw_a7be9b22-84f5-4bd5-995e-86a8fe91102e/copy/0.log" Jan 26 14:31:05 crc kubenswrapper[4881]: I0126 14:31:05.484549 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-ztph7/must-gather-klclw" Jan 26 14:31:05 crc kubenswrapper[4881]: I0126 14:31:05.666956 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4k29\" (UniqueName: \"kubernetes.io/projected/a7be9b22-84f5-4bd5-995e-86a8fe91102e-kube-api-access-s4k29\") pod \"a7be9b22-84f5-4bd5-995e-86a8fe91102e\" (UID: \"a7be9b22-84f5-4bd5-995e-86a8fe91102e\") " Jan 26 14:31:05 crc kubenswrapper[4881]: I0126 14:31:05.667055 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a7be9b22-84f5-4bd5-995e-86a8fe91102e-must-gather-output\") pod \"a7be9b22-84f5-4bd5-995e-86a8fe91102e\" (UID: \"a7be9b22-84f5-4bd5-995e-86a8fe91102e\") " Jan 26 14:31:05 crc kubenswrapper[4881]: I0126 14:31:05.676742 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7be9b22-84f5-4bd5-995e-86a8fe91102e-kube-api-access-s4k29" (OuterVolumeSpecName: "kube-api-access-s4k29") pod "a7be9b22-84f5-4bd5-995e-86a8fe91102e" (UID: "a7be9b22-84f5-4bd5-995e-86a8fe91102e"). InnerVolumeSpecName "kube-api-access-s4k29". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:31:05 crc kubenswrapper[4881]: I0126 14:31:05.763098 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5bgc2" event={"ID":"400ded3c-325c-4ec2-840a-c28e0d38fd9b","Type":"ContainerStarted","Data":"81c58d4fc3aa0bebb6eee64cea6330b882d7f477f786abbf5d7f73c67a42f303"} Jan 26 14:31:05 crc kubenswrapper[4881]: I0126 14:31:05.765348 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-ztph7_must-gather-klclw_a7be9b22-84f5-4bd5-995e-86a8fe91102e/copy/0.log" Jan 26 14:31:05 crc kubenswrapper[4881]: I0126 14:31:05.765691 4881 generic.go:334] "Generic (PLEG): container finished" podID="a7be9b22-84f5-4bd5-995e-86a8fe91102e" containerID="e51bc052ac657abf937176145259151a962ffdfe328846361c5bab354aedfa5a" exitCode=143 Jan 26 14:31:05 crc kubenswrapper[4881]: I0126 14:31:05.765763 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-ztph7/must-gather-klclw" Jan 26 14:31:05 crc kubenswrapper[4881]: I0126 14:31:05.765766 4881 scope.go:117] "RemoveContainer" containerID="e51bc052ac657abf937176145259151a962ffdfe328846361c5bab354aedfa5a" Jan 26 14:31:05 crc kubenswrapper[4881]: I0126 14:31:05.769994 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4k29\" (UniqueName: \"kubernetes.io/projected/a7be9b22-84f5-4bd5-995e-86a8fe91102e-kube-api-access-s4k29\") on node \"crc\" DevicePath \"\"" Jan 26 14:31:05 crc kubenswrapper[4881]: I0126 14:31:05.800451 4881 scope.go:117] "RemoveContainer" containerID="d129c1aab282e9dd6c428331fcaaf25ce43b1b7e02647e9e7ed2a1b437617f1e" Jan 26 14:31:05 crc kubenswrapper[4881]: I0126 14:31:05.858860 4881 scope.go:117] "RemoveContainer" containerID="e51bc052ac657abf937176145259151a962ffdfe328846361c5bab354aedfa5a" Jan 26 14:31:05 crc kubenswrapper[4881]: E0126 14:31:05.859262 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e51bc052ac657abf937176145259151a962ffdfe328846361c5bab354aedfa5a\": container with ID starting with e51bc052ac657abf937176145259151a962ffdfe328846361c5bab354aedfa5a not found: ID does not exist" containerID="e51bc052ac657abf937176145259151a962ffdfe328846361c5bab354aedfa5a" Jan 26 14:31:05 crc kubenswrapper[4881]: I0126 14:31:05.859291 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e51bc052ac657abf937176145259151a962ffdfe328846361c5bab354aedfa5a"} err="failed to get container status \"e51bc052ac657abf937176145259151a962ffdfe328846361c5bab354aedfa5a\": rpc error: code = NotFound desc = could not find container \"e51bc052ac657abf937176145259151a962ffdfe328846361c5bab354aedfa5a\": container with ID starting with e51bc052ac657abf937176145259151a962ffdfe328846361c5bab354aedfa5a not found: ID does not exist" Jan 26 14:31:05 crc kubenswrapper[4881]: I0126 14:31:05.859312 4881 scope.go:117] "RemoveContainer" containerID="d129c1aab282e9dd6c428331fcaaf25ce43b1b7e02647e9e7ed2a1b437617f1e" Jan 26 14:31:05 crc kubenswrapper[4881]: E0126 14:31:05.859707 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d129c1aab282e9dd6c428331fcaaf25ce43b1b7e02647e9e7ed2a1b437617f1e\": container with ID starting with d129c1aab282e9dd6c428331fcaaf25ce43b1b7e02647e9e7ed2a1b437617f1e not found: ID does not exist" containerID="d129c1aab282e9dd6c428331fcaaf25ce43b1b7e02647e9e7ed2a1b437617f1e" Jan 26 14:31:05 crc kubenswrapper[4881]: I0126 14:31:05.859729 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d129c1aab282e9dd6c428331fcaaf25ce43b1b7e02647e9e7ed2a1b437617f1e"} err="failed to get container status \"d129c1aab282e9dd6c428331fcaaf25ce43b1b7e02647e9e7ed2a1b437617f1e\": rpc error: code = NotFound desc = could not find container \"d129c1aab282e9dd6c428331fcaaf25ce43b1b7e02647e9e7ed2a1b437617f1e\": container with ID starting with d129c1aab282e9dd6c428331fcaaf25ce43b1b7e02647e9e7ed2a1b437617f1e not found: ID does not exist" Jan 26 14:31:05 crc kubenswrapper[4881]: I0126 14:31:05.887693 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7be9b22-84f5-4bd5-995e-86a8fe91102e-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "a7be9b22-84f5-4bd5-995e-86a8fe91102e" (UID: "a7be9b22-84f5-4bd5-995e-86a8fe91102e"). 
InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:31:05 crc kubenswrapper[4881]: I0126 14:31:05.987165 4881 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a7be9b22-84f5-4bd5-995e-86a8fe91102e-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 26 14:31:06 crc kubenswrapper[4881]: I0126 14:31:06.107194 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7be9b22-84f5-4bd5-995e-86a8fe91102e" path="/var/lib/kubelet/pods/a7be9b22-84f5-4bd5-995e-86a8fe91102e/volumes" Jan 26 14:31:06 crc kubenswrapper[4881]: I0126 14:31:06.779152 4881 generic.go:334] "Generic (PLEG): container finished" podID="400ded3c-325c-4ec2-840a-c28e0d38fd9b" containerID="81c58d4fc3aa0bebb6eee64cea6330b882d7f477f786abbf5d7f73c67a42f303" exitCode=0 Jan 26 14:31:06 crc kubenswrapper[4881]: I0126 14:31:06.779194 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5bgc2" event={"ID":"400ded3c-325c-4ec2-840a-c28e0d38fd9b","Type":"ContainerDied","Data":"81c58d4fc3aa0bebb6eee64cea6330b882d7f477f786abbf5d7f73c67a42f303"} Jan 26 14:31:06 crc kubenswrapper[4881]: I0126 14:31:06.781437 4881 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 14:31:08 crc kubenswrapper[4881]: I0126 14:31:08.803614 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5bgc2" event={"ID":"400ded3c-325c-4ec2-840a-c28e0d38fd9b","Type":"ContainerStarted","Data":"309c020d0d08e71b11a02e3a338da7a5796ff1a14b6a295998228aafa5f6a2ac"} Jan 26 14:31:08 crc kubenswrapper[4881]: I0126 14:31:08.830241 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-5bgc2" podStartSLOduration=4.262017122 podStartE2EDuration="6.830214893s" podCreationTimestamp="2026-01-26 14:31:02 +0000 UTC" firstStartedPulling="2026-01-26 14:31:04.752970036 +0000 UTC m=+6937.232280062" lastFinishedPulling="2026-01-26 14:31:07.321167777 +0000 UTC m=+6939.800477833" observedRunningTime="2026-01-26 14:31:08.827937928 +0000 UTC m=+6941.307247954" watchObservedRunningTime="2026-01-26 14:31:08.830214893 +0000 UTC m=+6941.309524919" Jan 26 14:31:11 crc kubenswrapper[4881]: I0126 14:31:11.082687 4881 scope.go:117] "RemoveContainer" containerID="3576d9ae2a0c939f74ed0088e4b53843689b0037e8a526f6fd82ad39b8913062" Jan 26 14:31:11 crc kubenswrapper[4881]: E0126 14:31:11.083305 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" Jan 26 14:31:13 crc kubenswrapper[4881]: I0126 14:31:13.122030 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-5bgc2" Jan 26 14:31:13 crc kubenswrapper[4881]: I0126 14:31:13.122379 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-5bgc2" Jan 26 14:31:13 crc kubenswrapper[4881]: I0126 14:31:13.198165 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/community-operators-5bgc2" Jan 26 14:31:13 crc kubenswrapper[4881]: I0126 14:31:13.921450 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-5bgc2" Jan 26 14:31:13 crc kubenswrapper[4881]: I0126 14:31:13.975491 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5bgc2"] Jan 26 14:31:15 crc kubenswrapper[4881]: I0126 14:31:15.906211 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-5bgc2" podUID="400ded3c-325c-4ec2-840a-c28e0d38fd9b" containerName="registry-server" containerID="cri-o://309c020d0d08e71b11a02e3a338da7a5796ff1a14b6a295998228aafa5f6a2ac" gracePeriod=2 Jan 26 14:31:16 crc kubenswrapper[4881]: I0126 14:31:16.601054 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5bgc2" Jan 26 14:31:16 crc kubenswrapper[4881]: I0126 14:31:16.654899 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-22zx4\" (UniqueName: \"kubernetes.io/projected/400ded3c-325c-4ec2-840a-c28e0d38fd9b-kube-api-access-22zx4\") pod \"400ded3c-325c-4ec2-840a-c28e0d38fd9b\" (UID: \"400ded3c-325c-4ec2-840a-c28e0d38fd9b\") " Jan 26 14:31:16 crc kubenswrapper[4881]: I0126 14:31:16.654975 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/400ded3c-325c-4ec2-840a-c28e0d38fd9b-catalog-content\") pod \"400ded3c-325c-4ec2-840a-c28e0d38fd9b\" (UID: \"400ded3c-325c-4ec2-840a-c28e0d38fd9b\") " Jan 26 14:31:16 crc kubenswrapper[4881]: I0126 14:31:16.655030 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/400ded3c-325c-4ec2-840a-c28e0d38fd9b-utilities\") pod \"400ded3c-325c-4ec2-840a-c28e0d38fd9b\" (UID: \"400ded3c-325c-4ec2-840a-c28e0d38fd9b\") " Jan 26 14:31:16 crc kubenswrapper[4881]: I0126 14:31:16.655818 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/400ded3c-325c-4ec2-840a-c28e0d38fd9b-utilities" (OuterVolumeSpecName: "utilities") pod "400ded3c-325c-4ec2-840a-c28e0d38fd9b" (UID: "400ded3c-325c-4ec2-840a-c28e0d38fd9b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:31:16 crc kubenswrapper[4881]: I0126 14:31:16.677351 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/400ded3c-325c-4ec2-840a-c28e0d38fd9b-kube-api-access-22zx4" (OuterVolumeSpecName: "kube-api-access-22zx4") pod "400ded3c-325c-4ec2-840a-c28e0d38fd9b" (UID: "400ded3c-325c-4ec2-840a-c28e0d38fd9b"). InnerVolumeSpecName "kube-api-access-22zx4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:31:16 crc kubenswrapper[4881]: I0126 14:31:16.731119 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/400ded3c-325c-4ec2-840a-c28e0d38fd9b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "400ded3c-325c-4ec2-840a-c28e0d38fd9b" (UID: "400ded3c-325c-4ec2-840a-c28e0d38fd9b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:31:16 crc kubenswrapper[4881]: I0126 14:31:16.757438 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-22zx4\" (UniqueName: \"kubernetes.io/projected/400ded3c-325c-4ec2-840a-c28e0d38fd9b-kube-api-access-22zx4\") on node \"crc\" DevicePath \"\"" Jan 26 14:31:16 crc kubenswrapper[4881]: I0126 14:31:16.757484 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/400ded3c-325c-4ec2-840a-c28e0d38fd9b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 14:31:16 crc kubenswrapper[4881]: I0126 14:31:16.757498 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/400ded3c-325c-4ec2-840a-c28e0d38fd9b-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 14:31:16 crc kubenswrapper[4881]: I0126 14:31:16.919020 4881 generic.go:334] "Generic (PLEG): container finished" podID="400ded3c-325c-4ec2-840a-c28e0d38fd9b" containerID="309c020d0d08e71b11a02e3a338da7a5796ff1a14b6a295998228aafa5f6a2ac" exitCode=0 Jan 26 14:31:16 crc kubenswrapper[4881]: I0126 14:31:16.919089 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5bgc2" event={"ID":"400ded3c-325c-4ec2-840a-c28e0d38fd9b","Type":"ContainerDied","Data":"309c020d0d08e71b11a02e3a338da7a5796ff1a14b6a295998228aafa5f6a2ac"} Jan 26 14:31:16 crc kubenswrapper[4881]: I0126 14:31:16.919132 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5bgc2" event={"ID":"400ded3c-325c-4ec2-840a-c28e0d38fd9b","Type":"ContainerDied","Data":"0eb90e769fcd649a1a1874a432e6b5f71eb74d245a6ed61d80f73db5c003da44"} Jan 26 14:31:16 crc kubenswrapper[4881]: I0126 14:31:16.919162 4881 scope.go:117] "RemoveContainer" containerID="309c020d0d08e71b11a02e3a338da7a5796ff1a14b6a295998228aafa5f6a2ac" Jan 26 14:31:16 crc kubenswrapper[4881]: I0126 14:31:16.919373 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5bgc2" Jan 26 14:31:16 crc kubenswrapper[4881]: I0126 14:31:16.942696 4881 scope.go:117] "RemoveContainer" containerID="81c58d4fc3aa0bebb6eee64cea6330b882d7f477f786abbf5d7f73c67a42f303" Jan 26 14:31:16 crc kubenswrapper[4881]: I0126 14:31:16.959358 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5bgc2"] Jan 26 14:31:16 crc kubenswrapper[4881]: I0126 14:31:16.970315 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-5bgc2"] Jan 26 14:31:16 crc kubenswrapper[4881]: I0126 14:31:16.976839 4881 scope.go:117] "RemoveContainer" containerID="7fb0d18fd51805250cc0e17e30cc91652393eecee9dd3277a42fdb1a7e3cc0cb" Jan 26 14:31:17 crc kubenswrapper[4881]: I0126 14:31:17.033920 4881 scope.go:117] "RemoveContainer" containerID="309c020d0d08e71b11a02e3a338da7a5796ff1a14b6a295998228aafa5f6a2ac" Jan 26 14:31:17 crc kubenswrapper[4881]: E0126 14:31:17.034268 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"309c020d0d08e71b11a02e3a338da7a5796ff1a14b6a295998228aafa5f6a2ac\": container with ID starting with 309c020d0d08e71b11a02e3a338da7a5796ff1a14b6a295998228aafa5f6a2ac not found: ID does not exist" containerID="309c020d0d08e71b11a02e3a338da7a5796ff1a14b6a295998228aafa5f6a2ac" Jan 26 14:31:17 crc kubenswrapper[4881]: I0126 14:31:17.034297 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"309c020d0d08e71b11a02e3a338da7a5796ff1a14b6a295998228aafa5f6a2ac"} err="failed to get container status \"309c020d0d08e71b11a02e3a338da7a5796ff1a14b6a295998228aafa5f6a2ac\": rpc error: code = NotFound desc = could not find container \"309c020d0d08e71b11a02e3a338da7a5796ff1a14b6a295998228aafa5f6a2ac\": container with ID starting with 309c020d0d08e71b11a02e3a338da7a5796ff1a14b6a295998228aafa5f6a2ac not found: ID does not exist" Jan 26 14:31:17 crc kubenswrapper[4881]: I0126 14:31:17.034316 4881 scope.go:117] "RemoveContainer" containerID="81c58d4fc3aa0bebb6eee64cea6330b882d7f477f786abbf5d7f73c67a42f303" Jan 26 14:31:17 crc kubenswrapper[4881]: E0126 14:31:17.034887 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"81c58d4fc3aa0bebb6eee64cea6330b882d7f477f786abbf5d7f73c67a42f303\": container with ID starting with 81c58d4fc3aa0bebb6eee64cea6330b882d7f477f786abbf5d7f73c67a42f303 not found: ID does not exist" containerID="81c58d4fc3aa0bebb6eee64cea6330b882d7f477f786abbf5d7f73c67a42f303" Jan 26 14:31:17 crc kubenswrapper[4881]: I0126 14:31:17.034909 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81c58d4fc3aa0bebb6eee64cea6330b882d7f477f786abbf5d7f73c67a42f303"} err="failed to get container status \"81c58d4fc3aa0bebb6eee64cea6330b882d7f477f786abbf5d7f73c67a42f303\": rpc error: code = NotFound desc = could not find container \"81c58d4fc3aa0bebb6eee64cea6330b882d7f477f786abbf5d7f73c67a42f303\": container with ID starting with 81c58d4fc3aa0bebb6eee64cea6330b882d7f477f786abbf5d7f73c67a42f303 not found: ID does not exist" Jan 26 14:31:17 crc kubenswrapper[4881]: I0126 14:31:17.034922 4881 scope.go:117] "RemoveContainer" containerID="7fb0d18fd51805250cc0e17e30cc91652393eecee9dd3277a42fdb1a7e3cc0cb" Jan 26 14:31:17 crc kubenswrapper[4881]: E0126 14:31:17.035347 4881 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"7fb0d18fd51805250cc0e17e30cc91652393eecee9dd3277a42fdb1a7e3cc0cb\": container with ID starting with 7fb0d18fd51805250cc0e17e30cc91652393eecee9dd3277a42fdb1a7e3cc0cb not found: ID does not exist" containerID="7fb0d18fd51805250cc0e17e30cc91652393eecee9dd3277a42fdb1a7e3cc0cb" Jan 26 14:31:17 crc kubenswrapper[4881]: I0126 14:31:17.035374 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7fb0d18fd51805250cc0e17e30cc91652393eecee9dd3277a42fdb1a7e3cc0cb"} err="failed to get container status \"7fb0d18fd51805250cc0e17e30cc91652393eecee9dd3277a42fdb1a7e3cc0cb\": rpc error: code = NotFound desc = could not find container \"7fb0d18fd51805250cc0e17e30cc91652393eecee9dd3277a42fdb1a7e3cc0cb\": container with ID starting with 7fb0d18fd51805250cc0e17e30cc91652393eecee9dd3277a42fdb1a7e3cc0cb not found: ID does not exist" Jan 26 14:31:18 crc kubenswrapper[4881]: I0126 14:31:18.096231 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="400ded3c-325c-4ec2-840a-c28e0d38fd9b" path="/var/lib/kubelet/pods/400ded3c-325c-4ec2-840a-c28e0d38fd9b/volumes" Jan 26 14:31:26 crc kubenswrapper[4881]: I0126 14:31:26.082738 4881 scope.go:117] "RemoveContainer" containerID="3576d9ae2a0c939f74ed0088e4b53843689b0037e8a526f6fd82ad39b8913062" Jan 26 14:31:26 crc kubenswrapper[4881]: E0126 14:31:26.083564 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" Jan 26 14:31:42 crc kubenswrapper[4881]: I0126 14:31:42.031912 4881 scope.go:117] "RemoveContainer" containerID="3576d9ae2a0c939f74ed0088e4b53843689b0037e8a526f6fd82ad39b8913062" Jan 26 14:31:42 crc kubenswrapper[4881]: E0126 14:31:42.032916 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" Jan 26 14:31:56 crc kubenswrapper[4881]: I0126 14:31:56.083698 4881 scope.go:117] "RemoveContainer" containerID="3576d9ae2a0c939f74ed0088e4b53843689b0037e8a526f6fd82ad39b8913062" Jan 26 14:31:56 crc kubenswrapper[4881]: E0126 14:31:56.084968 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" Jan 26 14:31:58 crc kubenswrapper[4881]: I0126 14:31:58.225141 4881 scope.go:117] "RemoveContainer" containerID="aa44870ffcf89f96ceded700e9a58d59d40da44b4682c00111c21ca5d1d6516d" Jan 26 14:32:07 crc kubenswrapper[4881]: I0126 14:32:07.082314 4881 scope.go:117] "RemoveContainer" 
containerID="3576d9ae2a0c939f74ed0088e4b53843689b0037e8a526f6fd82ad39b8913062" Jan 26 14:32:07 crc kubenswrapper[4881]: E0126 14:32:07.082970 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" Jan 26 14:32:20 crc kubenswrapper[4881]: I0126 14:32:20.750784 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-hzbgz"] Jan 26 14:32:20 crc kubenswrapper[4881]: E0126 14:32:20.751802 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="400ded3c-325c-4ec2-840a-c28e0d38fd9b" containerName="extract-utilities" Jan 26 14:32:20 crc kubenswrapper[4881]: I0126 14:32:20.751817 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="400ded3c-325c-4ec2-840a-c28e0d38fd9b" containerName="extract-utilities" Jan 26 14:32:20 crc kubenswrapper[4881]: E0126 14:32:20.751834 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7be9b22-84f5-4bd5-995e-86a8fe91102e" containerName="gather" Jan 26 14:32:20 crc kubenswrapper[4881]: I0126 14:32:20.751842 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7be9b22-84f5-4bd5-995e-86a8fe91102e" containerName="gather" Jan 26 14:32:20 crc kubenswrapper[4881]: E0126 14:32:20.751876 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="400ded3c-325c-4ec2-840a-c28e0d38fd9b" containerName="registry-server" Jan 26 14:32:20 crc kubenswrapper[4881]: I0126 14:32:20.751884 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="400ded3c-325c-4ec2-840a-c28e0d38fd9b" containerName="registry-server" Jan 26 14:32:20 crc kubenswrapper[4881]: E0126 14:32:20.751904 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="400ded3c-325c-4ec2-840a-c28e0d38fd9b" containerName="extract-content" Jan 26 14:32:20 crc kubenswrapper[4881]: I0126 14:32:20.751912 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="400ded3c-325c-4ec2-840a-c28e0d38fd9b" containerName="extract-content" Jan 26 14:32:20 crc kubenswrapper[4881]: E0126 14:32:20.751921 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7be9b22-84f5-4bd5-995e-86a8fe91102e" containerName="copy" Jan 26 14:32:20 crc kubenswrapper[4881]: I0126 14:32:20.751929 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7be9b22-84f5-4bd5-995e-86a8fe91102e" containerName="copy" Jan 26 14:32:20 crc kubenswrapper[4881]: I0126 14:32:20.752194 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7be9b22-84f5-4bd5-995e-86a8fe91102e" containerName="gather" Jan 26 14:32:20 crc kubenswrapper[4881]: I0126 14:32:20.752220 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7be9b22-84f5-4bd5-995e-86a8fe91102e" containerName="copy" Jan 26 14:32:20 crc kubenswrapper[4881]: I0126 14:32:20.752238 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="400ded3c-325c-4ec2-840a-c28e0d38fd9b" containerName="registry-server" Jan 26 14:32:20 crc kubenswrapper[4881]: I0126 14:32:20.753954 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hzbgz" Jan 26 14:32:20 crc kubenswrapper[4881]: I0126 14:32:20.761853 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hzbgz"] Jan 26 14:32:20 crc kubenswrapper[4881]: I0126 14:32:20.886417 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d04dccde-94d3-44a2-acb0-f8ce1b2f9a9d-utilities\") pod \"certified-operators-hzbgz\" (UID: \"d04dccde-94d3-44a2-acb0-f8ce1b2f9a9d\") " pod="openshift-marketplace/certified-operators-hzbgz" Jan 26 14:32:20 crc kubenswrapper[4881]: I0126 14:32:20.886612 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlf6k\" (UniqueName: \"kubernetes.io/projected/d04dccde-94d3-44a2-acb0-f8ce1b2f9a9d-kube-api-access-nlf6k\") pod \"certified-operators-hzbgz\" (UID: \"d04dccde-94d3-44a2-acb0-f8ce1b2f9a9d\") " pod="openshift-marketplace/certified-operators-hzbgz" Jan 26 14:32:20 crc kubenswrapper[4881]: I0126 14:32:20.886665 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d04dccde-94d3-44a2-acb0-f8ce1b2f9a9d-catalog-content\") pod \"certified-operators-hzbgz\" (UID: \"d04dccde-94d3-44a2-acb0-f8ce1b2f9a9d\") " pod="openshift-marketplace/certified-operators-hzbgz" Jan 26 14:32:20 crc kubenswrapper[4881]: I0126 14:32:20.988409 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d04dccde-94d3-44a2-acb0-f8ce1b2f9a9d-utilities\") pod \"certified-operators-hzbgz\" (UID: \"d04dccde-94d3-44a2-acb0-f8ce1b2f9a9d\") " pod="openshift-marketplace/certified-operators-hzbgz" Jan 26 14:32:20 crc kubenswrapper[4881]: I0126 14:32:20.988547 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nlf6k\" (UniqueName: \"kubernetes.io/projected/d04dccde-94d3-44a2-acb0-f8ce1b2f9a9d-kube-api-access-nlf6k\") pod \"certified-operators-hzbgz\" (UID: \"d04dccde-94d3-44a2-acb0-f8ce1b2f9a9d\") " pod="openshift-marketplace/certified-operators-hzbgz" Jan 26 14:32:20 crc kubenswrapper[4881]: I0126 14:32:20.988578 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d04dccde-94d3-44a2-acb0-f8ce1b2f9a9d-catalog-content\") pod \"certified-operators-hzbgz\" (UID: \"d04dccde-94d3-44a2-acb0-f8ce1b2f9a9d\") " pod="openshift-marketplace/certified-operators-hzbgz" Jan 26 14:32:20 crc kubenswrapper[4881]: I0126 14:32:20.989065 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d04dccde-94d3-44a2-acb0-f8ce1b2f9a9d-catalog-content\") pod \"certified-operators-hzbgz\" (UID: \"d04dccde-94d3-44a2-acb0-f8ce1b2f9a9d\") " pod="openshift-marketplace/certified-operators-hzbgz" Jan 26 14:32:20 crc kubenswrapper[4881]: I0126 14:32:20.989905 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d04dccde-94d3-44a2-acb0-f8ce1b2f9a9d-utilities\") pod \"certified-operators-hzbgz\" (UID: \"d04dccde-94d3-44a2-acb0-f8ce1b2f9a9d\") " pod="openshift-marketplace/certified-operators-hzbgz" Jan 26 14:32:21 crc kubenswrapper[4881]: I0126 14:32:21.013223 4881 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-nlf6k\" (UniqueName: \"kubernetes.io/projected/d04dccde-94d3-44a2-acb0-f8ce1b2f9a9d-kube-api-access-nlf6k\") pod \"certified-operators-hzbgz\" (UID: \"d04dccde-94d3-44a2-acb0-f8ce1b2f9a9d\") " pod="openshift-marketplace/certified-operators-hzbgz" Jan 26 14:32:21 crc kubenswrapper[4881]: I0126 14:32:21.108328 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hzbgz" Jan 26 14:32:21 crc kubenswrapper[4881]: I0126 14:32:21.699041 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hzbgz"] Jan 26 14:32:22 crc kubenswrapper[4881]: I0126 14:32:22.083075 4881 scope.go:117] "RemoveContainer" containerID="3576d9ae2a0c939f74ed0088e4b53843689b0037e8a526f6fd82ad39b8913062" Jan 26 14:32:22 crc kubenswrapper[4881]: E0126 14:32:22.083393 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" Jan 26 14:32:22 crc kubenswrapper[4881]: I0126 14:32:22.503986 4881 generic.go:334] "Generic (PLEG): container finished" podID="d04dccde-94d3-44a2-acb0-f8ce1b2f9a9d" containerID="4952668f6560a6f6e2110c8ed14c86a5098281c81c22863861a48f66ba5823de" exitCode=0 Jan 26 14:32:22 crc kubenswrapper[4881]: I0126 14:32:22.504320 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hzbgz" event={"ID":"d04dccde-94d3-44a2-acb0-f8ce1b2f9a9d","Type":"ContainerDied","Data":"4952668f6560a6f6e2110c8ed14c86a5098281c81c22863861a48f66ba5823de"} Jan 26 14:32:22 crc kubenswrapper[4881]: I0126 14:32:22.504351 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hzbgz" event={"ID":"d04dccde-94d3-44a2-acb0-f8ce1b2f9a9d","Type":"ContainerStarted","Data":"e49fac98ba4eae4c4da7c018075a78342bb0ee9475c7760bbeb4b63f074a21e5"} Jan 26 14:32:23 crc kubenswrapper[4881]: I0126 14:32:23.518780 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hzbgz" event={"ID":"d04dccde-94d3-44a2-acb0-f8ce1b2f9a9d","Type":"ContainerStarted","Data":"8fb22e93748b1d96c79cdae5f8c4e958c35ab4621d06d0f6ad221eba1cb3dedb"} Jan 26 14:32:25 crc kubenswrapper[4881]: I0126 14:32:25.549612 4881 generic.go:334] "Generic (PLEG): container finished" podID="d04dccde-94d3-44a2-acb0-f8ce1b2f9a9d" containerID="8fb22e93748b1d96c79cdae5f8c4e958c35ab4621d06d0f6ad221eba1cb3dedb" exitCode=0 Jan 26 14:32:25 crc kubenswrapper[4881]: I0126 14:32:25.549764 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hzbgz" event={"ID":"d04dccde-94d3-44a2-acb0-f8ce1b2f9a9d","Type":"ContainerDied","Data":"8fb22e93748b1d96c79cdae5f8c4e958c35ab4621d06d0f6ad221eba1cb3dedb"} Jan 26 14:32:27 crc kubenswrapper[4881]: I0126 14:32:27.571285 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hzbgz" event={"ID":"d04dccde-94d3-44a2-acb0-f8ce1b2f9a9d","Type":"ContainerStarted","Data":"9bfecab9baf239cd7ecb68a848215c673f16997d3f1170de844fe041248b5432"} Jan 26 14:32:27 crc kubenswrapper[4881]: I0126 14:32:27.601711 4881 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-hzbgz" podStartSLOduration=2.887593013 podStartE2EDuration="7.601693007s" podCreationTimestamp="2026-01-26 14:32:20 +0000 UTC" firstStartedPulling="2026-01-26 14:32:22.507643775 +0000 UTC m=+7014.986953801" lastFinishedPulling="2026-01-26 14:32:27.221743759 +0000 UTC m=+7019.701053795" observedRunningTime="2026-01-26 14:32:27.596555985 +0000 UTC m=+7020.075866031" watchObservedRunningTime="2026-01-26 14:32:27.601693007 +0000 UTC m=+7020.081003033" Jan 26 14:32:31 crc kubenswrapper[4881]: I0126 14:32:31.108622 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-hzbgz" Jan 26 14:32:31 crc kubenswrapper[4881]: I0126 14:32:31.109645 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-hzbgz" Jan 26 14:32:31 crc kubenswrapper[4881]: I0126 14:32:31.172271 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-hzbgz" Jan 26 14:32:36 crc kubenswrapper[4881]: I0126 14:32:36.084796 4881 scope.go:117] "RemoveContainer" containerID="3576d9ae2a0c939f74ed0088e4b53843689b0037e8a526f6fd82ad39b8913062" Jan 26 14:32:36 crc kubenswrapper[4881]: E0126 14:32:36.085726 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" Jan 26 14:32:41 crc kubenswrapper[4881]: I0126 14:32:41.200017 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-hzbgz" Jan 26 14:32:41 crc kubenswrapper[4881]: I0126 14:32:41.302874 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hzbgz"] Jan 26 14:32:42 crc kubenswrapper[4881]: I0126 14:32:42.141804 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-hzbgz" podUID="d04dccde-94d3-44a2-acb0-f8ce1b2f9a9d" containerName="registry-server" containerID="cri-o://9bfecab9baf239cd7ecb68a848215c673f16997d3f1170de844fe041248b5432" gracePeriod=2 Jan 26 14:32:43 crc kubenswrapper[4881]: I0126 14:32:43.153633 4881 generic.go:334] "Generic (PLEG): container finished" podID="d04dccde-94d3-44a2-acb0-f8ce1b2f9a9d" containerID="9bfecab9baf239cd7ecb68a848215c673f16997d3f1170de844fe041248b5432" exitCode=0 Jan 26 14:32:43 crc kubenswrapper[4881]: I0126 14:32:43.153699 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hzbgz" event={"ID":"d04dccde-94d3-44a2-acb0-f8ce1b2f9a9d","Type":"ContainerDied","Data":"9bfecab9baf239cd7ecb68a848215c673f16997d3f1170de844fe041248b5432"} Jan 26 14:32:43 crc kubenswrapper[4881]: I0126 14:32:43.154055 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hzbgz" event={"ID":"d04dccde-94d3-44a2-acb0-f8ce1b2f9a9d","Type":"ContainerDied","Data":"e49fac98ba4eae4c4da7c018075a78342bb0ee9475c7760bbeb4b63f074a21e5"} Jan 26 14:32:43 crc kubenswrapper[4881]: I0126 14:32:43.154093 4881 pod_container_deletor.go:80] "Container not 
found in pod's containers" containerID="e49fac98ba4eae4c4da7c018075a78342bb0ee9475c7760bbeb4b63f074a21e5" Jan 26 14:32:43 crc kubenswrapper[4881]: I0126 14:32:43.160790 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hzbgz" Jan 26 14:32:43 crc kubenswrapper[4881]: I0126 14:32:43.236746 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d04dccde-94d3-44a2-acb0-f8ce1b2f9a9d-catalog-content\") pod \"d04dccde-94d3-44a2-acb0-f8ce1b2f9a9d\" (UID: \"d04dccde-94d3-44a2-acb0-f8ce1b2f9a9d\") " Jan 26 14:32:43 crc kubenswrapper[4881]: I0126 14:32:43.236929 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d04dccde-94d3-44a2-acb0-f8ce1b2f9a9d-utilities\") pod \"d04dccde-94d3-44a2-acb0-f8ce1b2f9a9d\" (UID: \"d04dccde-94d3-44a2-acb0-f8ce1b2f9a9d\") " Jan 26 14:32:43 crc kubenswrapper[4881]: I0126 14:32:43.237125 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nlf6k\" (UniqueName: \"kubernetes.io/projected/d04dccde-94d3-44a2-acb0-f8ce1b2f9a9d-kube-api-access-nlf6k\") pod \"d04dccde-94d3-44a2-acb0-f8ce1b2f9a9d\" (UID: \"d04dccde-94d3-44a2-acb0-f8ce1b2f9a9d\") " Jan 26 14:32:43 crc kubenswrapper[4881]: I0126 14:32:43.237845 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d04dccde-94d3-44a2-acb0-f8ce1b2f9a9d-utilities" (OuterVolumeSpecName: "utilities") pod "d04dccde-94d3-44a2-acb0-f8ce1b2f9a9d" (UID: "d04dccde-94d3-44a2-acb0-f8ce1b2f9a9d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:32:43 crc kubenswrapper[4881]: I0126 14:32:43.250732 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d04dccde-94d3-44a2-acb0-f8ce1b2f9a9d-kube-api-access-nlf6k" (OuterVolumeSpecName: "kube-api-access-nlf6k") pod "d04dccde-94d3-44a2-acb0-f8ce1b2f9a9d" (UID: "d04dccde-94d3-44a2-acb0-f8ce1b2f9a9d"). InnerVolumeSpecName "kube-api-access-nlf6k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:32:43 crc kubenswrapper[4881]: I0126 14:32:43.283403 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d04dccde-94d3-44a2-acb0-f8ce1b2f9a9d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d04dccde-94d3-44a2-acb0-f8ce1b2f9a9d" (UID: "d04dccde-94d3-44a2-acb0-f8ce1b2f9a9d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:32:43 crc kubenswrapper[4881]: I0126 14:32:43.339192 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d04dccde-94d3-44a2-acb0-f8ce1b2f9a9d-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 14:32:43 crc kubenswrapper[4881]: I0126 14:32:43.339228 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nlf6k\" (UniqueName: \"kubernetes.io/projected/d04dccde-94d3-44a2-acb0-f8ce1b2f9a9d-kube-api-access-nlf6k\") on node \"crc\" DevicePath \"\"" Jan 26 14:32:43 crc kubenswrapper[4881]: I0126 14:32:43.339239 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d04dccde-94d3-44a2-acb0-f8ce1b2f9a9d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 14:32:44 crc kubenswrapper[4881]: I0126 14:32:44.166988 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hzbgz" Jan 26 14:32:44 crc kubenswrapper[4881]: I0126 14:32:44.206488 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hzbgz"] Jan 26 14:32:44 crc kubenswrapper[4881]: I0126 14:32:44.223443 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-hzbgz"] Jan 26 14:32:46 crc kubenswrapper[4881]: I0126 14:32:46.099314 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d04dccde-94d3-44a2-acb0-f8ce1b2f9a9d" path="/var/lib/kubelet/pods/d04dccde-94d3-44a2-acb0-f8ce1b2f9a9d/volumes" Jan 26 14:32:51 crc kubenswrapper[4881]: I0126 14:32:51.082754 4881 scope.go:117] "RemoveContainer" containerID="3576d9ae2a0c939f74ed0088e4b53843689b0037e8a526f6fd82ad39b8913062" Jan 26 14:32:51 crc kubenswrapper[4881]: E0126 14:32:51.083572 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fwlbz_openshift-machine-config-operator(ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19)\"" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" podUID="ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19" Jan 26 14:32:58 crc kubenswrapper[4881]: I0126 14:32:58.333344 4881 scope.go:117] "RemoveContainer" containerID="f7f5e126d13e1d6ba2b2b3eac49a03c5411182829ea3293c12d987dafc7c30a8" Jan 26 14:32:58 crc kubenswrapper[4881]: I0126 14:32:58.379954 4881 scope.go:117] "RemoveContainer" containerID="f3def26d49d8bef5ad27322a74a192e220c369f9b8a833d59feabd5f2da87b7d" Jan 26 14:32:58 crc kubenswrapper[4881]: I0126 14:32:58.417819 4881 scope.go:117] "RemoveContainer" containerID="5db0368e090e7cd83da6e8db3dfbca95586c645952a5fd9d3ff192cf33102ea3" Jan 26 14:33:03 crc kubenswrapper[4881]: I0126 14:33:03.082623 4881 scope.go:117] "RemoveContainer" containerID="3576d9ae2a0c939f74ed0088e4b53843689b0037e8a526f6fd82ad39b8913062" Jan 26 14:33:03 crc kubenswrapper[4881]: I0126 14:33:03.376784 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fwlbz" event={"ID":"ebe5a5ae-b195-4f35-bcee-d9fe6d27dd19","Type":"ContainerStarted","Data":"35240d7fa78cf8e0164187b8a4efb14f0fb9746699f7bacc1712cb4929cba015"} Jan 26 14:33:55 crc kubenswrapper[4881]: I0126 14:33:55.263771 4881 prober.go:107] "Probe failed" probeType="Liveness" 
pod="metallb-system/metallb-operator-webhook-server-64dc64df49-qlh66" podUID="1a51e914-e793-4f03-b58a-65628089e71a" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.55:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 14:33:55 crc kubenswrapper[4881]: I0126 14:33:55.263808 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-webhook-server-64dc64df49-qlh66" podUID="1a51e914-e793-4f03-b58a-65628089e71a" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.55:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"